The Stack Overflow Podcast

You can’t spell Zapier without API

Episode Summary

Ben and friend of the show Kyle Mitofsky sit down with Reid Robinson, lead product manager for AI at Zapier, for a conversation about AI and automation. Plus: NFTs and the dog behind the doge.

Episode Notes

Zapier is a no-code automation platform that allows users to create custom workflows for their critical work apps. Learn how it works, peruse the blog, or sign up to try beta AI features.

Check out Reid’s article about how to write more effective AI prompts.

Zapier built a natural language actions (NLA) API to enable AI models to independently use natural language to complete Zapier actions.

You know the doge, but do you know the dog? RIP Balltze.

Find Reid on LinkedIn and the social network formerly known as Twitter.

Find Kyle on LinkedIn, GitHub, and text-based social media.

Episode Transcription

[intro music plays]

Ben Popper This episode is brought to you by Qualcomm’s AI Stack. The Qualcomm AI Stack is a unified AI toolset designed to help developers optimize and deploy AI models quickly on their products. Go to qualcomm.com/stackoverflow to get started. Be sure to check out that link, and let them know the show sent you.

BP Hello, everybody. Welcome back to the Stack Overflow Podcast, a place to talk all things software and technology. I'm your host, Ben Popper, Director of Content here, joined as I often am by my colleague and collaborator, Kyle Mit. Kyle, how are you doing today? 

Kyle Mitofsky Hello, hello. 

BP Hello, hello. So I know you just got out of a war room. We're talking OverflowAI. We're building things, there's LLMs, there's APIs, and it's apropos of today's discussion because we're going to be chatting with Reid Robinson, who is the Lead Product Manager over at– is it Zay-pier or Zap-pier? 

Reid Robinson Always a fun question: Zap-ier. The little device to remember this is Zapier makes you happier.

BP Okay. 

KM Oh, that's great. 

RR There you go. One of my favorite stories I learned early on about Zapier: if they wanted it to sound like that, why not spell it with two P's? The reason given, fitting for today's talk, is that spelled this way it has API in the name.

KM Ohh. 

BP So you can camelcase the API this way if you need to and it'll really stand out on the billboard.

RR The original logo for Zapier actually had the API in a different color scheme; that's what they chose to do.

BP Right. This is very Y Combinator. I get it.

KM This is right up there with “jif” and “gif.” The original authors had intent and the original authors were themselves not English majors or grammarians. Glad we could get it settled. 

BP But that's also doubly funny because it's “Zapier makes you happier,” but API is like Zay-pier. It's the other way. Anyway, we're not going to get stuck on this. We're going to keep going. So we would like to welcome to the podcast, Reid Robinson, who is the Lead Product Manager over at Zapier, and we're going to be talking about all the things that they've been building. They have this huge partner ecosystem that plugs into their APIs, and of course, these days folks are eager to take advantage of large language models, generative AI, and the ability to use natural language as your interface to build all kinds of cool things with code. So Reid, welcome to the show.

RR Awesome. Excited to be here, thanks for having me. 

BP So the folks who are listening are mostly themselves software developers, engineering managers, stuff like that. Tell them a little bit about your journey. How did you end up in the role you're at today and what's your day-to-day like in your current position? 

RR I mean, going way back, I started my career in hardcore cold-calling sales. That's probably the best way to put it. I moved to a city where I knew nobody and got the first job I could, which was sales. I didn't even have a computer. It was straight-up smile and dial, as it's called. Great way to start. But eventually I transitioned into tech. I'd always had a bit of technical inclination, so when I joined a company called Hootsuite on the sales team, I moved around and ended up on the solution consultant team, sometimes called sales engineering, the type of team that does the technical demonstrations and works with enterprise customers. Throughout that journey, I had bugged our partnerships team so often asking for improvements to certain integrations with things like LinkedIn and Facebook that eventually I got the ask one day: "Hey, you keep bugging us. You're looking for different things to do in your career. Do you want to try doing this?" And I was like, "Sweet, let's do it." That was my foray into product partnerships. Eventually I left Hootsuite to join Zapier in strategic partnerships, where I managed partners like Facebook, Salesforce, Google, and so on. I left, very briefly it turned out, to do my own startup in the NFT world of 2021. I got excited by what was going on with NFTs and had always wanted to learn about building a business, so I did a hackathon one weekend with a buddy of mine from Zapier just to build with this technology I was really curious about. After that little hackathon, we ended up getting investors who were interested, we got users messaging us like, "Hey, this doesn't work," and we won some prize money. So we went all in on it and sold the business late last year. Throughout that, in my world of NFTs, I also got into AI generative art. That's kind of my journey into AI. I was really Midjourney/DALL-E heavy.

BP Love Midjourney. 

RR Yes. I still collect a lot of AI art. And when that all went down, I was talking to a lot of the Zapier co-founders, who were really excited about AI and wanted to go in. I love the mission of Zapier, I love working with the businesses we get to work with, and the thought of combining automation with AI got me really excited. What I got up to at first was really the research side of it, because if you look at Twitter or listen to any podcast, there's so much new information coming at you with everything going on right now. So one of the main focuses was just testing out some of this new technology, the things we're seeing people do as small demos, and trying to see: is this scalable? Can we ship this? Can we provide something valuable to users? A small example, an easy one: when the ChatGPT API came out, I think we had our app live within 24 hours, and the AI apps overall became the fastest-growing apps on Zapier ever. The appetite was massive. At this point my role has transitioned a bit. I focus across the org now on a lot of our different AI efforts, helping unblock and advance what we're doing. We have a lot of teams working with AI at the moment, and I'm just trying to help move things along and keep everything somewhat together.

BP Sweet. Kyle, I want to let you ask some technical questions because I know you're playing in the same world, but I just have to tell one story, which is that when I was a journalist at The Verge, I was pitched a freelance story about a person who found Doge (the dog, the actual dog), and we did a story about this wonderful kindergarten teacher in Japan who owned Doge.

KM Oh, the Shiba Inu dog.

BP The dog. So it was like, "You know the meme, but do you know the actual dog?" And we went and met this wonderful kindergarten teacher and saw how her life had changed. So then during the NFT craze I was like, "I remember we had three dozen photos of that dog that we never used. We need to go get those photos and turn them into collectibles." And I even emailed the woman and I was like, "I think this would be a great idea. Blah, blah, blah, blah, blah." Never heard back, but a month later, she did turn new photos of the dog into NFTs and they made something like $4 million. So whether she got the idea from me or not, it doesn't matter. It all went to a Shiba Inu rescue. So it was great. All right, back to the topic at hand. Kyle, take it away.

KM That's great. So LLMs have been the thing on a lot of people's minds, a hot topic, and they really give us, I think, a very robust way to think about natural language in code and programming. Before AI, if I'm coding something, I want it as a zero or one. It needs to be this very strongly-typed thing, and if somebody gave me the requirement five years ago of, "You have this string and you just need to do sentiment analysis on it to see if it was happy," that would've been hard, and probably a decade ago, really hard. I think five years ago we had tools in our toolkit, not in my toolkit personally of stuff I was touching, but maybe some cloud providers had their own AI that does sentiment analysis, something like that. It's still doing this sort of natural language processing. Can you talk more about that transition over time, from how we've treated the natural language processing systems we've had historically to what we have now with LLMs, which feel like a catchall, just good at natural language in general? They're not hyper-specialized at only doing sentiment analysis like that example; they can kind of do that, and kind of do other things. Have you seen that transition among people using your platform to do task automations? They have some language, they want to interrogate that language in some way, they want to inspect it and figure out if it has certain properties. Has that evolved over time in terms of how people were doing it? Or how do you see that happening now?
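
For a concrete sense of the catchall Kyle is describing, here is a minimal sketch of zero-shot sentiment classification with a general-purpose chat model via the OpenAI Python client; the model name, label set, and sample text are illustrative assumptions, not anything specific to Zapier's setup:

```python
# Minimal sketch: zero-shot sentiment classification with a general-purpose LLM.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment;
# the model name and label set are illustrative choices.
from openai import OpenAI

client = OpenAI()

def sentiment(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text. "
                        "Reply with exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

print(sentiment("The new release fixed every bug I cared about. Love it."))
```

No task-specific training or labeled dataset is involved; the same call, with a different system message, could just as easily summarize or classify by topic, which is the "not hyper-specialized" point.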

RR Wow, first off, it's funny you say that. Again, I started my career selling social media management tools and one of the hot things back in 2014/2015 was the whole natural language– what did they call it then? NLP. 

KM NLP, Natural Language Processing. 

RR And the ability to do sentiment analysis from that. And because this was pre-Cambridge Analytica, there was a lot more access to content out in the world, pre-Musk's Twitter. You got a lot of access to social data back then. And I remember talking to a lot of businesses that were trying to do sentiment analysis on their brand, sentiment analysis on competitors, and NLP was such a hot thing for doing that. And word clouds, word clouds were really big. Marketers loved seeing word clouds back then. Clicking into a word cloud, that was huge. But fast forward to today and that almost seems comical if you're thinking back to then. You called out one thing that is, at least in my mind, a breakthrough when I think of our type of users: it's that catchall, the fact that it's not a specialized job. NLP, the way people were using it, was typically for these extremely specialized jobs. And even then, I remember when we did sentiment analysis, you'd have to do mini trainings where you'd have to teach it, "Oh no, if someone says this, it's actually sarcasm." There was so much of that going on that it was really difficult to use. And I don't know of any very small business adoption of NLP back in the day, and I think that is probably the most significant thing I'm seeing, at least from our user base, which is businesses of all sizes, from very small businesses up to enterprises, adopting AI technology into their workflows. The second thing I'll say on that is, you asked what types of jobs, or really how people are using or trying to work with language in their workflows, and that's essentially everything. Everything you're doing with AI in Zapier is working with language for the most part. We don't see a lot with numbers, which is good because, as I'm sure everybody listening to this knows, LLMs don't really do that well with numbers.

BP Not primes, especially not primes. 

RR Exactly, and not great with arithmetic. But to that end, tons of things. That's where the exciting use cases we're starting to see come in. For example, we've launched support for the function calling API in a way that handles things like taking unstructured text and creating structure from it. I've even adopted this in really fun ways that are valuable for me. The other day I built one where I get emails from my library that include the due dates for my books. But it's not like, "Hey, do you want a calendar invite to remind you that the book is due, or do you want us to text you when the book is due?" None of that. It's just, "I'm going to send you an email once and you better remember." So I built out a Zap that used the function calling API to grab the data from those emails and then create a delay in Zapier that alerts me 24 hours ahead of time: "Hey, these books are due tomorrow. Don't forget." We see a huge number of use cases across the board for dealing with unstructured data like that. A big one for product managers and engineering managers is just dealing with all the requests you get and creating PRs or Jira tickets from them. We see a lot of IT people as well, just capturing all of those requests they're getting. And with AI steps in Zapier, you're essentially treating it as if it came in through a form, which is wonderful because now you're not spending your time translating a Slack message into a Jira ticket.
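
As a rough illustration of the extraction pattern Reid describes, here is what the structuring step might look like if you called the OpenAI function-calling (tools) API directly rather than through a Zapier AI step; the function name, schema fields, and sample email below are made up for the example:

```python
# Sketch of pulling structured due dates out of an unstructured library email
# via the OpenAI tools/function-calling API. The function name, schema, and
# email text are hypothetical; a Zapier AI step wraps this kind of call for you.
import json
from openai import OpenAI

client = OpenAI()

email_body = """Reminder: 'The Pragmatic Programmer' is due 2024-03-18
and 'Designing Data-Intensive Applications' is due 2024-03-21."""

tools = [{
    "type": "function",
    "function": {
        "name": "record_due_dates",
        "description": "Record each borrowed item and its due date.",
        "parameters": {
            "type": "object",
            "properties": {
                "items": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "title": {"type": "string"},
                            "due_date": {"type": "string", "description": "YYYY-MM-DD"},
                        },
                        "required": ["title", "due_date"],
                    },
                }
            },
            "required": ["items"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": f"Extract the due dates:\n{email_body}"}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "record_due_dates"}},
)

# The model returns arguments that match the schema, i.e. structure from unstructured text.
arguments = json.loads(response.choices[0].message.tool_calls[0].function.arguments)
for item in arguments["items"]:
    print(item["title"], "is due", item["due_date"])  # feed a reminder/delay step from here
```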

KM So I have to ask, is that the thing that you nerd out about? So I'm principally a web developer and the thing I love is inspect elements, and I have two extensions on my browser. I have Stylus and Tampermonkey, and one allows you to write arbitrary CSS on every single webpage and one arbitrary JS. And I can just run them on my own, and if I go on a webpage and I don't like anything about that experience, I just immediately go inspect element. I go figure out what's going on, I write a little script that just makes my experience on that better, and then I run that locally and I have just dozens of them just hanging out, not doing anything with them. Is that the thing that you nerd out about? “Oh, I was at my local library and they didn't send me a reminder. I'm just going to make a Zap.” 

BP There’s an automation for everything. 

KM “Of course, I can do some automation here and it bugs me that there isn't some, so that's just what I want to do is create a Zap.” Is that kind of your approach to it? 

RR It definitely is for me personally. I try to adopt that mindset at Zapier because of the things I've learned it helps me with. I find that when I'm talking to a customer who actually has a real use case for that, I'm like, "Oh, I now understand how to help you here, because I did this weird library thing." They're like, "Hey, we get these Slack messages from thousands of teammates across the world having computer issues and we need to get them into a proper ticketing system and not Slack. Can you help us do that and structure it?" And I was like, "Oh yeah. Very similar." I don't know, that's my approach to tinkering, if you will, and I've always found that to work really well for me. It's probably one of the main reasons I'm in the position I'm in today. Two years ago I was interested in AI art and was just really curious how these tools were creating what looked to be amazing pieces of art and what went into that. I'm always embarrassed to say this, but my tinkering back then was– I'm into collecting real stamps, so I created an AI-generated stamp album, and that's how I learned about a lot of those tools. I geek out on a lot of that stuff.

BP Nice. So I want to ask quickly, we’re talking about natural language and how it can work. There is something y'all put out called the NLA API, and in sort of pitching for the podcast, one of the things that I thought was interesting was that there were two product experiments that came before that that didn't quite hit the mark. So given that we're talking to a bunch of engineers, I love that process piece of, “We took this fork in the road, then we took this fork in the road, and this is how we ended up here.” So just quickly up top for listeners, what is the NLA API? And then tell us a little bit about, from a product management perspective, how did you find your way to the right solution? 

RR So for everybody not familiar, we created what we call the NLA API, or Natural Language Actions API, which is a mouthful. It essentially enables AI models like ChatGPT to use natural language to complete, on their own, the tens of thousands of actions that are available on Zapier. As a user, you can use this today in the ChatGPT plugin from Zapier, which uses the NLA API under the hood. That same API is available for any developer. You can go to our docs and get started playing around with it in your own applications. Essentially, the way it works for the user is that you go in and preconfigure, "Hey, I want to give ChatGPT the ability to create draft emails on my behalf. I want it to be able to search my Slack. I want it to be able to create Jira issues." Then, as you're talking to ChatGPT about something, you can say, "Oh, I love this email copy you created for me. Can you just go ahead and create a draft to X, Y, Z?" And it'll be able to go ahead and do that. There is a human review step that works from within the ChatGPT plugin. But how we got there was a really fun journey, because when I joined, the product we were playing around with was something called Zap Chat. The concept was that natural language is really exciting, conversational AI seems to be a really great approach, so let's just test this out. We were doing chain of thought prompting, the recent papers around that seemed really exciting, and what we discovered was that it was actually very frustrating. It felt like a constant game of 20 Questions because it was entirely chat. We even tried it via text: you could text this thing to help you create the Zap, and it was like, "Okay, you said you use Salesforce. What authentication do you want to use? What object are you using? Oh, can you clarify?" It was just a million questions. When you use something in the UI, you take for granted how many little clicks would otherwise have been that many more questions. So we took some of the learnings there, which were that the models were clearly capable of coming to good conclusions about the jobs to be done for what we were trying to do, which in that case was creating a Zap. The next thing we tried was continuing to experiment with chaining prompts together, but allowing users to create different prompts and give different tools for each step of that journey. So it kind of created a deterministic chain of thought type of thing. We called that Zap books, and that was a lot of fun. That was the first time we actually gave it different types of tools. We gave it the ability to search Google, it also had the ability to search Wikipedia, and we gave it a calculator as well. The last thing we gave it, which led us down a further path, was the ability to run Zapier actions. It worked in a really hacky way at first. We had a separate app in Zapier that you needed to have installed in a Zap, and if the AI called it, it would shoot out a webhook-type thing that got it to run. From there, we realized the chain of thought stuff was just a bit confusing for users. When we brought it to users and talked to them, they were always confused about where to start and about what this thing did, and it became immediately clear that this was not something we were going to be able to take to market. But seeing those things take actions was what people hooked onto and what felt valuable.
We then experimented with something called Zap Maker, which was a single-shot attempt at creating a Zap, as opposed to the 20 Questions model, and that worked surprisingly well. That actually evolved into a piece of technology that's now used not only within the Natural Language Actions API but also in the ChatGPT plugin: you can talk to the ChatGPT plugin and ask it to create a Zap for you, and it'll give you a link to a Zap that does what you told it to do, which is really cool to see. As we were thinking about these other iterations, we learned about the ChatGPT plugin opportunity, and that's where we were like, "Look, it'd be really cool if you could give ChatGPT the ability to take those same actions we had in the small chain of thought prompt examples." We started playing around with what that meant for the developer ecosystem that could potentially emerge from it. We saw, obviously, thousands and thousands of developers building AI tools that were trying to do these jobs, but we know that building with APIs is a pain. Everyone's like, "Oh, what if you just give it the documentation?" But I'm sure every engineer listening to this has built against an API or read documentation that was not up to date, or where the actual API call didn't work exactly the way it said it would. That happens all too often, and sometimes, honestly, the context window is a huge issue. If anybody's built with Gmail's API, that thing spits out an absurd amount of text, and especially with threaded emails, it becomes really difficult. So one of the coolest things the NLA API does is that not only is it non-deterministic in the way it tries to map the values for an API call, but it also has intelligence we've built into it that spits back the right amount of context for an LLM on the other side. It doesn't just return the raw API response you would get; it's kind of truncating that for an LLM, but it's not just truncating, of course.
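
For developers curious what calling it looks like, here is a rough sketch against the NLA API as Reid describes it. The endpoint paths, auth header, and response field names below are assumptions based on memory of the 2023-era documentation and should be checked against Zapier's current reference before use:

```python
# Rough sketch of hitting the Natural Language Actions API with plain requests.
# CAUTION: endpoint paths, the auth header, and response field names here are
# assumptions from the 2023-era docs; verify them against Zapier's reference.
import os
import requests

BASE = "https://nla.zapier.com/api/v1"
HEADERS = {"x-api-key": os.environ["ZAPIER_NLA_API_KEY"]}

# 1. List the actions the user has chosen to expose (e.g. "Gmail: Create Draft").
exposed = requests.get(f"{BASE}/exposed/", headers=HEADERS).json()
for action in exposed.get("results", []):
    print(action["id"], "-", action["description"])

# 2. Execute one action with a natural-language instruction. The API, not your
#    code, decides how to map the instruction onto the underlying app's fields,
#    and the result it returns is trimmed for an LLM rather than being the raw
#    third-party API payload, which is the context-window point Reid makes above.
action_id = exposed["results"][0]["id"]
run = requests.post(
    f"{BASE}/exposed/{action_id}/execute/",
    headers=HEADERS,
    json={"instructions": "Draft an email to Kyle thanking him for the podcast."},
).json()
print(run)
```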

KM So earlier we were dunking on this poor little LLM robot for not being able to do math, and the way that's actually solved is through grounding that model and saying, “Okay, well it looks like you are asking a math question. Let me go defer that to Wolfram or Mathematica or something that can then go handle that and try to do that in a way that is really good at just the math pieces. So I'm just going to try to take this text prompt and go hand it off to the right service to go execute on this.” Is that kind of how you view the plugin architecture also, which is, if I'm just talking to ChatGPT and I can say, “Oh, go create me a Zap on this,” then almost this plugin grounds it in a sort of way that says, “Oh, a Zap. I know how to do that,” and then it can go work through that. Do I have the right mental model there? Is that what it's doing? Plugins are just grounding for that model just to give it superpowers in a very narrow subset of the world? 

RR Yeah, I'd say so. I mean that's also LangChain tools. Essentially you're just giving it the ability to call out to a more specialized service that's good at a job. And the Zapier one just happens to be an interesting service that can call thousands of other services. 
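
To make the "give the model a specialized tool" idea concrete, here is a sketch in the style of LangChain's 2023-era agent interface, with a calculator tool like the one Reid mentions giving to Zap books. These imports and helpers have since been reorganized in newer LangChain releases, so treat this as illustrative rather than copy-paste current:

```python
# Sketch: hand a chat model a specialized calculator tool so it defers arithmetic
# instead of guessing. Uses LangChain's 2023-era agent API; newer versions differ.
from langchain.agents import Tool, initialize_agent, AgentType
from langchain.chat_models import ChatOpenAI

def calculator(expression: str) -> str:
    """Evaluate a plain arithmetic expression, e.g. '17 * 23 + 4'."""
    # Toy example only; restricted eval is not a production-safe calculator.
    return str(eval(expression, {"__builtins__": {}}, {}))

tools = [
    Tool(
        name="calculator",
        func=calculator,
        description="Use this for any arithmetic instead of answering from memory.",
    ),
]

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What is 1847 * 392?")
```

A Zapier NLA tool slots into the same pattern; it just happens to be one tool that can fan out to thousands of downstream services.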

BP Right. There was an interview I read from Y Combinator that talked about this in similar terms: as you said, we don't want developers to have to deal with all these different API endpoints. You can wrap those, over 20,000 actions it said, and treat each one, like Kyle said, as a separate tool. And when you need to do X, you rely on this tool. And then, like you said, instead of playing 20 Questions, I'm going to delegate to you. Come back to me with a Zap, and if it doesn't work the way I want, then we'll talk about it and we'll adjust this and this. But it's much better to just come back with something. You gave it your best try, and then we can sort of iterate from there.

RR Yeah, absolutely. And that's why with the NLA stuff, if you try it in the ChatGPT plugin, that's one of the main reasons to have that human in the loop step so that you can be like, “Okay, I told it to do a calendar invite to X for Y time, and it kind of got it right, kind of got it wrong.” But before I go ahead and send that API call, I'm just going to change the title of the calendar invite. And now you can just edit that right there and hit send. 

BP So Reid, I'm curious, is there something you've been hearing from customers or partners that they've really been asking for, that you feel is the next turn of the wheel, or something coming up in the next year when it comes to workflow automations that you're really excited about? We've been talking about what this technology you've been playing with is, how you got here personally, and how you evolved this tool. Now let's look ahead a little bit. What are people asking for that you're psyched to build? Or what do you know is on the roadmap? You don't have to give away all the juicy details, but what's going to really take us to the next level with these kinds of tools?

RR Yeah, I'll touch on two things. One, I think a lot of people, probably the people listening to this and probably most people I talk to, are builders in the AI world right now, and that's very different from the people consuming this technology a lot of the time, or the people who can likely benefit from consuming it. One thing to keep in mind is that as much as we all get really excited about the envelope-pushing state of the technology out there, there are some things you show users and they're just like, "Oh my God, that's amazing," and you've only shown them a really simple little piece. For us, probably the greatest example is when we talk to people about how they use ChatGPT and they're like, "Oh yeah, I'm great at using ChatGPT. I use it all the time." And they're very familiar with Zapier. The same person, very familiar with Zapier, very familiar with ChatGPT, has never put the two together and tried to use ChatGPT in a Zapier workflow. I often describe it as having it run in your sleep type of stuff, and you've got to think about the right times to use something like that. I often talk about drafts– great time. You're drafting content, or it's doing categorization for you, classification, all that type of stuff– amazing. But a lot of the time, people don't connect those dots or think about that. Retrieval augmented generation is obviously a huge one. Probably the thing I'm most excited to see get into people's hands is being able to use those systems in a simpler way, where they don't need to know what Pinecone is or worry about choosing something like Chroma. The average person is not worrying about those decisions, hopefully. When we can start to see the concept of giving these tools a little bit more context, and being able to ground those prompts with organizational information, that's very exciting. That's probably one of the areas I'm most excited about. We look at so much of what people like about LLMs in workflows today and what they don't like, and a lot of it comes down to this: the ones who like it have learned how to give it context and prompts. They're doing a great job prompting. You shouldn't need to be an amazing prompt engineer.
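
To make the "ground the prompt with organizational information" idea concrete, here is a minimal retrieval-augmented-generation sketch using Chroma's in-memory client. The documents and question are invented for the example, and a real setup would involve document loading, chunking, and a persistent vector store:

```python
# Minimal RAG sketch: retrieve relevant internal snippets from Chroma, then
# stuff them into the prompt so the model answers from that context.
# Documents and the question are made up; Chroma's default embedding function is used.
import chromadb
from openai import OpenAI

chroma = chromadb.Client()
docs = chroma.create_collection(name="org_docs")
docs.add(
    ids=["policy-1", "policy-2"],
    documents=[
        "Expense reports must be filed in Expensify within 30 days of purchase.",
        "New AI workflow requests go through the #ai-intake Slack channel.",
    ],
)

question = "Where do I send a request for a new AI workflow?"
hits = docs.query(query_texts=[question], n_results=2)
context = "\n".join(hits["documents"][0])

client = OpenAI()
answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

The point Reid is making is that end users shouldn't have to assemble any of this themselves; the retrieval and grounding steps should be a built-in part of the workflow tool.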

BP We’re not all prompt engineers? No? Okay. 

RR Yes, exactly. I think everybody hopes that's not the future state. So that's one of the things I'm probably most excited about seeing a future state for, but I just want to do that first call out. There's some of the stuff that just works out of the box today that is known knowns that just is still extremely valuable. 

BP Yeah, but this is the Stack Overflow Podcast. We've got two blog posts up recently about RAG. We were talking about using Weaviate in our stuff. We're trying to get into the nitty gritty of that world. I mean, that's where we're living and breathing. I agree with you– yes, there are many low-code/no-code users, tons of people on the marketing team, who could build great automations for work or for home that are going to be really cool, and it would be fun if we could find a way to help people connect those dots. But at the same time, we're thinking about once you get RAG really working and once some of these tools evolve. That's what I'm asking about.

KM The thing I like about RAG is it completes the story on what LLMs can do, which is either, “Here's a bunch of text, make it a small amount of text,” or, “Here's a tiny amount of text, can you make it a big amount of text?” Now it can do both things so we've hit 100% of use cases. 

BP Turing complete. 

RR Yeah, absolutely. There we go. The other stuff I'm increasingly following along with and interested in is what people are starting to prove out with evaluation suites and training. On those two things: I think a lot of the tech we're seeing today is largely built on the foundation models. Some organizations have played around with some fine-tuning, but as soon as you start to do that, a lot of questions come up. There are a lot of terms-of-service-style questions that you certainly get into real fast. And on the tech side of things, you get into a lot of questions about how you're doing that and how you're training on the dataset. I think Brian, one of our co-founders, had a good quote earlier that I liked. He said we're trying to move more people from being prompt engineers to being data engineers when they're building with AI systems, because really that should be your future state. If we're getting these things more and more data and we're building cycles that are good at that, that is a defensible moat for a business, which is going to be really interesting. But I say that, and doing the evaluation of these models is extremely difficult in an organization. There are always a ton of questions about how you evaluate what a good job was, how you evaluate whether what you just did made things overall better. There are a lot of tools being created these days trying to address that, and they're really interesting to explore, but there are still very interesting questions about how you do it. One of the funniest quotes I heard from a large vendor in this space on building models was that it just comes down to a vibe check, probably my favorite quote so far: it's a vibe check whether it got better or worse. So I look forward to that being a much more known and structured problem space.
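
For anyone wondering what moving past a "vibe check" looks like in its simplest form, here is a toy evaluation-harness sketch: a fixed set of test cases with expected properties, scored automatically so two prompt or model variants can be compared on the same inputs. The cases and checks are deliberately crude placeholders, not a real eval suite:

```python
# Toy evaluation harness: run the same test cases against a callable "model"
# and report a pass rate, so changes can be compared against a baseline instead
# of eyeballed. The cases and checks below are placeholders.
from typing import Callable

CASES = [
    {"prompt": "Classify: 'This outage cost us a full day.'", "must_contain": "negative"},
    {"prompt": "Classify: 'Support resolved my ticket in minutes!'", "must_contain": "positive"},
]

def evaluate(model: Callable[[str], str]) -> float:
    passed = 0
    for case in CASES:
        output = model(case["prompt"]).lower()
        ok = case["must_contain"] in output
        passed += ok
        print(("PASS" if ok else "FAIL"), "-", case["prompt"])
    return passed / len(CASES)

# Example: plug in any prompt or model variant with the same signature and
# compare its pass rate against the previous version's.
baseline_score = evaluate(lambda prompt: "negative sentiment")
print(f"pass rate: {baseline_score:.0%}")
```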

[music plays]

BP All right, everybody. It is that time of the show. Let's shout out a Stack Overflow user who came on the network and contributed a little knowledge. Lightness Races in Orbit, congrats on your Lifeboat Badge and coming in and saving a question that had a score of -3. With your great answer, it's got a score of 20 or more. “What are core files by Node.js?” What are those core files? Well, Lightness has an answer for you and has helped over 10,000 people, so we appreciate you coming on and sharing a little knowledge. As always, I am Ben Popper. I'm the Director of Content here at Stack Overflow. Find me on X @BenPopper. Email us with questions or suggestions for the program: podcast@stackoverflow.com. And if you like the show, leave us a rating and a review because it really helps. 

KM I am Kyle Mitofsky. You can find me on Twitter @KyleMitBTV. And as always, you can find me at Stack Overflow at User ID 1366033. 

RR Everybody, Reid. Great chatting. You can find me on Twitter @Reidoutloud_. And then learn more about Zapier AI stuff, I think it's just zapier.com/ai. I made a little page, you can follow along with what we're working on there. Reach out to me on Twitter, LinkedIn. I love hearing from people, so if you've been working with something on Zapier and you just have an idea of something you want to see or feedback, please shoot me a DM. I read almost all my Twitter DMs from people. 

BP And if you love AI art and you want to exchange, maybe they should find you. You're a collector. 

RR Yeah, absolutely. 

BP Sweet. All right, everybody. Thanks for listening, and we will talk to you soon.

[outro music plays]