The Stack Overflow Podcast

The world’s most popular web framework is going AI native

Episode Summary

On today’s episode we chat with Jared Palmer, VP of AI at Vercel, who says the company has three key goals. First, support AI native web apps like ChatGPT and Claude. Second, use GenAI to make it easier to build web apps. Third, provide an SDK so that developers have the tools they need to easily add GenAI to their websites.

Episode Notes

Palmer says that a huge percentage of today's top websites, including apps like ChatGPT, Perplexity, and Claude, were built with Vercel's Next.js.

For the second goal, you can see what Vercel is up to with its v0 project, which lets developers use text prompts and images to generate code. 

Third, the Vercel AI SDK aims to help developers build conversational, streaming, and chat user interfaces in JavaScript and TypeScript.

If you want to catch Jared posting memes, check him out on Twitter. If you want to learn more about the AI SDK, check it out here.

A big thanks to Pierce Darragh for providing a great answer and earning a Lifeboat Badge by saving a question from the dustbin of history. Pierce explained how you can split documents into a training set and a test set.
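
For those curious about the technique itself, the idea is simple: shuffle the documents, then slice by a ratio. Here is a minimal, illustrative sketch in TypeScript (the accepted answer may use different tooling, so treat this as the general pattern rather than the answer itself):

```ts
// Illustrative sketch: split a document corpus into train and test sets.
function trainTestSplit<T>(docs: T[], testRatio = 0.2): { train: T[]; test: T[] } {
  const shuffled = [...docs];
  // Fisher-Yates shuffle so every ordering is equally likely
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const testSize = Math.floor(shuffled.length * testRatio);
  return {
    test: shuffled.slice(0, testSize),
    train: shuffled.slice(testSize),
  };
}

// Usage: an 80/20 split of five documents
const { train, test } = trainTestSplit(['a', 'b', 'c', 'd', 'e']);
console.log(train.length, test.length); // 4 1
```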

Episode Transcription

[intro music plays]

Ben Popper Maximize cloud efficiency with DoiT, an AWS Premier Partner. With over 2,000 AWS customer launches and more than 400 AWS certifications, DoiT helps you see, strengthen, and save on your AWS spend. Learn more at doit.com. DoiT– your cloud, simplified.

BP Hello, everybody. Welcome back to the Stack Overflow Podcast, a place to talk all things software and technology. I'm your host, Ben Popper, proprietor of a Top 100 technology podcast in the United States– hanging in there at number 94, the Stack Overflow Podcast coming at you. And with me is my co-host and partner in crime, Ryan Donovan. Ryan, how're you feeling today?

Ryan Donovan Hello. Oh, pretty good. 

BP So we have had Vercel folks come on the show many times, Guillermo Rauch and others. And I was at the Next.js conference in San Francisco a few years ago, and I don't think any of those conversations really involved too much AI. Within the last year we had Lee come back on and there was some discussion of AI around v0, but today we are lucky to have Vercel's VP of AI, Jared Palmer, on, and he's going to talk about the acquisition of ModelFusion, an AI integration tool. This comes along with the release of Vercel's new AI SDK, which the company says is the first step toward creating a complete AI framework for TypeScript. So that's not something I've ever heard people really talk about before, putting those two things together, but Vercel is obviously used by tons of developers and lots of big companies, so we want to know how it's going to work. Jared, welcome to the Stack Overflow Podcast.

Jared Palmer Thanks for having me. 

BP Tell us a little bit about yourself, first of all. How'd you get into the world of software and technology, and how'd you find yourself in this particular role at Vercel?

JP Sure. So I actually went to school for finance, but I'm a self-taught developer and designer, I guess if I started there, and then worked my way further and further up the stack. I ran a creative studio sort of dev shop that came out of my freelancing in New York for many, many years, and during that time, I pumped out a lot of open source, mostly in the React community, but actually a little bit in Java too, but we don't like to talk about that as much these days at Vercel. But through my open source efforts, I basically gave up the agency life, worked on a couple of different products surrounding open source, and then in 2021, one of those products– Turborepo– Vercel actually acquired, and that's how I got to Vercel. I joined Vercel actually working on build systems and building out the Turborepo team. And for those who don't know, Turborepo is a build system for JavaScript and TypeScript codebases that helps you build, test, and compile your code faster by using very intelligent caching and scheduling techniques that were pioneered at Google and Facebook, but didn't really make their way into the front end ecosystem. So now Turbo is much, much, much more popular than it was when I joined Vercel. The team's been very successful. And then at Vercel, after building out the Turborepo team, I became the Director of Engineering for all of our frameworks. That includes Next.js, our core React contributors, SvelteKit, Turborepo, and Turbopack– or web tooling, as we call the team. I did that for about a year and a half, and then when this AI wave hit, we were like, “Well, we've got to pivot.” So I decided to start our AI group and have been doing that for about a year and a half now, and have grown that team pretty significantly and continue to push day by day.

RD I was just reading over the blog post about acquiring ModelFusion, and it looks like you all are basically adding AI to webpages through very simple code functions. Can you talk about how that works and what the thinking behind it is?

JP Sure. So stepping back, Vercel, for those who don't know, is a cloud infrastructure provider, a front end cloud as we would call it, that helps developers build the world's best products. And we also are the creators and maintainers of– I think we can say the world's most popular web framework. Is that fair to say, Ben, of Next.js?

BP I think so. We could fact check it, but let's not quibble. It's the most or one of the most.

JP One of the most popular out there. And Vercel is not just for Next.js. We host 30 other frameworks. We're, I think, one of the top Remix providers. We are infrastructure providers. We're one of the top V hosts as well. Anyway, that's what Vercel does. And as you can imagine, AI is going to fundamentally change the way the web works and how users interact with applications, so that's stepping back and level setting. At Vercel, our AI initiatives come down to three key pillars. The first in our AI group is to make sure that Vercel is the home of AI apps, so that this next wave of products and services are all built with our technology and our infrastructure supports it. So ChatGPT, for example, is built with Next.js. So is Perplexity, so is Claude, and so is Mistral's chatbot too. So just making sure that our frameworks and infrastructure are great for this new wave of AI native products and services is thing one. Thing two is to explore products and tools that help developers build better with AI, and that's where v0 comes in. We can talk about that later, but v0, at a high level, allows you to use simple text prompts, screenshots, and images to generate almost production grade code that you can copy and paste into your React applications very rapidly, and you can iterate on it using AI. And then the third piece, our third pillar, is what we call the AI SDK, and this is to provide tools and frameworks for developers to build these AI native experiences. So that includes what we call generative UI, which is this combination of AI native text and component interactions that are fused together. So the AI SDK, its goal is to help developers, again, build AI native applications– chatbots, completion, whatever you want to call it– and to sit in that layer in between the web framework and the AI and make it really easy to use.

BP So you mentioned that this technology is already being utilized at some of the big shops. Is there an open channel of communication there? Stack Overflow recently announced that we're working with folks like Google and OpenAI for our API licensing. Our data is going to help train some of their models. If they're building with your tools and you want to be able to help folks build with theirs, how does that work? 

JP We have great relationships with those companies. Some of them are customers of Vercel. So OpenAI.com, for example, is hosted on Vercel, and we have great relationships with these companies both out of the Next.js team and the AI SDK team. We talk with the ChatGPT team about Next.js; we talk with the Perplexity team about Next.js. We talk with all these companies about their services. It's the same as any product management conversation– you talk to your customers. We're constantly asking for feedback and iterating. Of course, these gigantic next-gen products and services are very interesting to us because we want to make sure that our platform is suitable for these new types of applications, so we talk with all these companies quite regularly.

RD Is this a step towards having parts of the front end components entirely AI-generated? 

JP That's kind of where we think this is all going to end up one day. I don't think we're there yet. So v0 is actually kind of an attempt at this in some respects, but instead of actually building the full application, we kind of constrained the problem to, “Well, let's not have it be a full application that's fully working, but we can do the layout and the UI perfectly and then you can copy and paste that and put it into your real app.” And just by sort of constraining the problem, you can actually have a pretty useful tool. Now, do I think one day you're going to have models spit out entire applications? Yeah, I do, but there are limitations right now that kind of prevent that. And we can get into that if you'd like to explore what those are, but they are significant for now. I think they'll eventually be overcome within the next few years, and you will see very generative interfaces. But I do think also that when we talk about generative interfaces, one of the things that's really critically important is that, while you want certain things to be generative, if you imagine a fully generative world, you might not always want that. For example, do you need a fully generative email client? And if you ask yourself some of these questions, you realize that there's a level of familiarity that you want with certain paradigms of interaction, and they tend to boil down to shared components. If you think about what's in an email client, it's a list view, it's got an editor, it's got the to, from, CC fields. That could be generated, but after you do that once, you kind of don't need to do it again. So if you think about a fully generative operating system, you kind of want to cache stuff after it's generated. It doesn't need to be fully on demand. It might be, but some things need to be pretty hot. At the same time, when you think about certain apps, maybe like TikTok or Instagram, they're very similar. Reels and TikTok are almost identical in every way, so why couldn't they share a similar full scrolling newsfeed component and have the data get swapped out? There are fun games you can play thinking about what a generative operating system would look like and what the primitives would need to be. And we think about this actually quite a bit. I don't think we're there yet. I think in the interim what you're more likely to see is a spectrum or a gradient of generative UI. And what you'll see is that you'll move beyond plain text– this idea that you're talking to a terminal like it's 1981 is just not going to make it. It's already on the way out. You're seeing that already, and we're facilitating that with the AI SDK. We literally have tools, if you've seen our demos, where you can mix text and components to build this sort of rich, visual, interactive, stateful experience with language models and use them to decide. But what you're going to see there is that you can have those components, you can have them premade, and just fire them and show them correctly. You can have the data be generated on the fly and sent to the components to make them more dynamic there too. You could even imagine that you've got a subset of components that are allowed to be recycled but for different reasons. An example I like to give here would be just a table– a table and a button. Maybe the table is filterable and selectable and the button does something. It fires an HTTP request. You could imagine that you don't need a fully generative interface.
What you need is to generate the data and the prompts for that table on the fly, and now it can be generative and dynamic towards whatever your task at hand is. We're working towards full generation. And again, on the far side of the spectrum, you generate a completely working user interface on the fly for the task at hand, and it's fully operational and completely customized for you. I think that's maybe one future, but at the same time, we're not quite there yet.
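
[Editor's note: a minimal sketch of the constrained generative UI pattern Jared describes, where the model emits structured data naming a premade component plus its props, and the app only renders from a trusted set rather than executing generated code. All component and type names here are hypothetical, not part of any Vercel API.]

```tsx
import React from 'react';

// Premade, trusted primitives: the model never generates their code,
// only the data that fills them in.
function DataTable({ columns, rows }: { columns: string[]; rows: string[][] }) {
  return (
    <table>
      <thead>
        <tr>{columns.map((c) => <th key={c}>{c}</th>)}</tr>
      </thead>
      <tbody>
        {rows.map((row, i) => (
          <tr key={i}>{row.map((cell, j) => <td key={j}>{cell}</td>)}</tr>
        ))}
      </tbody>
    </table>
  );
}

function ActionButton({ label, url }: { label: string; url: string }) {
  // The button "does something": it fires an HTTP request, as in the example above
  return <button onClick={() => fetch(url, { method: 'POST' })}>{label}</button>;
}

// The model's output is constrained to this discriminated union...
type UiInstruction =
  | { component: 'table'; props: { columns: string[]; rows: string[][] } }
  | { component: 'button'; props: { label: string; url: string } };

// ...so rendering is just a lookup into the premade set.
function renderInstruction(instruction: UiInstruction) {
  switch (instruction.component) {
    case 'table':
      return <DataTable {...instruction.props} />;
    case 'button':
      return <ActionButton {...instruction.props} />;
  }
}
```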

BP We talked about v0 and its ability to suggest things like a rough blueprint that gets you started and the exciting thing being, “Hey, let me show you this webpage. I want something like that.” And I hear you saying now that we want to move towards something where it's doing more complex things, and there was a great interview with one of the co-founders of OpenAI where he was just saying, “Today, what we're doing is we're adding a function for you or we're doing some code comments for you, we're building unit testing for you. In the future, you'll assign the AI a big project and it'll work on it for days and iterate on it and you'll come back and then you can take it from there.” Do you feel like, given the nondeterministic nature of generative AI, it would be interesting to use a RAG system where you want a newsfeed? Great. We know what a great newsfeed is, and this AI has learned it, and so it can pop it in where it needs it. You want these kinds of buttons? Great. Basically a component library, a library that this AI can draw on. 

RD Right, just putting NPM in a RAG. 

BP Exactly. And so therefore, the AI is running a search and doing what you're asking, but it's not generating fresh code each time and therefore risking mistakes or hallucinations or whatever.

JP I think it's probably some hybrid of all the above. I think that you don't need to reinvent that list. In fact, let's just play pretend here. There's north of 100, but less than 1,000 different layouts to websites and apps. Yes, there's variations, but at the core, maybe there are 30 or 40 that are mostly used. Look at the apps we use today. They're mostly newsfeeds in various ways. Newsfeeds and list views are a whole massive percentage of applications. So then add forms. Now add maybe photo editing, and now add maybe some video editing. And you imagine these primitives that would start to exist and you don't need to generate those on the fly. You just need to recycle them or slightly modify them if that makes sense. And by modifying them, you may not even modify their code. You just may need to modify the inputs, because there's a list view and list views could have multiple cells and you could recycle that. So you could imagine that there's this sort of hybrid approach where you have premade, highly optimized primitives that then get recycled again and again and again, and that's going to be really good for users too. It may look different with different branding or whatever it is, but again, you don't kind of want to necessarily have everything be totally bespoke. Now, everything being totally bespoke and customized to you, there's a trade-off there. You might want to have the font size be bigger because it knows you've got an eye problem or something like that, and you might want that level of super dynamism or something like that. Or maybe it knows that you like your buttons to be shifted a little bit to the right or the left, I don't know, your favorite colors, things like that. That's all possible, but I think you're going to see this sort of recycling of these core primitives. And then that's going to get you very, very far and you'll see these design languages evolve and change. Just like we shifted from desktop to mobile, you'll see another shift from whatever we have right now to the AI native experience, and that's going to look a little bit different. 

RD Continuing with the NPM on steroids idea, I've talked with developer friends about how, if you don't understand all the dependencies you have, you're not going to have a good time. And right now it's very easy to import something from NPM and pull in all of its dependencies. Do you think there will be some of that issue with the AI-generated primitives? Will people run into these problems where they don't understand something that they're putting into their front end?

JP Right now, we kind of have this problem. I don't know if you know this, but ChatGPT loves Axios, the HTTP request library for JavaScript. It just loves it. If you ask it to fetch something, it just loves using Axios. Now, is that good, is that bad, or is it just what's on GitHub? You could use Fetch– Fetch is built into Node. You don't need to use Axios, but that's what it's trained on, and it currently just biases towards using Axios. I don't know how many more times people have installed Axios because ChatGPT has suggested it. So talk about alignment there– it's aligned to suggest Axios to you. And so I think that depending on how these models are aligned or whatever they're trained on, you're going to end up inheriting those dependencies. And it's actually a bit of an issue, because code changes, it's dynamic, and even if these systems are using retrieval for some of this stuff, there's still some emergent behavior baked into the training data. And so a big open area of research is how do you keep the documentation fresh and up to date, using retrieval or other techniques, and have it formatted in a way that these models, through whatever systems they're employing, can deliver up to date information. And that's actually a really big problem.
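
[Editor's note: for reference, here are the two calls being contrasted. fetch has shipped as a global in Node since v18, so no dependency is needed; the URL below is a placeholder.]

```ts
import axios from 'axios';

// Built into Node 18+ and all modern browsers: no dependency required.
const res = await fetch('https://api.example.com/items');
console.log(await res.json());

// The same GET with axios, the dependency models tend to reach for.
const { data } = await axios.get('https://api.example.com/items');
console.log(data);
```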

BP That is the premise of these API licensing deals that we're doing: people are constantly answering new questions on Stack Overflow. If you have an API that can go there, you're going to be getting the latest and greatest information as frameworks and technologies change, and you're going to be able to do it in this RAG version that looks at and only uses accepted answers for the kind of problem you're posing.

JP Totally, and it's a big problem. So what libraries will you inherit? The answer is, whatever this thing is set up to use. I don't think you're going to care that much, to be honest. I think maybe some purists will hate that. Maybe you'll have some setting where it will write it from scratch. Okay, cool. You're not really going to care, I don't think. I'm sure you might. I'm sure in certain situations you could care. Obviously, I think you're going to care if you set something up to do something like a five day task and you're inheriting the code and you're actively working human in the loop with a system and you treat it like a coworker– then you're definitely going to care if it uses Axios or installs some extra library or does something your coworker wouldn't. But if you assume that these things get to coworker level, which I think is safe, then they're not going to use other stuff. They're going to work like your coworkers. And if they don't, they're going to be bad coworkers and you're not going to use them. So I feel like it's going to basically be the status quo. If you use a lot of third party packages, then they're going to use the packages you have. Just like if I gave a junior engineer a task or assigned them some sort of ticket and they installed like 30 deps, you'd probably be like, “Do we need all these? Are these the best ones? Why do we need these?” And you'd have a conversation, and maybe you do need them all, but maybe you don't. And then you'd be like, “Wait, no, go back. We already have this in the code base and we use that one.” Same thing. I don't think it's any different. I think it'll be about the same.

BP The difference, I guess, is that you can interrogate it. You're like, “I don't know why ChatGPT loves this Axios library.” You could ask it. Maybe it'll tell you.

JP By the way, it might be better. 

BP You can ask the junior dev to explain themselves and they'll do their best.

JP The other thing, too, is that the AI bringing those 30 things in might actually be better. Why? Because the AI is really good at those 30 things, and should you be having your own built in-house version of said thing? I don't know. It's worth looking at. And maybe there's something to it, because you'll work faster within the bounds of the model or system or service that's aware of all these things. It's already looking at the Stack Overflow answers for it. It's already in the retrieval set. It's in the emergent behavior. And it's just good at Axios. Maybe it's better at Axios than it is at Fetch. So maybe you actually want Axios– not because it's the most performant thing, which is, again, a net negative and I'm not a huge fan of that– but because you're going to be more productive. And so depending on your project, you might make that trade-off. I think that's going to be very interesting. Just like you would as a regular developer making trade-offs, you might also go with what the AI suggests every once in a while, or quite often. I think that's going to be more of the discussion, if that makes sense. “Will I use this library because the AIs are really good at it?” is a very interesting area of research.

RD And I've heard of security researchers discovering that some of the chat AIs really like suggesting a particular fake package, so they created that library on NPM and people started downloading their formerly fake library.

JP Totally. 

RD Are there ways, particularly in Vercel, to sort of limit that to make sure that they're not putting out janky code or code that is against your internal best practices? 

JP We have a service for code conformance that's going to be renamed Code Checks, and this is available to our enterprise customers. It's a conformance and alignment tool that helps you set rules for your code base. It's almost like a super linter. It goes beyond what ESLint can do, and it's deeply integrated into code ownership– who owns which area of the code base– and PR reviews. And you can set up rules in the system, like: if you import Stripe, you need to ping the security team for PCI compliance. Or if you import from the js-cookie package, or you work with cookies, you need to go through a GDPR review. You can set up these workflows in our system and build your own workflows, and we also include best practice workflows from our core engineering teams on Next.js and Svelte and beyond that encode what we consider to be best practices. You can turn them off, but they're around security, performance, waterfalls– things that you would want to check for. And that's, I think, where you're going to see the line of defense happen: not necessarily at the generation point. I think that's always going to be semi-untrusted, which also goes back to what we talked about with generative UI. A big area of concern is, when you fully generate applications that know your secrets and personal data and all this stuff, how are you going to constrain that to be secure? Same if you're evaluating code or installing code generated from a model– you probably always want to have some sort of safeguard check before you put it in production, is my hunch.
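
[Editor's note: Vercel hasn't published this rule format; the sketch below is a hypothetical illustration of the kind of import-triggered review workflow Jared describes, not the actual Conformance/Code Checks API.]

```ts
// Hypothetical rule shape: flag imports that should trigger a human review.
interface ConformanceRule {
  id: string;
  trigger: { importSource: string };
  action: { requireReviewFrom: string; reason: string };
}

const rules: ConformanceRule[] = [
  {
    id: 'stripe-pci-review',
    trigger: { importSource: 'stripe' },
    action: { requireReviewFrom: 'security-team', reason: 'PCI compliance' },
  },
  {
    id: 'cookie-gdpr-review',
    trigger: { importSource: 'js-cookie' },
    action: { requireReviewFrom: 'privacy-team', reason: 'GDPR review' },
  },
];

// A checker scans a file's imports and collects the reviews it requires.
function requiredReviews(imports: string[]): ConformanceRule[] {
  return rules.filter((rule) => imports.includes(rule.trigger.importSource));
}

console.log(requiredReviews(['react', 'stripe'])); // -> the stripe-pci-review rule
```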

BP I think that a chain of thought and a mixture of experts seems like the direction we're moving, and one of the steps along the way, or one of the agents along the way is going to be your cyber security expert, your checker. 

JP The first line of defense for sure. 

RD So I wanted to ask about the reasoning behind making this an SDK and not just a SaaS. A lot of AI stuff is kind of operating as APIs. What's the benefit of having this as an SDK? 

JP Sure. So the AI SDK started as– I think the original NPM name was AI Connector. And all it focused on was how do you do streaming properly. We saw a lot of people copying the same code again and again and again on GitHub when they were making a fetch to, usually, OpenAI and streaming down the result. And they were building chatbots and they were doing it kind of incorrectly. And so the first thing we built was a project called AI Connector, and what it would do is work with a couple of different AI providers, give you perfect streaming, and then give you some React hooks to build little chatbots. And that kind of exploded, because it was just an annoying thing people had to do, and it wasn't focused on prompting or anything like that. It was just focused on that weird part between the model result and the UI you need to build. And it turns out that's the perfect place for Vercel to innovate. And once it took off, we realized that there was a bigger opportunity to simplify the way in which you integrate with these AI models within TypeScript codebases, and it seemed that there was just this missing glue code that took you from, I'll call it ‘Python notebook’ to product, if you will. So you've got these frameworks like LangChain and LlamaIndex and maybe some other ones that have come up, and some of them are Python-first and some of them are TypeScript-first, but they're not deeply integrated into your web framework. And us at Vercel, being the maintainers of lots of web frameworks, we know how to integrate these things, and so we set out to build this SDK to be that glue, and it expanded from there. So now, with the AI SDK, you can work with Google, OpenAI, Anthropic, Mistral, llama.cpp, Ollama, or your own providers– you can build your own– and you can work with Vue, Next.js, Nuxt, Svelte, Solid. And it's that glue code, again. If you're building an AI native chatbot or experience, this is that missing piece you've been looking for to help you wire up and build these new types of experiences. So we call it AI SDK UI. Then when we acquired ModelFusion, we worked on something called AI SDK Core, which focused on the data fetching part. So most of these AI companies are very Python-first, as you can imagine, and while they have developer SDKs, they're a little bit bloated, and a lot of them are auto-generated, for what it's worth. And we know what you're doing. You're building chatbots. You don't need half this stuff. So we basically layered over a lot of the REST APIs to give you a better experience when streaming text and generating text to build these sorts of experiences. And then the last thing we do with the AI SDK is what we call ‘Generative UI,’ and that actually builds on top of React Server Components. And this is a very special area of the SDK that is almost like an extension of React/Next.js. It's a set of streaming utilities for server components that, when paired with AI models, helps you mix text and components into a unified experience. So instead of you just chatting with a model and it being text, or using RAG and it's text to text, what if, when you ask what the weather is, a beautiful weather widget is streamed down using a server component? Or if you ask what's the price of Dogecoin today, it gives you a beautiful interactive stock chart with no overhead whatsoever. That component could be loaded on the fly dynamically, no client-side JavaScript unless it's necessary. And again, it could be fully interactive.
And then we give you that glue code to help you build these next gen experiences, which we think is where everything is sort of going. And what's really interesting about that is, that same technology is actually the streaming technology we use in v0. And so in many respects, we open sourced the core tech of v0. So the same tech that gets you to this generative UI that's from premade components will be the same exact sort of streaming infra that will get you these fully generative interfaces we talked about earlier in the show. 
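
[Editor's note: a minimal sketch of the AI SDK UI pattern described above, based on AI SDK 3.x-era documentation. Package and method names have shifted between SDK versions (for example, the response helper on the streaming result was later renamed), so treat this as illustrative rather than exact.]

```ts
// app/api/chat/route.ts: a Next.js route handler that streams model output.
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = await streamText({
    model: openai('gpt-4o'), // any supported provider/model works here
    messages,
  });
  // Streams tokens to the client as they arrive
  return result.toAIStreamResponse();
}
```

```tsx
// app/page.tsx: the matching client component. useChat handles the wire
// protocol and POSTs to /api/chat by default.
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <p key={m.id}>
          {m.role}: {m.content}
        </p>
      ))}
      <input value={input} onChange={handleInputChange} placeholder="Say something" />
    </form>
  );
}
```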

BP I don't know if you consider them a competitor or a peer or whatever it may be, but this reminds me of a conversation we had with Matt Biilmann of Netlify, who was saying, “We're going to come into the software 2.0 era where the UI/UX can adapt on the fly to what the user wants.” And you're talking about the builder having this amazing facility to pull that down, or maybe even, in some cases, the UI adapting for the user as they do things, as they make requests, or as you understand their behavior.

JP Totally. I think that's where this is all going. And what we think is so incredible is that, from what we've learned and built with v0– which is, I think, the closest in-market example of generative interfaces that we have– we know that this can scale to full generation, to full dynamism if you want to call it that. I don't know what word you want to use– fully generative. It doesn't have to, though. It can also be used for premade or constrained generation, but depending on what you're doing, you can use this to pipe the model output straight through. If the model is spitting out React, it can stream that down and be fully interactive. And so what we think is really interesting about the AI SDK is that we know it's the building block that's the first step towards this future, and we're really excited about exploring it.

BP This is fun. Sometimes you listen to interviews with the folks at the big AI shops and they're like, “This is what we have now and I think it would be cool if in the future we had this or that, or it might be able to…” And you know that they're just playing with models. They're playing with the things that we're going to see in 6 months or 12 months or whatever. And for you, you've looked at it inside and been like, “Look, we could go full dynamism. Maybe that's not unlocked for end users yet, but you understand what it can do.”

JP Yeah. And it's been fun to prototype with it. There's actually a big security aspect that we have to figure out, too, when you go to full generation. There's probably a permission model that we need, but we're also really interested in federation. So if you're familiar with module federation, things get very fun and weird when you combine server components and federation, which is something that we're actively researching, where you could federate a component, maybe selected or generated by a different AI or different subsystem, into your app. And that gets into a fun, weird world of super apps and operating systems that are semi-generated on the fly.

BP Obviously I know what federated is because I'm a genius, but can you just for the audience? 

JP Sure. So federation. This idea of module federation has been around for a while, I think, but it's tied to this term ‘micro front end,’ which sounds like, “Oh my God. That sounds like a horrible idea.” Let me explain what it means, because it means a lot of things to different people. In a world of big mega corp, which is the core use case for this, let's just say you're a gigantic bank and you've got thousands and thousands of employees working on the banking app. If you were to put this into one application, it would be gigantic, and it would maybe take an hour to deploy or something like that. So rather than improving those tools, what they decided was, “Let's just ship the org chart.” And what if we could just have every little screen, or even component, applet, widget, whatever you want to call it, deploy independently? It would be microservices, but for the front end. And if this is already causing trauma, absolutely, I get it. It's a crazy idea, but it actually solves big core problems. This same sort of technology has been built into webpack using a tool called Module Federation, and there are a couple of people that are very active in this area and it's in production at a lot of companies. Next.js actually has not had great support for this, and I'll admit that humbly, in the past, but now we're actually tackling this problem. But instead of it being at the module level in Node.js where you federate to swap things out, we're actually interested in React federation, where, instead of the boundary being the module, it would be the React component. What this ends up meaning is that we can stitch the React server component streams together and fuse them and cross-stitch them together into one stream. And what this will allow is for you to have independent aspects of your application on one screen and deploy them independently. So you could have one main Next.js app, and maybe your customer support widget is deployed independently, or your sidebar or your nav bar or your call to action (CTA) could be remotely loaded as a separate component that gets deployed independently. And that causes all kinds of other problems at runtime with feature flags and what am I looking at and what if it fails, but assuming we can figure all that out, when you combine this sort of federated approach to assembling interfaces with AI, things get crazy. So you could have super apps or applets where you've got a chatbot, and when you ask what's the price of Dogecoin, you get the E-Trade applet that's made by E-Trade or by Bloomberg, or you could choose one and federate that out for consumers in some sort of super app thing. So you can imagine maybe GPTs get UIs in the future– not just RAG for data, but actually their own UIs. Or you could imagine, in a less crazy sort of scenario, you've got an intranet type of chatbot inside of your company and different teams are deploying different widgets and applets and components independently of the core main application. So by federating this out, you kind of get this internet, if you will, not of URLs, but of UI that's all routed through AI. And that's very, very compelling for this future generative operating system, where it's not just one person deciding what the future should be, but multiple different people, potentially companies and teams, all getting to have their own micro apps and applets function together.
And I think that's what you're going to end up seeing in the not so distant future, both in native and then also on the web. 
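
[Editor's note: for readers who want the concrete shape of module federation, here is a minimal webpack 5 host configuration. The remote name and URL are hypothetical; React server component federation, as discussed above, is research Vercel describes rather than a shipped API.]

```ts
// webpack.config.ts for the host app: load a remotely deployed widget at
// runtime instead of bundling it into the host.
import { container } from 'webpack';

export default {
  plugins: [
    new container.ModuleFederationPlugin({
      name: 'host',
      remotes: {
        // "support" is built and deployed independently, fetched at runtime
        support: 'support@https://support.example.com/remoteEntry.js',
      },
      // Share one copy of React between the host and its remotes
      shared: { react: { singleton: true }, 'react-dom': { singleton: true } },
    }),
  ],
};

// Elsewhere in the host app, the remote component is loaded lazily:
//   const SupportWidget = React.lazy(() => import('support/Widget'));
```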

BP I like when you said things are going to get crazy.

RD Wild West. 

JP Yeah, should be fun. I think the big thing for us is just that our job at Vercel is to democratize the web, and AI is going to be foundational to that, so we're very, very excited about it. To go back to the SDK, this acquisition of ModelFusion takes the SDK from a little connector service that was helpful for building little UIs and moves us towards, “Okay, if I'm building AI in TypeScript, this is my go-to toolkit.” We're not there yet, but that's the journey we're on. And I think for Vercel, we're still actively researching this new frontier of generative interfaces, and our work with v0 and the AI SDK and React and Next.js is all working towards a better way for us to interact with applications, products, and services, but also to build a better web, and that's what we're really, really laser focused on.

BP I love it.

[music plays]

BP All right, everybody. It is that time of the show. We're going to shout out the recipient of a Lifeboat Badge– somebody who came on Stack Overflow and provided a great answer to save a question from the dustbin of history. A Lifeboat Badge was awarded two hours ago to Pierce Darragh: “How to split documents into a training set and a test set.” We appreciate the answer, it's got 20 votes and has been accepted and over 17,000 people have benefited from this little piece of knowledge. So Pierce, thanks for your answer and congrats on your Lifeboat Badge. As always, I am Ben Popper. I'm the Director of Content here at Stack Overflow. You can find me on X @BenPopper. The way to get in touch with us is to shoot us an email, podcast@stackoverflow.com. We take suggestions for guests or topics, we have listeners come on the show, we're open to ideas. Maybe we should have live events. Maybe we should have a Discord. Who knows? And then the nicest thing you can do for us, if you enjoyed today's program, is leave us a rating and a review.

RD I'm Ryan Donovan. I edit the blog here at Stack Overflow. You can find it at stackoverflow.blog. And if you want to reach out to me with hot takes, cool tech, or ancient lore, you can find me on X @RThorDonovan. 

JP My name is Jared Palmer. I'm the VP of AI here at Vercel. If you want to find me on the internet posting memes, it'll probably be on X @JaredPalmer. If you're interested in what we talked about today, you can check out either v0.dev, that's for v0, or sdk.vercel.ai for the SDK. Or if you just want to start with everything, go to vercel.com/ai and have fun. Thanks for having me, guys. 

BP Yeah, it was a blast. All right, everybody. Thanks for listening, and we'll talk to you soon.

[outro music plays]