The Stack Overflow Podcast

Is this the AI renaissance?

Episode Summary

Paul van der Boor is a Senior Director of Data Science at Prosus and a member of its internal AI group. He talks with Ben about what’s happening in the world of generative AI, the power of collective discovery, and the gap between a shiny proof of concept and a product that people will actually use.

Episode Notes

Prosus, one of the world’s largest tech investors, acquired Stack Overflow in 2021.

Check out the annual State of AI Report from Nathan Benaich and Ian Hogarth.

Read our CEO’s recent post on Stack Overflow’s approach to Generative AI.

Connect with Paul on LinkedIn

Today’s Lifeboat badge winner is suvayu for their answer to How to put a big centered "Thank You" in a LaTeX slide.

Episode Transcription

[intro music plays]

Ben Popper Don't let siloed security tech get in the way of protecting your business. Cisco XDR simplifies security operations, empowering teams to act on what truly matters faster. Discover Cisco XDR at cisco.com/go/xdr. 

BP Hello, everybody. Welcome back to the Stack Overflow Podcast, a place to talk all things software and technology. I am Ben Popper, Director of Content here at Stack Overflow. I am flying solo today, but I am joined by a colleague of mine in a way, Paul van der Boor, who is coming to us from the Prosus AI Group. Hi, Paul. 

Paul van der Boor Hello, Ben. Good morning. 

BP So for folks who don't know, Stack Overflow was acquired by Prosus last year, and Prosus is an extremely large and well-thought-of tech investor. They have a whole portfolio of companies, including a number in the EdTech space, of which Stack Overflow is one. And I've gotten the chance to hang out with Paul once or twice to talk about what's happening in the world of AI, which is fascinating to me. I wanted to have him on the show today to talk a bit about what he's working on, how Prosus sees the world of AI developing, how the companies in the portfolio are using it, and some of the stuff we see coming down the pipe, a lot of which is kind of mind blowing. So Paul, thanks for taking the time to chat today. 

PV Yeah, thanks for having me. 

BP So give folks who are listening a little bit of a background. How did you get into the world of technology and then maybe AI, and how'd you wind up at the role you're at today? 

PV So I actually started in the world of aerospace engineering. I did my undergraduate in that space. And at some point after my undergraduate I started working for Siemens' hydro unit in India, specifically to try and build a software product that would automate their design. They've got these design standards across these hydro turbines that were all applied sort of manually by a small group of very smart and experienced engineers in Germany, and we wanted to figure out if we could automate this with software. So that was when I actually started to get into more of the software and ML decision-making systems, and then at some point I decided I wanted to do a PhD. I went to the US, to Carnegie Mellon, and did my PhD there. After that I joined industry, went into consulting for a little bit, and got very much involved in Data Science for Social Good, a foundation that I'm still part of today, to try and see how we could apply some of the AI and machine learning tools that we already had many years ago, 10 years ago, to social good problems. And after some time, I joined Prosus to help build the AI team that we have now, set up a couple of years back. 

BP Right, very cool. So Prosus, when I first heard about it, it was actually through the acquisition of Stack Overflow. I think it's perhaps a bit more well known globally than it is in the US. But in some ways it's an investor like a VC, in some ways it's a holding company, in some ways it has its own operations. So what does it mean for Prosus to have an AI team? What do you work on and how does that impact what Prosus does or how companies in the portfolio are able to use AI? 

PV Yeah, this is a great question because it's actually not one that has an obvious answer. There are tons of investors out there, and I think you'd be hard-pressed to find any with an AI team the size of ours to help their investee companies. But there's a very good answer to the question, and that is that Prosus, as you mentioned, is one of the world's largest tech investors. We've got many large technology companies, Stack Overflow of course being one of them, but many others in food delivery, marketplaces, FinTech, all over the world, many of them equal to or larger than Stack Overflow, with large tech and development teams. And with that portfolio of companies, folks at Prosus realized many years ago that there was a big opportunity to accelerate the use of machine learning throughout the group, and one way to do that was to build the central AI team that we have today, which is actually a relatively small team, about a dozen people or so, all experts in the sense that they have many years of experience under their belt. And through that, we basically help the portfolio companies do what they do in the field of machine learning bigger, better, faster. 

BP Yes, you were kind enough to invite me to a Prosus AI event recently where a bunch of portfolio companies got to hang out, and I could immediately see what the value is, because you hear companies, like you said, who work in classifieds or food delivery talking about how they use AI to optimize everything from delivery times to marketing. And then I say, “Oh, well I'm in marketing. Could we use something like that?” And then I can go back and talk to our data science team, or reach out to a colleague and ask them how they built that. So it was really interesting to hear from companies working in, like you mentioned, India and Brazil, where they may have a completely different perspective on their local market or on how things are done globally, but all working to figure out how to build better software. It was the first time I got to experience a Prosus family event, so that was kind of cool, and it was easy to see how companies can learn from one another. There were a lot of talks given that day about things people had productized or built and stuff of that nature.

PV Yeah, I'm happy you could join. This event is called the Prosus AI Marketplace, something we started only three years ago. At that time, by the way, the community was much smaller, but we realized this global community of data scientists, which was at the time maybe in the high double digits globally, was working on very similar problems, sometimes in similar domains, like the food delivery teams in India doing something similar to the ones in Brazil and other parts of the world, but also sometimes across very different domains or geographies, like search and discovery and personalization happening across the marketplace and so on, and these teams weren't necessarily aware of each other and what they were doing. So we started to organize this event called The Marketplace, really in name also meant to represent a marketplace: a vibrant community where folks exchange ideas, products, and knowledge, and get inspired by each other. And so we did it again this year, and I'm so happy and glad that you could join. By now the community is over 500 people, so it's grown quite substantially, including our Stack Overflow representatives. It's great to have everyone part of the family now. 

BP For sure. So I would love to dig into some of the things that were discussed there with you a little bit. As you mentioned, part of what's exciting about AI now is the approach: the different kinds of models, transformer models among them, that allow AIs to learn in one domain and then easily learn in another, or to take learnings from one domain and apply them elsewhere. What are some of the things you see happening right now that are most exciting, not just within our family of companies, but in the space more broadly?

PV Yeah. As you know, the theme of the event was generative AI, and a lot of the models that we see today in generative AI are predicated on the transformer architecture that you mentioned. And while that's been around since 2017, when some of the papers and the research around this were published, we see that recently there's been a real breakthrough in terms of applications and the quality of the tools and models that folks are building across many use cases. And so there's a ton of things to be excited about. More recently folks will have seen work on diffusion models using text prompts to generate images. We see some applications of that within the group, but also specifically large language models, and we've been working with them since the beginning, since the early GPT models came out, GPT-2 and so on. So we've been playing with those, and now that set of models, and not just GPT, there's a whole family; almost every week a new one is being released. By the way, the good news is that many of them are now open or open source. And the capabilities of these models, thanks to their size and the improved techniques that we have, are really mind blowing. So there are tons of exciting opportunities across our portfolio, especially in the education technology space but also in many of the others, which I'm sure folks will see a lot of in the next couple of months. 

BP Yeah. It really does feel like we're going through sort of an AI renaissance. There was an initial turning of the wheel with the ImageNet challenges and the realization that this neural net approach, which people had been working away at for decades without quite getting there, had finally reached a point, in terms of computation and data, where we could start to make breakthroughs. And then more recently it feels like there's been another turning of the wheel. And I think, to your point, correct me if I'm wrong, a lot of it has to do with, and this speaks to your background, this intersection of academia, open source, and industry, where people are able to share one another's work and build on it. And even if things are proprietary and can be productized, they can also be a stepping stone for others to continue to take AI forward. Do you see this as something unique? Is this something we've seen before in software? How do you look at this confluence of different sectors that is powering the rapid evolution and growth of some of these AI techniques and models?

PV Yeah, well it's certainly as you describe. It starts to feel like a renaissance. People ask us all the time about the pace of development, whether it's slowing down or accelerating, and sometimes it's hard to convey the excitement that we see when we're in the middle of all of the different developments coming through. It really is mind blowing if you look back even half a year and then see what we have today with some of these generative models. It's quite impressive. But of course the work that set the foundation for a lot of these technologies isn't from the last six months. A lot of this work has been happening for the last decade or so. And truth be told, most of the work actually happens within a relatively small number of research groups, whether they're corporate or academic, and increasingly corporate. And from our point of view, that wasn't necessarily good. You'd like that to be more distributed and less centralized. But what we have seen in the last 12 months or so, I think to everyone's surprise, is that while we were kind of assuming AI and its research, partially because of data assets and so on, was by nature a centralized endeavor, increasingly you actually have these collectives that are able to make significant breakthroughs at the level of some of the state of the art work that was initially done by corporate labs. So think of the Stable Diffusion folks, Hugging Face obviously being a great example, Eleuther, which is also linked to Stability AI, which are really paving the way for a model that is decentralized and more open in nature, and which in the end will benefit, or is already benefiting, many of us. 

BP Yeah, you'll have to remind me the name of the gentleman who came on who does an annual sort of state of AI report. 

PV Nathan. 

BP Yeah, we'll link to some of his public stuff in the show notes. But what he said, echoing you, that I thought was super fascinating was that we were used to seeing big breakthroughs from DeepMind and the Meta labs and things of that nature, OpenAI, but now what we see is that when they release something, everybody in these open source communities, these sort of decentralized groups that include academics and hobbyists and researchers, gets together, and a week later they've recreated it, and then they're working on their own version of it, and then somebody's forking it. So what's probably so exciting to me, especially at a place like Stack Overflow, is that you could go choose to work at a big lab like that or pursue a PhD, or you could just be involved as a hobbyist or as a startup or as a solo practitioner, and that kind of open access I think is pretty deeply aligned with what we do here at Stack Overflow, allowing people to be curious and give each other answers and figure stuff out on their own. So I thought that was super cool. 

PV Yeah, I completely agree. And what's also nice is that if you listen to or look closely at some of the work these collectives are doing, it really is innovative, because by nature they're more constrained in terms of resources, both compute and talent. So if you listen to Emad, the founder of Stability AI, he describes how that team was really focused on making that initial Stable Diffusion model, gigabytes in size, much more efficient, to the point that I can run it on my Apple MacBook. They're really figuring out how to compress this so you don't need all the compute. They've also done quite a lot of work on making this work more accessible to anybody. You no longer need an expensive GPU in the cloud to run it. So one of the big contributions of these collectives is the efficiency aspect of this work. 

BP Yeah, it has been a really interesting new development to say that not only can this be accessible at home, but there is an approach that says more data, and not necessarily massive compute, can produce results that are sometimes just as interesting or as powerful. Before, we were just thinking about it in terms of how many teraflops you were able to throw at this, but now parameters become equally interesting, and so I think that's a very cool development, and more accessibility. So I wanted to ask you specifically to talk about a few of the achievements that have happened recently that you see as very exciting, and to reflect a little bit on the near term. You mentioned GPT. I saw something interesting come out just this week, I think it was called Galactica, where it's kind of like a Wolfram Alpha but at another level. “Write me up an article on mitosis,” and it just spits it out, or, “Explain this formula to me,” and it gives you a rendering of it. It's kind of like a Jarvis for science and physics and things of that nature. Where do you see this technology headed, and where are you excited to play with it yourself or to see companies at Prosus and elsewhere leverage it? 

PV Yeah. So one of the things that we see is that increasingly these models are becoming open. So Galactica was developed by Meta and was made open. And I'm not sure, but I think there's a developing story that they just shut it down because folks were prompting it with certain things that were giving results that were not really acceptable. I don’t know the full story. 

BP Uh-oh. You’ve got to be careful. The Microsoft chatbot thing. Don't give me a good argument for eugenics here. I don't want to hear it. Yeah, exactly. 

PV Exactly. So I'm not sure what happened there, but I did see some things come by saying that they temporarily shut down the model. But anyway, I think it's good that these models are publicly accessible and can be scrutinized in such ways, to see what we can and cannot use them for yet, because of course a lot of this technology is still developing and we need to figure out what we can and cannot use it for in a safe and responsible way. But if you look at the applications and the things that are exciting, first of all just on the development of these large language models, if you see what the latest versions of these models can do today, it really is mind blowing. It's hard to describe what some of the state of the art in this field can do today in terms of answering questions, reasoning through certain problems, providing explanations of code, even suggesting code snippets. We've seen some of these models making their way into products like GitHub Copilot. So I have no doubt that we're just at the bottom of the S-curve and we'll see more exciting stuff come out of some of these models in the coming months. And in terms of the applications, we're trying some things in the group that are probably very much along the lines of what folks have seen. One example is taking errors that you sometimes get from a compiler or whatever that are not easy to understand, especially for learners, early beginners in certain programming languages, and using these large language models to help the learner understand where the error came from and what next action they should take, or how they should improve the code they're working on. This is something that we've actually put into production in one of our companies called Sololearn, where it's been very helpful to learners. The feedback has been extremely positive. And there are applications around creating more accurate descriptions of content and helping users generate higher-quality content. I think this will continue to happen as these models come out of the research labs and make their way into products across our portfolio, and without any doubt, across many other businesses as well. And one of the most interesting things now is that while the language models were already doing really well and showing good enough levels of maturity to make their way into products, you now have these multimodal models that are showing the same capabilities. Stable Diffusion is the hallmark one, but you have Midjourney, DALL-E 2 and so on that are powered by things like CLIP, which you probably know is an open source model from OpenAI that embeds text and images together to be able to make that jump between images and text. And the next step that we're starting to see early examples of is giving these language models the ability to interact with other tools, to actually be able to take actions, and I think that will be another step change. You've got a language model. Imagine you ask it something about a math question. Now, one of the emergent properties of these language models is that they can do some basic arithmetic, but they can't really handle more complex math. But now imagine that you ask a math question and the language model knows that this is related to some computation, and you give it access through some API to a calculator. 
And now it can actually run that calculation and retrieve the actual, factual, correct answer. The same is true for other things. If you're asking it for a fact, it might know it can go to a Wikipedia search engine and look for that fact in a reliable knowledge base. Or if you're asking it to do some action related to booking something on a website, you can give it a set of actions to choose from, in the sort of user interface that a human would use on a website, to actually carry out this action. So it becomes not just a large language model as we've seen it today, with a prompt that gives you a return, but a prompt that has access to various APIs through which it can actually carry out actions. 
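To make that tool-use loop concrete, here is a minimal sketch in Python. It is illustrative only: `call_llm` is a hypothetical stand-in for a real model API (it returns a canned, structured reply so the snippet runs on its own), and the "calculator" is just a small, safe arithmetic evaluator playing the role of the external tool.

```python
# Minimal sketch of the "language model with tools" loop described above.
# call_llm() is a hypothetical stand-in for a real model API; it returns a
# canned, structured reply here so the example runs on its own.

import ast
import operator


def call_llm(prompt: str) -> str:
    """Pretend model call: the 'model' decides it needs the calculator tool."""
    return "TOOL: calculator | INPUT: 23 * 7 + 11"


# A tiny, safe arithmetic evaluator standing in for an external calculator API.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}


def calculator(expression: str) -> str:
    def _eval(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("unsupported expression")
    return str(_eval(ast.parse(expression, mode="eval").body))


TOOLS = {"calculator": calculator}


def answer(question: str) -> str:
    reply = call_llm(f"Question: {question}\nAnswer directly or request a tool.")
    if reply.startswith("TOOL:"):
        # Parse the structured tool request, run the tool, and use its output.
        name, tool_input = (part.split(":", 1)[1].strip() for part in reply.split("|"))
        result = TOOLS[name](tool_input)
        return f"The answer is {result}."
    return reply


print(answer("What is 23 * 7 + 11?"))  # -> The answer is 172.
```

In a real system the reply format, the tool registry, and the final answer synthesis are all more involved, but the shape is the same: the model signals that it needs a tool, the harness executes it, and the factual result flows back into the answer.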

BP Right. I'm imagining the Stanley Kubrick scene where the ape first picks up the tool, so I think, yes, we could be making some big steps forward. One other thing I wanted to ask you before we finish up, something I thought was really interesting: you and I had a discussion before where you mentioned your beginnings in the world of hydro turbines, thinking about design and fluid dynamics and things of that nature, and we talked about some of these AI models' ability to understand the natural world and the real world in a way that is almost disarmingly effective. Take a look at a closeup picture of a retina and tell me if this person is at risk for heart disease. Or AlphaFold: I'm going to give you some ideas about how this protein might be assembled and you're going to come back to me with something, with a high degree of confidence, that would've taken years in the lab with a mass spectrometer to figure out. What is it about current AI and its capabilities that allows these models to interact with the natural world in such an interesting way? Is it the pattern-finding abilities? I don't know. To me, that's one of the things that gets me excited just at the thought of it, because it's opening up a window on the natural world in a way that might be difficult for us, and they're showing us things that are new. Kind of like how AlphaGo was like, “People have been playing Go for thousands of years, but you never tried this strategy before.” We've been looking at these proteins for a long time, but you didn't actually realize that if you just turn it upside down like this, maybe you have a useful medicine. What is it about these systems that allows them to interact with the natural world in that way? 

PV To be honest, I think we don't understand it all that well, certainly not as well as we would like. When you think about language models, which we've talked about now multiple times, this was just one of the sub-techniques, or sub-model categories, that we used to have in deep learning, or machine learning more broadly. It was basically a technique where you take a token out, you mask it, and you ask the model to learn to predict what the next token was, or the relationship between words in a vector space and so on. And it turns out that with that simple task and enough data, like some of these large language models trained on years of the internet and large corpora of books and so on, they can learn much more than just predicting the next token. You start to see some of these emergent properties like basic arithmetic and basic understanding. If you ask it what happens to ice when you take it out of the fridge, it will answer that it will melt. And does it really understand the physics behind that? No, I don't think so. We don't think so, but it does certainly start to comprehend some of the relationships that we would describe as the physical world. And because this technology is relatively nascent, a couple of years old, we're still asking and figuring out how exactly this level of understanding works and how far we can push it. 
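As a rough illustration of the objective Paul is describing, here is a toy Python sketch, with the big simplifying assumption that a table of bigram counts stands in for a neural network. Real language models learn the same "predict the next token" task with transformers over enormous corpora; the point here is only what the training pairs look like.

```python
# Toy illustration of the next-token objective: turn text into (context, next
# token) pairs and learn to predict what comes next. Real LLMs use transformers
# trained on huge corpora; a bigram count table stands in for the model here.

from collections import Counter, defaultdict

corpus = ("ice will melt if you take the ice out of the fridge "
          "because the ice will warm up").split()

# Each token becomes the training label for the token that precedes it.
counts = defaultdict(Counter)
for prev_tok, next_tok in zip(corpus, corpus[1:]):
    counts[prev_tok][next_tok] += 1


def predict_next(token: str) -> str:
    """Return the token most often seen right after `token` in the corpus."""
    return counts[token].most_common(1)[0][0]


print(predict_next("ice"))  # -> 'will'
print(predict_next("the"))  # -> 'ice'
```

Scale the corpus up by many orders of magnitude and replace the count table with a transformer, and the surprising part, as Paul notes, is how much more than next-token statistics the model appears to pick up.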

BP Yeah, exactly. It makes you at least feel that it's sentient sometimes. Not something you want to say out loud, but it is interesting. Very cool.

BP Hello, everybody. Welcome back to the special coda here at the end of the episode. I am back with Paul. We had our conversation about AI prior to the release, I believe, of GPT-4 and things that have spun off of that, and so much has changed that I wanted to make sure I invited Paul back just quickly to chat about what's happened. He predicted the next token in the sequence very successfully. We listened to the old episode and he had a lot of things to say about agents and APIs and AIs using tools. So Paul, I know you're very close to the subject so you got a lot of it right even before the news came out. But what has been your take on this sort of overwhelming last four or five weeks of news that has come out about the world of generative AI, and the real pivot from so many big technology companies to say, “We now need to focus here. This is maybe the next platform shift and this is where we want to put our focus”?

PV Yeah, great. Thanks, Ben. I think there are a couple of things that have happened since we spoke last. This has really accelerated into the mainstream, and in some ways the hype has gotten even hypier. There are actually a few things that have happened as a result that are extremely cool for us, and that I think everybody else is also seeing out there. One of them is this process of collective discovery that is happening across all sorts of teams, companies, and countries, both inside Prosus and out there in a very public way, in trying to figure out how we actually use these models for things we care about. And of course we were working on these things for a long time, but in some ways we are very limited by what we work on, the world we see, the world we live in, the problems we face. Having millions of people playing with these tools in this process of collective discovery has been extremely interesting and valuable, I think, for everyone that's looking to rethink how we build products, how we redesign the learning experience, and basically how we adapt the businesses that we have to use these tools. At the same time, a lot of folks have also gone out there and launched products built on top of these tools we spoke about last time, which are now much more generally available, and so it's exciting to see what those products can do and the users they can reach through all these companies trying to launch things on top of that. 

BP Yeah, I would agree with you. I think it's been interesting for people to say, “All right, we now know that there are some amazing capabilities for working with language, being a reasoning agent on top of a body of text. What does that mean for me?” And folks have applied it, like Bloomberg applied it to their financial data, and now you can speak to that data in a more natural language way. Other folks have applied it to their search engine. What if we replaced lexical search with semantic search? What would that look like? And there's been an amazing sort of surge of interest in what was a previously obscure world of technology. Vector databases are now the hot thing and everybody's talking about how they work and tokens and embeddings, and so for folks in the software industry and people like you and me who are interested in technology, it's a cool time. I also want to point out that I agree that some things have jumped the shark. Auto-GPT was the thing that was really big in the hype cycle over the last week or so. You can ask it to do something and it'll start spinning up agents to do different things. And I see people posting examples like, “Solve world hunger,” and Auto-GPT comes up with a few ideas and makes a Twitter account, but it doesn't really get very far in executing on its grand plans, so things that are exciting and also things that are overhyped at the same time. 
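Since lexical versus semantic search comes up here, below is a purely illustrative Python sketch of the retrieval mechanics Ben is gesturing at: documents and queries become vectors, and results are ranked by vector similarity rather than exact keyword matching. The bag-of-words `embed` function is an assumption made so the example is self-contained; a real semantic search system would use a learned embedding model and a vector database, which is what lets a query match documents that share meaning but few keywords.

```python
# Illustrative sketch of embedding-based retrieval: rank documents by vector
# similarity to the query instead of by exact keyword matching. The bag-of-words
# embed() below is a stand-in; a real system would use a learned embedding
# model and store the vectors in a vector database.

import math
from collections import Counter

documents = [
    "How do I reverse a list in Python?",
    "Best way to invert the order of elements in a Python list",
    "Connecting to PostgreSQL from Java",
]


def embed(text: str) -> Counter:
    """Toy 'embedding': a sparse bag-of-words vector."""
    return Counter(text.lower().replace("?", "").split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def search(query: str, docs: list) -> str:
    """Return the document whose vector is closest to the query's vector."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))


print(search("reverse a python list", documents))
```

With the toy embedding, the first document wins on word overlap; swap in real sentence embeddings and the second document, which shares far fewer keywords with the query, would also rank highly, which is the appeal of semantic search.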

PV Yeah, absolutely. I think if there's one thing that I really take away from how we spend our time today, it's that there's a big gap between the proofs of concept that are really easy to get out there, either through social media or through little demos that you can just post and share with folks because they're very easy and cheap to put together, and actually building a product that users will use time and time again, that isn't just a parlor trick, that behaves in a way that's safe and responsible, and that adheres to all the other quality and service levels we would normally ascribe to the products we put out there, at least at the scale that we have in the Prosus group. The gap between those two worlds is still very large, and so for me, the subtitle of where we live today is, ‘the building is just starting.’ We are really just starting to think about how these tools will need to make their way into products that operate at scale, not just with tens of people 1 out of 5 times, but with millions of people 9 out of 10, or 9.999 out of 10 times, in the way that we expect. And doing that requires a tremendous amount of work on essentially redesigning the software engineering stack that we need to put in place: all the way from the model, whether that's a commercial API or a model that we build in house; to all of the middleware, whether that's routing between different models, evaluation of the outputs, basically managing how you call different models at different points in time; to the steering of these models through prompt engineering and prompt chaining, but also moderation and so on. Then there's the piece that has to do with how these models and this whole workflow interact with the external world, whether those are knowledge bases, like the vector search databases you talked about, or ways to interact with other sorts of tools that actually allow you to take actions, whether it's Zapier or things like that. And then there's the application interface, and people sometimes underestimate this: we need to be really clear to users of these tools how these answers and this content are being generated, and make sure that the user still applies judgment on top of the things they're seeing and the content being presented to them, capturing their feedback, keeping them alert to where they need to pay more attention to validate the answers of the tools working under the hood. So that entire workflow, the entire stack, needs to come together for these things to move from a very catchy demo to something we can reliably put in production at scale. That's a lot of work, and that's the kind of stuff I get excited about, because we need to build tools and frameworks and teams and organizations to be able to do this. And all of that is unexplored territory, so we're spending a lot of time figuring out how we do this within our teams, within the group, in a way that we can learn from each other. I think this is also, by the way, the exciting part of being part of an ecosystem like Prosus, because we've got hundreds of thousands of developers in the group that can collaborate and share knowledge on these topics and how to build things in the way we think is right. 
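As a very rough sketch of the stack Paul outlines, here is a toy Python pipeline wiring together the stages he names: input moderation, routing between models, the model call itself, output evaluation, and a human-in-the-loop flag. Every function, model name, and threshold here is a placeholder invented for illustration, not Prosus's actual middleware; each real stage is a substantial system in its own right.

```python
# Rough sketch of the middleware between a raw model and a production feature:
# input moderation, model routing, the model call, output evaluation, and a
# human-review flag. Every function, model name, and threshold is a placeholder.

from dataclasses import dataclass


@dataclass
class Result:
    text: str
    needs_human_review: bool


def moderate(prompt: str) -> bool:
    """Placeholder input moderation: block obviously disallowed requests."""
    banned = ("make a weapon",)
    return not any(phrase in prompt.lower() for phrase in banned)


def route(prompt: str) -> str:
    """Placeholder router: send code-looking prompts to a hypothetical code model."""
    return "code-model" if "error" in prompt.lower() or "def " in prompt else "general-model"


def call_model(model: str, prompt: str) -> str:
    """Stand-in for a commercial API or an in-house model."""
    return f"[{model}] draft answer for: {prompt}"


def evaluate(output: str) -> float:
    """Placeholder output check (groundedness, toxicity, length, and so on)."""
    return 0.4 if "draft" in output else 0.9


def handle(prompt: str) -> Result:
    if not moderate(prompt):
        return Result("Sorry, I can't help with that.", needs_human_review=False)
    output = call_model(route(prompt), prompt)
    # Low-confidence outputs go to a subject matter expert instead of the user.
    return Result(output, needs_human_review=evaluate(output) < 0.7)


print(handle("Explain this Python error: IndexError: list index out of range"))
```

The value of sketching it this way is mostly to show where the seams are: each placeholder is a point where a team has to choose a commercial API or an in-house model, an evaluation method, and an escalation path to a human reviewer.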

BP Yeah, this must be an exciting time for you. I'm sure you're in high demand. All the portfolio companies want to sort of pick your brain and hear about what you've been working on. You mentioned in the previous one that you have been close to the GPT-type models for a long time and so I know you're traveling the world now trying to help all different Prosus companies in different sectors, as they do hackathons or things of that nature, figure out how to work with this. And as you said, maybe one of the most exciting things that's happening is everyone's figuring out how to realign, “Can we be a Gen AI-first or Gen AI-powered company,” the same way people said in the past, “Hey, we need to be an internet company of some sort now. Hey, we need to be a mobile company of some sort now.” And eventually it will just be table stakes. Nobody will talk about it that way. It'll just be like, “Well, every company is mobile to the degree that it needs to be.” But right now, people are trying to figure out, “How do we realign around this powerful new set of tools, this powerful new sort of mode of operating so that we can make the most out of it,” and it's been pretty exciting. 

PV Yeah, I mean, you're right. Folks are really spending a lot of time on understanding what all this is, what it means for them, and what it means for the teams that are working on products. Do they all of a sudden need to start learning how to do prompt engineering, the so-called hottest new skill? I think for me it's very important to distinguish between all the hype and the things that we are actually ready to start shipping. And with that entire new stack I just described, we're essentially building the bridge as we're crossing it. That's one thing that we're doing. We also need to be aware that there are lots of other things we need to figure out, and proceed with caution and wisdom and judgment on which use cases we feel confident we can already take to users, and which ones we operate more in the background. And I think this is something that, because we've been playing with these tools for many years, since the early incarnations, we have developed a useful amount of intuition around, whether it's generating content for learners and so on. With all of these things, we always insert this kind of output into workflows that have subject matter experts who can exercise judgment on whether the answer is actually ready to be shared with external users. So while it's maybe interesting and tempting to start using these tools to create output that immediately gets sent to users, I think we go back to the things we've been doing for a long time and make sure that you have this human in the loop, as we say, looking at the generated content. In that way you can help these teams, you can augment them, you can amplify their knowledge, because they don't need to spend the time creating things from scratch, but instead can really spend their time using their expertise, sometimes creating answers of their own, sometimes evaluating and verifying the answers generated by some of these tools. So summing this all up, it's also about setting the expectations right. The building has just started. While we do have an intuition for how these models work, they change all the time, so we start with the use cases we think are ripe for presenting to users, and then over time we mature our thinking, and eventually also how these businesses use these tools, in every way that we think makes sense.

BP Yeah, it's been a real privilege for me, because I'm passionate about this subject, to get to chat with the folks on the Prosus AI side. And like I said, I think it's really helpful to us as an organization, and I'm sure to all of your portfolio companies, to have folks who have been in this world of GPT for years, who didn't just start thinking about this when ChatGPT came out but were involved with it closely over its longer development. And that's, I think, what led you to be able to sort of accurately predict a lot. You and I had the conversation a couple weeks before and you said, “What's going to be really interesting is when we see them start to become APIs and use tools and act as agents,” and that has proven true. Those are some of the sort of developments that have spiraled outwards as people started to play with this stuff and that have really made a lot of news. And as you said, what's really interesting is that we kind of knew where it was headed, but now that things are open source and the whole world gets to experiment with them and play with them, we'll start to see use cases and new functionalities develop that we never would've thought of just on our own. So it's going to be a wild ride. Buckle up. 

PV I think so, absolutely.

[music plays]

BP All right, everybody. It is that time of the show. We are going to shout out someone who came on Stack Overflow and shared a little bit of knowledge and helped the community. Awarded two days ago to suvayu: a Lifeboat Badge for saving a question with a negative score by giving it an answer that now has a score of three or more: “How to put a big centered ‘Thank You’ in a LaTeX slide.” If you've ever wanted to thank somebody in a LaTeX slide, we have the solution for you. I'll put it in the show notes. I am Ben Popper. I'm the Director of Content here at Stack Overflow. You can always find me on Twitter @BenPopper. Email us with questions or suggestions, podcast@stackoverflow.com. And if you like the show, leave us a rating and a review. It really helps. 

PV Great. So my name is Paul van der Boor. I'm Senior Director of AI at Prosus. My role here is to help the folks in the Prosus portfolio do machine learning bigger, better, faster. If you want to have a look at what we do, we've got an AI and tech blog that you can check out on Medium, the Prosus AI tech blog, where we publish articles about our work, and you can find us there. Any questions, we'd love to get in touch. 

BP Awesome. All right, everybody. Thanks for listening, and we will talk to you soon.

[outro music plays]