In this episode of Leaders of Code, host Ben Popper, Stack Overflow CEO Prashanth Chandrasekar, and GitLab Field CTO Lee Faus explore how GenAI is reshaping software development practices, highlight the importance of critical thinking and problem-solving, and share challenges and lessons from their AI journey.
[intro music plays]
Ben Popper Hello, everybody. Welcome back to the Stack Overflow Podcast, another edition of our series, Leaders of Code. I am Ben Popper, one of the hosts of the Stack Overflow Podcast, and I'm here today with Stack Overflow CEO, Prashanth Chandrasekar. Prashanth, nice to have you here.
Prashanth Chandrasekar Good to be here, Ben.
BP We are also going to be chatting today with Lee Faus, who is a Global Field CTO over at GitLab. Hi, Lee.
Lee Faus Hey, Ben. Hey, Prashanth. Thanks for having me today.
BP So first things first, I'm pretty familiar to our podcast audience. I think Prashanth has been on a few times, but Lee, they don't know anything about you. Give them a high level overview. How'd you get into the world of software and technology, and what brought you to the role of Global Field CTO? I don't know what a Field CTO is to be honest, so tell us what that is.
LF Sure. So I started off my career actually as a high school teacher, so I taught math and computer science in Raleigh, North Carolina. I went into professional training, and then I ended up working for my first startup back in 2000– a company called TogetherSoft that got acquired by Borland. That was sort of my first professional coding job as a developer. I worked part in the field, part writing code. Followed my career through Red Hat, did a lot of work with open source, and ended up starting my own consulting company doing a lot of stuff with DevOps, automation, platform engineering– whatever you want to call it today– all of that good stuff. I ended up going from there to work at GitHub, so I was at GitHub for almost five years.
BP You've been at all the Gits, okay.
LF I've made it through all the Gits. And then I spent two years working at a startup that recently got acquired by Harness called Armory, and that's what brought me to GitLab. And so what intrigued me about GitLab is a field CTO sort of does the role of an internal CTO, but for our customers. I get pinged probably two or three times a week about this fractional CTO mentality. A lot of small startups can't afford to hire a full-time CTO. So you could think about me going in and working directly with other executives inside of the GitLab customer base and helping them understand best practices, helping them learn from other customers. And then I bring feedback back to our product and engineering teams around what our customers, where they see gaps, things that they would like to see us focus on, things like that.
BP So Lee, talk to us just a little bit about that feedback loop. People inside of GitLab are building new products. They feel like they have ideas for what customers want and they have ideas about what's coming next, but then you're out in the field actually hearing from folks, “Well, this is what I want,” or, “Before you give me something new, just make sure that this is fixed,” or, “If it ain't broke, don't fix it.” And then you bring that back and try to align with the folks who are not as much in the field. How does that feedback loop work?
LF So the feedback loop at GitLab is very transparent. So because we're an open core company, we share a lot of the issues directly with our general public, completely transparently to people to be able to see what we're actually planning on working on. So I'll go and sit with a customer and I'll ping a product manager in the meeting room while we're having a conversation and, “Hey, are we thinking about doing anything along these lines?” And a lot of times the product manager will be like, “Hey, you know what? This has actually been something that's been top of mind,” and they'll shoot me over a link in Slack and I'll pull it up in front of the customer. I'll be like, “Is this sort of what you're thinking about?” and they're like, “Ah, you know what? That's close, but what if it could also do X, Y, Z?” And I'll go directly into the issue and start making a comment and say, “Customer X, Y, Z is also interested in providing additional capabilities.” And that feedback loop helps us create more well-rounded PRDs– product requirements documents. And then a lot of those things, it's my job to follow up with the product teams to sort of understand where are we in the actual engineering cycle. So a lot of times, just like any other software company, I've got to be sometimes the bearer of bad news and, “Oh, you know what? I know that we talked about that possibly shipping in summer. We've had some things change and we're hoping to get to it sometime September/October.” But a lot of the time, what our customers really want is they just want to see slow progression.
BP And they want to be heard. They want to know they’re listened to.
LF That’s right. And what they don't want is they don't want to be surprised. All of a sudden a feature disappears, all of a sudden a new capability shows up and they've got a whole bunch of automation built around the way things used to work, or an API, the data changes inside of an API. So slow and steady has always been the course of making sure that we get the right things out to our customers, which is very interesting when we start to talk about things like generative AI, because generative AI has a tendency to want to build big bang type things rather than that slow, incremental type mentality.
PC One question I did have for Lee, actually, given the rapid changes that are happening in the industry. Lee, GitLab is notorious– in a good way– for being heavily remote and fast, et cetera, but also for a very, very good documentation culture. How does GitLab prioritize all the feature inputs coming from field CTOs like you and from all the other sources, and ultimately, how does it weigh what you are sending? Because you are out there– literally every week there's a different use case being discovered in the enterprise for Gen AI– how are you able to get an engineering leader or a product leader to prioritize that relative to some other feedback that they believe is a little bit safer, let's assume, because it comes from a large number of customers?
LF I think it comes back to anything else in this world– it's an aggregate, not just an individual customer, so it's about whether we hear it over and over again. Bill, our new CEO, has heard a lot from our customers over the last three months since he joined about the things they really want to see us focus in on, and those are themes that we're focusing on for fiscal year ‘26. One of the other things that we've initiated– I remember Sid talking about this last year– that has been a huge win for us is something that we call ‘co-create.’ Co-create allows us to bring together an engineer from GitLab and an engineer from a customer; the two of them align on a fix, on a change, on whatever they want to see the product do, and they work on that particular feature together and bring it all the way through to a release. That has been a way for us to increase the contributions from an open core model. Obviously we have to make money, so there are things that we charge for, but there are a lot of things that our customers want that don't require payment, and if they're willing to contribute those things to the open core model, man, that's just a win for everybody in the community. So finding that happy medium across all three of those– from paid, to co-create, to pure open core, open source delivery– has been a huge win for us.
PC That by the way, what Lee described I think makes a lot of sense and it sounds like you've got a good process, obviously prioritizing more obviously the predominant feedback that's across customers, which is great. I was also thinking about in the context of just the rapidity of change, Lee, to your early observation, just how many things are possible seem to change, or what possible things can be unlocked seem to change every week. And so that just sort of is moving at somewhat of a rate that I think is somewhat unprecedented, so it does keep it very nimble and dynamic, in my opinion, around some of the prioritization discussions. I think there are a lot more of that happening now than ever before.
LF There's something that we're playing around with internally right now, which is imagine a world where you're using generative AI and you're going through your sprint planning and you say, “Hey, we're going to go work on ticket 1234.” You pull that into the sprint planning and generative AI, an assistant agent, whatever you want to call it, goes off and says, “By the way, I found 36 other tickets that are really similar to this one. Would you like me to just automatically add those in as a corollary to the first one?” “Yeah. You know what? The developer is already going to be in that section of the code trying to fix the one. Why not just try to do all of them at once?” And that's just going to make it easier for us to be able to start to comb through the backlog and get through things in a much faster pace rather than one ticket at a time.
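The ticket-grouping assistant Lee imagines boils down to finding backlog items similar to the one being planned. Here's a toy sketch using plain bag-of-words cosine similarity (the ticket texts, IDs, and threshold are invented for illustration; a production system would use embeddings over the real backlog):

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity over bag-of-words term counts.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# A hypothetical backlog: ticket ID -> summary text.
backlog = {
    1234: "checkout page throws error on empty cart",
    1301: "error on checkout when cart is empty",
    1417: "dark mode toggle does not persist",
}

def similar_tickets(ticket_id, threshold=0.5):
    # Find other backlog tickets similar to the one pulled into the sprint.
    base = backlog[ticket_id]
    return [tid for tid, text in backlog.items()
            if tid != ticket_id and cosine(base, text) >= threshold]

print(similar_tickets(1234))  # → [1301]
```

The assistant in Lee's scenario would then offer to attach the matched tickets to the sprint item so the developer fixes the whole cluster in one pass.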
BP Let's head over to the topic you brought up– generative AI, non-deterministic. So Lee, I'll let you start. What do you see as some of the best practices or some of the things people are actually adopting, and then, Prashanth, we’ll throw it to you.
LF The thing that is amazing is it seems to change weekly. I mean, something that I'm like, “Man, I really wish generative AI could do X, Y, Z,” I wake up on Monday morning reading my latest newsfeed and I'm like, “Oh my gosh, there it is.” I feel like somebody's got generative AI reading my mind as I'm sleeping and producing features as I'm thinking about them. When it comes to generative AI, one of the things that we're seeing a lot with our customers is the challenge of how you do things incrementally without a large set of breaking changes. Starting a new project with generative AI– and this is the thing that probably frustrates me the most– even with our customers, they start their POCs with a brand new project, and I try to explain to them, “This is not where you're going to get the most productivity,” and I have to walk them through it. The power of GitLab is around being able to share best practices across the entire platform, and when I go in and show somebody– the most expensive change to any software product is when it's in maintenance. Being able to go in and create a branch in Git and open up an MR in GitLab, and letting me take a junior or a somewhat seasoned developer and allowing them to go and select some text and say, “What does this code do?” or “Could you write me a unit test ahead of time?” I'm not a huge fan of writing unit tests after I've already built the functionality. I would much rather follow coding best practices, so, “Hey, go write me a BDD test for this particular issue,” or “Write me a unit test for this particular issue,” and then have me take a first stab at writing the code and allowing it to do inline suggestions as I'm writing, but making sure that I've got the tab open for the BDD test and the unit test so it infers that context as it's writing the implementation, and then being able to use basic Git capabilities.
We have not done a great job teaching people how Git really works. I come from a time when we used ClearCase and SVN and CVS, and I see people going almost immediately from a local commit to a push. And I'm like, “You're not understanding the power of Git.” What if I go create my BDD test first and do a commit because I'm at a good stopping point, and then I go do my unit test? Okay, so now I do another commit and I'm stacking my commits on top of one another. Now if I start going down the path of doing an implementation and all of a sudden my generative AI takes me down this really weird use case and starts generating a whole bunch of stuff that just makes no sense, I can go back into Git, do a git reset, go back to the last known good head version, and immediately start from scratch. I don't have to worry about trying to do a whole bunch of undos and trying to figure out where I was. So if I can really use the best features of generative AI with the best features of Git, I can make these small incremental changes. Then when I get ready to do a push, I can go through and analyze all of the change sets that I'm about to push, which allows me to make a much richer push to the server with a lot more detail. And that's something that I think is missing as we're trying to use generative AI for everything– “Hey, write me my commit message. Hey, write me a description for the push,” all of these things– and it's like, “Do you actually know what you just pushed? Because somebody's going to come and do a review, and if it's something that does not look like something you wrote, there are going to be some people with a lot of questions.” So we want to find that happy medium, and that's what I'm seeing with a lot of our customers– they're still trying to figure out where that happy medium lives.
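The stacked-commit workflow Lee describes can be sketched in a few Git commands. This is a toy illustration (the file names and commit messages are made up): commit the BDD test, commit the unit test, then throw away a bad AI-generated implementation with a reset while keeping the tests intact.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

# Stopping point 1: commit the BDD test on its own.
echo "Feature: checkout" > checkout.feature
git add checkout.feature
git commit -qm "Add BDD test for checkout"

# Stopping point 2: stack the unit test on top.
echo "def test_checkout(): assert True" > test_checkout.py
git add test_checkout.py
git commit -qm "Add unit test for checkout"

# An AI-generated implementation that went down a weird path.
echo "nonsense generated code" > checkout.py
git add checkout.py
git commit -qm "WIP: AI-generated implementation"

# Reset to the last known good commit: the tests survive,
# the bad implementation is gone, no manual undos needed.
git reset -q --hard HEAD~1
git log --oneline
```

Because each stopping point was its own commit, `git reset --hard HEAD~1` discards only the bad implementation commit and its working-tree changes.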
BP You make a lot of good points there. Prashanth, talk a little bit about your experience. I think for Lee, it's very much in the code base, in the IDE, like he said, talking about push and commit. But from a Stack Overflow perspective, what are some of the things that you see being successful or adopted in the gen AI space either internally at Stack or among our customers?
PC Lee characterized it really well and opened up a lot of the topics of consideration within companies and enterprises, so I'm not surprised, considering he spends that much time with customers. I would say, in concert with what he's describing, there is obviously a lot of excitement about using this to automate the simplest of tasks. I went down the path of writing code recently using a bunch of Gen AI tools after a long break from doing that, and I was quite pleasantly surprised how quickly it's able to get stuff written in a basic sense, which is exciting, especially if you are, let's say, a more senior developer looking to automate away a bunch of the simpler tasks so you can go a lot faster on the more complex, mentally intensive parts. And to Lee's point, it's actually somewhat of a double-edged sword for a junior developer inside these companies and enterprises, especially because all of them are considering what kind of person they should hire and how they know they're hiring the right kind of person. Nobody comes across as a junior developer anymore– everybody seems somewhat competent because they're using AI tools– so I'm sure interview processes are changing. But there's a lot of, I would say, trepidation within companies as they leverage these tools en masse. We're seeing a lot of people who are using it in early pilots and getting good benefits based on the things I just described, but when they think about broadening the usage across these enterprises– let's say at a big bank; we have pretty much all the banks as enterprise customers of ours on Stack Overflow for Teams– there is a significant amount of change management work to make sure people are comfortable with all the potential pitfalls around privacy and security and compliance, where obviously GitLab plays a very nice role at that point in the ecosystem.
But that, I would say, is a predominant point, and so efficiency and productivity seem to drop beyond that initial pilot group as people introduce more humanistic elements to leveraging these tools across the company. Having said all that, I would say the places where we see the use cases most prevalent are obviously in software development, but also the easier and more obvious use cases where you're seeing a ton of repeat information– for example, things like customer tickets. The customer support flow or the customer ticket flow, I think, is one that is ripe for disruption in this space because there's so much repetitive information. That's an area that's been around for many, many years, so it's not exactly novel, but I think we are seeing a lot more competition there, and obviously the efficacy and the quality are actually pretty high based on what people are building. I think coding is still a TBD based on what I described. Depending on the company or the organization's adaptability and ability to really drive change across the company, it seems to be stuck on the change management points, and of course the complexity cliff that some of these tools hit when you really push them to do things that are more advanced– when you're thinking about multiple variables and different contexts to keep in mind at the same time. Then you're forced to go back to an earlier point in what you were doing before you can progress. So there are clearly limitations, which is why we are very excited about our knowledge-as-a-service strategy, which is to effectively surface Stack Overflow content at the right place, right time, wherever the developer is in their workflow, so they can trust what they're actually building. Even if they're using a Gen AI tool, they know it's grounded in truth, whether that's in the enterprise knowledge base or from a public dataset.
BP So Lee, I want to ask you a question. What do you see in terms of non-technical or less technical staffers outside the engineering organization in terms of adoption of AI tools? And are they able to contribute in a way that's meaningful to, say, the velocity of code generation or product releases or product fixes now that they can have a conversation in natural language and come away with their own web app, or a change maybe to a landing page that before would've required some of an engineer's time?
LF I think it's actually twofold, Ben. We're finding that our developers are able to be more strategic in working with people, let's say, on the product side or on the marketing side or even on the documentation side. So somebody goes ahead and writes some docs for the website, and the docs are just wrong– it gives me more time to go in and review them in more detail because I'm not so concerned about the change that I've written in the code. And then on the flip side, it's also allowing people who are non-technical to review things that are technical. I've seen a lot of our customers creating safe places for people to experiment with things like you described– “Hey, we're going to create a new pricing tier. Can I get somebody on the marketing team to go ahead and add a new column to the webpage, using whatever frameworks we're using internally, to add that new tier with the little bullet points and stuff like that?” And it's pretty interesting when you look at things like basic HTML– I mean, HTML looks like Microsoft Word at the end of the day. When I look at GitLab specifically and our customers, the place where we see our customers wanting to take the non-technical use cases is specifically around things like security remediation. A lot of times I've had to have a security-specific engineer come in to remediate an issue, and I'm not just talking about the basic, “Hey, let me bump my NPM package from version 1.12 to 1.18,” and magically four criticals go away. This is more around the idea of, “What if I've got a segment of code that is related to a vulnerability? Oh, and by the way, I've got 10,000 projects inside of GitLab, and 50% of them all have the same critical vulnerability.
How do I make it so I can apply that change, that best practice, to all 5,000 of those projects at once?” So with this combination of being able to create reusable AI agents, AI assistants, things like that, we're now seeing developers do things that are a little bit more focused– things that can give people the sort of superpowers I would normally expect from a security engineer, and I can now hand them to somebody who might be a junior developer to apply.
BP Prashanth, I think the question I sort of set out for Lee there at the beginning was kind of what do you see in terms of empowering less technical folks outside the engineering organization? I know Stack Overflow, for a lot of its customers on the enterprise side, does extend beyond just developers to a wide swath of technologists or other knowledge workers, so tell us a little bit about what you're seeing in terms of how AI is empowering them and maybe allowing them to collaborate in a meaningful way with some of the developers on their teams.
PC I think it's a pretty big boon, with some level of caution, as I previously mentioned, in the context of some of the junior devs using these tools as a crutch to get work done and not really understanding, to Lee's point, what exactly happened when something breaks, because ultimately they're on the hook to fix things or to build things. With that caveat put aside, I think it's a huge boon. We even see this in our own organization, where historically some of our best people are those who sit in a certain functional area but actually have the ability to code. Being able to code is a huge boon because you can get a lot of stuff done. Let's take marketing as an example: to be able to understand brand, but ultimately be able to design something that takes an idea all the way to concept– let's say we want to rebrand a company's site or its logo– all of that is now at the fingertips of the brand marketer to do very, very easily. I suspect we will see a lot more of that, where you have these 10x brand marketers, 10x content writers, et cetera, who now have this additional machine-powered super intelligence that makes them a lot more productive and a lot more effective in their roles, because they can showcase what's actually in their minds, go faster to market with their ideas, and test things. So for the non-developer, I think this is a huge boon because it, again, lowers the barrier to entry and really requires you to put on more of a thinking hat, versus previously, where we were just shuffling a lot of paper around– sending it to dev teams, re-creating multiple drafts, sending it back for edits. This is all now at your fingertips, so I think it does unlock a lot of potential for folks, which is exciting.
BP So we've spent a little bit of time here talking about what we're seeing out there and some of the ways that we feel folks should be careful about using AI. And we just talked a little bit about how empowering it can be for non-technical teams and some of the increases in productivity there. Lee, I'll start with you. Are there challenges that you see within your own organization, or obviously you're out in the field with many, many other large organizations. What are the challenges they're facing in their journey to adopt AI and what recommendations would you have about these areas of friction?
LF When I was a professor at NC State University, one of the hardest things to do as a developer was maintenance: taking somebody else's code, understanding what it was supposed to do, and then being able to fix a bug or something else. One of the things that I think AI is bringing forward is we're realizing it's not about the syntax. JavaScript, TypeScript, Java, Rust– the syntax doesn't matter anymore, and we're figuring out that the real power of computer science is around people's critical thinking skills and problem solving. And when we sprinkle a little bit of creativity in there, we're able to do some really magical things with the computers that we have in front of us. And we're seeing a new generation of startups, not just in the Gen AI space– we're seeing real progress. At a customer I was at last week, it was shocking to me how quickly, all of a sudden, with generative AI we're starting to see progress in things like self-driving vehicles: how we're able to reverse engineer protocols from sensors, tying that into people who are figuring out ways to design new chipsets to run more efficient ECUs inside of vehicles, and then how that relates to sharing that information in a sharing economy using tools like GitLab– showing how we do embedded development with people in, say, healthcare, and what that means for using those same chipsets to drive communication inside of an ER room. Those are the things I see when I look at where development is going: we're seeing a push to the edge, and nobody really knew how to build for the edge. We got stuck in a Kubernetes world, building containers and building full-fledged SaaS applications, and now we're trying to figure out how we take SaaS applications and attach them to the edge.
Home automation, self-driving vehicles, the connected ER– those are the places where, if I was talking to somebody who was, let's say, in college and looking to be a computer science or computer engineering major, those are the areas I would tell them to focus in on. You're going to have to learn Rust, you're going to have to learn C++, be able to use generative AI as a learning tool, and then be creative around how you use prompting. The one thing that I've found very interesting is that we're not doing a great job with prompting. I see so many people I work with using GitLab Duo, and they'll say, “Oh, GitLab Duo is not generating the code that I want,” and I'm like, “Well, walk me through the prompt that you used.” And they show me the prompt and I'm like, “This isn't Google. You don't sit there and say, ‘Show me the best selling shoes for running.’ That's not how you build a prompt.” So when I show them how to build a proper prompt, and I talk about the differences between a system prompt and a user prompt, and how you use RAG, and how you can feed additional information from Stack Overflow– pulling in that knowledge worker information and attaching it to your prompt– you get so much more clarity in the response. And what I'm realizing is that the people who really excel at using generative AI today are figuring out how to connect all of these things together and use it in a way where you're not just treating it like Google and asking questions. It's really understanding how to assemble information together.
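The prompt structure Lee contrasts with a Google-style query can be sketched as a system prompt plus retrieved context plus a specific task. This is a minimal illustration– the snippets and wording are made up, and a real setup would retrieve the context from an actual knowledge source rather than a hard-coded list:

```python
def build_prompt(task, retrieved_snippets):
    # System prompt: sets the role and constraints once, for every request.
    system = (
        "You are a senior developer. Follow the team's style guide, "
        "write tests first, and say which snippet informed each decision."
    )
    # RAG step (simulated): retrieved knowledge is labeled and prepended
    # to the user prompt so the model grounds its answer in it.
    context = "\n\n".join(
        f"[snippet {i + 1}] {s}" for i, s in enumerate(retrieved_snippets)
    )
    user = f"Context:\n{context}\n\nTask: {task}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Hypothetical snippets standing in for retrieved knowledge-base content.
messages = build_prompt(
    "Write a BDD test for the checkout flow.",
    ["Accepted answer: use pytest-bdd scenarios for checkout tests.",
     "Internal wiki: the checkout service exposes POST /cart/checkout."],
)
print(messages[0]["role"], "->", messages[1]["role"])  # → system -> user
```

The point of the structure is the separation Lee draws: the system prompt carries standing instructions, the retrieved context carries grounding facts, and the user prompt carries only the specific ask.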
BP I think one of the most interesting things that you said there was that it's no longer about what programming language we're using, it's about critical thinking. The one thing that I haven't seen Gen AI do yet, and this is kind of getting to your Google point, is provide insight or suggestions like, “Where am I going to find product market fit? Where should I go next month? I have these features– which one, if I prioritize it, is actually going to make my company?” There's still too much chaos in the world for these tools to solve a question like that, and so that's where great human judgment and great human creativity still come into the mix if you want your company to succeed. Prashanth, what are some of the areas where, either inside of Stack Overflow or with the folks we're working with, you see areas of friction, and then after that, what are some of the areas looking out to the future, as Lee said, that you're excited about, and some of the skills you feel like developers are going to need to succeed in that coming era?
PC I think the friction point, I think it comes to the earlier change management point that I was making, especially in large companies. I think that they are not necessarily ready to adopt it full-fledged, only because of the concern around can they trust the output, et cetera. A lot of our own surveys suggest that people are very enthusiastic about using AI tools, but they doubt very heavily around the quality and the accuracy of the information that is being leveraged. So to Lee's point, by augmenting or using things like RAG and indexing with quality real time data sources, and especially when it's rooted and attributed back to those sources, I think is a really great way to mitigate that risk. And I think we've been able to do exactly that with many enterprises with our enterprise product with Stack Overflow as well as our API offerings so they can have direct access to the data. And in fact, by the time this podcast launches, we would've launched publicly our internal Overflow API data feed, which is for all enterprises to use to do exactly what Lee is talking about. So that I would say is the predominant hesitation, which is people's concern about accuracy and trusting the data that they're using for their AI models. In terms of where the world's headed, I think having said all that, if you think about the scaling laws and the fact that they seem to be even breaking to a degree with things like DeepSeek’s announcement, I think it doesn't take a lot of imagination to think about where this is all headed, and so I think you have to be building for the overused phrase of where the puck is going. And I think with that in mind, I think it's going to be a phenomenal unlocking of people's potential because you're going to have potentially these super intelligent agents. They're not going to be infallible, but I think they're going to do maybe 80 to 90% of what you need very effectively. 
And with that comes the need to be actually very good at, as Lee was describing, things like prompting which is a very specific skill that we also now have a space on Stack Overflow, because we are the place where people learn from each other around various software development topics and programming languages, so we actually created a prompt engineering Stack Exchange just for that reason, because we do believe a lot of people will need to be skilled at that to be able to be successful. And it takes a little getting used to because back in the day when we first started using Google, I think the art of actually searching for information, this is different obviously, but even that took a little while for people to really master. But when you got it down, you could get a lot done. And so search being one of the fundamental benefits of Gen AI in this wave, I think there's a huge benefit to prompting the right way to produce exactly the right input. Especially if you augment, to Lee's point, with the right information sources and knowledge sources, then you can cobble together or assemble and orchestrate, whatever word you want to use, an outcome that is closest to your idea. And I 100% agree that the education system probably will change and that people will focus on fundamental problem solving skills, critical thinking skills, and this naturally will come to the more senior developers now only because they've gone through the fundamentals of learning to write great code with all the old school topics like object-oriented programming that we all learned about, but the junior developers, I do worry for them. I hope that they do in fact go down ‘the right path’ and do things for themselves and don't take the shortcuts and ultimately get to being effective, because at some point they’re going to be dealing with a lot of complexity and code, and they're going to have to reason out of it with their human brain versus just using a machine to do it. 
I think that's kind of where we should all be assuming the world is going and make sure our junior developers are well-prepared for that world.
BP For folks who are listening, as Prashanth said, the internal Overflow API is super interesting and gets to one of the things I heard Lee mention, which is, how do you create a RAG system so that the data that your AI agent is pulling from, you know the ground truth, it has your context, so it's able to be more effective. That's kind of the dream. It's everybody's dream. Prashanth, the platonic ideal of enterprise search. We finally got there.
PC Indeed.
BP Just look through every document that was ever inside this organization and every email and every Slack thread and just find me that thing that I know we talked about once, and I want to get that back.
LF The challenge that I'm seeing is, when we take all of history, we realize that there are some things in history that we would like to keep hidden, and unfortunately they come back. If somebody doesn't know the context of why something was written the way that it was 5, 6, 7 years ago, all of a sudden it comes back in its bad form instead of the form that it morphed into, which is the new good form.
[music plays]
BP All right, everybody. Thank you so much for listening to another episode of Leaders of Code. We hope you learned a lot and enjoyed it. Obviously, if you have questions or suggestions, you can always email us, podcast@stackoverflow.com. But what I want to do right now is shout out someone who came onto Stack Overflow and shared a little bit of knowledge or a little bit of curiosity, and in doing so, earned themselves a little bit of reputation. To Rafsan Uddin: “How to compress a string using GZip or similar in Dart?” a Populist Badge. That means Rafsan's answer was so good it got more upvotes than the accepted answer, so congrats on your badge, and thanks for contributing a little bit of knowledge. As always, I'm Ben Popper, one of the hosts of the Stack Overflow Podcast. You can find me on X @BenPopper, and hit us up, podcast@stackoverflow.com. Or the nicest thing you can do is leave us a rating and a review, tell somebody about the show, and suggest they check it out too.
LF So thank you so much for having me. My name is Lee Faus. You can find me on LinkedIn as Lee Faus. I am the only one on LinkedIn. You can also find me on X. You can also find me in many different places under the same handle, so feel free to hit me up. I love to learn from other people, so if you've got ideas, things like that, please, let's have a conversation online.
PC Great to have you on, Lee. Thank you for joining. My name is Prashanth Chandrasekar, I'm the CEO of Stack Overflow. You can find me on LinkedIn under my name. You can find a lot more about our products on our website, stackoverflow.co, which is where all our products are listed. In addition, of course, the .com website, which has all our 60 million questions and answers that hopefully continue to help you. Thank you again.
BP All right, everybody. Thanks so much for listening. We appreciate it, and we will talk to you soon.
[outro music plays]