The Stack Overflow Podcast

Let’s talk large language models

Episode Summary

The home team unpacks their complicated feelings about AI, the Beyoncé deepfake that got K-pop fans’ hopes up, and the pandemic’s ripple effects on today’s teenagers. Ben, the world’s worst coder, tells Cassidy and Ceora about building a web app with an AI assistant.

Episode Notes

Our recent Pulse Survey showed how technologists visiting Stack Overflow feel about emergent technologies. The consensus is clear: AI assistants will soon be everywhere, and developers aren’t sure how they feel about that. Check out the podcast here or dive into the blog.

Learn more about the emergent abilities of large language models (LLMs)

For more on the intersection of AI and academia, listen to our episode with computer science professor Emery Berger or read his essay on how academics are coping with AI that can ace exams and do everyone’s homework.

Catch up on the adventures of the worst coder in the world.

Congrats to user d1337, whose question How to assign a name to the size() column? won a Stellar Question badge.

Episode Transcription

Ben Popper Listen to season two of Crossing the Enterprise Chasm, hosted by WorkOS founder, Michael Grinich. Learn how top startups move up market and start selling to enterprises with features like single sign-on, directory sync, audit logs, and more. Visit workos.com/podcast. Make your app enterprise-ready today.

[intro music plays]

BP Hello, everybody. Welcome back to the Stack Overflow Podcast, a place to talk all things software and technology. I am Ben Popper, joined for the first time in a while by my wonderful co-hosts, Cassidy Williams and Ceora Ford. How’re you doing, y'all? 

Ceora Ford Good, how are you? 

Cassidy Williams Hello!

BP Hello! So Ceora, I know you need to start out as always with something K-pop related. Tell me the latest and I'll see if I can find a segue into something else. 

CF Okay, cool. So I was scrolling through Twitter just earlier, and I obviously get a whole bunch of K-pop updates there, and I came across this one tweet where somebody posted a video that was just audio of Beyoncé being interviewed. And she's talking about a BTS member and how awesome she thinks he is and how she's so excited for his new album, and it's all AI. 

BP Deepfake.

CW Oh, dang! 

CF It's all fake. 

BP It got you. The voice didn't sound computerized at all? 

CF Yeah. And the voice certainly didn't sound computerized. I could tell that it wasn't Beyoncé, but it definitely, after some refactoring, could be. I can see the potential of this actually being realistic, and so I just thought that was interesting, because I think on our last episode together we talked a little bit about AI art and music and stuff like that. So I don't know how to feel about it or what to think or whatever, because I don't know. I was just kind of blown away. 

BP It’s overwhelming, yeah.

CW It's wild. I recently experimented with Eleven Labs, I don't know if either of you have heard of them. They're in this space specifically for generating audio, and I experimented with my own voice and it was trippy. I put in a bunch of audio samples of my own voice, then I gave it a blog post that I wrote and had it read the post, and I recorded myself reading the same post and compared the two. The weird thing was that the cadence was very similar. The timing of the audio was within a second. The cadence was perfect. The only thing it was really missing was the dynamics of my voice. There was no emotion in it. But it got the sound of me right, where if someone didn't know me very well, they'd be just like, “Oh, dang, that's Cassidy.” But if either of you heard it, you'd be like, “This isn’t her.” 

BP But also, like you said, how much data? You gave it a few minutes or a few seconds?

CW Less than 10 minutes of audio, yeah. 

BP Exactly. So if you took every audio track from here and just fed it a couple of hours' worth of podcasts, it would start to get better and better. I tried to do the voice cloning thing back in 2017/2018, and with a couple of hours of audio it got close, and now it's this one-shot thing. Like, “Give us 30 seconds and we'll put out something that's a passable fake.”

CW Right, and it generated the audio in less than 30 seconds too, which is wild. 

BP Wow, it's so crazy. It's so crazy. 

CF It's so trippy. It really is.

BP It's trippy. We had a crypto wave that was going to change the world, and then Web3 and VR, and now AI, and it’s like we've been down this road before, but I guess the difference here is that the results are a little bit more tangible. It's not like, “This will change banking someday and this is how we'll all do it, but we're not there yet.” It's like, “Try out this magic trick and see if it works,” and it does work and there's no disputing the results. 

CF Yeah, I think this is a little bit different than the other trends or fads, whatever you want to call them because of that. Because there are concrete results that you can see or read. People are writing whole articles, resumes, all kinds of stuff with AI. They're making audio and music and art and stuff. I don't know how to feel about it because I see the potential for good and bad. And I feel like I say this all the time, but there's no regulations. There's nobody putting any restraints on any of this, and so the possibilities are endless and that's a good and a bad thing. 

CW For better or for worse.

BP Right, right. So I wrote an article a while back called, “Ben Popper is the Worst Coder in the World,” and I've been at Stack Overflow for a long time and dabbled in learning to program and learned a lot about it on the podcast, but just always felt like it's something I'm not naturally good at and so trying to invest a lot of time to get to the next level is not worth as much as other things I could work on. But the rise of these AI coding assistants got me thinking again, like, “All right, it's time to go back to the well.” So I was trying to make my dog park app. Super simple, go to the webpage, tell it what time you're going and what your dog's name is, and that's it. And the thing that's crazy is, on the HTML/JavaScript side, it's become a natural language interaction. I say, “Make a webpage that does this,” and it does it. And then I say, “Change it so the buttons are like this and so it displays like this,” and it does it. The question I have for both of you if you’ve played around with it is the backend. So then I was like, “All right, I need a database. Write me some code for a database,” and it did. It was like, “The best thing you could do is this in PHP.” And I said, “Oh, I'd rather use this other database,” and it said, “Okay, I'll rewrite this but I'll write it in Node.js.” Even that was a little bit scary where it's just like, “Oh, no problem. You want it in this language or that language? I'll do it.” But in the final phase where I had to connect the database to the web app, there was some error and the AI assistant couldn't debug itself. It couldn't tell me why this error was happening. I had to Google, I ended up at Stack Overflow. That's not a plug, that's just what happens. 

CW It's just what happens. 

BP It is really striking how far it can take you. And then of course there's still this sort of plateau you reach that it can't get past. I don't know, what do y'all think? 

CF I have a question. How much control does it give you over the code? Do you have any access to the code or is all that gated and they just say, “Here's the product.” Or can you make changes to it if you want to?

BP Yeah, that's a great question. So you can say, “Rewrite the code above to do x,” or, “Can you rewrite the code above to do y?” Or you can go in and say, “I rewrote the code above with these changes and I want you to do x,” and it will try to do that. 

CF Okay. Because I'm wondering, if the database isn't really working out, whether you could go in and fix it yourself and basically end up with an app that was partially created by AI. That feels so crazy to say. 

BP Yeah. Well I was always copying the code over and running it separately on a webpage through a CMS. So it was generating the code, but it wasn't running inside of the assistant.

CF Okay, that makes sense. 
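For anyone curious what that kind of AI-generated scaffolding might look like, here is a minimal sketch of a dog park sign-up backend in Node.js with Express, using an in-memory array in place of the database step Ben describes. The route names, fields, and port are illustrative assumptions, not the code his assistant actually produced.

```javascript
// Hypothetical sketch, not the app from the episode: a tiny dog park sign-up API.
// Assumes Node.js with Express installed (npm install express).
const express = require('express');

const app = express();
app.use(express.json()); // parse JSON request bodies

// In-memory stand-in for the database Ben asked the assistant to write.
const visits = [];

// Record a planned visit, e.g. { "dogName": "Fido", "arrivalTime": "2023-03-20T17:00" }
app.post('/visits', (req, res) => {
  const { dogName, arrivalTime } = req.body;
  if (!dogName || !arrivalTime) {
    return res.status(400).json({ error: 'dogName and arrivalTime are required' });
  }
  const visit = { id: visits.length + 1, dogName, arrivalTime };
  visits.push(visit);
  res.status(201).json(visit);
});

// List who's planning to be at the park, soonest arrival first.
app.get('/visits', (req, res) => {
  const sorted = [...visits].sort(
    (a, b) => new Date(a.arrivalTime) - new Date(b.arrivalTime)
  );
  res.json(sorted);
});

app.listen(3000, () => console.log('Dog park app listening on port 3000'));
```

Swapping that in-memory array for a real database is exactly the step where, as Ben found, the assistant's suggestions start to need human debugging.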

CW I think it's really one of those things where it'll always work to a point until things get complex. And so kind of like what you said, when it started hooking things up, you started to need to be the human figuring out, “Okay, why is this actually not working?” A friend of mine is on the board for a university and she recently went and gave some lectures there and had a bunch of meetings. And in this Econ 101 class –Economics 101– she was giving a lecture on generative AI, and more than half the class was doing all of their homework with AI. Their homework was just completely done by it, and they were saying, “I don't understand why this is a bad thing.” All these students were just like, “It's nice because I don't have to do it, great.” And meanwhile she's like, “Oh, no.” But the thing is, what she had to really explain to them is that it works to a point. They are in an Econ 101 class. Most people could probably say, yes, supply and demand, and macroeconomics versus microeconomics– very, very basic stuff. But once you get beyond a certain point, it can't answer certain ethical questions or larger questions, just because it's not a model that knows everything. It's a model that predicts how likely the next word is. That's what these large language models are. And so these will get better, but I don't see them fully replacing anyone for a long time. Someday we're probably going to look back on this episode and laugh, but these things can only help us up to a point.

CF Yeah. I wonder, if the students in the Econ 101 class were quizzed and it was like, “You can't use anything. It's not open book,” how much they would know, especially the students who used AI to do their homework. Because I'm thinking about the accountants and stockbrokers of the future, are they going to know their stuff? Because for me it's a little bit different if you're a professional using AI. I know some people use it for writing articles and things like that, and the likelihood of you actually knowing your stuff is pretty high. You got the job, you're already working, you have working knowledge. So I get wanting to use AI to do your job a little bit, because you know you know how to do it, but for whatever reason you don't want to.

BP No, I think your point is really rock solid, Ceora, which is like, how much retention is there and how much learning is there? Like Cassidy said, this is Econ 101. You read the book, you have the homework, you input the questions, it gives you the answer, maybe you look at the answer. If you then had to do a quiz with just a paper and pencil, would you remember it as well as if you'd done the homework yourself? I’d guess probably not, but what do I know? I don't know, it's an open question. 

CF And then I wonder too, because with AI, once things get complex it's a little harder to actually use it to do things and so that's when we have to step in. That's when you actually have to step in with the knowledge that you have and fix the code or whatever it is. If you're using AI to do the foundational basics, the homework and the stuff while you're learning, when it comes time for you to pick up the slack of the AI and deal with the complex stuff, will you be able to? That's kind of what I'm thinking about.

CW That's the big thing. This turns into a whole larger conversation too, just about the timing of this with freshmen in college who finished high school in the pandemic and spent a lot of very, very formative years in it. And what was particularly interesting– again, my friend was telling me all of these stories about it– is that at this school there were so many students who had just gotten used to being very, very isolated and not being around classmates, learning remotely, and also kind of just getting away with a participation grade rather than actually learning the material, because that's just what happened in the last two or three years of their schooling. 

BP Wow. That’s so intense.

CW And so between that and the timing of the AI, they're just like, “This is just how it is. I don't understand what the problem is.” And the disconnect there, she was saying, was really challenging for all of these board meetings to process: all these students saying, “I don't understand why this is bad. I'm showing up to class. Who cares if I don't do the work?” 

BP Well, yeah. I guess to play devil's advocate, it's a real question, and I have this with my kids now, who are in grade school, where it's like, “You’ve got to learn how to do this long division.” And it's so painful and nobody wants to do long division, and honestly, in life you will never have to do this. That's the thing, you'll have a calculator and a spell checker and nobody out in the world expects you to do long division in your head. It's not a useful skill. And so with the AI assistants now it's that to the nth power, where it's just like, “If a computer can do this for me, then it's not something I need to learn, because it will always do it for me in any situation. I need to focus on doing things it can't do,” because otherwise, all things being equal, like Ceora said, you're going to be using these tools at your job anyway. 

CF This is really interesting because this is a topic I feel like I've thought about so much, especially the next generation of professionals who spent formative years in the pandemic. Because I graduated high school a couple of years before the pandemic happened, so I missed that phase of being a high schooler and not doing all the typical high school things because you're living in a pandemic. I didn't have to deal with that; I was trying to get into tech during the pandemic. So I see the difference between me and people just a couple of years younger than me. I read a tweet once –I spend way too much time on Twitter, but anyway– where someone was saying how that age range of like 14 to 20, they're stuck. And so many people were in the comments saying, “I feel like I haven't aged since the pandemic happened. It started and I was 14 and now I'm an adult.” They're stuck in time. So it's like, “It started when I was 15 and I'm 18 now and I'm technically an adult, but I don't feel like I've progressed at all.” So all of this just makes me think that we have no idea the long-term impact that the pandemic is going to have on our future as a society. And it does worry me a little bit. There are some jobs where you can get away with not knowing too much and having things automated and Googling on the job or whatever, but there are also a lot of jobs where you need to know stuff. You have to know your stuff because people's lives are on the line, stuff like that. So I just wonder how this will impact that kind of thing.

CW Yeah, it's spooky to think about honestly. 

BP I guess Cassidy, what you said rings true. I was talking to somebody the other day who is a social worker at a high school, and I hadn't even really thought about this but she was saying that she plans a lot of social events for them because they don't know how to do that. They didn't do it for two or three years and now it seems alien. And she's super popular at school, the best guidance counselor, because she will plan the party and they'll all put away their phones and hang out. And it's hard to put yourself in those folks' shoes, because you lived that part of your life, it was formative, you did it. It's hard to imagine missing that and being like, “I don't get how to do this.” But I guess they're going to have to spend the next few years experimenting if they want to, getting back to IRL and seeing what that's like. 

CF Yeah. I just think the long-term effects the pandemic is going to have on the world, I'm interested in seeing how that turns out. I'm kind of sad that I have to live through it. It would be better to read about it. 

CW Yeah, I wish this were like a movie instead. 

BP Yeah, yeah.

CW That was something this friend of mine has talked about, and other friends of mine who are in academia have said the same. For example, a lot of Greek life –the sororities and fraternities– normally recruit super heavily from the freshman classes, and this year and last year were the first time they said, “Actually, we might need to ban them from our group,” because they don't know how to act and they resort to vandalism instead of talking out problems. There are so many social cues that you'd normally just pick up in high school or from interacting with people, and they've truly just been at home, isolated and watching YouTube.

BP Oh my God, this is the most dystopian episode yet. Oh boy. 

CW I know, and unfortunately it's real. 

CF Yeah, and that's the thing. I can't believe how much two or three years in the pandemic has affected especially that teenage age group. And I have no idea how it is for children because I don't interact with kids that much, but that teenage age group is so, so affected. I can't believe it.

BP I think I have a tiny bit of insight there. Not to say that there weren't people who had problems with screen time addiction or video game addiction or internet addiction before the pandemic. There were and there probably still will be, but I definitely think there's a cohort of kids unfortunately, like these college students, who spent most of the pandemic on a tablet or a phone and now don't know how to operate outside of that, and built deep social connections there, spent eight hours a day playing Minecraft with friends, and now that they're being asked to go back to school and do stuff without it, are really struggling.

CW So many people I know who transitioned from being teachers to being tech workers because of pandemic things were saying that it was almost like you were taking the students' rights away if you said, “Put your phone down.” The students were just freaking out about it, because it's like you said, they just got so used to it. And even some younger people that I know, we were talking about our screen time, and I was saying, “Yeah, I'm trying to reduce my screen time. I've got four hours a day on my phone, not a huge fan.” And then they show me and they've got like 13 hours a day on their phone. That blows my mind, but that's just what they have been growing up with. 

CF Yeah. They have a whole life on the internet. And that also makes me think about keeping your kids safe on the internet. That's a totally different topic but I just thought of that just now. It definitely affects children and teenagers as far as their social development goes and emotional regulation and stuff. 

BP I heard a venture capitalist talking the other day. He was like, “What I wish is that I had a little slider, so if my kids are going to spend x hours a day on social media, I can tell the algorithm that 25% of that is going to be math and science videos.” This is a nice thought, but– 

CW Good luck, sir. 

BP Yeah. You can lead a horse to water. I guess to bring this back to some of our earlier conversation, there are people now who are having very intense relationships with AI bots. I forget which one it was recently, but they had basically tweaked a bot so it could be flirtatious, it could be playful, because it read the whole internet. If you wanted it to be somebody you met on Reddit, it could be somebody you met on Reddit. And then they tweaked it again, put some guardrails up, and people were very upset. They lost this partner that they had gotten to know, were really into communicating with, and enjoyed talking to.

CF When did that movie Her come out? 

CW I know, right? I was just thinking about that. Gosh. 

CF 2013. I did not expect that 10 years later it would be reality. I did not expect that to happen. 

BP Yeah. And then again to bring it back to this show, out of all the technologies that we polled developers about, AI-assisted tools were the ones they said were most likely to be a part of everybody's lives within a year or two. They were not, and this is an important qualification, their favorite. They felt much more positive about technology aimed at sustainability, open source, and machine learning, which is a component of AI. So it wasn't like AI was the thing that they hoped and dreamed for, but it was the thing that developers, at least the ones who visit Stack Overflow, were like, “Yep, no getting away from this wave that's about to crash on us.” 

CF Yeah, I think I agree with that response, and I think the reason why it's not a favorite is because –I know for me anyway this is what I'm thinking– there's no regulation. There's no end to all the things you can do, so that to me is kind of scary and I'm sure other people feel the same way.

BP All right, so maybe this will be my savior. I saw Andrej Karpathy, who's thought of as an excellent programmer, tweeting out some inspiration from a hackathon about how these new large language models might be all you need, even for a back end. So you can say, “Hey, I want you to store some data like this,” and then you can just start sending it stuff. And then you can say, “Sort it by this,” or “Time rank it by that,” or “Delete it like this,” and it will do it. So maybe my dog park app in its next iteration will have a large language model, not just as the front end, but as the back end too. We'll see.
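To make that idea a bit more concrete, here is a minimal sketch of an “LLM as the back end” loop in Node.js, assuming the OpenAI SDK is installed (npm install openai) and an API key is set in the environment. The model name, prompts, and data format are illustrative guesses, not Karpathy's actual hackathon demo.

```javascript
// Hypothetical sketch: treat a chat model as the app's entire data layer.
// Assumes the OpenAI Node SDK and an OPENAI_API_KEY environment variable.
const OpenAI = require('openai');

const client = new OpenAI();

// The whole "database" is just a string the model rewrites on each command.
let state = 'visits: (none yet)';

async function command(instruction) {
  const completion = await client.chat.completions.create({
    model: 'gpt-3.5-turbo', // assumption: any capable chat model would do
    messages: [
      {
        role: 'system',
        content:
          'You are acting as a back end data store. Apply the user instruction ' +
          'to the current state and reply with ONLY the updated state.',
      },
      {
        role: 'user',
        content: `Current state:\n${state}\n\nInstruction: ${instruction}`,
      },
    ],
  });
  state = completion.choices[0].message.content;
  return state;
}

// Natural-language "queries" instead of SQL, as in Ben's example.
async function main() {
  console.log(await command('Store a visit: Fido is coming at 5pm today.'));
  console.log(await command('Store a visit: Luna is coming at 3pm today.'));
  console.log(await command('Sort the visits by arrival time.'));
}

main().catch(console.error);
```

Each command simply asks the model to rewrite the app's entire state, which is the charm of the idea and also why it would be far slower and less reliable than a real database.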

[music plays]

BP All right, everybody. It is that time of the show. We want to shout out someone who came on Stack Overflow and helped to spread a little bit of knowledge. Today we will give out a Great Question Badge. “How to assign a name to the size column,” awarded eight hours ago to d1337. Thanks for asking a great question. You've helped 85,000 people learn something with your curiosity, so we appreciate it. I am Ben Popper. I'm the Director of Content here at Stack Overflow. You can find me on Twitter @BenPopper. I don't know if you'll be able to see what I share there. We'll see if the API endpoints are fixed by the time this podcast airs. But you can always email us, podcast@stackoverflow.com, with questions or suggestions. And if you like the show, leave us a rating and a review. It really helps. 

CF And I'm Ceora Ford. I'm a Developer Advocate at Auth0 by Okta. And you can find me on Twitter, my username there is @Ceeoreo_. And I also have a website, you can reach me there: ceora.dev. 

CW Nice. Whole other point of contact in case the API goes down. 

CF Yeah, exactly.

CW And I'm Cassidy Williams. I'm CTO over at Contenda. You can find me @Cassidoo on most things. 

BP All right. Thanks for listening, everybody, and we will talk to you soon.

[outro music plays]