The Stack Overflow Podcast

The creator of PyTorch Lightning on the AI hype cycle

Episode Summary

The home team chats with William Falcon, an AI researcher and creator of PyTorch Lightning, about developing tooling for the AI ecosystem, open-source contributions, what happens when widely hyped technology needs to scale, and why he’s bullish on experienced developers using AI but not so bullish on new devs doing the same.

Episode Notes

William is the CEO of Lightning AI and the creator of PyTorch Lightning, the lightweight PyTorch wrapper for high-performance AI research.

Dive into their docs or explore the developer community.

ICYMI: Across tech, layoffs are boosting share prices.

Follow William on Twitter or connect with him on LinkedIn.

Shoutout to Brian61354270, who earned a Lifeboat badge by answering ModuleNotFoundError: No module named 'distutils' in Python 3.12.

Episode Transcription

[intro music plays]

Ben Popper Intuit developers are using AI, data, and open source to power prosperity for millions of consumers and small businesses around the world. Learn more about how Intuit is building an AI-driven technology platform at intuit.com/stackoverflow.

BP Hello, everybody. Welcome back to the Stack Overflow Podcast, a place to talk all things software and technology. I'm your host, Ben Popper, joined as I often am by my colleague, Ryan Donovan. Today we have a very special guest: Will Falcon, who is the creator of PyTorch Lightning. Ryan and I have been learning a bit about PyTorch as we work on getting ourselves up to speed with everything Gen AI and where it might fit into the future of the Stack Overflow business and public platform knowledge community, so we're excited to chat today with someone who's building tools that lots and lots of developers are using. Will, welcome to the program.

William Falcon Ben, Ryan, thank you guys for having me on the show. Very excited to hopefully share some fun AI facts with you guys today. 

BP Sweet. So Will, for people who don't know, just tell them a little background. This project came out of work you did both as a student at NYU and a researcher at Facebook? Is that right? 

WF It started much earlier when I was an undergrad at Columbia around 2014-15, but I didn't open source it until 2019, and I really got it scaling when I was at Facebook. We were training models on thousands of GPUs and there were like 10 people there. Half of them are at Character.ai today and the others are in different startups, so I think most of us have left except the hardcore researchers. But we were pushing the boundaries of scaling models back then and put all that knowledge into PyTorch Lightning, and it's kind of become the standard for that today. It's funny because in 2019 we were training models on like 2,000 GPUs in a Facebook cluster, and today people are like, “How do I train on 60 GPUs?” It's like, “Well, you can't just build that on your own.” And you guys as developers know, Lightning is kind of like React and PyTorch is kind of like JavaScript. We're at that stage. If you guys remember when React and Angular came out, everyone built their own versions of it. That's kind of the stage that we're in today, although people are now realizing that they shouldn't build their own version all the time.
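
To make the React/JavaScript analogy concrete, here is a minimal sketch of what PyTorch Lightning abstracts away, assuming a toy model and placeholder data (none of this is code from the conversation): you define the model and its training step, and the Trainer owns the loop and the multi-GPU plumbing.

```python
import torch
import torch.nn.functional as F
import lightning as L  # PyTorch Lightning's newer "lightning" namespace


class LitClassifier(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)  # toy model, purely illustrative

    def training_step(self, batch, batch_idx):
        x, y = batch
        # Lightning calls backward() and optimizer.step() for you
        return F.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# The Trainer owns the training loop; scaling out is roughly a matter of flags,
# e.g. L.Trainer(accelerator="gpu", devices=8, strategy="ddp").
trainer = L.Trainer(max_epochs=1, accelerator="auto")
# trainer.fit(LitClassifier(), train_dataloaders=...)  # dataloader omitted in this sketch
```

The point of the analogy: the framework handles the boilerplate (device placement, distributed strategy, checkpointing) so the loop you would otherwise hand-roll largely disappears.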

Ryan Donovan Right, just get the right tools. Get something that makes it simpler for you to train it up.

WF It's just a natural thing that engineers have, which is that I need full control of the system. It's like, “You do, but do you really need to implement Redux from scratch every time because you want to build a new website? That's kind of silly.” 

BP Right. So when you were creating this, you were solving some of your own pain points. How did you scratch an itch that led to something which, as you said, now has been touched by so many people through the open source community? 

WF I hope that more people start things this way. I was a grad student, and ever since I started writing software I've always wanted to reuse everything I build, so I've open sourced pretty much everything. For me, the reasons are always really organic– give back to the community. If I'm going to take the time to solve a problem once, why not let other people reuse it? It just happens that this was a problem a lot of people had, but I've open sourced thousands of things that didn't take off either. What I find sad now is that, because there's VC money and other stuff involved, a lot of people are using open source as a distribution mechanism for business reasons instead of out of an innate desire to give back to the community. So for better or for worse, that's how Lightning was born, and largely today, that's still how we operate because we make money in other ways on our product. We don't need to monetize the open source side of it. It's just how we want to enable you guys to get the same scaling and not have to do a PhD like we all had to do to learn these things.

RD So you founded a company that's doing tooling for the AI ecosystem, and we talk a lot about AI here, and I think there's a little bit of AI fatigue going on. Where are we in the hype cycle, do you think?

WF Man, I hope it's over though. We're all fatigued by it. I'm not going to throw shade at people. I think there are some cool new entrants in the field and I welcome everyone to join. I think a lot of us have been doing this for a long time. I first got into AI probably in 2013-2014, before any of these things existed. And I think today the AI hype cycle is at the solution phase, where people –and by the way, I think these models are great– try to use them in something real and it doesn't really work, because it is very hard to scale these things. I've deployed systems as a software engineer before I was in AI, and deploying a regular web app is not terribly complicated. You have microservices, you do horizontal scaling, you beef up instances when you need to, but in AI, it doesn't work that way because AI has different patterns that you don't have in regular software. Your code could still work but the model could still crash because there's a gradient issue or something weird. There's math involved, maybe your math is wrong, the data is wrong. So it's less deterministic than software. So I think that software engineers coming into it now are a little bit upset about that because they're like, “Oh my God, it's not translating.” And it probably won't, because it is a very different paradigm, 100%.

BP That makes sense. One of the things we wanted to discuss with you, along with where we sit in the hype cycle, was how you think about evaluating the quality of output. So something Ryan and I have been thinking a lot about is, let's say you're at a company where your developers have embraced code generation inside of the IDE. They have assistants who are doing stuff for them. How do you go about evaluating whether this is adding to productivity or developer experience and making them happier, versus introducing security risks or flaws and bugs in the code that mean we're actually going to have to work harder, or at the very least not save any time?

WF So I'm very bullish on very good developers augmenting with AI. I'm not super bullish on okay or newer developers augmenting with AI, because they tend to just get lied to by the model. I use ChatGPT all the time for random things. I'm the CEO, but I still code– everyone at Lightning does– so I'll randomly jump on the front end or back end or ML. And I'm not up to speed with the latest documentation of everything, so if I want to do something weird in React, I'm not going to sit here and read about it. I'm going to ask ChatGPT to give me some sample code. But I know when the code it gives me is good or bad, because I know how it's supposed to be done. If you're a new developer, you're just going to copy it, and I see it sometimes with our newer engineers. I'm like, “I know this is not written by you because it's too over-engineered and a little bit too complicated.” So I think you can probably measure it like: your 10x engineers, how much extra X do they get because of this? I think a lot. You can measure that in more PRs, faster time to land, the kind of standard engineering metrics we measure, and hopefully reducing surface area, minimizing the code, making things simpler. You can measure most of those things through GitHub– very objective measures there, I think. On newer developers, if you can teach them to use it as a way to mentor them, then you can measure their ability to get up to speed in a new system faster. If you had a developer who joined and didn't use it versus one who did, how much quicker were they able to learn the system and be productive– PRs that land without regressions? So there are a few of these metrics there. My favorite is always tracking things based on pull requests, because at the end of the day, that's what matters. You can care about your planning and all of that, but it's like, “What did you ship, how quickly did it land? That's it.”

RD What do you think the knowledge gap is between that okay engineer and that senior engineer using ChatGPT? What's the thing that gets the senior engineer to use it better? 

WF The experience of knowing how it's done in other languages. If you're a good or experienced engineer, you may not know the details of a particular language, but you know the basic structures. You know that there are control flows, you know that there are bad practices around global variables. There are all these standard things that we all know. And so they'll be able to just translate, maybe, is a good analogy. It's like an English lawyer using a translator for French. They're going to do a great job because they already know the law. But have someone who speaks English but isn't a lawyer try to practice law in French, and you can see how that's not going to work.

BP Yeah, I think I like that analogy. It would sort of be like, “Listen, I went to law school and I work in commercial real estate law, but if my cousin needs some help with a contract, I'll be able to look it over and understand it because I speak legalese.” I understand certain things.

RD You already know the domain.

BP Exactly. So one of the things that you had suggested we talk about, which I thought was really interesting, was the idea that AI developers at companies need to be open about the data that their models are trained on. So there are models, obviously, that share the data and sometimes share the weights, sometimes not, and then there are models that are completely closed. What's important to you about understanding the data, why would that be important to developers using it as a tool to produce new output, and why do they need to know what's under the hood?

WF I think there are two sides to that– the open source side versus within a company. I think it all comes down to liability at the end of the day. If you're using an open source model and you don't know what data it was trained on, and then you use it in a company and the model provider gets sued and subpoenaed, and it turns out they scraped a bunch of bad data and now that model is void, and you built a bunch of enterprise systems on it, you could be liable too. So transparency at the end of the day is the thing that's going to keep your business from going down. I know a lot of enterprises that won't use open source models where they don't know the data or how it was trained, because of that liability. It may not matter to a startup, but it matters to a JP Morgan Chase or a massive enterprise. They're not going to expose themselves to that liability. So that's the first thing. Then the second thing is internal to your company. Let's just say that you have full control over the data and you're training your own models, which I think most people should do. Then it's going to come down to the biases that are introduced into the system. You don't want racist or sexist chatbots or any of these kinds of things. So I think transparency across the workflow will allow more eyes on what's actually going into the model and help those who know, like researchers, tell you, “Hey, this is how you should be curating these things.” It's funny because data scientists have been saying forever that data is all that matters, and deep learning people were like, “Nah, models are all that matter.” I think deep learning people are still like, “Models are all that matter,” but now we're like, “Oh, actually, data is all that matters,” to some extent.

RD Where do you get that model, right? 

WF Exactly. And I saw it during my PhD as well. There was a model we were working on at Facebook, in contrastive learning, and there was a model that got pushed by Google called SimCLR, and I ran very thorough hyperparameter searches on a competing model to it. A hyperparameter search tests different configurations of the model– like batch size five versus ten versus whatever. And I used thousands of GPUs to figure this out, and I found the exact values of the transform coefficients and the normalizing coefficients for the RGB values or whatever. And I tried thousands, if not hundreds of thousands, of combinations, and the values that I got were exactly the values that they published in the Google paper. Whoever did that at Google must have done thousands of experiments to get there, and if you drop that out of your model, your model will not do well. You needed that particular transform pipeline –not a different one– a very specific one to get that model to work.
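
For readers who haven't run one, here is a minimal sketch of what a hyperparameter search looks like in practice. The grid values and the train_and_evaluate routine are placeholders for illustration, not the settings or code from the SimCLR work.

```python
import itertools

# Illustrative search grid (placeholder values, not from any paper)
batch_sizes = [256, 512, 1024]
jitter_strengths = [0.4, 0.8, 1.2]                      # augmentation/transform coefficient
norm_means = [(0.485, 0.456, 0.406), (0.5, 0.5, 0.5)]   # per-channel RGB normalization


def train_and_evaluate(config):
    """Stand-in for an expensive training run that returns a validation score."""
    return -abs(config["batch_size"] - 512) - config["jitter"]  # dummy score


best_score, best_config = float("-inf"), None
for bs, jitter, mean in itertools.product(batch_sizes, jitter_strengths, norm_means):
    config = {"batch_size": bs, "jitter": jitter, "norm_mean": mean}
    score = train_and_evaluate(config)  # in reality, one full training run per combination
    if score > best_score:
        best_score, best_config = score, config

print("best config:", best_config)
```

Each combination is a full training run, which is why a thorough search like the one William describes can burn through thousands of GPUs.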

BP What you're saying is super interesting to us and we hope will be highly valuable to Stack Overflow going forward, which is that the quality of data matters maybe more than hardware or algorithmic optimization, at least at the moment. I heard the guy from Microsoft Research who created the Phi models talking about this, and he said, “Look, I spent years trying to crank out advancements in other ways, tuning models or coming at it from different angles, and we found that if we train the model on high quality data, we could get a 1000x improvement, and nothing else is giving us that kind of jump. So until somebody shows us a different way, data quality above all.” To get back to what you said earlier, I thought it was really interesting. We are in this Wild West stage, and Ryan and I have done some interviews with folks. We talked to IBM. Their approach is, “We're going to tell you exactly what's in this model, and that way, if you use it, you have that governance and that trust,” which I thought was super interesting. And then other big companies are saying, “We're going to assume the legal risk for you. We'll take the liability if you get sued,” which is kind of muddy. We're not really sure how that's all going to work out, but people are aware of this challenge and confronting it in different ways. Your thoughts?

WF You can make all the promises you want, but can you actually keep that even financially when it comes down to it? I don't know. In the crypto world, you have FTX that took on a bunch of risk and now they're all in jail.

BP That's quite a comparison, okay. 

RD I think it's different when you have these behemoth companies that have been around for decades as opposed to upstarts. 

WF Oh yeah, sure. If Microsoft takes on the risk, I trust it more, 100%. But would Microsoft not just– I'm not saying they would, but if they have a subsidiary that's taking on the risk and then they get sued for hundreds of millions and have to pay, wouldn't they just be like, “Sorry, that thing's bankrupt now and you're all screwed”? I don't know if Microsoft would even try to pay.

BP I do think one thing that is going to be super challenging– and we're hoping to talk to people about it, as you're pointing out– is that we haven't really crossed that bridge yet, because there hasn't been a huge M&A deal or a public offering where suddenly this became the hiccup. But a lot of very intelligent people who are looking at this are saying that people are going to get caught up in it. If you can't speak to where your code was licensed or how it was created, you're going to run into issues in due diligence that are going to bite you. And so I actually think it's interesting. Originally I was thinking, “Okay, that means you probably shouldn't use Gen AI,” but another interesting approach, like you pointed out, is to either build the model yourself and know where the data comes from, or go with somebody who's willing to open the curtain that way.

WF It's interesting, because from the research side, I create content and I would want my content to be used to train models, but there are also times when I don't, so I see both sides. And for example, Stack Overflow to me is a really good resource. Obviously, we all learned at some point using it, and I don't think that it's right to just grab all that, train models, and then reap the financial benefits. Maybe it's the financial part of it that bothers me. I feel like for science I would have no problem with it, but when companies then use that to exploit it and make money, that's an issue where it's like, “Well, why not the company that has that data? Why don't they benefit from it? They worked hard for it.”

BP You make an interesting point. There's nothing for us to discuss on this front except what we've said already publicly, which is that we hope that as this industry matures, the companies that train on our data would utilize some of their resources and invest that back in the knowledge community so that Stack Overflow can keep growing and people can keep contributing here and the AI can continue to learn. If it doesn't have humans generating novel ideas, will it continue to improve or will it actually degrade over time? 

WF Code gets outdated too. I see this all the time with our open source libraries. If you try to use ChatGPT for PyTorch Lightning, it's not going to be the most accurate because it's using old versions. So code is changing all the time and I guess you could keep re-scraping, but it's kind of annoying. And I think one nice thing about Stack Overflow is that as long as people are interacting and they're asking the right questions, they're pushing the new kind of code and they're testing and stress testing and finding issues with it. If you stop that dialogue and all you do is rely on a chat API, I could see you having issues. We have plenty of Lightning users asking questions on Stack Overflow, then other people answer and they keep exchanging that. If all anyone is seeing is ChatGPT now, then are we going to get those problems sorted out? I don't know if we would.

RD It's just going to be a closed loop. No new data, no new solutions. 

BP I've described it as a sort of tragedy of the commons. If a lot of people are getting faster answers whenever they want from an AI assistant as opposed to having to work hard on a question and wait a bit on Stack Overflow, it feels great for them, but if as you said, the knowledge about something like Lightning AI isn't updated because suddenly everybody's not contributing, then on the next iteration of the model, everybody loses out. 

RD I kind of wanted to jump back to something you said earlier, that everybody should train their own models. Do you think they should train them from the ground up or should they fine-tune existing models? 

WF It just depends. The researcher in me says you should train it from scratch if you have the data, but it's not realistic for most people. So when I say ‘train,’ I mean it loosely– anything from training to fine-tuning. What do you fine-tune– only one layer, all the layers– or do you pre-train? The answer really comes down to your budget and how much data you have. If you have a ton of data, you should be pre-training if you can afford it. If you don't have that much data, you can fine-tune, and if you can't really afford it, fine-tune as well. But it doesn't get rid of this liability problem. I think –I haven't seen a lawsuit go to the Supreme Court yet– but if I take Llama 2 weights and then I fine-tune it and then I overfit it to my data –Llama is not a good example, but let's say there's another model that was trained on bad data and they might get sued– and then I fine-tune on that, did changing the weights change the liability? The data is still embedded in the weights and I used it to jumpstart the next model. That's a legal question to me. I'm not actually sure.

BP Right, is that transformative in some way so that you've sort of cleared that hurdle? It's a super interesting question. There are lawsuits proceeding through the courts about whether or not it was acceptable, like you said, for big AI companies to train on open source data, and so we're going to have to wait and see. In the meantime, I want to get back to what you said– Ryan and I were trying to work out a flowchart the other day that's sort of like, “All right, you want to add Gen AI to your organization. Do you have a data science team or ML expertise? Great, maybe you should build a model. You don't? All right. Well, maybe you should work with one of these third party providers that lets you fine-tune, or go on Hugging Face and see what's available. Okay, you don't even feel comfortable with that? Maybe you should just take an existing model and do some retrieval augmented generation with your data.” How do you think about that sort of build versus buy analysis these days?

WF For sure. Look, obviously what I'll say here is biased because we have a platform to do a lot of these things here, so I want to preface it with that. 

BP No, no, talk your book. That's okay. Well, first tell us you're unbiased and then feel free to talk about how you solve these kinds of problems for people.

WF Sure. And on the open source side, we try to open source as much as we can, so we're also trying to give back here. But even the flow that we suggest to our users is: if you can get started and POC something with an API, you should. So go use ChatGPT, go use an open source model. On Lightning, you can do all of those and you can deploy your own endpoints with Mistral and all these different models. So I think that's a v1. You should do that to begin with just to sanity check what you're going to do and whether it's going to solve some business problem or not. And that's where you don't even need data scientists, you just need an engineer. On Lightning you just press a button, just run it and it's there. It's like a FastAPI server. You can configure it, so you don't need a lot of expertise to do this. Now, if you then need to elevate, it's usually because the model is not working well for you. So you have a few avenues: is the model wrong, is the data wrong, and then how do you fix it? You can kind of prompt your way into things, which is what a lot of these assistants are doing nowadays. You just preface your query with a bunch of instructions like, “Don't do this, do this,” or whatever. It's fine. That's the fastest, cheapest way to do it. And for everyone to have a mental model of pre-training versus fine-tuning, just think about it this way: how much of the model do you want to change? On a scale from zero to one, zero would be pre-training and one would be fine-tuning. If you're changing nothing about the model, that's a one– it's fine-tuning. Literally changing a prompt is the simplest way to fine-tune, where you're not changing anything about the model. If you then take the last layer of a model and you unfreeze it and you only change that one, now you're fine-tuning a little bit. If you then do two layers, or 10 layers, or 20, or the embeddings, or whatever– there's an open research field about this– that's still fine-tuning. If you unfreeze all the layers, that's pre-training. Do you see what I'm saying? So it's just a spectrum of how much you are freezing or not. And then it comes down to the data. If you're going to unfreeze all the weights, then the question becomes, do you unfreeze all the weights or do you just create a new model from scratch that has random weights, and which one is going to give you better performance? To get random weights to work well, you need a lot of data because the model has to iterate a lot. So if you have enough data, you can get it to work. Now, if you don't have that much data, maybe you use a small model and maybe fewer weights can get you there. But I think people need to build this mental model that it is not pre-training or fine-tuning, it's how much of the model do you want to change. The more you change, the closer you get to pre-training. The less you change, the closer you get to fine-tuning. It's a technicality– it has to do with those parameters. But I would say the simplest thing, if you don't want to do anything, is to use an API or deploy your own model, and then start deciding, “Okay, do I want to change the data? If so, how much do I have?” If you have a little bit, then go ahead and change a little bit of the model and fine-tune it, or take a small model and pre-train it on that. It's hard to tell, but those are the main variables I would point to. And then there's cost, obviously. I'm ignoring cost, but the more you change a model, the more expensive it will be because you have to train more parameters.
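
Here is a minimal PyTorch sketch of the freezing spectrum William describes, using a torchvision ResNet purely as a stand-in (the model choice and values are illustrative assumptions, not anything from the conversation): freeze everything and unfreeze only the last layer for light fine-tuning, then unfreeze more as you move toward pre-training.

```python
import torch
from torchvision import models

# Stand-in backbone with pretrained weights (illustrative choice)
model = models.resnet50(weights="IMAGENET1K_V2")

# "Change almost nothing": freeze every parameter...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze only the final layer, which is light fine-tuning on the spectrum
for param in model.fc.parameters():
    param.requires_grad = True

# Unfreezing more blocks (model.layer4, model.layer3, ...) slides you toward pre-training;
# unfreezing everything, or starting from random weights, effectively is pre-training.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

The same idea carries over to large language models: swap or unfreeze the top layers for fine-tuning, and let the amount of data you have decide how far along the spectrum it is worth going.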

RD Right. So we hear a lot about AI taking jobs, people are going to be replaced by AI. But what's on the other side? What are the jobs that would be created by AI? 

WF Was prompt engineer a job a year and a half ago? No. There are at least three or four jobs in AI that have been created because of this. There was no notion of vector databases before. Two years ago, none of us were thinking about vector DBs. If you were doing basic ML before, you were definitely comparing vectors and sorting them, sure. Now there are more efficient algorithms, and that's called a vector DB. So there are these things coming up, and new technologies that are employing more people. There are all these curation people asking, “Hey, is this model bad or not?” Red teams. So there's a lot of safety stuff happening. Those jobs did not exist years ago. So I would argue that, at least on the tech side, we've created more jobs than we've lost. And I think where we have seen a loss of jobs is editing-type jobs, where you can use ChatGPT to sanity check a sentence or a paragraph, and probably those people are no longer doing that. We've had massive tech layoffs as well in the last few years, and I'm not a financial expert, I just dabbled in finance for a bit, but I think all of that comes down to improving your bottom line and being more efficient with who you're employing. I'd argue that without AI, probably most of those jobs were not even needed in the first place. The developers were just kind of coasting and not really doing a lot anyway. So you have the financial system putting that pressure on there. I think it will create more jobs in the long term, and it's already proven to do that today.
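
As a rough illustration of the “comparing vectors and sorting them” that a vector DB makes efficient, here is a minimal brute-force version in NumPy; the embeddings are random placeholders rather than the output of a real embedding model.

```python
import numpy as np

# Placeholder embeddings; a real system would produce these with an embedding model
corpus = np.random.rand(10_000, 384)   # one vector per document
query = np.random.rand(384)            # vector for the user's question

# Cosine similarity against every document, then sort: a linear scan per query
sims = (corpus @ query) / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(query))
top_k = np.argsort(-sims)[:5]
print("closest documents:", top_k)

# A vector DB replaces this O(n) scan with an approximate nearest-neighbor index
# so the same lookup stays fast at millions or billions of vectors.
```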

BP It would be interesting to think about what your end-to-end team looks like, or what the head count looks like. Like you said, do I need an ML and a data quality specialist? Do I need someone who knows embeddings? Do I need someone who knows vector DBs? Do I need someone who knows RAG? I need safety and security, I need red teaming, I need a lawyer who knows this stuff. There are a lot of people all along the pipeline from ideation to production who need to have new and specialized skills.

WF Yeah, but I think this is where open source can come in. I think a lot of these tools are keeping people from having to build a lot of these things in-house. The problem that I see is that, as engineers, we're curious, so we want to know how things work– and researchers too– and this need to try to build everything yourself is kind of what's causing a lot of this bloat. You don't need to, and I think we all learned that. I went through the mobile boom, having to code iPhone apps and that kind of stuff, to the web boom, to Ember and Angular and all these weird frameworks, although I still love Angular. Everyone hates it, but I don't know why.

BP Shout out Angular.

WF And then everyone's kind of converged on these few tools, but at the time that I was at Goldman, there were plenty of engineers who were sitting there writing their own Redux from scratch. And you maybe needed it at the time because you didn't trust the systems that were in place, but those jobs don't exist today. It would be crazy if you were like, “You need your own Redux engineer.” I'd be like, “No, you don't.” 

RD I don't think there are many shops that create everything from scratch– except for one particular financial company that, I think, creates everything from scratch.

BP It's interesting. I think you mentioned early on in the conversation that people got really excited. Obviously ChatGPT kind of took the world by storm and captured its imagination as a consumer-facing app, but when people tried to put this stuff into practice, they found out: how easy is this to use? How often does it work without causing errors? And how much value am I really generating out of this? So when you think about it, what percentage of companies do you feel are growing a little bit disillusioned, and, since you obviously have clients who are trying to do this stuff through your platform, where do you see it working in the enterprise and where do you think it's going to get traction over the next year or so?

WF I think companies are still excited by the prospect. I think they're growing disillusioned by the inability to make it happen and put it into production, and largely that's because it's kind of a new paradigm and requires re-educating the whole system. There's a term called software 2.0 that Karpathy coined, I don't know if you guys have read his blog. The way I think about it is– and this is my interpretation of it, not his– software 1.0, or app 1.0 I guess, would be web development and the stuff we all know how to do. And then there's app 2.0 and software 2.0, which is deep learning. He means it in the context of differentiable programs; I mean it in the context of how you should develop systems in general. And I think that software 1.0 is something where your laptop was okay. You could do it on your laptop, and we all do our workflows like, “Hey, you code locally and then you submit to a server and it runs.” In AI– and this is the approach that we've taken on the platform– you code on the cloud. Everything. You may type on your laptop in an IDE, but it's all remote servers, a virtual desktop. And we did that because what we found is that putting terabytes of data and GPUs on your laptop was terrible, and trying to replicate a cloud environment locally doesn't work. You're like, “Wait, it works here, but it doesn't work there,” so you submit a job, you wait 30 minutes, it crashes, you debug, and you're in this loop for hours. So we said to just do it all on the cloud. I personally think that's going to be the future and I think that's how people are going to work in general. That solves a lot of these putting-into-production problems, but the more people stay wedded to this local-to-cloud workflow, the more they're going to keep being disillusioned. So I say all that because, back to your question, you can't take practices from software 1.0 and try to apply them to software 2.0. It's not going to work. It's a different paradigm. That's it. Altogether you're trying to use tools that we all learned in a different time to solve a different problem that just requires different tooling.

BP All right, that's interesting. I think what you say certainly makes sense to me– I'm not here to evaluate it– but the idea is that in software 2.0, if the engine behind what we're doing is AI/ML, then it is way more hardware and data intensive than what we used to do, and you're going to want, like you said, terabytes of data and a dedicated GPU cluster. And I'm sure somebody is working on it, but that's not on your laptop yet. In fact, I know people are working on specific silicon that's built from the ground up to be great for these kinds of jobs, but we're quite far away from that being something that's in mass production.

WF And I also argue it's more collaborative. To your point, even if you had a laptop, you still need to work with a data engineer. You still need to work with a data scientist. You still need to work with a product person. You still need to work with a researcher. 

BP So many jobs, so many jobs for people.

WF More jobs being created. I'm telling you, it's a net positive. 

BP No, I have to agree with you that the layoffs, I think, have way more to do with the end of the zero interest rate period, the inflection back to 5%, and companies just responding to that and wanting to please the stock market than anything else. And like I said to Ryan the other day, I have talked to people who have been engineering managers at large companies, and when interest rates were zero and what people wanted to see was growth, it was okay to have engineers on staff making six or seven figure salaries who basically took six-month sabbaticals between projects. They were just on the shelf until you wanted to assign them to the next feature within an app that you had, and that was okay for those companies, and they realize now is the time to sort of change that dynamic within corporate organizations in Silicon Valley.

WF Yeah. I come from the military. I was in special operations before doing the civilian thing, and in the SEAL teams, you're obviously trained to have tiny teams and very, very, very good people, and you don't ever have any slack– the military has no money, so you've got to do what you can. It was a shocker to me to go to the civilian world and be like, “What are these people doing? They're doing 30% work. Why are they here?” So I hope people adopt more of that mentality. I think it's generally a good call in the long term.

BP I think the SEAL teams and Silicon Valley– those are two polar opposites as workplaces. So maybe you should write a book about that.

WF A lot of ideas translate. We run the company internally like the SEAL teams, which is good. It seems to resonate with the developers. 

BP Cool. I'd love to have you back to talk about just that, or maybe we'll do a blog post together someday or something like that. That's an interesting idea. 

WF Yeah, sounds good.

[music plays]

BP All right, everybody. It is that time of the show. Let's shout out somebody who came on Stack Overflow and shared a little knowledge. Today, I want to give a shout out to Brian61354270. The module was not found in Python 3.12, but Brian knows how to make the module appear for you and has helped over 17,000 people. So Brian, you're a lifesaver. Congrats on your Lifeboat Badge. I am Ben Popper. I'm the Director of Content here at Stack Overflow. You can always find me on X @BenPopper. If you want to come on the show or ask us some questions or just rant and rave, email us, podcast@stackoverflow.com. And if you enjoy the show, then you can leave us a rating and a review.

RD I'm Ryan Donovan. I edit the blog here at Stack Overflow. You can find it at stackoverflow.blog. And if you want to reach out to me on X, my handle is @RThorDonovan. 

WF I'm William Falcon. I'm the creator of PyTorch Lightning and founder of Lightning AI, which is the company behind PyTorch Lightning and Lightning Studios, our kind of cloud products. And you can find me on Twitter @_WillFalcon or GitHub. Sadly I have to be on social media a lot these days, but I prefer to just talk to you guys through GitHub, but it's fine. So you can find me @_WillFalcon on Twitter. 

BP All right, everybody. Thanks for listening and we'll talk to you soon.

[outro music plays]