Welcome to Leaders of Code, a new business segment of the Stack Overflow podcast. On this show, we chat with business, tech, and engineering leaders from forward-thinking companies across industries about their business strategies, the challenges and opportunities of building high-performing teams, driving innovation, leveraging the power of AI, and other key topics. Tune in every second Thursday for real-world success stories, actionable strategies, and fresh perspectives to help you navigate your leadership and growth journey.
In our very first episode, Stack Overflow CEO Prashanth Chandrasekar talks to Don Woodlock, Head of Global Healthcare Solutions at InterSystems, about the challenges in their AI journey and the critical role of a robust data strategy in any successful AI initiative.
[intro music plays]
Ben Popper Hello, everybody. Welcome back to the Stack Overflow Podcast. It is a new year– 2025. I am Ben Popper, one of the hosts here of the Stack Overflow Podcast, and I'm really excited to announce that we are bringing a new series to our podcasts. It is called Leaders of Code, where some of the leaders from Stack Overflow get to sit down with leaders from some of the most innovative and forward-thinking companies across a wide variety of sectors to have conversations about some of the challenges and opportunities they're seeing within their organizations, and more specifically, as that relates to how to optimize a high-performing engineering organization, how to work with technology, how to make the most of all the hype around Gen AI, or ignore the hype when it doesn't apply to you. So without further ado, I'd like to introduce our two guests for today– one I think you know very well– Prashanth Chandrasekar, who is the CEO here at Stack Overflow, and Don Woodlock, who is the Head of Global Healthcare Solutions at InterSystems. Both of you, welcome to the Stack Overflow Podcast and to the first episode of Leaders of Code.
Don Woodlock Great! Thank you, Ben. Nice to be here.
Prashanth Chandrasekar Thank you, Ben. Amazing to kick this off in the new year. Don, thank you again for taking the time to talk to us and being a partner with us for many years.
DW Of course, happy to do this. I'm looking forward to this conversation.
BP So Don, the first thing I'd like to do is just introduce you a little bit to the audience. How did you get into the world of healthcare and technology, and what brought you to the role you're at today?
DW It was actually a little bit accidental. I graduated from MIT, which I can still see from where I am here, and I was interested in technology but wasn't really thinking about what industry I wanted to work in. So I interviewed with consulting companies and financial services companies and a couple of different things, and I just joined a healthcare company and I've been in healthcare software ever since. It's a great domain and a great vertical because there's a lot of social good you do in healthcare. It's a pretty stable industry– people get sick regardless of what the economy is doing, so it's a steady-growth type of industry, and it's pretty high tech. Physicians, all of us, are interested in really leveraging technology to make it better. So that was my origin, but I've been in healthcare software for 30 years basically, and I love it.
BP And for folks who are listening, what does InterSystems do and what's your role there?
DW InterSystems, we call ourselves a ‘creative data technology company.’ It's a 2,000-person company. We're focused on data platforms that other vendors build solutions on, largely in healthcare but also in some other industries, and then we have an electronic medical records product line that we sell around the world to help hospitals kind of run themselves, that sort of thing. And then we also focus on healthcare data interoperability, so we have a product line around that as well.
BP Very cool. And Prashanth, I know you mentioned this at the top, but we've been working at Stack Overflow with InterSystems for a while. What's the connection there?
PC Well, even before the company connection, I just want to take the opportunity to let Don know that in my initial foray into computer science, one of the earliest things that I did was actually to build something for my own mom who was a medical doctor, and I remember writing a software program which I think was in the eighth grade, which was a hospital management system that would basically have all the patient records from my mom to manage her medical practice. She’s an ENT, or an ear, nose, and throat doctor. So I have a personal connection with a lot of the work that you do. Obviously I built probably one of the most arcane versions of what you're building, that was my earliest story. But to answer your question more directly, Ben, we've been really blessed to partner with Don and InterSystems for many years now– this is our fourth year of our partnership. And InterSystems is a customer of our enterprise product called Stack Overflow for Teams, which is the private version of Stack Overflow that companies use inside their organizations as an accurate knowledge store for information.
BP Very cool. All right, so I'm going to sort of offer up a prompt here and then I'd love to have the two of you discuss it. We're thinking a lot at Stack Overflow about an AI journey, about data, about engineering organizations at scale, a lot of things that Don mentioned in his introduction. So I guess I'm curious to hear first from you, Don, what challenges are you facing in the AI journey? What use cases do you see as being maybe some of the most appealing or the most valuable?
DW We build software for resale, basically, so our primary focus is how to embed Gen AI into our software to help hospitals and physicians have a better experience, basically. We also have a number of projects internally to kind of help our engineering organization, but most of our thinking is how do we make our products great and take advantage of the Gen AI capability. Most of what we’ve found so far– and this is the first inning, let's say, with Gen AI– but most of what we’ve found so far is really helping with the clinician user experience. So what happens with an electronic medical record is the physicians are somewhat turned into data entry people. They're using the computer, they're clicking around, they're entering information– things that they didn't really go to medical school to be doing. But in the effort of digitizing healthcare, we sort of brought them into that world of interacting with the computer and it's a very frustrating, unnatural experience. So the potential of Gen AI is to really change that to be a more natural, a more human experience, so asking questions and having the system answer questions about the patient, answer questions about medical knowledge that's out there, write documents like discharge summaries, surgical summaries, things like that kind of automatically. So it's an unbelievable potential actually of really helping the efficiency and that whole experience that clinicians have with computers in the healthcare setting. So what we've been really focused on is finding those use cases and applications.
BP I love that. I think everyone can relate to the experience, maybe at least in the US, of going to the doctor's office and having to answer the same questions once on a paper form and then you sit down and answer them again and then the doctor comes in and opens their laptop and you answer them again. You wonder, “Why does it have to be this way?” And it would be extremely valuable and far more human to have a conversation which is recorded and then a Gen AI system could turn that into a document and summarize the key points and answer the doctor's questions, so I see what you're getting at there. Prashanth, maybe you could touch a little bit on what we've been thinking about, and I think in some ways, similar ideas about where can we produce human-centric AI.
PC So I think a lot of our focus, Don, to your point, I think we are in an early, early, early phase of Gen AI adoption, and we can talk a little bit more about challenges next if you like, but then also sort of tee it up. But generally speaking, what we have seen both within our own company as well as with our thousands of customers of our enterprise business is that they're all looking to make the end user experience exceptional. And like you mentioned in your case, it could be the patient experience and the patient outcome, and as a function of that, the physician who's in there doing their work and their experience, so it all sort of works backwards from the end user, of course. So a lot of what we focus on, of course, is serving technologists and developers. So when we talk to all the folks in your position in the CTO role or technology or just leadership roles, they're all trying to empower their people to do amazing things. And the question is, how do you leverage Gen AI, much like anything else, to give them all the tools and the ability to sort of really deliver an outstanding end user experience for their end customers, as an example? And so a lot of what we have zoned in on is that, yes, in this current phase of AI, the level of quality, accuracy, et cetera, a lot of things that I know, Don, you care about are so lacking, and in that world, having a human-centric or human in the loop to complete that entire set of processes being automated through Gen AI I think is a core component. I can talk about very specific problems that we solve in that in a bit, but generally speaking, it's about how do you make sure that we are bringing together humans and the power of Gen AI to holistically deliver an outstanding experience to the end user, and finally most importantly, doing it closest to where they are in their work, as in if they’re already in their workflow, how do we integrate into that workflow in the most efficient way in this human-centric AI delivery model? So that I think philosophically is what we've been focused on– get closest to the user and bring together humans and AI at that point.
BP Don, I'd love to hear your response to that. And I know we touched on how do we want to use it and what are the challenges– a great video of yours, Don, that I watched was about retrieval augmented generation which is something we've utilized here at Stack Overflow, and specifically how to do that in healthcare where there's so many concerns around privacy and so much regulatory risk and burden. So I'd love to hear a little bit about how does a company like yours employ some of the best practices like RAG while also making sure that your governance and your approach to patient data is respectful, is legal, is careful, is accurate?
DW I do see the whole healthcare environment when I talk to my customers as being very cautious about AI– both optimistic and excited and that sort of thing, but very cautious about it, I think for good reason. There are issues of accuracy, safety, personal health information that all are solvable but need to be managed, and so there are a couple key solution spaces. So one is what Prashanth already said, which is to focus on humans in the loop and humans plus AI as much as you can, and so one of the killer use cases in healthcare is the one that you kind of mentioned, Ben. We call it ‘ambient listening’ where a physician and patient can just conduct an office visit and not worry about the computer and that sort of thing, just record it and then turn that into a note, turn it into changes to the EMR. But the end of that process is a human reading that, signing the note, making any edits, that sort of thing, so there's a human in the loop. And actually that process has been something we've tried to automate for decades in different ways: scribes or transcription services or that sort of thing. So it's kind of a well-worn workflow which makes it also really great as a first use case for Gen AI because it's a workflow people know, there's a human in the loop. Even if it's 95% accurate, that's 95% I didn't have to write from scratch as a physician, so it's got a lot of great benefits and allays some of these concerns about safety and stuff like that. Governance is a huge topic with all of our hospital customers. One of my CIO friends and customers gave a great analogy. It was a quote from Mario Andretti, the race car driver, and he says that a lot of people think that the brakes are to slow you down, but they're wrong. If you have good brakes and have that confidence in them, you can drive faster, and that's what I found with governance. Once customers figure out a governance style, how they're going to monitor these projects, how they're going to approve these projects, then they can start to speed up in their AI journey. But in the beginning, it was really quite slow as customers were nervous and trying to figure out who needs to approve what, and what is AI anyway, and all that stuff. But as that gets put into place, then that really enables some speed.
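To make the ‘ambient listening’ workflow Don describes concrete, here is a minimal, hypothetical Python sketch of the human-in-the-loop shape he outlines: a draft note is generated from a visit transcript, but nothing is filed to the record until a clinician reviews, optionally edits, and signs it. The function names, the stubbed summarizer, and the data model are illustrative assumptions, not InterSystems' implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the human-in-the-loop "ambient listening" flow Don
# describes: a draft note is generated from a visit transcript, but nothing
# reaches the record until a clinician reviews, optionally edits, and signs it.
# The summarizer is stubbed out; in practice it would call a Gen AI model.

@dataclass
class DraftNote:
    patient_id: str
    text: str
    signed: bool = False
    edits: list[str] = field(default_factory=list)

def summarize_visit(transcript: str) -> str:
    """Placeholder for a Gen AI summarization call (assumption, not a real API)."""
    return f"Draft note based on {len(transcript.split())} words of conversation."

def generate_draft(patient_id: str, transcript: str) -> DraftNote:
    return DraftNote(patient_id=patient_id, text=summarize_visit(transcript))

def clinician_review(note: DraftNote, edited_text: str | None = None) -> DraftNote:
    """The human-in-the-loop step: the clinician edits and signs before filing."""
    if edited_text is not None and edited_text != note.text:
        note.edits.append(edited_text)
        note.text = edited_text
    note.signed = True
    return note

def file_to_emr(note: DraftNote) -> None:
    # Only signed notes are ever committed to the (stubbed) medical record.
    assert note.signed, "Unsigned AI drafts must never be filed automatically."
    print(f"Filed note for patient {note.patient_id}: {note.text}")

if __name__ == "__main__":
    draft = generate_draft("patient-001", "Patient reports mild ear pain for two days ...")
    reviewed = clinician_review(draft, edited_text=draft.text + " Plan: follow up in one week.")
    file_to_emr(reviewed)
```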
PC Absolutely. And Don, I love the Mario Andretti quote there, plus as a racing fan, that resonates. Your point is absolutely spot on in that we see a lot of folks looking for the ability to be really freed up in their organization to go faster. When we talk to a lot of companies– recently I was in a room with the key leaders at all the banks, effectively, the senior technical leaders, CTOs and the like– a lot of them shared that everybody in that room is running a Gen AI pilot, there's no question. They have already spent a chunk of 2024 thinking about how to roll out these pilots, and they're all seeing great uplift in productivity in a very specific pilot group in each of their companies. Obviously, you never pilot it across the company, you pilot it in a very specific area, a group, et cetera, that is willing to experiment, and they're seeing dramatic improvements. And that's not surprising when you have, as you pointed out, workflows defined, things that you've been trying to solve very specifically, and the most, let's call it, automation-first type teams have already embarked on this. So when they leverage these new tools to take their work up ten times, it's not surprising that they're able to get, let's say, a double-digit productivity gain. But what I found interesting about adoption when talking to those leaders is that they're all struggling with the change management of rolling out this adoption company-wide beyond just that pilot group. And to your point, the speed limits and the brakes may not be very apparent across all the groups, only because organizations ultimately are complex, especially when you think about very large numbers and big matrix organizations– big financial services firms, banks, and those sorts of things, or even some of the biggest healthcare companies. So what I found fascinating, my realization through conversations like those, is that ultimately it is a human problem to be able to unlock productivity. It's actually less about the tool. I mean, we can go and define the rules, we can define how fast you go and what you use it for and what you don't use it for. Of course, there is the element of the foundation, which is that you had better have your data in good order and in a high-quality state before you start, for example, doing things like RAG and indexing. So I think it brings to the surface a lot of hard questions for a lot of people across the organization to answer when maybe one or two groups have got their act together and are able to move very quickly, and I think it puts a lot of pressure on people to adapt very quickly. So that has been my observation over the past, let's call it, 6-9 months of speaking with senior folks at some of these companies about challenges.
DW That makes sense to me. The way I think about the piloting, the phasing of a rollout, the building momentum is a process of building trust relative to AI relative to other technologies. There's another way to think about it, but I think about this as a trust issue because people read the newspaper, they all know the hallucination word, they're concerned about bad things happening, what if this says bad things to our patients, what have you. And so the naysayers are useful, I would say, in this process. It's a useful group to work with and manage, but you need to think about the rollout as a building of trust project, basically, and think about how you might build trust in the organization as you roll it out farther and farther.
PC Spot on, spot on. I think there's so much we can talk about on that subject. One of the biggest statistics, Don– we conduct a developer survey every year with the millions of our community members, and typically we have something like 50-100,000 people respond every year, so it's a pretty decent sample set. We have asked Gen AI questions for the past few years, specifically about people's interest in using these tools, what's holding them back, et cetera. And there are two statistics that stand out above all the other stats. Number one, the enthusiasm to use Gen AI tools is very high. It is increasing, or at least it's consistently high– over 70% of users want to use these tools, are enthusiastic about these new tools, plan to use these tools, and so on. That's stat number one. Stat number two is the trust, which, as you very correctly pointed out, is staggering: only 40% of users trust what's coming out of these tools. And the reason for that, of course, is hallucinations. Can I roll this into a production-level application in my critical company– healthcare, bank, et cetera– where ultimately you can't blame the tool? You have to blame yourself if something goes wrong, and you really need to understand what is being generated in this code to feel comfortable, especially if it's not rooted in things that you recognize, rooted in data that you generate as a company, or even attributed back to content from the open web. So I think that is absolutely spot on– the trust issue– and it's validated by our statistics.
BP So, Don, we're touching there briefly on one of sort of the obstacles specific both to healthcare and, I think, code generation– trust, keeping a human in the loop. Are there any stories or anecdotes you'd like to share about how your team is preparing to sort of stay innovative or being successful at addressing issues of data scalability and integration? And here I'm talking about InterSystems, but if you have an example of working with a partner, happy to hear that as well.
DW Let me just talk a minute about the role of data in AI, because you mentioned RAG. The concept of a RAG architecture is you draw upon your information intelligently in order to help the LLM do a good job or to help the AI agents do a good job answering the user's question, and a lot of data in healthcare is pretty messy. It basically hurts the AI system's ability to work. So the tool might be good on the AI side, but if you feed it junky data, it's going to give you kind of bad answers. So a lot of our work at InterSystems is actually on the data foundation, and we encourage customers to think of a data strategy as either a precursor to their AI strategy or at least part of it so they realize the importance of data. And I'll give you one easy example, which is if you bring together patient data from multiple places, generally speaking, each of those data sources will have their own ID for the patient, and they might have their own phrasing of the name– ‘Don’ versus ‘Donald’ or my old address versus my new address. So unless you have a sophisticated patient matching algorithm, let's say, or solution as part of your data strategy, then you'll be bringing together your data but not really bringing it together. It would be side by side but not integrated. Now that can hurt a model. So if a model is predicting some future disease state or whether you're going to show up for your next appointment or whatever, your comorbidities, your complete experience across the health system are the key features that make a difference to that prediction. So if you're not bringing it together, you're not normalizing your data, your model is going to be way less accurate than it could be. And so a lot of people really just jumped to the AI stuff– let's get the platform, let's build a little RAG system, that sort of thing, and don't realize the importance of having a really nice, clean data strategy as part of the whole. That's been a very active kind of discussion in projects with our customers, so as they prepare for their AI journey, they realize they're in the first inning also. As they plan out the next 5-10 years, they realize the importance of having that good data strategy for them.
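As a concrete illustration of the patient-matching problem Don raises, here is a small, hypothetical Python sketch: two sources carry the same person under different IDs and name variants (‘Don’ versus ‘Donald’), and a simple normalization-plus-fuzzy-match step links them instead of leaving the records ‘side by side but not integrated.’ The record fields and the matching heuristic are assumptions for illustration; real patient matching is far more sophisticated.

```python
from difflib import SequenceMatcher

# Hypothetical illustration of the patient-matching problem Don describes:
# two sources carry the same person under different IDs and name variants.
# This is a toy heuristic, not InterSystems' matching algorithm.

source_a = [{"id": "A-17", "name": "Donald Woodlock", "dob": "1965-04-02"}]
source_b = [{"id": "B-903", "name": "Don Woodlock", "dob": "1965-04-02"}]

def normalize(name: str) -> str:
    return name.lower().strip()

def likely_same_patient(rec1: dict, rec2: dict, threshold: float = 0.8) -> bool:
    """Match on exact date of birth plus fuzzy name similarity."""
    if rec1["dob"] != rec2["dob"]:
        return False
    score = SequenceMatcher(None, normalize(rec1["name"]), normalize(rec2["name"])).ratio()
    return score >= threshold

# Without matching, the records sit "side by side but not integrated";
# with it, they collapse into one longitudinal patient view.
merged = []
for a in source_a:
    for b in source_b:
        if likely_same_patient(a, b):
            merged.append({"ids": [a["id"], b["id"]], "name": a["name"], "dob": a["dob"]})

print(merged)  # the two records are linked into a single patient entry
```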
BP I'm sure Prashanth is chomping at the bit to follow up. I'll just say that I shared a quote with him recently from the CEO over at Clarifai, Matt Zeiler, sort of saying the most common experience is somebody who is very excited to get started on their AI journey. We get to the proof of concept or the pilot, they bring their data, and that's when we realize it's not what they thought it was. It's not all where they thought it was. It's not organized in a way that, like you said, is productive. Sometimes it's counterproductive. And so it's interesting to me, I think the emergence of these Gen AI systems has almost to me clarified the purpose and the utility of Stack Overflow, which is how do you create a knowledge artifact using the wisdom of the crowd that everyone can trust and adapt and improve on over time? But Prashanth, do you want to talk a little bit about ‘answers are not knowledge’ and kind of our perspective on knowledge as a service?
PC Absolutely. So I think, Don, your data foundation is such an important point. Beyond the trust issue that we both talked about, we identified two other very large sets of challenges that I think need to be solved, especially in this Gen AI era as we keep moving forward. The first one is what we like to call ‘LLM brain drain’: if you don't have a place where people are actually generating new knowledge, and where it is actually curated and cleaned up and useful and high quality and accurate and so on, then you're in trouble, because a lot of these AI models– LLM training, pre-training, even in the world of synthetic data– are going to need really high quality human-generated input that comes from people's imaginations and their knowledge to be able to do things with AI. That's the first issue. The second issue is what Ben just alluded to, which is what we call ‘answers are not knowledge.’ Because depending on the complexity of the question being asked, to your point, it may be, as an example, matching on a patient history or a set of historical records– maybe a straightforward use case– but there may be much more complex questions that the user asks where the AI taps out on its ability to answer completely. I know we all are experimenting with AI tools where that happens. It just hits a limit where it's unable to give you something beyond what it's got at the moment in terms of its reasoning. So ‘answers are not knowledge’ is our view that there's got to be a place where that user completes his or her request even if the AI flames out in that moment, at least based on its current training data and parameters– a humanistic environment like Stack Overflow where you ask the question and complete it. And again, that allows you to capture that question, the humans provide new knowledge– hence preventing LLM brain drain– and that knowledge is then used for future model training or RAG indexing and so on, and the world goes round and round. What we now call this circular motion is ‘knowledge as a service’ at Stack Overflow, where we're effectively integrating with every tool that's closest to the technologist or the developer. When they're in their favorite AI tool– we just announced a partnership with GitHub Copilot as an example– they can ask a question of Copilot, and if they don't get the answer, they can post it on Stack Overflow, the community can answer that question, and that question being answered on our platform then creates new knowledge which is useful for future LLM training and for answers being served up at the right place at the right time. And so that is one example of us saying, “Let's complete the loop of this issue,” which is continuously creating high-quality knowledge and data for these use cases where you need increasingly accurate LLM answers.
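A minimal, hypothetical sketch of the ‘knowledge as a service’ loop Prashanth describes: when the assistant cannot answer confidently, the question is escalated to a human Q&A community, and the accepted answer flows back into the knowledge base used for future retrieval and training. The assistant, the confidence threshold, and the data structures here are stand-ins, not the actual Stack Overflow or Copilot integration.

```python
# Hypothetical sketch of the "knowledge as a service" loop: when the AI
# assistant cannot answer confidently, the question is escalated to a human
# Q&A community, and the accepted answer is folded back into the knowledge
# base used for future retrieval and model training. The assistant, the
# community, and the confidence threshold are all stand-ins, not real APIs.

knowledge_base: list[dict] = []   # curated Q&A pairs used for RAG / training
community_queue: list[str] = []   # questions awaiting a human answer

def ai_answer(question: str) -> tuple[str | None, float]:
    """Stubbed assistant: returns an answer and a confidence score."""
    if "well-known" in question:
        return ("Here is a well-documented answer.", 0.92)
    return (None, 0.30)   # the model "taps out" on novel questions

def ask(question: str, min_confidence: float = 0.7) -> str:
    answer, confidence = ai_answer(question)
    if answer is not None and confidence >= min_confidence:
        return answer
    # Escalate to humans: this is where new knowledge gets created.
    community_queue.append(question)
    return "No confident answer yet; question posted to the community."

def community_answers(question: str, human_answer: str) -> None:
    """A human answer closes the loop and enriches future retrieval/training."""
    community_queue.remove(question)
    knowledge_base.append({"question": question, "answer": human_answer})

if __name__ == "__main__":
    print(ask("A well-known framework question"))
    print(ask("A novel edge case nobody has documented"))
    community_answers("A novel edge case nobody has documented",
                      "Verified workaround contributed by the community.")
    print(len(knowledge_base), "curated Q&A pairs now available for RAG / training")
```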
DW Let me reinforce your point with a counterexample, actually. We use Stack Overflow within our developer community, and you guys have been fantastic partners. It's really helped our development teams be productive, uplift their knowledge, engage with each other, all that good stuff. We have a customer service system, which is directly facing our customers, and that's filled with years of, “I have this problem,” and then our service folks have solved that problem. We have a Gen AI project on top of that to help the service people get answers to similar problems, and it's not working too well yet because of the quality of the service issues. There are outdated issues in there– something that was a problem is fixed now. It has that uncurated ocean-of-information style, and I have always really applauded Stack Overflow for the notion of curation as a natural course of using the system. And I think this notion of exposing the right information, the curated information, to LLMs is going to be so critical to having these systems take our industries to higher and higher levels. I think some of the thinking that you guys have built into your platforms, and what you just walked through, is really critical to doing that.
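To illustrate the curation step Don highlights, here is a small, hypothetical sketch of filtering stale or superseded service tickets out of a corpus before it is exposed to a RAG index, rather than indexing the ‘uncurated ocean’ wholesale. The ticket fields and the staleness rule are assumptions for illustration only.

```python
from datetime import date

# Hypothetical sketch of curation before RAG indexing: filter stale or
# superseded service tickets out of the corpus instead of indexing everything.
# Field names and the staleness rule are assumptions for illustration only.

TICKETS = [
    {"id": 1, "resolved": date(2019, 3, 1), "superseded": True,
     "text": "Workaround for a bug fixed in a later release."},
    {"id": 2, "resolved": date(2024, 11, 5), "superseded": False,
     "text": "Current steps to configure interoperability endpoints."},
]

def is_indexable(ticket: dict, today: date = date(2025, 1, 1), max_age_years: int = 3) -> bool:
    """Keep only tickets that are still current and not superseded by a fix."""
    if ticket["superseded"]:
        return False
    age_years = (today - ticket["resolved"]).days / 365.25
    return age_years <= max_age_years

curated_corpus = [t["text"] for t in TICKETS if is_indexable(t)]
print(curated_corpus)  # only the still-relevant ticket survives to be indexed
```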
PC Absolutely. Don, thank you for that. I think your usage of Stack Overflow for Teams is exactly why that product was created, and it's sort of the right place at the right time for it in some ways, because it is highly structured knowledge. The tag framework that's so prevalent on the public platform, that's been used for 15 years, can now be used within companies like yours, and that allows you to leverage it in all sorts of ways inside the company, whether that is training your own AI models or augmenting through RAG and indexing, and other sorts of use cases I'm sure you're trialing. On top of that, one thing I did not mention, which has been a huge part of what we've been doing more recently, is that, very interestingly, companies would love access to our public data set inside their companies for their AI needs. We launched a product called Overflow API about nine months ago. We haven't even talked to you about this, but it's for companies who want to leverage exactly this– the public data set for their own internal needs, in addition to having the private Stack Overflow for Teams instance within their company. So that's just an addition to this entire data discussion.
DW I have this analogy that I sometimes use on data. I don't know if either of you play guitar.
PC I do!
DW As soon as you walk into a room, let's say you're going to a party and there's a guitar in the corner. What's the first thing you're going to do when you pick up a guitar?
PC Probably tune it.
DW Exactly. And why? Because there's not really anything you can do with an out of tune guitar that's going to sound good. You can be an awesome guitar player, Jimi Hendrix or whoever, but an out of tune guitar is just not useful. So step one is really to get that tuned and then you can layer on top of that some great playing and songs and that kind of thing, and that's the way I think of data. Step one is really to have a good set of data that you build everything on top of. And if you don't, there's not a lot of places you can go and be successful unless you have that platform really well-honed.
PC That's a great analogy.
BP I think one of the maxims we have here for AI is ‘garbage in, garbage out,’ and so the tuning of the data– what I would call a new type of ETL pipeline– is emerging as a best practice for how this should work for AI. But Don, I want to move on a little bit here. Prashanth and I both checked out your Code to Care series and got a chance to learn a little bit about how you're teaching other folks in the healthcare industry some of your thoughts and best practices. From your perspective as a leader in the healthcare IT industry, what's a misconception about AI that you hear from clients or partners or peers that you'd like to challenge?
DW Well, first of all, I'm glad you liked the Code to Care series. As a small commercial, these are 10-minute videos that explain something AI-related, like what RAG is or what agentic AI is– that one I recorded yesterday– so things like that. Perhaps the biggest misconception is that AI success is about AI. It's really not. It's about workflow and trust and value. There are a lot of people walking around looking for nails with this hammer, and in some cases AI is not a great solution or not that valuable. So I think it's best, like anything in technology, to start with the user, their problems, their opportunities, and work backward from that. And I think it's understandable what we're doing– we're feeling around for what AI can do for us in a lot of different aspects of healthcare– but ultimately, the success is whether it adds value to a user or an organization and whether it's easy to adopt. And those considerations are incredibly important and are being rather overlooked in our love of the technology itself.
PC Don, I totally agree with you that it's not a panacea for all things yet. The adage that comes to mind is that, as is often said, people overestimate the impact of a big technology shift in the near term and underestimate it in the long term. And I really believe that applies here, because without a doubt, AI is going to be a huge tectonic shift. It's one of those platform shifts– we've all been through the three other ones over the past few decades. And so, given the hype cycle, plus the attention, plus the invested dollars, plus the temptation that it's going to drive that much productivity so fast– especially in a world in which we want to do a lot more with less, and it is about producing a lot more, no doubt, as functioning economies– to the earlier point around all these pilots being run, I think the misconception is that even if you see great productivity, you should not immediately assume that applies to your entire organization overnight. There is going to be a lot of foundational work, to your point– a nice and clean data set, a nice and clean knowledge base, knowledge as a service, all the things you need to make sure that you're setting up our end users and ultimately our customers for success and great outcomes. I think people underestimate the effort that's going to go into making sure all of that is set up correctly and how long that's going to take. It's going to take a little while, and there's no doubt pressure in the system to get it done. It should be rooted in, as you said, workflows and real process, and ultimately, as my earlier point suggested, it comes down to people– people trying to do great things with the tools that they have access to. And that's really what I believe is going to happen over the next few years.
BP Great point. Looking ahead, how do you see the engineering organizations within each of your respective companies navigating the challenges of AI adoption?
DW Let's see, I think maybe a challenge for this year will be the plethora of choices that we have within the AI world. I guess it's the problem of a kid in a candy store. Gen AI started with one good company, one good model. We all sort of learned and got enamored with that, and then it got a little bit wider. But now there are a lot of good open source models, there are big models, small models, on-prem, on-device, in-cloud– a lot of different options. And then if you throw in agentic AI workflows, there are different sequences and combinations you can put together, so I think the choice, the decision process, and the adoption process are going to be important and a little bit challenging this year. Part of what we're doing here at InterSystems to try and accommodate that is to put in place a really good accuracy measurement process. When we're using it to summarize a patient chart or document a conversation with a patient, how do we measure the accuracy of these models exactly, and how do we run that in an automated way as new models come out, so we can compare different models, compare different approaches, and make faster choices about what's best and what's next? I just think that there's a diversity now of choices and approaches and companies and that kind of thing; we need to be ready to work through that productively and not have it slow us down.
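A hypothetical sketch of the kind of automated accuracy-measurement harness Don mentions: run a fixed set of charting tasks against each candidate model, score the output against a clinician-approved reference, and compare models as new ones appear. The models and the similarity-based scorer are stand-ins; a production harness would use clinically validated metrics and real model calls.

```python
from difflib import SequenceMatcher

# Hypothetical sketch of an automated accuracy-measurement loop: run a fixed
# set of charting tasks against each candidate model, score the output against
# a clinician-approved reference, and compare models as new ones appear. The
# models and the similarity-based scorer are stand-ins for illustration.

TEST_CASES = [
    {
        "transcript": "Patient reports two days of ear pain, no fever.",
        "reference": "Two days of ear pain; afebrile.",
    },
]

def fake_model_a(transcript: str) -> str:
    return "Two days of ear pain; no fever reported."

def fake_model_b(transcript: str) -> str:
    return "Patient has pain."

def score(candidate: str, reference: str) -> float:
    """Toy similarity score in [0, 1]; a real harness would use stronger metrics."""
    return SequenceMatcher(None, candidate.lower(), reference.lower()).ratio()

def evaluate(models: dict) -> dict:
    results = {}
    for name, model in models.items():
        scores = [score(model(case["transcript"]), case["reference"]) for case in TEST_CASES]
        results[name] = sum(scores) / len(scores)
    return results

if __name__ == "__main__":
    ranked = sorted(evaluate({"model-a": fake_model_a, "model-b": fake_model_b}).items(),
                    key=lambda kv: kv[1], reverse=True)
    for name, avg in ranked:
        print(f"{name}: {avg:.2f}")
```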
BP Prashanth, looking ahead to 2025, what are you thinking about in terms of the engineering organization inside Stack Overflow and along with our customers and partners navigating the challenges of AI adoption?
PC I think it's probably most appropriate for me to reflect on it from a customer lens, only because we serve them. Given all the focus in ‘23 and ‘24 especially on customers trying and investing in these AI tools and ultimately getting a taste of what's possible, I think 2025 is the year of reality on multiple dimensions. Within companies, it's about all the things we talked about on this call: how do you really identify the right workflows, leverage these technologies, and not fall into the hammer-and-nail trap that Don brought up. All of those things apply, and reality sets in, especially since there's pressure in the system to do a lot more with a lot less, because all of these are big investments, much like in any technology wave. So I think that is the crux of it, along with the downstream impacts of making sure you've got your data strategy right and all the things that then have to get shored up in a productive fashion. There's a lot of work that pretty much all companies, at various stages, have put off or just have to reboot or start from scratch, and a lot of that work needs to happen. I'm not surprised that consulting companies are seeing record revenues, because this is the time to bring in help and clean up what has been decades of, call it, process debt and technical debt, especially given that now you have even more choices of technology tools to figure out. Now, that's all from a customer lens. From an AI industry perspective, if I could offer one up, I do think there is going to be a realization that the data that has been vastly available to train models up to 2024 has now been exhausted, because the entire Internet has been used. And we really see this– I see the signal from our own partners who have reached out because they need ongoing mechanisms for new knowledge and new data creation, even in a world of synthetic data, which has its pros and cons. No doubt there's a cost element, but we know about all the downsides of synthetic data, and even in a synthetic data world, you still need human data for that entire data corpus to be useful in the context of all the LLM activities of training and fine-tuning, et cetera. So I really believe that will come to a head. Even though we've seen the compounding effects of more and more powerful chips, cloud compute, and data accessibility letting LLMs keep progressing towards AGI and ASI and all the other acronyms, there's going to be this question of where they are going to get new information for the future. And I think that is going to come to a head this year.
[music plays]
BP All right, everybody. Thanks so much for listening as always. I want to shout out a Stack Overflow community member who came on to our platform and contributed a little knowledge or curiosity. A Populist Badge, awarded to lucasgcb for helping to answer the question: “Is it possible to generate an executable (.exe) of a jupyter-notebook?” Lucas gave an answer that was so good it outscored the accepted answer and earned a Populist Badge, so appreciate it, Lucas. As always, I am Ben Popper, a host of the Stack Overflow Podcast. You can find me on X @BenPopper or shoot us a question and a suggestion, podcast@stackoverflow.com.
DW This is Don Woodlock. Thank you very much for having me on the program. If you have any needs regarding health data interoperability or getting AI going in your organization, please give us a call. We're at intersystems.com. I also have a video series on AI called Code to Care– you can find that on my LinkedIn profile. Great to be here.
PC Well, thank you. This is Prashanth from Stack Overflow. Really appreciate our community who is on this journey with us in an AI context, and it's going to be a really fun one in 2025. You can learn a lot more about all our products and programs including Stack Overflow for Teams and our Overflow API data program that I explained today on stackoverflow.co. So looking forward to meeting many of our community members this upcoming year, and thank you, and thank you, Don, for taking the time with us.
BP All right, everybody. Thanks for listening, and we will talk to you soon.
[outro music plays]