In today’s episode, Ryan sits down with Richard “Spencer” Schaefer, cofounder and CTO of Lunar Analytics and a federal AI officer, and Caroline Zhang, cofounder and CEO of Knowtex, which provides AI-powered voice technology to automate workflows. They talk about safeguarding patient privacy, how AI changes doctor-patient interactions and healthcare delivery, the challenges inherent in rolling out AI technology, and the importance of quality data to fuel AI initiatives. Also included: a chat with Jeff Berkowitz, cofounder and CEO of Delve Deep Learning.
Lunar Analytics specializes in AI-powered claim optimization and healthcare solutions.
Connect with Richard on LinkedIn.
Knowtex allows healthcare providers to automate clinical workflows with generative AI.
Connect with Caroline on LinkedIn.
Delve Deep Learning provides artificial intelligence solutions for public affairs.
Connect with Jeff on LinkedIn.
This episode was recorded at HumanX last month. Next year’s conference will be April 6-9, 2026 in San Francisco. Register today!
[intro music plays]
Ryan Donovan Welcome to the Stack Overflow Podcast, a place to talk all things software and technology. Today we've got two recordings from the HumanX Conference. We've recorded these on the floor, so the audio quality is not quite as pristine as you may be used to from the regular podcast, but hopefully you'll enjoy them and they'll provide you some interesting insights. Please enjoy.
RD We have two great guests today. We're going to be talking to Richard Schaefer, Chief AI Officer at Lunar Analytics, and Caroline Zhang, founder of Knowtex. We're going to be talking about AI in healthcare. Welcome to the show, everyone.
Richard Schaefer Thank you. Great to be here. I'm really excited.
RD At the top of the show, we like to have a little info about our guests and how you got into software and technology.
RS So I actually graduated in 1999 with a Doctor of Pharmacy degree and have spent the last 26 years evolving from early technology and data analytics in healthcare, trying to stay on top of technology while solving problems in healthcare. About three years ago, I was able to really take advantage of the opportunities the federal government offers through partnerships with major industries to grow into an AI role, to the point of running our first multi-agent models about 18 months ago. I think Caroline can talk a lot about this too, but that's how we really began to work well with a lot of the companies out there, just because we have such good experience in that role. Currently I'm in the process of transitioning out of a health and government job. I'm very excited to speak about my experiences with the federal government.
Caroline Zhang So my journey into AI: I'm from San Diego, so I was always interested in healthcare and biotech research. I used to work on biocomputational research and published some major papers there, so I started there. My team is all out of Stanford, with backgrounds and master's degrees in AI. My advisor is actually Professor Mykel Kochenderfer, who leads the Center for AI Safety, so AI safety in healthcare is really what our team is about. I met Dr. Schaefer through the VA Tech Sprint of 2024, which was all about AI, following the former executive order on it. We competed and were a winner in 2024. I'm really excited about all things AI: LLMs, voice, agentic AI.
RD I'm definitely glad to hear that you're focused on the security and privacy of it. I've talked to a bunch of companies doing AI-related healthcare solutions, but I'm always worried about the safety and security of the health data. I've read so much about HIPAA. Do you come from a safety-and-privacy-first background, or are you looking to find the fit of the tool before you think about or apply security and privacy?
RS I think there are some nuances to that. You have to, because the entire architecture of your solution can quickly go awry if you miss certain requirements. Secondarily, it's been really interesting because everything is changing every week right now. There's been such a massive change, even in three or four years, from traditional data storage in the cloud to vector databases and knowledge graphs, so there's been this massive shift. I think people who aren't in a cloud services type of environment are probably finding it pretty hard to maintain those standards. One thing our industry partners are happy to talk about is the complexity of meeting federal standards. It's been really exciting through my work with the National Health Institute as a solution architect, and through that we've been able to start toward the end of the [inaudible], and so we had the opportunity to really have a lot of feedback and say with executive orders. We live by the mantra of trustworthiness. Especially with physicians and patients, you're not going to get anywhere if they don't trust it.
RD Especially at the intersection of health and government, it's such a regulated area. You have to be very careful.
CZ Couldn't agree more. Going through the national VA tech sprint, the federal government has the strictest safety and privacy regulations there are, so I agree with Dr. Schaefer that [inaudible] for healthcare solutions, building in patient privacy and safety, and working with providers to understand exactly how to customize by organization. At the VA level there are a lot of enterprise-grade federal requirements, and then you're thinking through everywhere else, hospitals and individual clinics, to work with their data policies.
RD Now that we've got the bad news out of the way, what's the good news for AI in healthcare? I've seen a lot about AI vision models being very good at identifying tumors and things like that. What area are you tackling in healthcare?
CZ We're tackling a very simple thing: the doctor-patient conversation. Not as much on the vision side, but everything else, human voice, text, all kinds of unstructured data. That's what's really exciting to me about what AI can do today, because this is novel information that we previously relied on human memory, human documentation, typing, and dictation to capture. Those are all prone to human error. With generative AI, I would say we've got a ways to go, but today we can already automatically generate medical notes from the conversation in a structured format, detailed by specialty, with diagnostic and billing codes classified from that data, and get it automatically ready for the next step, which is billing approval for vital procedures and insurance claims. When Dr. Schaefer orders a medication, a clinician has to repeat everything three times. So now replace the orders, replace the billing, and have them just focus on talking to you. That's the power of the conversation, and it feels more human. At HumanX, it's really about AI that makes the healthcare interaction more human, helping on all three of those. That's what we do for clinicians and for healthcare.
RS Thanks. I think that's a really good example. For decades in healthcare, we've been striving to help the patient have a better experience and to prevent our pharmacists, physicians, and nurses from being burnt out. A lot of people talk about the resistance they think physicians and nurses will have to AI. I think people will be surprised, because we're trying to do things in a very thoughtful way, and if we do, there may be more rapid adoption of AI in healthcare than you see outside it, because there's so much [inaudible]. Throughout my entire time as a pharmacy innovator, I've always had one North Star: how do we let our highly trained professionals in healthcare operate at the very top of their sphere? And for 20 years we've tried: we've tried robotics, we've tried clinical decision support tools, we've tried electronic health records, and every time we really fail. At the end of the day, we ended up not making things better. I could probably say more, but we'll leave it at possibly introducing more difficulties than were there before. And it gets to the point where everyone's just like, “Can we just go back to paper charts? It was so much easier.” But we know where that leads. Around 1999/2000, the To Err Is Human report came out and opened everyone's eyes to how much of an issue medication errors are in the hospital. We can't go back there, and that's been the thing that's really difficult. So what I've focused on in the federal government, and as I transition out into the future, is how do we get people back to [inaudible]? One of the big things we've worked on is computer vision, using computer vision to better interpret diagnostic images.
Within the VA, we've been fortunate to have the largest health data set in the world, with almost a hundred petabytes of diagnostic images. We have all the computing power anyone could ever want, but it's the government, so we're working through the bureaucracy of utilizing all that computing power and transferring the money from the health side to the IT side. But I would say that computer vision is a big one, particularly in preventative health. Another process we're working on is supply chain management. That's a big, super exciting project we have going on right now, around how we're going to be able to use AI on seasonality, very granular seasonality changes in drugs. Another project we're leading is an end-to-end intake process that automatically connects an idea and a project to approval through governance committees, and then links automatically into our interoperability processes, so it would automatically link into all the other IT systems, because we have all kinds of organizations. That's exciting. I think that's really going to nail down trustworthiness, and it's also going to let all the different stakeholders in an organization have their unique lens into the models that are running. And if you look at the rest of the field, there's some really cool technology. CalmWave and others are looking at taking all the nasty signals flowing around ICUs, all the beeps and buzzes and sounds, and leveraging AI and agents to create personalized settings. It starts with giving recommendations to nurses to change the monitors, and once we get trust and parameters in place, having them be automatic. Silencing the rooms is something we've been trying to do. We've got a lot going on with videography and computer vision around falls. There are massive technologies out there right now, which is super exciting.
RD It sounds like a lot of this is automating the boring stuff, which is kind of the essence of computer science in general. I think most people think of AI in healthcare and imagine a robot doctor cutting you open at some point or making the decisions.
CZ If that's what people want.
RS If that’s what they want.
CZ And we ask you, would you want that robot doctor [inaudible]?
RD Absolutely not. I think people want to know the intent behind it, and it's very difficult to determine AI intent. I've done note summarization, and it'll often take out the details. It'll say, “You talked about this thing,” but not what we said. So how do you get a model that doesn't leach the details out of it?
CZ I love speaking to this. We've worked on these same problems, because it's not simple, and generic models are absolutely not the answer. It's all about specially trained models, continual reinforcement learning and fine-tuning, and working with human experts in this space. We have special models trained by organization and by clinical specialty: oncology, cardiology, primary care, orthopedics, et cetera. There are a lot of good options out there today, a lot of open source models, but the data set is really important. Speaking of the VA, they have that largest data set ever, and then from the conversation we're actually capturing new data that we utilize to deliver a more exceptional product for the end user. It's garbage in, garbage out, so you've got to have quality data. The models, I think, foundationally get better and better by the day, so it's about who can really bring together that final product and then work with the human evaluators. In our case, that's always going straight to the clinical leaders and clinical end users: any doctor, nurse practitioner, or physician assistant if they're working with medications, pharmacists working with billing, and rev cycle leaders, having them look at the final output, be involved in the initial customization, and occasionally do spot checks. And we've developed our own fine-tuning and evaluation model metrics in-house. That's something I think is an open space: you have to define what it means for a note to have cohesiveness and style, those details. How do you not leave out the important things? We define what's important first, requiring faithfulness to the medical entities being captured, and then going beyond that to understand that, actually, they don't want note bloat either.
RD Do you have to train in the important words in a fine-tuning step or something?
CZ Yes. I would say there is a lexicon, maybe new medications that pharma companies come out with, that in oncology is critical to capture. So yes, but we also want to make sure those models are dynamic. What we're doing now is little mini reinforcement learning and fine-tuning environments, so we can automatically learn new things: new style behaviors, new medication terms that come out.
RD We're at time here, but I want to get a real quick take on the data you have. What's its importance, and how do you make sure it's good data?
RS Yeah. I've always made the statement that healthcare has incredibly good data: incredibly reliable, incredibly factual data. The problem is that no one's ever been able to access that data in a way that is useful, and we've been able to make great strides on that with agentic AI. My name is Richard Schaefer. I'm in the federal government working as a Chief AI Officer in the Department of Veterans Affairs, within the VISN 15 region for healthcare. In my transition, I'm the Chief Technology Officer for Lunar Analytics.
CZ And I'm Caroline Zhang, CEO and founder of Knowtex, where we build voice AI workflows that empower clinicians and healthcare organizations. You can find us at www.knowtex.ai and on all social media: Facebook, Instagram, LinkedIn, and X. We have a newsletter too.
RD Thank you very much, and we'll talk to you next time. And now for our second conversation, a brief chat with Jeff Berkowitz, cofounder and CEO of Delve Deep Learning.
RD Hey, Jeff. I'm curious today: how important is the attribution of sources to your customers in AI?
Jeff Berkowitz I think it's hugely important, especially for our customers. They're all using our platform to stay ahead of and engage with elected officials, regulators, the press, the public, and understand what's happening in the world, and they've got to know where that information came from. They can't just trust a model to say that some development happened. They need to know the source, who said it, is this a source I can use? It's got to be high quality, high trust, and aligned with their mission and interests.
RD There's a legal ramification if the AI hallucinates, right?
JB They can't give false information to the public or to the press. Trust is important for them in those relationships. They're relying on us to make sure they're getting insights that are real and have an impact. They're building their strategy, and the actions they take to advance their company's policy and regulatory interests, based on what we're telling them from the platform, so if that's wrong or it comes from an untrustworthy source, they lose their credibility.
RD And obviously AI is a very fast moving area. How are you thinking about improving your product as the AI ecosystem matures and improves?
JB Great question, because from the beginning we came in with two assumptions. One is that the models are going to keep getting better. Sam Altman said this is the worst it's ever going to be, which I repeat to our customers any time they give us feedback. The other assumption is that the cost of compute is going to go down, which means you can do more and use more advanced models. We built the system to be able to swap models in and out, fine-tune new ones, and see how different ones are working. We've got a series of open source and enterprise models. It's like Sweetgreen: you show up and you're getting a salad, and it's going to be delicious, but that chalkboard is going to list different providers and farmers. It's the same with the way we're approaching the platform. We're going to find the best model providers for each of the different components of the platform and the use cases, and we work with Databricks as a great partner to experiment and fine-tune quickly, and do any of the pre-training, post-training, and other things when they make sense, so that we're always putting the most advanced, effective models in the background for users. They just know the experience is getting better; they don't need to know that we're continually bringing in new models and improving them.
RD Thank you very much. If people want to check out your company more, where can they go?
JB You can go right to delvedeeper.ai. We're early stage, mostly in pre-release with firms, but always looking to talk to more people who are trying to stay ahead of all the craziness and uncertainty in policy and regulation. Particularly with AI, with people tracking AI policy and stakeholders on the platform, it's becoming increasingly important for startups to engage with what's happening and how government is enabling or hindering innovation, and we're proud to be a part of helping them do that.
RD All right, thank you very much.
[music plays]
RD Well, that's it for the show today. Thank you so much for listening, and a question for you: What industries do you think will be impacted by AI the most and will it be positive? If you have an opinion, please email me at podcast@stackoverflow.com, and if you want to reach out to me directly, you can find me on LinkedIn. Thank you very much for listening, everyone, and we'll talk to you next time.
[outro music plays]