The Stack Overflow Podcast

Why call one API when you can use GraphQL to call them all?

Episode Summary

Ryan welcomes Matt DeBergalis, CTO at Apollo GraphQL, to discuss the evolution and future of API orchestration, the benefits of GraphQL in managing API complexity, its seamless integration with AI and modern development stacks, and how it enhances developer experience through better tooling and infrastructure.

Episode Notes

Apollo GraphQL lets you orchestrate APIs with a composable, declarative, self-service model. Apollo's MCP Server is now available.

Connect with Matt on LinkedIn.

Today we’re shouting out a Famous Question badge winner, user jkfe, for their question How to hide/show thymeleaf fields based on controller condition?.

Episode Transcription

[intro music plays]

Ryan Donovan: Hello everyone and welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I am your host, Ryan Donovan, and I'm joined today to talk about API orchestration by Matt DeBergalis, CTO at Apollo GraphQL. So welcome to the show, Matt.

Matt DeBergalis: Thanks, Ryan. Good to be here. 

RD: Top of the show, we'd like to get to know our guests a little bit, find out how they got into software and technology.

MD: Oh gosh. I was always into computers as a kid, but I went to MIT for school and found myself sitting with all the Free Software Foundation folks in the 90s, and that's really the culture that I grew up in. So it's the early internet. It's BSD and Linux. A lot of hacking, a lot of using the internet to connect things together, connect people together, and this ethos of collaboration and open source that I think probably explains a lot about how I think about software and what excites me about it so much.

RD: Yeah. That's awesome. And now you're at Apollo GraphQL, speaking of connecting people together, right? When I get folks on here to talk about APIs, most of the time they talk about REST or maybe RPC. GraphQL has a different sort of paradigm. Is that right?

MD: Yeah, it's really a query language for those APIs. That's what GraphQL is. So the story is you've got lots and lots of these REST APIs and other protocols too. That's microservices and SaaS and cloud and all these trends over the last couple decades. And software, when you build it, at the end of the day has to talk to all that stuff. So the old way of doing that is to do it by hand. You call one API, then you call the next API, and the next API. And like so many other parts of the stack, it's nice when you can do that declaratively. Instead of writing the code for it, you describe what you want. That's what a GraphQL query is, and GraphQL is all about looking at those APIs as a set of objects that are connected together, so that you can write that query and it has all the benefits that you get from other declarative technologies, where you don't have to write the procedural code. And what you build is a component that you can combine in different ways to build new experiences.
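
To make that concrete, here's a minimal sketch of the declarative idea in TypeScript with Apollo Client: one query names the connected data you want, and the infrastructure decides which underlying APIs to call and in what order. The endpoint and the schema here (product, reviews, inventory) are hypothetical, not any real API.

```typescript
// A minimal sketch: instead of calling a product service, then a review service,
// then an inventory service by hand, you write one query that names the connected
// data you want. The schema (Product, reviews, inventory) and endpoint are hypothetical.
import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

const client = new ApolloClient({
  uri: "https://example.com/graphql", // placeholder endpoint
  cache: new InMemoryCache(),
});

// One declarative description of the result we want...
const PRODUCT_PAGE = gql`
  query ProductPage($id: ID!) {
    product(id: $id) {
      name
      price
      reviews(limit: 3) {
        rating
        excerpt
      }
      inventory {
        inStock
      }
    }
  }
`;

// ...and the infrastructure works out which underlying APIs to call to satisfy it.
client.query({ query: PRODUCT_PAGE, variables: { id: "42" } }).then(({ data }) => {
  console.log(data.product.name, data.product.inventory.inStock);
});
```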

RD: It's interesting, that declarative way of calling APIs, because APIs are blowing up, especially as AI agents turn everything into an API. What do you think the future of all APIs will be? More of a declarative future instead of calling them one by one?

MD: Yeah, part of it's just about the number of them. If you think about what's inside software, in the simple times where you called one or two APIs, it was no big deal. APIs are just a way of asking another piece of software for a result, right? Really simple idea. But when you've gotta talk to 10 or 20 or 50 of 'em, and I think that's now what's typical if you're building anything interesting, there's all this interplay between them. You gotta call them in a really specific order. You gotta handle errors and throttling and back pressure, and you want to take the output of one and feed it into another. And if you just add all that up, that's the kind of intricate work that I think accounts for a lot of the time that we spend when we're writing apps, and we hardly even think about it, because calling APIs hasn't changed in 30, 40 years. But that's what it is. And it's like all the other parts of the stack. If you look at React, that's a great example of another declarative replacement for the old way of doing it. You know, ten years ago people wrote all this jQuery and you had all this crazy spaghetti code to update your screen. And what React says is, no, no, no, no, no. You write a description of what you want the screen to look like, that's a component, and then there's a piece of machinery that's gonna make sure the screen reflects the configuration you need. And if you just look at the cloud native stack, really the whole software development stack, pin by pin, they're all falling down and getting replaced with some kind of a declarative approach. And you have to have that just because of how much software we're writing, and developers want great tools now, and I don't see why APIs should be any different.
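
For the React analogy, here's a tiny, hypothetical component showing the declarative approach he's describing: you declare what the screen should look like for a given state and let React reconcile the DOM, rather than mutating it by hand, jQuery-style. Names here are made up for the example.

```typescript
// A tiny illustration of the React analogy: rather than imperatively finding DOM
// nodes and mutating them (the jQuery-era approach), you declare what the screen
// should look like for a given state and React makes the DOM match it.
import React, { useState } from "react";

function Counter(): JSX.Element {
  const [count, setCount] = useState(0);
  // No manual DOM updates: the returned description is the desired screen state.
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default Counter;
```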

RD: I've definitely seen orchestration go that way with containers, like you said, that sort of declarativeness. How does that declarative machinery work for APIs?

MD: So there's a query language and a query planner. If you look at the way GraphQL works, we call it GraphQL because you describe these APIs you've got as a graph. It's just a fancy way of saying the APIs are returning objects, and those objects have connections between them. If you think about a retail website, you've got microservices probably for your product catalog, for your shopping cart, for your product reviews, your inventory, your pricing system, all these things. You can imagine the sorts of objects that are contained in there, and you can imagine the connections between them. So in Apollo, when you write that GraphQL query, just like in a database, the query gets turned into an execution plan, which is the infrastructure figuring out, okay, which APIs do I call, in exactly what order? How do I chain them together? And it can get pretty complicated. It's really common on the web, for example, that you want to render a page as quickly as possible, and developers go to great lengths to lower that time to interaction. So often what you end up with is this pattern where you want to return a bunch of data, but you don't wanna block the screen on that. And that's a gnarly thing to do by hand, but with GraphQL, you get it for free. You just mark part of the query that you wrote as deferrable, as we call it. The system sends that part to you later, so just what you need right away to render the screen comes first. Or you mentioned AI before. You know, agents are more of a streaming experience, right? We see the text come word by word. And so what you want to do in a lot of cases now is have the agent talk to your APIs. You wanna do stuff that's useful, right? That's the useful stuff, it's your APIs. But those APIs weren't built to be asynchronous or streaming. They're just traditional REST APIs. And so the orchestration layer can do the task of turning that into the streaming experience that's a better fit for AI. It's another example of what we mean by all the intricate code you're writing.
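
Here's a sketch of the "mark part of the query as deferrable" pattern, using the @defer directive on the same hypothetical retail schema as above: the critical fields come back first so the page can render, and the deferred fragment streams in later.

```typescript
// A sketch of deferring the non-critical part of a query so first paint isn't
// blocked. The fields (product, reviews) are from the same hypothetical schema.
import { gql } from "@apollo/client";

const PRODUCT_WITH_DEFERRED_REVIEWS = gql`
  query ProductPage($id: ID!) {
    product(id: $id) {
      name
      price          # arrives in the first response, so the page can render fast
      ... @defer {
        reviews(limit: 10) {   # streamed in later, without blocking first paint
          rating
          body
        }
      }
    }
  }
`;
```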

RD: Describing the APIs and then letting the system figure it out, that's a really interesting paradigm, especially since I think a lot of services are just light CRUD apps on top of a database. When you describe 'em, are you just describing individual data pieces, and can it call the specific APIs based on those data pieces?

MD: Yeah, it's such a visual language too. It's really pleasing to be able to see the graph as you build it. And the way the experience works in Apollo is each time you bring an API to your graph, you can imagine it's adding more objects, it's making some of the objects you already have richer. This is not a one-to-one map from each API endpoint to each object. It's a lot more complex than that. Your APIs might return information about more than one object. You've got lots of APIs that all return information about the same object, so they mesh together. It's such a vibrant way of understanding what you've got. The thing we hear so often from developers that are trying to build software is, I don't even know where all my APIs are. I don't know which one I'm supposed to use. And we've got some great stuff out there to catalog them and document them, and you've got OpenAPI to give them some structure. But none of it really gets, I think, at the real challenge when you're building something cool, which is: I know what I want. Why do I have to spend all my time wrestling with these really low-level endpoints? And that's the big mindset shift that comes when you move to the graph.
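
Here's a rough illustration, in the style of Apollo Federation, of how more than one API can contribute fields to the same object: two hypothetical subgraphs both describe a Product, keyed by id, and they mesh together in the graph. The subgraphs and field names are invented for the example, and a real federated schema would carry some additional boilerplate.

```typescript
// A sketch of "lots of APIs all return information about the same object, so they
// mesh together": two hypothetical subgraphs each contribute fields to one Product
// type, keyed by id, in the style of Apollo Federation's @key directive.
import { gql } from "@apollo/client";

// Subgraph owned by the catalog team
const catalogTypeDefs = gql`
  type Product @key(fields: "id") {
    id: ID!
    name: String!
    price: Int!
  }
`;

// Subgraph owned by the reviews team, enriching the same Product
const reviewsTypeDefs = gql`
  type Product @key(fields: "id") {
    id: ID!
    averageRating: Float
    reviews: [Review!]!
  }

  type Review {
    rating: Int!
    body: String
  }
`;
```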

RD: Yeah. As somebody who spent a lot of time documenting APIs, internal and external, just figuring out who did what was always a challenge. Do you think this will change how developers write APIs?

MD: I'm sure it will. I think the thing that I've learned, though, is we find the graph is especially valuable in big companies where you've got a lot of these APIs. I mean, it makes sense, right? A graph is a network, and there's a network effect. So the bigger the graph, the more interesting it gets. I was just looking at one of the companies that's gone in this direction, the New York Times, and you know, it's not just a newspaper. They've got games, they've got Wordle, they've got recipes, they've got The Athletic. So they've got over a thousand objects in their graph, if you just think about all the APIs, and those weren't built in any sort of consistent way because they've been built over decades. But across all of them, there's such a rich language of all the concepts, all the entities in their business. And I think the interesting thing is APIs aren't going anywhere. So we've got all this amazing opportunity to build great software. I mean, everybody's racing to build an agentic experience now. So you wanna be able to talk to an agent, you want that agent to go call the APIs for you. We can dream about all the amazing changes that are coming to APIs, but I've come to really look at this the other way, which is how do we bring all that stuff along? 'Cause that's what's valuable in a company at some level. The APIs are the company's capabilities, right? They're the digital building blocks, and I think the big prize is how do we forklift all that to a modern world that's built around AI, that's built around amazing customer experiences, that lets you interact with the products the way that you want, lets you build software faster. And what's really great about GraphQL, I think, is how adept it turns out to be in the messy world we live in today. You don't have to have OpenAPI specs. You don't have to have written all your APIs in the same language. That's just a non-starter if you're trying to roll this stuff out. And it makes for a really exciting opportunity. You know, if you think about AI, the hot thing now is MCP, and MCP is really just a way of saying, hey, I want to be able to build these, we call 'em MCP tools, right? These capabilities that the AI knows how to use. And it's so exciting if you just think about how well a graph and an AI fit together, because the graph is semantic, the AI can walk across it and understand it. It's self-documenting, it's structured, right? You get this consistent result every time you ask for something as a query. It feels like a really great foundation, as we all race to build more and more capable agentic experiences, to have a piece of infrastructure behind it that can make the bridge between them.

RD: A lot of interesting stuff there I'd like to react to. You brought up agentic AI, and it seems like an agent doesn't even care if there's an API. Like it shouldn't have to worry about it -

MD: Kinda - 

RD: Kinda?

MD: Here's the thing. It's true that an AI can kind of handle whatever you throw at it. We've all had this experience, right? Like, summarize this document. Or if you give it an API that isn't very well defined, AIs are getting pretty good at figuring out what's going on and how to call it. But what you're not gonna get is consistency and precision. If you imagine building an agentic, I don't know, an interface to a bank, right? You probably want everybody to get what they ask for: 'show me my recent transactions'. You probably don't want the first guy to get back five and the next guy to get back eight and the third guy to get back a different view. There are actual rules and regulations behind a lot of this stuff, in fact. So it's not about AI just magically solving all the problems. I think a big part of the puzzle is how do you orchestrate the AI the right way to meet all the other requirements that a real business or piece of software is gonna have. And that's where I think the bridge between the two is so critical to get right.

RD: Yeah. And you mentioned MCP, the Model Context Protocol. A VC I was talking to on a podcast recently said MCP looks like GraphQL. Do you think they are similar in scope and design? Do you think they do similar things, or -

MD: I think it's like peanut butter and jelly. I'm wondering what they were getting at there. I mean, I think the common thread is they're semantic. An MCP tool describes what it does. It doesn't force you to use it in a particular way. The idea is that's the AI's job, right? So the AI has this catalog of tools that you give it, and it's gonna figure out, based on the context and the conversation that it's having, which tools it wants to use. I do think, again, it's just another example of how declarative is gonna win the day across the whole stack. It's the only way to fly if you're doing anything at scale. And the idea that your tool can be a different capability in your graph is a really compelling one.

RD: Yeah. I could see with the AI agents, you don't want them trying to figure out every single API they need to call, and then maybe stitching all that information together, just to complete your task. Right?

MD: Yeah. The models are also a lot better if you give them the relevant data, and one of the facts that we have to wrestle with is these APIs that we've got are, by design, general, so they return a lot of stuff. You know, one of the things people talked about with GraphQL, in the early days especially, was that it's got this benefit of less data on the wire. If I'm calling a REST API, I might get back kilobytes of data. That's no good if I'm building a mobile app; that just costs me network bandwidth and battery. So GraphQL lets me say I just need these three things. I wanna put a list of my products on the screen with the price and whether there's one available for shipping today. That's all I need. And that applies well to AI, is what I've found. Because if you give the AI too much extraneous stuff, it gets a little confused. It takes things maybe in the wrong direction. And one of the values that you get by using a tool that's designed around a specific view into the graph is you're giving the AI exactly what it needs to see, so that it can turn around and make the most appropriate subsequent call to another tool, or return the value back to the user.
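
As a concrete example of "I just need these three things", here's a hypothetical query that selects only the fields Matt lists, so nothing else comes over the wire or into a model's context. The field names are illustrative.

```typescript
// The "just the three things I need" point, as a query: instead of a REST endpoint
// returning every field of every product, the client (or an AI tool built on the
// graph) asks only for name, price, and same-day availability.
import { gql } from "@apollo/client";

const PRODUCT_LIST = gql`
  query ProductList {
    products(first: 20) {
      name
      price
      availableToday   # nothing else comes over the wire or into the model's context
    }
  }
`;
```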

RD: Yeah. I think, to sort of reframe a question from earlier, would those AI agents benefit from having APIs that return less information per call, so the graph can just kind of pick them up and put them together as it needs them?

MD: I think we're probably just now starting to learn what, you know, I'll call it AI orchestration, right? You've got your AI, you need to connect it to the rest of your stack. Just a fancy way of saying it's gonna call a bunch of APIs, right? But we like big words in this industry sometimes. If you asked that question a year ago, people would've said it's RAG. I don't think it's RAG anymore. Right now, if you ask the question, people talk about MCP tools, but MCP raises as many questions as it answers. There's no story there for how you're gonna manage policy, or what the AI is or isn't allowed to do, or enforce consistency, or all kinds of things like that. I think we still have a lot to learn. Yeah, I do think it's gonna make sense. I think you're gonna get much better results, and tokens aren't cheap, right? So probably better economics if you can limit and focus the information coming back to the AI. And I think that's a great fit for microservices, right? You're gonna have a lot of API calls and they're gonna return specific objects. But we're pretty excited, 'cause it just feels like there's a lot of room to explore, and I suspect over the coming months we're gonna start to refine what the AI stack really ought to look like. I think the graph's part of that story.

RD: I've talked to a couple folks looking at graph-based technologies to access data, access services. I think the sort of abstraction that AI agents almost require is like, we gotta figure out how to give them everything without them calling the things that they don't need. Right?

MD: Yeah. I think that's spot on. 

RD: You mentioned the API concerns, like the traffic, the back pressure. Does a graph-based API management system handle that as well?

MD: Yeah, that's what Apollo's all about. I mean, the analogy to SQL, I think, is helpful, right? The SQL language explains here's what you're allowed to ask a database for, and the database, the actual implementation, is where you get all of these benefits. And the same's true with Apollo, right? I touched on a few of them, but the rabbit hole goes pretty deep on this stuff. If you think about how long it takes to ship stuff that feels like it ought to be simple, I think a lot of times when you crack open the box and try to figure out what took so long, I mentioned the story before about how do you get the page to render quickly, and it's one of those things where it seems like it ought to be something a couple engineers could ship in a day, but you find yourself the proud owner of a message bus and a couple new microservices. And there's good reasons for all this stuff. All that machinery is the orchestration machinery that they're building by hand. I was talking to somebody recently in the retail world. Here's another example I think that brings this to life. So, a table-stakes retail experience: you want to know, when you're looking at a product on the screen, how quickly it's gonna come to your house. I don't know, I have half-heartedly gone through checkout flows before just because I want to know and it's not on the screen, right? And you're like, I don't wanna buy it. I just wanna get far enough along to see if you can get it to me by the weekend. And if you think about all the API calls underneath that, you've gotta figure out what warehouses the product's in. Then you've gotta call your shipping partners to figure out the cutoff times, and you're gonna get back some shipping prices from them. Then you gotta call your loyalty API to figure out, like, are you a frequent buyer or a first-time user. And now you gotta make a business decision. Do I wanna optimize for my customer experience here and get them the product really quickly, or do I wanna optimize, I mean, I'm in retail, my margins are low, so do I wanna optimize my cost? And I mean, there's like 10 API calls inside that to put 10 characters on the screen: arrives Friday. Right.
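
A hedged sketch of what that "arrives Friday" question can look like from the client's side: one field in a hypothetical schema, with the warehouse, carrier, and loyalty calls orchestrated behind it. None of these field names come from a real API.

```typescript
// The client asks one question; the orchestration layer fans out to the
// hypothetical inventory, shipping, and loyalty services behind it.
import { gql } from "@apollo/client";

const DELIVERY_ESTIMATE = gql`
  query DeliveryEstimate($productId: ID!, $zip: String!) {
    product(id: $productId) {
      name
      deliveryEstimate(zip: $zip) {   # resolved by chaining warehouse, carrier,
        arrivesBy                     # and loyalty lookups server-side
        shippingCost
      }
    }
  }
`;
```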

RD: Right. And that's all wrapped up in a sort of automated policy, right? 

MD: Yeah. And we all know the stat. Every millisecond of latency in a retail experience costs you money. You have to implement all this or you're gonna lose your customers. You can't add any latency to your rendering budget or you're gonna lose your revenue. I mean, it's a tough spot, and that's the sort of puzzle, I think, whether we're talking about an AI or we're talking just about bread-and-butter mobile development, app development. It's all the same, really. It's about customers are gonna pick the stuff that's best, that's just a great experience that gets them what they expect. There's just such pressure to ship that. Developers want great tools that let them not have to build message buses every day, I think is the punchline. So yeah, the system should do it for you. And that's the value of infrastructure and a declarative architecture. It's the same story as Kubernetes, right? Nobody's scheduling their container processes on virtual machines by hand anymore. They have software for it. Yeah.

RD: Right. You don't wanna have to monitor them. Yeah. We talked about the latency, and another of the big API concerns is reliability, especially for e-commerce. Is that a more complicated API process when there's a graph involved?

MD: Well, here's another example where it helps so much. So in GraphQL, there's this idea of, we call it nullability. So you can write a query and some of the query comes back blank. If you think about a lot of software experiences, some of the API calls have to work, and if they failed for some reason, either you're gonna get a 500 page or there's gonna be a retry behind the scenes. Sometimes, though, hey, if I'm looking at a product catalog and the recommendation service didn't return data quickly enough, maybe I just don't put some recommended products on the screen that time. No big deal, right? And it's another one of those things where, with a query language, this comes for free. You just write the query so that those things can come back nullable, and if they're null, then we're not gonna draw the React component. By the way, if you're using GraphQL, there's no client-side code to write at all. You just staple that query to your React component tree, and React and Apollo work together to do all the data fetching and management. But if you're doing it by hand with API calls, think about all the logic you've gotta write to say, yeah, if this API call didn't return, then maybe I'll put this thing over here, I'll do this other thing. And it's little things like that that just add up over and over again.
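
Here's a minimal sketch of that nullability pattern with Apollo Client and React, assuming a hypothetical schema where recommendations is a nullable field: if that service is slow or down, the query still succeeds and the component simply skips the section.

```typescript
// `recommendations` is nullable in the hypothetical schema, so the query can
// succeed even when that service fails, and the component just renders without
// that section. useQuery handles the fetching; no hand-written orchestration here.
import React from "react";
import { gql, useQuery } from "@apollo/client";

const PRODUCT_PAGE = gql`
  query ProductPage($id: ID!) {
    product(id: $id) {
      name
      price
      recommendations {   # nullable: may come back null without failing the query
        name
      }
    }
  }
`;

function ProductPage({ id }: { id: string }): JSX.Element | null {
  const { data, loading } = useQuery(PRODUCT_PAGE, { variables: { id } });
  if (loading || !data) return null;

  return (
    <div>
      <h1>{data.product.name}</h1>
      <p>{data.product.price}</p>
      {/* If recommendations came back null, just don't draw that component */}
      {data.product.recommendations && (
        <ul>
          {data.product.recommendations.map((r: { name: string }) => (
            <li key={r.name}>{r.name}</li>
          ))}
        </ul>
      )}
    </div>
  );
}
```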

RD: Yeah. Well, it's interesting you talked about React, getting the component stuff for free, and you talked about it being a declarative language. Is there deeper integration where you can just sort of set up a GraphQL call and tie it to an entire component structure? Or is there a little more glue involved? 

MD: It's pretty turnkey. GraphQL and React both came out of the same origin at Facebook, so there's a common thread there in terms of what this is all about: how do you build a great, consistent user experience across a large engineering org? And GraphQL isn't the only 'let's query the APIs' idea that's been out there, but I think the reason it won is that it's got great ergonomics. Like, if you just look at the actual keystrokes you type, it's so pleasant. And one of the things an app dev will say they love about GraphQL is it's typed, it's strong typing. So you get support in your editor, as you're building a component, when the type of the query doesn't match what the component's expecting. This is especially valuable for mobile development, because when you write a query, you can actually get a Swift type, or, I don't know Java as well, but the same idea in Kotlin for Android. It's just night and day, that kind of stuff. So the editor integration and the tooling support that you get once you're in a structured environment is so, so much better than the handwritten alternative.

RD: Is that typing done at the sort of description level? 'cause I don't remember APIs coming back typed. 

MD: Yeah, that's what the graph does. So we call it a schema: this set of objects that we're representing, that we're modeling in GraphQL, is typed, right? So you define types for things like user or product or what have you. And then there are some fundamental types for scalars. You've got, you know, strings, integers, and so on and so forth. And so that whole thing flows through, and when I write a query, at build time I can tell exactly what object types are gonna come back from that query. That lets me generate, I can make stubs for, you know, Swift types, for example, and now in my Xcode project, I've got a strongly typed system from end to end. And just think of all these bugs that come up all the time from, like, oh, I forgot to check whether that thing could have been null or not null. No, that's baked into the type definition now in the graph. So it all comes for free. And your compiler, you know, takes care of you, the way it should be.
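
To illustrate the end-to-end typing in TypeScript terms (the Swift and Kotlin points work the same way), here's a hand-written stand-in for the kind of types a codegen tool emits from the schema; the shapes are illustrative, not actual generated output.

```typescript
// A sketch of what "strongly typed from end to end" buys you on the client. In
// practice a codegen tool emits types like these from the schema; these shapes
// are hand-written stand-ins to show the idea.
interface ProductPageQuery {
  product: {
    name: string;
    price: number; // assumed to be cents for this sketch
    inventory: { inStock: boolean } | null; // nullability is part of the type
  } | null;
}

function renderPrice(data: ProductPageQuery): string {
  // The compiler forces us to handle the null cases the schema allows...
  if (!data.product) return "Product not found";
  // ...and would reject a typo like data.product.prcie at build time.
  return `$${(data.product.price / 100).toFixed(2)}`;
}
```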

RD: Yeah, a lot of the early web was built on sort of untyped stuff like JavaScript, and then it's slowly gotten things like TypeScript to enforce that, 'cause you know, everybody gets those errors. You talked about the ergonomics, which I love that phrase to talk about software development tools. What sort of work have y'all done to enforce ergonomics? Or promote ergonomics?

MD: Yeah. I think one way to look at it is: what stuff really spreads? And GraphQL spread. Like, it's hard. Here's a quick aside: it's one thing to have a new database or a new UI layer, because that's a single-team consideration. But when you think about API technology, it's different, because the whole idea is at least two teams have to agree on a new kind of API before they can use it, right? And so how has GraphQL spread so far, so fast? I think it comes back to that question. We can talk about the technical benefits, we can talk about the business benefits to a larger company that is under this pressure I was getting at before. But I think a lot of this stuff comes down to developers have taste, and we've got a choice in what we choose to use. It has to work, it has to be valuable. But I think there's this other thing where it's just, it's gotta feel good. It's gotta have that delight, that joy. Life's short. I wanna work with stuff I like. I wanna work with people I like. I don't think these decisions get made in a spreadsheet, so we try to think about that every day. I do think some of this is just inherent in how declarative stuff works and how GraphQL works. There's this amazing, kind of magical experience when you first start your API, right? You can just try different queries, see different results come back. There's an amazing experience when you pair it with LLMs, because the LLMs are so good at navigating a graph and so good at reading and writing structured data like a schema or a query. So now you've got this experience where the AI is really helping me. I'm all for Copilot and Cursor and having AI help write code, but it's sort of a weird gift at the end of the day, right? Like, now you're the proud owner of a million lines of AI-written code that nobody's looked at. We'll see how that one plays out. But when the AI is helping you write queries, it's a different thing, because that feels like a really durable object you can build on -

RD: Yeah, you give it good guardrails -

MD: Exactly right. Yeah. And you know, it just goes from there. Like, one of the things we're really focused on at Apollo is just how do we make it really easy to bring the APIs you have into this graph form? And that was kind of the knock on GraphQL in the early days, that it's amazing once you can query your APIs, but there's some fussy stuff you have to do to wire it all up. We've made a lot of progress moving toward a simpler, more declarative way of hooking that all up. That's another amazing experience where, you know, five minutes and now I can query any API that I'm interested in. So, you know, you just try to keep front and center how important it is for people to smile when they're using this stuff. And I think that accounts for a lot of it.
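
As a very rough sketch of that declarative wiring, loosely modeled on the idea behind Apollo's REST connectors: you annotate the schema with where the data comes from instead of writing resolver code. The directive names and arguments here are illustrative, not exact.

```typescript
// A rough, illustrative sketch of declaratively mapping a REST endpoint into the
// graph: the schema says where the data comes from and how the JSON maps to fields.
// Treat the directive names and arguments as hypothetical, not a real API surface.
import { gql } from "@apollo/client";

const typeDefs = gql`
  type Query {
    product(id: ID!): Product
      @connect(
        source: "catalog"                       # a named REST source
        http: { GET: "/products/{$args.id}" }   # the endpoint to call
        selection: "id name price"              # how the JSON maps to the graph
      )
  }

  type Product {
    id: ID!
    name: String!
    price: Int!
  }
`;
```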

RD: Well, thank you very much, everyone, for listening. It's that time of the show where we shout out somebody who came onto Stack Overflow, dropped a little knowledge, shared some curiosity, and earned a badge. Today we're shouting out a Famous Question badge, a celebrity. This badge was awarded to jkfe for asking how to hide/show Thymeleaf fields based on controller condition. If you're curious about that, so were a lot of people. I'm Ryan Donovan. I edit the blog and host the podcast here at Stack Overflow. If you liked what you heard, leave us a rating or a review. If you wanna suggest topics, email us at podcast@stackoverflow.com, and if you wanna reach out to me directly, you can find me on LinkedIn.

MD: I'm Matt DeBergalis, I'm CTO at Apollo GraphQL, and you can find us on the web at Apollo Dev. 

RD: All right. Thank you very much for listening, everyone, and we'll talk to you next time.