Marco Palladino, CTO and cofounder of Kong, maker of a cloud-native API gateway, talks with Ryan about the complexities of multi-cloud Kubernetes architecture, how AI has the potential to improve infrastructure management, and how Kong’s large action model will reshape the future of API platforms.
Kong is a cloud-native API gateway. Find them on GitHub.
We last spoke with Marco in 2023.
Connect with Marco on LinkedIn.
Congrats to Famous Question badge winner mjbradford7 on How to re-render one component from another in React.
[intro music plays]
Ben Popper Supercharge your AI development with Intel's Edge AI resources at intel.com/EdgeAI. Access open source code snippets and guides for popular models like YOLOv8 and PaDiM. Visit intel.com/EdgeAI for seamless deployment.
Ryan Donovan Hello, everyone, and welcome to the Stack Overflow Podcast, a place to talk about all things software and technology. I'm Ryan Donovan, Editor of the blog here at Stack Overflow, and I'm joined today by Marco Palladino, CTO at Kong. We're going to talk about the complexities of multi-cloud Kubernetes architecture, running APIs from that, and also how AI may be able to improve your infrastructure management. Hi, Marco. Welcome to the podcast.
Marco Palladino Hi, everyone. Hi, Ryan.
RD So Marco, I think last time I spoke to you it was for a Q&A so we didn't do the whole origin story thing, so can you tell people how you got into software and technology and how you got to where you are today?
MP Sure. It's a long journey. I actually started coding as a self-taught developer when I was 13 years old back in Italy. I now live in the United States but I grew up in Milano, and I remember when I was a little kid, my dad brought me into this electronics store. There were a bunch of computers on the left, a bunch of gaming consoles on the right, and he asked me, “Which one would you like to purchase– the computer or the gaming console?” I don't know why I chose the computer, maybe it was bigger– I was younger, a little kid– but maybe that was one of the best decisions of my life. So I started coding very young. I met my co-founder in Italy as well, and we were developers, so we were seeing the power of APIs very early on when people were asking us, “What is an API?” And we went through a few pivots, but then eventually we came up with Kong, and Kong eventually became our main business over time. So now we provide API infrastructure, as you know.
RD And obviously you've been at this for a while and infrastructure has gotten a lot more complex. With infrastructure as code, multi-cloud systems, and cloud native, what sorts of new problems arise with these complex infrastructures?
MP Well, I think that organizations are trying to make their developers productive. That means being able to work in the environments that have the services developers need in order to be productive and deliver great experiences to our customers. Unfortunately, that also means there's going to be some level of fragmentation across all the environments that the developers in our organization use, and the larger the organization, the more fragmentation there is going to be, which introduces complexity in how we manage that infrastructure. At the same time, for sure, we want the developers to make the right technology decisions to build their products, but we also want to make sure that as they do that, they stay focused on building the applications, working with the customers, working with the users. We don't want them to build infrastructure. So there is a bigger task these days, even more so than back in the day, for infrastructure software in general, not just in the API space, to be able to support these different varieties of software. So being portable, being lightweight, being able to distribute the infrastructure across multiple platforms certainly is a requirement for any modern infrastructure out there, whether it's in APIs or AI or pretty much any other type of infrastructure you can think of. Eventually the goal is to make sure that developers do not build infrastructure. The goal is to make sure that developers use infrastructure to build products. And I think these nuances sometimes get lost in translation, but it's very important to always keep ourselves focused on giving them the best infrastructure, the best tooling for the job so they can focus on making the best use of their time.
RD So without this tooling to build up infrastructure, what are the sorts of chores and nuisance work that developers would end up doing?
MP Well, some of the complexity that's being introduced is the fact that our software may run, for example, on different physical data centers. Obviously, it's going to run in the cloud. In certain instances, it will run in a multi-cloud capacity. It's harder to find a single team that runs one application multi-cloud, although that happens when they want failover and have very stringent uptime requirements. It's more likely to find a platform team that needs to support different teams, each one of them on a different platform, and sometimes that happens organically. The organization makes an acquisition, so even if the organization today, let's say, is standardizing on AWS, they make an acquisition and the acquired company may be on Azure. And so whether they like it or not, there is going to be multi-cloud infrastructure at some point in the journey. So providing the ability to install, secure, and govern the infrastructure software the same way across all these different environments becomes key to reducing complexity. And look, when I work with the developers but also with the platform owners and the C-level executives that own the platform, the CIOs and the CTOs, the key word is always reducing complexity. The more complexity we have, the harder it is to move fast. And obviously different platforms introduce complexity. We cannot manage infrastructure in n different ways on the n different platforms that we're running. There must be a good technology that allows us to standardize. And I think, by the way, this is why Docker containers got very popular back in the day: even beyond the orchestration of the containers themselves, they're a unified way to package and distribute software across any platform we may be running. And then Kubernetes became very popular because it abstracts away the underlying complexity of the infrastructure, so we can deploy those containers the same way across multiple different clouds, for example. And likewise, API infrastructure, which is becoming more and more important, also needs to be managed in a portable way; otherwise, it's a huge amount of complexity we have to deal with.
RD Yeah, containers seem to make servers into cattle and not pets. It's a quick, repeatable, disposable server, so you can just launch it and destroy it as necessary. And as you said, it's getting more complex across cloud providers, so walk us through what happens when you call an API that's different with multi-cloud. Is there more complex routing, more complex traffic shaping, do you have multiple proxies, what's going on?
MP Every developer in every team in the world right now is an API developer, because our software is fundamentally driven by APIs. So when it comes to APIs, obviously part of the job of a developer is to build the API and manage the requests and responses, and then there is the security of the API, the observability of the API, the traffic control of the API, the governance that goes into that API, the tiering of that API, the documentation, and so on and so forth. Those are all required components of a successful API in production, and that is what I consider to be the infrastructure of the APIs. It is not the business logic of the API itself, which can be written in any language developers may want to choose, but everything around it, the infrastructure that supports that API in production. It also allows us to think of an API as a product with a product lifecycle. APIs used to be created overnight, and then new versions of those APIs, and over time that created a huge problem where no one knows what the APIs are anymore and there are too many APIs to maintain and manage. So being able to treat an API as a product with a lifecycle, to version them, to release them, to decommission them, to migrate the users to the new version, all of that is also what I consider part of the infrastructure. Now, if we are developing APIs in Kubernetes and we are developing APIs on virtual machines in one cloud, another cloud, we are most likely not going to want to replicate this infrastructure and all of these behaviors we want to enforce in a different way based on the platform that we're using. We want to be able to enforce it once, whether the API runs in containers, in Kubernetes, in AWS, GCP, Azure, or wherever else it's going to run. Not doing so is going to cause an incredible amount of stress to the developers that are implementing the APIs, the platform teams that are going to be managing the APIs, and eventually the leadership that doesn't fully understand why the organization is not shipping as fast as it should. It is because they are spending time managing complexity by setting up infrastructure across different platforms, each platform in a different way, and that is a huge amount of wasted resources. This infrastructure is critical and essential for the developers to be successful building new products, because every product is now powered by APIs, and therefore it is critical to the success of the company itself. APIs are the new internet. 85% of the world's internet traffic runs on APIs. When I grew up, I knew an internet made of blogs, of emails, of websites, and that internet got replaced in front of our eyes with API traffic. Now it's all mobile, it's all digital, and AI is also driven by APIs. The more AI, the more API usage there is in the world. So when we are talking about API infrastructure specifically, we're quite frankly talking about the infrastructure that powers our digital world, so it becomes quite important to streamline it.
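To make the "enforce it once" idea concrete, here is a minimal Python sketch of registering an upstream API with a gateway's admin API and attaching cross-cutting policy to it in one place, no matter where the upstream actually runs. It assumes a Kong-style Admin API reachable at localhost:8001; the service name, upstream URL, and plugin settings are illustrative, not a prescribed setup.

```python
# Minimal sketch: declare one API and its policy once at the gateway layer.
# Assumes a Kong-style Admin API at ADMIN_URL; names and values are examples.
import requests

ADMIN_URL = "http://localhost:8001"  # hypothetical Admin API endpoint

# Register the upstream API once; it could live in EKS, AKS, a VM, anywhere.
requests.post(f"{ADMIN_URL}/services", json={
    "name": "orders-api",
    "url": "https://orders.internal.example.com",  # placeholder upstream
}).raise_for_status()

# Expose it under a route.
requests.post(f"{ADMIN_URL}/services/orders-api/routes", json={
    "name": "orders-route",
    "paths": ["/orders"],
}).raise_for_status()

# Attach cross-cutting policy (auth, rate limiting) as plugins, once,
# instead of re-implementing it per platform or per cloud.
for plugin in (
    {"name": "key-auth"},
    {"name": "rate-limiting", "config": {"minute": 60}},
):
    requests.post(f"{ADMIN_URL}/services/orders-api/plugins", json=plugin).raise_for_status()
```

The same declaration can then be applied to every environment the platform team supports, which is the portability Marco is describing.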
RD I read something yesterday that said all these companies that scaled up fast and hired a lot of people, maybe 70% of those engineers were working on infrastructure. Is there a way to shortcut getting your infrastructure built and running, so those people can just build the business logic and work on that?
MP There are many people in the infrastructure layer because obviously everybody recognizes how important this layer is. Of course this is the typical ‘build versus buy,’ and I am a technology vendor myself, being the CTO of Kong, so I'm also biased, but if there is a way to refocus our engineering resources on creating competitive advantage in our products and in our organization, then we should do it and reallocate our resources to do that. If there is anything else that can be abstracted away and leveraged as a primitive for the teams to get a head start, then we should probably buy it, because the success of a retailer may not necessarily be driven by the amount of time the engineers spend building API infrastructure, but primarily by the customer experience they're building on their websites or mobile apps. The success of an airline is going to be driven by the customer experience when people buy and purchase. And so can we refocus engineering resources on the core business that will make the business better from a competitive standpoint? Of course, that's a no-brainer. Everybody thinks that. I don't think I'm saying anything controversial. I think sometimes organizations get caught up in momentum that's very hard to change, and at one point you end up in a situation where things are done in a certain way because that's how things are done here, and it takes a strong leadership stance to recognize the inefficiency and then be able to reallocate resources. As a matter of fact, the more I work with enterprise organizations, the more I am convinced that it is almost less of a technology problem. The technology is there, it's driving everything we do, but there is a huge people component to it and a huge leadership component to it that is sometimes lacking. And look, whatever we do with the technology, technology is not going to replace a lack of leadership. So I think there are two different points of view on how an organization should think about reallocating resources.
RD I think it's the people problem. Whenever we talk about software engineering, a lot of people want to talk about the tech, but there's Conway's law, where you end up shipping your org chart. I've heard people talk about microservices being more of a team organization thing than a tech organization thing. Do you think that's accurate? Do you think the organization of the company is reflected in the way the technology is produced and shipped?
MP Well, if we believe that the decision-making process of an organization is driven by the people structure of that organization, which is always true, then the technology decision-making is also driven by the organizational structure. So 100%, I do agree with that. The real leadership test is being able to recognize where the inefficiencies are and then being able to show a vision and a path forward for the rest of the organization, one that basically shows them a world that could be but is not yet, and that world is a world that needs to be built. And by the way, in every organization that vision may be different and that aspirational playbook may be different, but there must certainly be that component. You see, one of my favorite questions to ask the technology leaders of the top Fortune 500 and top Global 2000 companies we work with is, “What is the vision of this organization from a technology standpoint in the next five years? What is the API vision of your infrastructure in the next five years? Where do you want to be five years from now, in such a way that we can work backwards from there and determine today what the right decisions are to get there?” And as you can imagine, surprisingly, more often than not, the answer to this question is a complete mess, in the sense that there hasn't been much thinking around what the organization is going to look like many years from now, so how do we know what decisions we have to make today to get there? And I know this is an overused example, especially in the API space, but Amazon was not the first e-commerce company in the world, yet it was the first one with a very strong vision of how its architecture and APIs should work, such that Amazon, despite not being the first e-commerce company in the world, was the one where AWS became a reality. So having a vision and having the leadership strength to enforce it, which in the case of Amazon came from the CEO himself, so it can't get better than that, really can unlock an incredible amount of potential in the organization. Technology is not going to replace a lack of leadership, at the very least until leadership decisions are made by AI or an AI-like organism. For as long as we work with humans, humans are always going to be in the execution path of every technology implementation, whether it's microservices or not.
RD I want to give you a chance to talk your book a little bit. I know when you all reached out, it was about the new release of the dedicated cloud gateway. You were talking about abstracting away infrastructure and thinking about the next five years for APIs. What are the features you're releasing now that support greater abstraction and the next five years of APIs?
MP You're right. We did make a big announcement for our dedicated cloud gateway, which I think is a very unique product in the world right now. Essentially, when you look at SaaS offerings, the cloud vendors, the hyperscalers, tend to release a SaaS version of pretty much every software available in the world, but the problem is that, for the most part, they tend to be cloud specific, of course. They're driving business to themselves. And for the most part, they tend to have a strategy that's very broad in the market. What they want is to cater to the majority of users on their platforms, and as soon as there is an organization that requires a little bit more than that, that's usually when they start coming up short from a feature set standpoint. We are already very successful with the hyperscalers and our customers in deploying our Kong technology across the clouds, whether it's the mesh, the ingress controller, the API gateway, and now the AI gateway. And customers kept asking us, “Is there an easier way? Could there be an easier way to do this?” And of course, the easier way is to deploy the whole infra in one click by providing ourselves a fully managed solution that runs in the cloud. Now, the unique part of our offering is that it runs in the cloud with the full feature set of Kong, which is very vast, but it can also run simultaneously on more than one cloud. So we're developing a multi-cloud offering that allows us to manage API infrastructure the same way, simultaneously, on AWS, Azure, and GCP. We have released our first cloud, which is AWS; Azure is coming in the next quarter, and GCP before the end of the year. So essentially, between now and the end of the year, the organizations we work with are going to have the opportunity to run the same underlying API infra, the same governance policies, the same capabilities everywhere. We're basically turning the clouds into primitives of API infra, in a way.
RD That's interesting. You brought up AI, and I know some of our listeners are sick of it, but I think it's still an interesting thing to cover. You talked about infrastructure; what are the ways that AI will change how infrastructure is managed and provisioned?
MP Well, at least in our business, which is the API business, we're seeing that AI is driving a huge amount of API traffic. APIs grow in the world based on the number of digital use cases developers are building in their products. So the more mobile applications, the more APIs. The more microservices, the more APIs. The more AI, the more APIs, because whether we use AI, we train AI, or we have AI interact with our systems, it is always an API call that drives one of these behaviors. And so the more AI usage, the more API usage there's going to be. Of course, I'm speaking specifically about Gen AI, which is the latest trend. Organizations may tell you that they've been doing AI for 15 years; obviously Gen AI is the new wave I'm referring to. And so when it comes to AI, there is a big dilemma right now in organizations. Everybody wants to use Gen AI, and yet everybody is afraid that developers are going to build something they shouldn't, or send data they shouldn't to these LLMs, which may be self-hosted but, more likely than not, are going to be in the cloud. It's a very interesting situation where, despite wanting to leverage Gen AI to create better product experiences, they don't really have the infrastructure in place to enable that AI consumption across the teams. Well, obviously this is in our ballpark, and so we announced an AI gateway that allows the organization to do three things. First, it allows the developers to be productive by using one API endpoint to consume as many LLM technologies as they want. So they build once and they can shift between OpenAI, Cohere, Mistral, and a few others we support. Then it allows them to enforce governance and compliance on the AI traffic the same way they would on API traffic. And then it allows the organization to capture metrics and observability that are AI specific, so the providers and models being used and the tokens being consumed, and then determine if there are opportunities to reduce spend, because these cloud LLMs are very expensive to use at scale. And so there's lots of orchestration happening right now between self-hosted and cloud LLMs, as well as orchestration for preventing hallucinations and things like that. So obviously it is a very interesting space right now, and I know it's an overused and maybe misused term, and everybody's a little bit tired of hearing about it, but make no mistake, it's real. It's out there and I'm seeing it in every organization I'm working with. The risk of not having proper AI infrastructure is that the organization is going to be blind, and they're never going to be comfortable leveraging Gen AI across their applications. Back in the day when the internet was born, I was talking to someone and they were telling me that some organizations were banning the browser because they didn't fully understand how to grant access to the internet in a legitimate way from the organization. Imagine the disaster that would have caused: banning the internet. You'd miss an entire new generation of software and customers you could have targeted. So we don't want that to happen with AI. We shouldn't ban AI because we don't have the right playbook or the right infrastructure for it. We should think about what the right playbook is, what the right technology is, and deploy it so that we can harness AI as much as we can for the products we're building.
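As a rough illustration of the "one endpoint, many LLMs" pattern Marco describes, here is a minimal Python sketch of an application calling a single gateway route with an OpenAI-style chat payload. The gateway URL, credential header, and normalized response shape are assumptions for the example; which provider actually answers (OpenAI, Cohere, Mistral, or a self-hosted model) is a matter of gateway configuration, not application code.

```python
# Minimal sketch: the app always calls one gateway route; the gateway decides
# which LLM provider serves the request and records tokens, cost, and policy.
# URL, header, and response shape below are illustrative assumptions.
import requests

GATEWAY_URL = "https://api.internal.example.com/ai/chat"  # hypothetical gateway route

def ask(prompt: str) -> str:
    resp = requests.post(
        GATEWAY_URL,
        headers={"apikey": "team-credential"},  # gateway-issued credential, not a provider key
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    # Assuming the gateway normalizes responses to an OpenAI-style shape.
    return resp.json()["choices"][0]["message"]["content"]

print(ask("Summarize yesterday's failed deployments in two sentences."))
```

Because every AI call passes through the gateway, governance, provider switching, and token-level observability live in one place rather than in every application.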
RD So for the future, what are the next five years? What are the API infrastructure problems that are going to be keeping you up at night?
MP I am very excited about this idea. I'm a little bit obsessed with this whole API world, so I've been spending lots and lots of days and nights thinking about the evolution of the API platform. And I am very excited about what's coming next, and I'll tell you why. We started to build APIs back in the day because we wanted to extract data and services from silos. Without an API, all we have is silos. APIs allow this data to be extracted and exposed. This is the emergence of API platforms as we know them today. We can build on top of those APIs, and our company, our product, becomes a platform. With Gen AI and the introduction of LLMs, we are now enriching our API platforms with AI in order to provide a better experience on what is fundamentally the same architecture we built 10 years ago. So we have an API platform, and it gets smarter, so to speak. The evolution of an AI-enriched platform is an entirely AI-driven API platform. What does that mean? It means that instead of building on top of the individual APIs we are developing on top of our products, those APIs are still there, but they become primitives of a higher-level abstraction that allows us to harness the intelligence of the organization by orchestrating business logic across every API we've built. It's almost like a large action model that sits on top of all the APIs we have, one that allows us to describe what it is that we want to do, and that large action model will then orchestrate the right API calls to drive toward that outcome. So essentially, APIs are becoming to this large action model what our data is to an LLM. They become a primitive that can be harnessed from one interface, from one entry point. Can we at Kong, or in the industry, drive the evolution of large action models such that we can create this evolution? If we do, we're going to forever change how API platforms are built, and this is going to be the next generation of API platforms. The reason I'm excited about this, Ryan, is that once I started thinking about it, I couldn't unthink it. Once I started thinking about this new way of building platforms, it was just this explosion of creativity, and I could not get it out of my head anymore. So this is one of the things we're working on at Kong, to see if we can provide a Kong large action model that will shape how API platforms are built in the future.
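As a toy sketch of the large action model idea, and not Kong's implementation, the snippet below treats existing APIs as registered "actions" and a planner, stubbed out here where a model would sit, sequences them to reach a described goal. All names and data are hypothetical.

```python
# Toy sketch: existing APIs become primitives (actions) that a planner,
# in the vision above a large action model, orchestrates toward a goal.
from typing import Callable

ACTIONS: dict[str, Callable[..., dict]] = {}

def action(name: str):
    """Register an existing API operation as a primitive the planner can use."""
    def register(fn: Callable[..., dict]) -> Callable[..., dict]:
        ACTIONS[name] = fn
        return fn
    return register

@action("lookup_customer")
def lookup_customer(email: str) -> dict:
    return {"customer_id": "c-123", "email": email}  # stand-in for a real API call

@action("open_refund")
def open_refund(customer_id: str, order_id: str) -> dict:
    return {"refund_id": "r-456", "status": "pending"}  # stand-in for a real API call

def plan(goal: str) -> list[tuple[str, dict]]:
    # Placeholder planner: a real system would have a model translate the
    # stated goal into this sequence of API calls and their arguments.
    return [
        ("lookup_customer", {"email": "jane@example.com"}),
        ("open_refund", {"customer_id": "c-123", "order_id": "o-789"}),
    ]

def run(goal: str) -> list[dict]:
    return [ACTIONS[name](**args) for name, args in plan(goal)]

print(run("Refund Jane's latest order"))
```

The point of the sketch is the shape of the abstraction: the APIs stay exactly as they are, and the new layer only decides which ones to call and in what order.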
RD That sounds like a hopeful and ambitious note to end on.
[music plays]
RD All right, everyone. As we do at the end of every episode, we're going to shout out a question. Somebody came on Stack Overflow and asked a great question. Today, we're shouting out a Famous Question Badge, a celebrity question, “How to re-render one component from another in React.” Congratulations to mjbradford7 for asking that. It has been viewed by 10,000 people. My name is Ryan Donovan. I'm the Editor of the blog here at Stack Overflow. You can find it at stackoverflow.blog. If you liked what you heard today, please like and subscribe. It really helps. And if you want to reach out to me and suggest something on the podcast, you can contact us at podcast@stackoverflow.com.
MP I am Marco Palladino, the CTO and co-founder of Kong. I can be found online @subnetmarco. Of course, Kong is available at konghq.com. I'm very reachable, I'm very approachable, so if there is any follow-up or any question on any of the topics I have described, if you want to have a chat, I'll be very happy to do so. Well, thank you so much for the opportunity, Ryan.
RD All right, everybody. Thank you very much, and we'll see you next time.
[outro music plays]