This week we chat with Kamakshi Narayan, Director of Product Management at SnapLogic, who is focused on how APIs can apply fine-grained controls for privacy and governance to the LLM-powered AI apps vacuuming up our data.
You can find Narayan on LinkedIn.
Learn more about SnapLogic here.
Congrats to our user of the week, Ethan Heilman, for earning a Great Question badge by showing some curiosity and asking: How do I deal with garbage collection logs in Java?
This question has been viewed over 175,000 times and helped lots of folks gain some new knowledge :)
[intro music plays]
Ben Popper If you’re building AI apps with popular models like YOLOv8 and PaDiM, you’ll want to visit intel.com/edge for open source code snippets and helpful guides. Speed up development time and make sure your apps deploy seamlessly where you need them most. Go to intel.com/edgeai.
[music plays]
BP Hello, everybody. Welcome back to the Stack Overflow Podcast, a place to talk all things software and technology. I'm your host, Ben Popper, Director of Content here at Stack Overflow, joined by the illustrious editor of our blog and manager of our newsletter, Ryan Donovan. Ryan–
Ryan Donovan Hello.
BP So our guest today is Kamakshi Narayan, better known as Kams, the Director of Product Management at SnapLogic, and we're going to be talking about a bunch of things, product management among them, but thinking about what's going on in the world of software and development, especially as generative AI is all the rage and many organizations are thinking about how to weave it into their tech stack. So we're going to focus a little bit on API management and the role that plays in developing new AI technology, data governance, which is something we've discussed a bunch on this show when it comes to generative AI, and how the modernization of this legacy tech impacts data, because people are thinking about new ways to use vector databases or embeddings or RAG to play around with the data that they have, which ultimately is the fuel for the generative AI models and the output they produce. So without further ado, Kams, welcome to the Stack Overflow Podcast.
Kamakshi Narayan Glad to be here.
BP So for folks who are listening, tell them a little bit about yourself. How did you get into the world of software and technology, and how'd you find yourself at the role you're in today?
KN I always thought when I started my career that I would have a career in finance, or at least that was my intended plan. My father owned a small business and I used to pretend I was sitting behind his desk running the business, looking at his ledger. I was fascinated with the ledger and the cash machine and just generally the idea of how to run a business. I kept up with this fascination and got my bachelor's in finance when I was in college. I had a lot of friends who took a different course, more in computer science, so we used to have a lot of conversations and I got a little bit interested in that, so I wanted to dabble in that space too. I took some private classes and learned some fundamental computer skills while I was still pursuing my college degree. And it was at this institution where I was learning programming that there was a career fair. I landed an internship at a software services company, and by coincidence that company also built software for large banks. So it was a nice twist that I got to do something that I wanted to do, which was get into the finance field, but I also had a chance to do what I liked and what I got good at, which was application development. That led from one thing to the other and my career started to unfold. I stayed in FinTech for a long time, building software applications for a lot of large banks, and then there was this point in my career where I worked at a startup. Everything everywhere happens all at once. I started moving from being more of a software developer into other things around stakeholder management. I was talking to the CEO, I was planning roadmaps for the engineering manager, I was interfacing with the customers, I was interfacing with partners. And guess what? There is a title for someone who does all of this, which is called a product manager, so that made me shift into product management. And there’s a quote from Leonardo da Vinci that says, “Once you've tasted the sky, you will forever start to look up.” So for me, there was no going back after that, and I stayed on in my product management career. The career progression happened from product manager to senior product manager to group product manager, but one thing I have always strived for is to try different things. So within product management I started trying different verticals, moving into different industries, getting a flavor for what other types of businesses are like. That's where I am right now here at SnapLogic as a Director of Product, and I own our API management solution, which is an integral part of our integration platform today. A big part of everything I do is about APIs. They've become ubiquitous and a quintessential part of the technology products and services that companies are creating today.
RD So we did a podcast the other day with the CTO of Kong, and he said that generative AI problems are basically API problems. Would you agree with that?
KN Absolutely. I think this is the current state of where things are right now. As a result of AI, the number of applications and APIs is just going to grow exponentially. We have all of these things that we call coding agents and Copilots, and that's going to democratize application development and API development, getting it more into the hands of users who can create things on demand and on the fly. But I think every API in the future is going to have some interface with an AI capability, and that's going to start necessitating advancements in security, in connectivity models, and in how you look at all of this from a higher point of view. And I strongly believe API management, because this topic is so close to home for me right now, is going to provide a structured way to expose and access all of these AI models and services, and it's also going to enable developers and all the stakeholders who use these models to handle them responsibly as they incorporate them into the applications and workflows they create. OpenAI started out as an API-based service back in the day, and everyone was so enamored with it. You were trying to sign up and get access and you would sometimes get the message, “Hey, sorry. Wait, you're in line. The lines are busy. What's going on out here?” And that is a problem once these systems start getting bigger and you need to scale them and think about how performant they are, and all the other aspects that go along with managing a service like this. API management is the perfect solution to solve for this. There's a piece that does the orchestration of how you connect to an external service, and controlling and managing that traffic I think is going to be very easy with an API management platform. One thing I would also stress, something a lot of companies are still a little wary of and hesitant about in the adoption of AI, is the aspect of security and data privacy. What are users sharing with these AI models? What systems do they get access into? Those are top of mind concerns for a lot of technology decision makers who still control to what extent AI penetrates an enterprise as it's used in development technologies and things like that. And API management really does come in to solve these challenges. It provides the feature sets that can enable security, governance, and all those aspects.
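To make the traffic-control point concrete, here is a minimal sketch of a token-bucket rate limiter fronting a model endpoint, the kind of policy an API management layer might enforce. The TokenBucket class, gateway_handler, and call_llm placeholder are illustrative assumptions, not SnapLogic's or OpenAI's actual API.

```python
# Hypothetical sketch: a token-bucket rate limiter sitting in front of an
# LLM endpoint, the kind of traffic control an API management layer applies.
# call_llm() is an illustrative placeholder, not a real model API.
import time


class TokenBucket:
    """Allow up to `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


def call_llm(prompt: str) -> str:
    # Placeholder for the actual model call behind the gateway.
    return f"model response to: {prompt}"


bucket = TokenBucket(rate=2, capacity=5)  # 2 requests/second, burst of 5

def gateway_handler(prompt: str) -> str:
    if not bucket.allow():
        # Mirrors the "you're in line, the lines are busy" experience:
        # the gateway sheds load instead of letting the backend fall over.
        return "429 Too Many Requests: please retry shortly"
    return call_llm(prompt)


if __name__ == "__main__":
    for i in range(8):
        print(gateway_handler(f"question {i}"))
```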
BP So let's walk through a hypothetical example. You have a client who comes to you, let's say in the world of finance, since that's something that you always had an interest in. They want to create a chatbot that shares information with their employees when they have questions, or maybe even customer-facing. A customer can ask questions and get responses back about their own portfolio. And then like you said, they want to be very careful. They don't want a customer to ask a question and learn about somebody else's portfolio or get somebody else's financial information, whatever it may be. So the service is set up so that the data lives in one place and the model lives in another. Maybe the inference is done on the side of the provider, maybe the inference is done in house. Where would API management come in? How would a company like yours arrange and orchestrate things in order to provide an experience where you can work with a best-in-class model? Maybe you can even fine-tune it on your data before you work with it. And then when your clients or your employees do work with it, they get results that are accurate, there's not a lot of latency, and they know that the system isn't going to be emitting any PII or proprietary data in those outputs.
KN Great question. There are several layers to this, I think. When you asked where API management comes into the picture, I would say a lot of it is decided on a case by case basis: what the company is, how stringent they want to get with restricting access, how open it is, and who the user is, are they internal, are they external? Those kinds of things need to be taken into account when you're factoring in access restrictions. But just generally, I would say, you need a good policy around the users. First, you need to identify who the users are, like I said, internal and external, and to what level they can go to access that data. API management platforms like ours provide a lot of security policies around how data is transmitted. So you could have data transmitted with encryption rules and security algorithms built in, so that confidential information going back and forth is tightly secured within a secure channel. And when it comes to what systems and data users get access to, you can profile the users and set different layers of access. Say you have a chatbot built for an enterprise. If I am the CEO of the company and I'm trying to use this chatbot for some day-to-day work, as the CEO I can have access to the financial records of the company, shareholder information, other kinds of things a CEO is generally allowed to see. But the same information is not relevant or applicable to someone who's working in support, and if they try to query that kind of information, with API management and a governance layer you could say, “Okay, this is a person who works in the customer service department trying to query financial records, and that's not information that can be provided to them.” So like I said, it really has so much power in how granular you can get with the sort of data that can be accessed and shared. It is a use case by use case sort of model, but the tools and the capabilities are definitely there to set those perimeters and boundaries.
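For a concrete picture of the role-based governance check Narayan describes, here is a minimal sketch, assuming a gateway that classifies each query into a data category and checks it against a role-to-permission map. The roles, categories, and keyword classifier are hypothetical stand-ins, not SnapLogic's actual policy engine.

```python
# Hypothetical sketch of a role-based governance check at the gateway:
# a request is only forwarded if the caller's role is entitled to the
# data category it touches. Roles and categories are illustrative.

ROLE_PERMISSIONS = {
    "ceo": {"financial_records", "shareholder_info", "customer_accounts"},
    "support": {"customer_accounts"},
}

def classify_request(query: str) -> str:
    # A real gateway might use metadata, tags, or an ML classifier here;
    # simple keyword matching stands in for that.
    q = query.lower()
    if "shareholder" in q:
        return "shareholder_info"
    if "financial" in q or "revenue" in q:
        return "financial_records"
    return "customer_accounts"

def authorize(role: str, query: str) -> bool:
    category = classify_request(query)
    return category in ROLE_PERMISSIONS.get(role, set())

# A support agent asking for financial records is blocked before the
# query ever reaches the model or the underlying data source.
print(authorize("support", "Show me last quarter's financial records"))  # False
print(authorize("ceo", "Show me last quarter's financial records"))      # True
```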
RD It sounds like a lot of the classic security issues apply to AI, like good user access controls that you're talking about. Are there complexities that Gen AI brings that aren't part of standard API management?
KN There are some areas of future potential for how API management systems can evolve. A lot of the day-to-day work with generative AI is around prompts– how do you actually screen prompts, what sorts of questions are being asked, how do you get better at enforcing checks on some of the information that is being shared? That is an area that I think will come more into focus in the future. Guardrails exist for access to systems and how the data is queried, but as we get more into user behavior around what they are doing with these prompts, I definitely see more opportunity for API management platforms to home in and develop stronger security and other mechanisms around that.
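Here is a minimal sketch of what prompt screening at the gateway could look like, assuming a simple regex pass for sensitive patterns before the prompt is forwarded to a model. The patterns and redaction policy are illustrative only; a production guardrail would be far more thorough.

```python
# Hypothetical sketch of prompt screening: before a prompt is forwarded to a
# model, the gateway scans it for patterns that look like sensitive data.
# The patterns and the redaction policy are illustrative, not exhaustive.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return a redacted prompt plus a list of the pattern names that matched."""
    findings = []
    redacted = prompt
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(name)
            redacted = pattern.sub(f"[REDACTED {name.upper()}]", redacted)
    return redacted, findings

prompt = "My SSN is 123-45-6789, can you check my account status?"
clean, hits = screen_prompt(prompt)
print(clean)   # My SSN is [REDACTED SSN], can you check my account status?
print(hits)    # ['ssn']
```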
BP And you had mentioned before this concept of a citizen developer, where you feel like Gen AI will empower a lot of people to make requests and then maybe build their first product, the MVP, or do something in marketing that no longer requires engineering resources. But to me, that also feels like an area where you have to be careful. If somebody's building a piece of software that they're then going to utilize, do they understand the organization's security practices or what a bug or memory leak looks like? They may not be as familiar with that. They're just asking the API for something and getting back something that appears to work, at least on the surface.
KN For sure. I think for me, the citizen developer model emerged before AI came along. Citizen developers have been a concept for a while. What AI has done is add momentum, so more companies are starting to embrace this more widely now. For instance, SnapLogic is a low code/no code platform, so we provide the ability for anyone to create these kinds of complex workflow integrations with AI-powered capabilities. This becomes highly sophisticated in enabling citizen developers to build these applications thanks to features like drag and drop components, suggestion capabilities, templates, what have you. And then there's NLP. There are NLP capabilities now in a lot of platforms and applications, and that makes it much easier for a citizen developer to have a conversational way of accomplishing their development tasks. That has really simplified the way someone approaches a particular problem and how they seek guidance on the work and the tasks they have to do on a daily basis. I also have to mention chatbots, like the custom GPT models that companies are creating with large language models. What they have helped with is bringing a context-aware scenario into the development process, so now they know, “Well, who are you? What is your role? Okay, you are in sales. This is sort of what you do day in and day out. You're looking at leads, you need information about customers.” Those are things that have become much easier with the citizen developer model and AI. And for a developer, I also have to mention, because Ben said this, testing and automation. This can really help in the product development process and ease up the whole cycle involved. There are AI platforms that can analyze the code, detect vulnerabilities, identify bugs much earlier than when you have actually shipped the code, and suggest optimizations and what you can do to improve some of these things. So really a lot of things on how you can reduce the time involved with testing and debugging. And I have to make a shout out to Stack Overflow here because you recently launched OverflowAI, which is terrific. Everyone uses Stack Overflow, and the wealth of knowledge that the Stack Overflow community has can really help engineers do debugging, troubleshooting, and code quality control so much more. There's just so much potential out there.
RD We always like a good Stack Overflow shout out here. So like you said, the citizen developer model isn't new, but I think the complaints about citizen developers aren't new either. It's always like, “Oh, Dave from finance has this janky code that I have to debug now.” That's the complaint I hear from developers: that they're going to have to fix a lot of buggy code. How do you think we can make that easier? How can we address those complaints?
KN So there are some of those repetitive tasks that developers deal with. Improving code quality, I think, is a really good place where we can use generative AI models to address some of these issues and produce higher quality code. The other area that I would say generative AI is really good at, and a lot of developers struggle with this, is having to deal with documentation. No one wants to write documentation. No one wants to spend that extra time once they're done developing code.
RD I do. I do.
KN Once they develop the code, they're like, “Everything is done.” So automating some of those more mundane tasks that are in the job requirements of developers is another area where I think it really helps.
BP I've talked to folks who are saying, “Look, I still want to write the code. I want to be in the driver's seat. I want to understand what the AI is doing, but if it wants to write the documentation and code comments, and if it wants to write the unit tests when I'm done, God bless it.” It's not really taking away a job, it's just taking away the annoying stuff. Thinking back to your comment on citizen developers though, here's the one thing I would say, and what presents an opportunity for a company like yours and why it's so challenging. Let's say we were doing citizen development 3-4 years ago when I was at Stack Overflow. We would talk to people and it was all about, “We're going to give you these Lego blocks and you can put them together in XYZ way, so you don't have to really understand how it works, but you basically have these drag and drop options,” whereas generative AI is nondeterministic. You ask it to write code for an app, you don't know what's going to come out. You could ask it to write an app and it’ll write it differently every time you ask. So in that sense, as you said, there's more need for guardrails– this API management where maybe a fix looks something like a chain of thought or chain of command. “Go write me some code and then give it back,” and that request goes to the API, and your security policies on one side inform it what it can do, and once the code is emitted, before it goes back to the user, it runs through a series of tests, and then the user sees a sanitized, pre-checked version that isn't going to cause any headaches.
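As a rough illustration of the pre-checked flow Ben sketches, here is a minimal example of gating generated code before it reaches the user, assuming two simple checks: a syntax parse and a banned-import policy. The check_generated_code function and the policy list are hypothetical; a real pipeline would layer on tests, linters, and security scanners.

```python
# Hypothetical sketch of gating AI-generated code before it is returned:
# the gateway runs a couple of automated checks and only passes code that
# clears them. The checks here are illustrative, not a complete policy.
import ast

BANNED_IMPORTS = {"os", "subprocess"}  # example policy: no shell access

def check_generated_code(source: str) -> list[str]:
    """Return a list of problems; an empty list means the code passes the gates."""
    problems = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            if name.split(".")[0] in BANNED_IMPORTS:
                problems.append(f"banned import: {name}")
    return problems

generated = "import subprocess\nsubprocess.run(['rm', '-rf', '/tmp/demo'])\n"
issues = check_generated_code(generated)
print(issues)  # ['banned import: subprocess']
```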
KN One thing I would also say is that with generative AI, because we are all in the tech industry, I think it provides a great opportunity to upskill and, to some extent, also reskill. It's going to teach you to adapt to ways of being more productive in the work that you do, and a lot of it has to do with personalization. The way that one person writes code is different from the way another person writes code, but you're throwing all of this into the model and now it's trying to pull, I would say, the best of everything and present you an option. The human factor here is the critical reasoning skill that developers bring in, or that a business owner brings in under the citizen developer model, interpreting that output in context like you said, Ben. I think that's still going to be very important. There are those sorts of checks: it can produce code, but you still want to make sure that what it's producing is up to snuff and meets the use cases you are actually using it for. So there's a lot of interaction going on. It's not that something is going to take over and your work is done without any intervention. I think there's still that element involved as well.
BP I hear you on the upskilling part. I think we've reached sort of a new phase in the world of technology when it comes to software developers and their expectations around employment. I'm sure lots of people are upskilling and reskilling, but it can also be challenging. I spoke with someone recently who said, “Well, I've been a software developer, but for the last 10 years, if I ever changed jobs it was of my own volition. When I wanted to move up, I always had recruiters in my inbox and I would make a choice. Now I have to relearn how to do a technical interview. I haven't done one in 10 years,” and it's intimidating for folks like that. So hopefully there are things available online or at low cost where people can learn these skills and present themselves to a workforce that's changing pretty dynamically. All right, we're coming to the end of the show. Kams, anything we didn't hit on that you feel is important to touch on?
KN I think one of the things that you had asked about earlier was data governance and how API management tools or platforms can really put these kinds of challenges to rest. When you look at data governance, there are a lot of different layers to how you set it up. There's data quality and what's happening with the data that you're going to be using; those areas don't really fall into the purview of what an API management system does. But on the data privacy and compliance aspects, there's a lot that can be done around, like I said, access controls, data handling procedures, and how you provide visibility into who's accessing the data. So we have a lot of tools available on the monitoring and analytics side, so you know which user is trying to access which system, what data they're getting out of it, and what other systems this data is touching and operating on. Those things are very much within what an API management system provides, but data governance is an entire topic on its own. I don't want to cram it into the end of this interview. It is something that our company, SnapLogic, provides various features and capabilities for, but it borders on a lot of different products being able to provide that sort of function for enterprises and companies.
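For the visibility piece, here is a minimal sketch of the kind of audit record a gateway could emit for every request, assuming hypothetical field names; real API management platforms expose this through their monitoring and analytics tooling rather than an in-memory list.

```python
# Hypothetical sketch of the monitoring/analytics side: every request through
# the gateway gets an audit record of who accessed what, which is the raw
# material for the visibility described above. Field names are illustrative.
import json
import time

AUDIT_LOG: list[dict] = []

def record_access(user: str, role: str, system: str, data_category: str, allowed: bool) -> None:
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "user": user,
        "role": role,
        "system": system,
        "data_category": data_category,
        "allowed": allowed,
    })

# Log a couple of example requests flowing through the gateway.
record_access("alice", "ceo", "finance-api", "financial_records", True)
record_access("bob", "support", "finance-api", "financial_records", False)

# Downstream analytics can answer "who tried to access what, and was it allowed?"
print(json.dumps(AUDIT_LOG, indent=2))
```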
RD We've definitely been thinking about data and how it relates to Gen AI a lot here. It's sort of the foundation. People have said for years that data is the new oil, and now they're realizing the extra value in it, so how can they protect it but also use it in Gen AI? Are there specific security measures that you have to put in place to make sure that your data doesn't leak?
KN Like I said, I think the whole aspect of data privacy and data security has got to be about how you build the whole system around it. Who owns the data, who's accessing the data, those are responsibilities that fall on the data owners. But when it comes to data usage and what kind of information is being retrieved, you have to have not just procedures for accessing the data, but also risk management strategies. What happens when there's a potential misuse of data? What is the company's policy for reacting to any misuse or breaches of data? And for me, because I worked in the FinTech world, the whole data compliance aspect is very, very important, and there were a lot of projects I worked on where this was crucial very early on, when you are actually building a product. So there's the whole aspect of data compliance, like customer-sensitive data and PII data: how long can you retain it, and how do you remove it from a system completely if needed? We had GDPR regulations and other things, which I lived through in my past role, and all of those go into data management and the data lifecycle. Like I said, it's a big topic on its own. I don't want to keep pointing to different areas here, but it is quite important. And I think with Gen AI, it becomes highly crucial for us to get the data management and data governance part of an organization in order. That means you have built a good data governance system so that as we're building on top of these newer technologies, all the necessary frameworks, processes, and policies are in place to meet the needs of the future.
[music plays]
BP All right, everybody. It is that time of the show. We want to shout out someone who came on Stack Overflow and shared a little knowledge or curiosity and helped folks out there doing their coding get the solution they need. A Great Question Badge goes to Ethan Heilman for asking, “How do I deal with Java garbage collection log messages? They're too verbose.” If you've ever had questions about this and need an answer, we've got one for you. You may be one of the 175,000 people who have been helped by the info on this question. So we appreciate it and congrats on your badge, Ethan. As always, I'm Ben Popper. I'm the Director of Content here at Stack Overflow. Find me on X @BenPopper. If you want to hop on the show because you're a software developer and you have thoughts, or you want to hear us talk about something or bring on a specific guest, email us: podcast@stackoverflow.com. We listen and you can participate. And if you liked today's show, do me a favor, leave us a rating and a review.
RD I'm Ryan Donovan. I edit the blog here at Stack Overflow. You can find it at stackoverflow.blog. And if you have article ideas, secret lore, tech opinions, you can find me on X @RThorDonovan.
KN Hi, everyone. Thank you for this opportunity. I'm Kamakshi Narayan, and you can find me on LinkedIn at linkedin.com/in/kamakshinarayan. I'm Director of Product here at SnapLogic, living and breathing everything APIs at this moment. And if you have a question for me regarding APIs, you can chat with me on LinkedIn. Thank you.
BP All right, everybody. Thanks for listening, and we will talk to you soon.
[outro music plays]