The Stack Overflow Podcast

What security teams need to understand about developers

Episode Summary

The home team is joined by Kinnaird McQuade, founder and CTO of NightVision, which offers developer-friendly API and web app security testing. Kinnaird talks about his path from school-age hacker to white-hat security expert, why it’s important to build security practices into the software development lifecycle, how GenAI is changing security testing, and what security teams need to understand about developers’ working lives.

Episode Notes

NightVision offers web and API security testing tools built to integrate with developers’ established workflows. NightVision identifies issues by precise area(s) of code, so devs don’t have to chase down and validate vulnerability reports, a process that eats up precious engineering resources. Get started with their docs.

Connect with Kinnaird on LinkedIn

Stack Overflow user Cecil Curry earned a Populist badge with their exceptionally thoughtful answer to In Python how can one tell if a module comes from a C extension?.

Some great excerpts from this episode:

“From the program side, I would say if you're running a security program or you're starting from day one, there's a danger with security people and being the security person who's out of touch or doesn't know what the life of a developer is like. And you don't want to be that person. And that's not how you have actual business impact, right? So you got to embed with teams, threat model, and then do some preventative security testing, right? Testing things before it gets into production, not just relying on having a bug bounty program.”

“With code scanning, you're looking for potentially insecure patterns in the code, but with dynamic testing, you're actually testing the live application. So we're sending HTTP traffic to the application, sending malicious payloads in forms or in query parameters, et cetera, to try to elicit a response or to send something to an attacker-controlled server. And so using this, we're able to not just have theoretical vulnerabilities, but exploitable vulnerabilities. I mean, how many times have you looked at something in GitHub security alerts and thought, yeah, that's not real. That's not exploitable. Right? So we're trying to avoid that and have higher quality touchpoints with developers. So when they look at something, they say, okay, that's exploitable. You showed me how. And you traced it back to code.”

Episode Transcription

[intro music plays]

Ben Popper: Maximize cloud efficiency with DoiT, an AWS Premier Partner. Let DoiT guide you from cloud planning to production. With over 2,000 AWS customer launches and more than 400 AWS certifications, DoiT empowers you to get the most from your cloud investment. Learn more at doit.com. That’s d-o-i-t dot com.

BP Hello, everybody. Welcome back to the Stack Overflow Podcast, a place to talk all things software and technology. I'm your host, Ben Popper, Director of Content here at Stack Overflow, worst coder in the world, joined as I often am by my colleague and collaborator, Ryan Donovan, the Editor of our blog. Hello, Ryan. 

Ryan Donovan Hello. 

BP So today we're going to be chatting with Kinnaird McQuade, who is a hacker enthusiast and former engineer at Square and Salesforce, now a co-founder and CTO at NightVision, which is a company looking to give developers the ability to perform application security (AppSec) testing themselves. We're going to discuss a range of security testing issues, talk a bit about API security, and get into how developers can better secure their organizations. So without further ado, Kinnaird, welcome to the show. 

Kinnaird McQuade Thanks. Appreciate you having me. 

BP So tell us a little bit about your background. How'd you get into the world of software and technology and what qualifies you as a ‘hacker enthusiast’? 

KM I always loved computers growing up. You could find me hacking into my school system and getting in trouble for various things. I initially started out as a musician in college and then realized I didn't want to do that for the long term, and I was sitting there thinking to myself, “What should I do? What am I good at?” Well, I loved video games and loved taking apart computers, so initially I thought I wanted to be the next Steve Jobs. Then I went to Marymount University outside DC. Everybody was working at the FBI, CIA, NSA, and I realized hacking was really cool. And back then you could just YOLO Burp Suite at some websites and get RCE. And the first time I got access to a server I fell in love with it and I haven't looked back. It's been great. 

RD Obviously in the early days of application security, there wasn't much, and you could just append things to the URL or put raw SQL into a form field and get root, stuff like that. But today, everybody's learned some lessons. What are the current best practices for staying defensive and keeping things protected? 

KM Well, the core things still remain the same: authentication, authorization, input validation, encryption, things like that. But from the program side, I would say that if you're running a security program, or you're starting from day one, there's a danger of being the security person who's out of touch or doesn't know what the life of a developer is like. You don't want to be that person, and that's not how you have actual business impact. So you've got to embed with teams, threat model, and then do some preventative security testing, testing things before they get into production, not just relying on having a bug bounty program. That would include secret scanning, code scanning, and dynamic application security testing, which is what we do. And I would say another best practice is using secure libraries and frameworks that eliminate classes of security risk, so certain React libraries or Django settings or Django libraries; those are good examples. 
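As a concrete illustration of the framework-level hardening Kinnaird alludes to, here is a small fragment of Django settings that closes off whole classes of risk at the configuration layer. This is a sketch for a hypothetical project, not NightVision guidance; real deployments should follow Django's own deployment checklist.

```python
# settings.py fragment for a hypothetical Django project.
# Each flag below is a real Django setting that eliminates a class of risk.
SECURE_SSL_REDIRECT = True      # redirect all plain-HTTP requests to HTTPS
SESSION_COOKIE_SECURE = True    # never send the session cookie over plain HTTP
CSRF_COOKIE_SECURE = True       # same for the CSRF cookie
SECURE_HSTS_SECONDS = 31536000  # tell browsers to require HTTPS for one year
X_FRAME_OPTIONS = "DENY"        # refuse to be framed (clickjacking defense)
```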

BP Okay, so for folks who don't know, can you just define dynamic application security testing a little bit and then talk about what do you mean you're simulating attacks like that for folks? 

KM So with code scanning, you're looking for potentially insecure patterns in the code, but with dynamic testing, you're actually testing the live application. So we're sending HTTP traffic to the application, sending malicious payloads in forms or in query parameters, et cetera, to try to elicit a response or to send something to an attacker-controlled server. And using this, we're able to not just have theoretical vulnerabilities but exploitable vulnerabilities. How many times have you looked at something in GitHub security alerts and thought, “Yeah, that's not real. That's not exploitable.” So we're trying to avoid that and have higher quality touchpoints with developers so that when they look at something, they say, “Okay, that's exploitable. You showed me how and you traced it back to code.” 
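To make the "malicious payloads in query parameters" idea concrete, here is a minimal sketch of how a dynamic scanner might mutate a target URL one parameter at a time. The payload list is a tiny illustrative sample, not NightVision's actual corpus, and a real tool would then send each mutated URL and analyze the responses.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Tiny illustrative payload sample: SQL injection, reflected XSS, path traversal.
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>", "../../etc/passwd"]

def fuzz_query_params(url):
    """Yield one mutated URL per (parameter, payload) pair, replacing a
    single query parameter at a time and leaving the others intact."""
    parts = urlparse(url)
    params = parse_qsl(parts.query)
    for i, (name, _value) in enumerate(params):
        for payload in PAYLOADS:
            mutated = list(params)
            mutated[i] = (name, payload)
            yield urlunparse(parts._replace(query=urlencode(mutated)))
```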

RD Are there things you see in the wild with clients where you're like, “Oh man, are people not sanitizing their inputs?” or whatever you're sort of surprised that folks aren't doing? 

KM Some of those things are still input validation issues. I'm seeing a lot of server-side request forgery (SSRF) as well, and everybody's seeing authorization bugs, insecure direct object references, BOLA, things like that. And everybody's struggling with under-documented APIs, which is something that we're trying to solve, too. 

BP So do you work with both small, medium, and large companies? What's the sort of spectrum of companies you work with? 

KM I would say that we work mostly with medium and large companies. If you have at least a dozen security people within your org, that tends to be a pretty good fit, especially companies that are trying to shift security testing to the left and have already started a DevSecOps initiative.

BP Nice. What is one thing you would say when you go into a company with a 12-person or more security team, 500 or more employees, maybe 5,000 or more employees, and it surprises you? You're like, “This here, really? Haven't we seen this before?” 

KM Well, I try not to look at it like that, because I think there's an attitude among security people that developers are stupid or doing this or that bad practice, and I saw that a lot in consulting. But before I was at Salesforce and Square, I worked with a lot of Fortune 500 companies: hotels, banks, casinos, car companies. Being both a developer and a security person, I've come to recognize that this stuff is really hard to do, it's easy to miss things, and you're also trying to build a product that people can sell for whatever company you're at. There are some things I've seen that can be a little egregious. Whenever I see input validation issues or a SQL injection, it's really concerning, but we try not to judge and instead just help people identify it and fix it. That's the right way to go. 

RD To jump back to something you mentioned earlier, you said security folks can be out of touch because they don't talk to the developers. What's the actual divide there? They're both building applications, but I assume they're coming at it from different perspectives, right? 

KM Right. I've seen that before, especially in siloed orgs. There can be a place for a standalone security org that's not fully embedded, but then you file a ticket, run some generic scan, throw some OWASP docs at the developer, and people aren't able to really explain it very well. That's not a good experience. I've always found it more effective to have a security champion: a security person who is the point person for any security issues for that team. We did a really good job with that at Square and Cash App. There would be point people that the development teams could reach out to at any point, and they knew that person was already familiar with the application architecture and how things were done in CI/CD, and had probably deployed stuff to production, so they could help relay some of that context to the central security org. I found that to be way more effective. Not everybody within the security org has to know how to code or do these different things. Some people are there for compliance, some people are there for fighting the business battles and getting buy-in. Those are really important, but you always want to be anchored in reality and actually bringing business value to your customers. When you're part of the security team, the engineers are your customers. The rest of the org are your customers. It can't just be overhead; we have to provide value.

BP It's interesting, we were talking recently with someone whose title is ‘Documentation Engineer’ and chatting a bit about how important they feel documentation is to maintaining great code quality and architecture within an organization, but how many developers see it as a chore and don't really want to spend time doing it. I don't know if security necessarily falls into that same bucket, but maybe you're right that there's a divide between developers who are creating and folks who think, “Okay, it's my job to pitch this over the wall and somebody else will think about security.” And they kind of made the same point as you, which is that you've got to sell this into the organization. You've got to speak to developers in a language they can understand and align what you're doing and what you think is best for the organization long term on the security side with the KPIs they care about. How soon is this software going to get delivered? Is it going to have any bugs once it goes out? Is it going to be able to boost X, Y, Z engagement or sales? These are the kinds of things that they have their managers breathing down their necks about, and so if you can connect it to those incentives, they're much more likely to listen. 

KM Definitely. When you're working with engineering teams, having them feel like you're also invested in helping them hit their deadlines, doing the different security assessments, hitting these different milestones before they have to hit a deadline, thinking beyond the ticket or the SLA that you've committed to, again, not just being overhead, but actually augmenting their team and bringing business value. 

RD So let's talk about API security for a bit. I think every application is becoming an API application, essentially. My experience with API security is that you rely a lot on API keys, OAuth, and passing tokens for authentication and authorization. Is there more to it than that? Are there things that people don't consider when they have this sort of basic scheme? 

KM That's definitely a big part of it. One of the security best practices people talk about is, “Don't roll your own auth,” which is true in the sense that it's good not to manage your own usernames and passwords, but even if you're using something like Auth0 or another third-party auth provider, you're still kind of rolling your own auth. If you still have to handle those tokens, there are plenty of ways to mess it up. So that's definitely part of it. There are still input validation and injection issues happening all the time, and authorization bugs. That stuff is really difficult; you can't really catch it without either manual testing or really intelligent code analysis and testing, which is something that we're trying to accomplish. And then everybody struggles with undocumented APIs, which present their own security challenges. If the API is undocumented, how are you going to know how to test it? How are you going to communicate with the API when you don't understand the API contract? So we try to illuminate that by scanning the code first, and then we generate a SwaggerDoc, so you don't need to do any instrumentation on your own and you don't need to install any third-party dependencies. Then we feed that SwaggerDoc to our scanner. It's interesting because it has that extra benefit to developers beyond just the security testing. And from a program perspective, if you decide you want to test all your APIs, well, you can't while they're undocumented, and documenting them is the hard part. 
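The token-handling pitfalls Kinnaird mentions are easy to hit even with a third-party auth provider. As a hedged illustration, here is a hypothetical HMAC-signed token format (not any particular vendor's scheme); the point is the constant-time comparison when verifying, one of the details that's easy to get wrong.

```python
import base64
import hashlib
import hmac

SECRET = b"demo-secret"  # hypothetical; real keys belong in a secret store

def sign(payload):
    """Return a token of the form base64(payload + b"." + hex_mac)."""
    mac = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return base64.urlsafe_b64encode(payload + b"." + mac).decode()

def verify(token):
    """Return the payload if the MAC checks out, else None."""
    raw = base64.urlsafe_b64decode(token.encode())
    payload, _, mac = raw.rpartition(b".")  # the hex MAC never contains "."
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    # A plain == comparison can leak timing information; compare_digest doesn't.
    return payload if hmac.compare_digest(mac, expected) else None
```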

BP Talk to me a little bit about the scanning and the automated write-ups or understanding you generate and how you pass that back, because Ryan and I have had a bunch of conversations recently with companies in the code generation or developer assistant space. They have AI-powered tools, and they're making the case that they'll be able to generate tests for you, documentation for you, code for you, refactor for you, and that this will increase productivity while keeping a human in the loop; you get to sign off on the AI suggestions at the end. But when you talk about Swagger and thinking ahead, you mentioned creating a better understanding of a team's code base so you can better assist them. What kind of scanning and understanding are you using there? 

KM Sure. By the way, I love the AI type-ahead tools and unit test generation tools. Some of them can generate noise, but I think we accept a little bit of that for the acceleration. We take a different approach to generating those API docs, though. It depends on the language and the frameworks people use to develop REST APIs, so Java Spring, Python Flask, Python Django, things like that. Some of it is convention, and some of it is doing proper program analysis and symbolic execution, classic computer science problems. So we scan the code first, and then we generate that SwaggerDoc, including not just where the different endpoints are but also the parameters: query parameters, POST body parameters, URL parameters, things like that. Then we feed that SwaggerDoc to the testing tool, which also allows us to generate sequences of requests for our tests. It's all about, “Here's your API. Let's describe all the API shapes and then use that to inform our testing.”
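A toy version of the "scan the code first, then generate a SwaggerDoc" step might look like the sketch below, assuming Flask-style route decorators and using a regex where, as Kinnaird notes, a real tool would do proper program analysis. The `@app.route` convention and the output shape are illustrative assumptions.

```python
import re

# Matches @app.route("/path") with an optional methods=[...] argument.
ROUTE_RE = re.compile(
    r"""@app\.route\(\s*["']([^"']+)["']\s*(?:,\s*methods\s*=\s*\[([^\]]*)\])?\s*\)"""
)

def extract_openapi(source):
    """Build a minimal OpenAPI skeleton from Flask-style route decorators
    found in `source`. A real analyzer would walk the AST instead."""
    paths = {}
    for path, methods in ROUTE_RE.findall(source):
        verbs = re.findall(r"[A-Za-z]+", methods) or ["GET"]  # Flask defaults to GET
        # Flask's <converter:name> path segments become OpenAPI {name} params.
        oas_path = re.sub(r"<(?:\w+:)?(\w+)>", r"{\1}", path)
        paths[oas_path] = {
            verb.lower(): {"responses": {"200": {"description": "OK"}}}
            for verb in verbs
        }
    return {"openapi": "3.0.0", "paths": paths}
```

The resulting skeleton can then be fed to whatever scanner consumes OpenAPI/Swagger documents.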

RD So are there still holes? Obviously there are still going to be holes, but what are the sorts of malicious payloads you use to test once you have the API shape understood? 

KM So once you understand those attack vectors, then you can understand how to structure your payloads to properly test it. So some need to be a syntactically valid ID, other ones can be an open text field. Those are great opportunities for us to throw in a SQL injection payload or encode a malicious payload and then see how the application responds or see if it communicates with our attacker-controlled server. So part of it's understanding with that API shape, how to communicate with it, if it's something that we can tamper with or should tamper with, and then how to analyze that response. 
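Picking payloads by parameter shape, as described above, can be sketched like this; the payload lists and the OpenAPI-style parameter format are illustrative assumptions, not NightVision's implementation.

```python
# Tiny illustrative payload sets keyed by declared parameter type. A numeric
# ID mostly needs valid-looking values so the scanner reaches the code path;
# free-text fields get injection-style payloads.
PAYLOADS_BY_TYPE = {
    "integer": ["1", "0", "-1", "9999999999"],
    "string": [
        "' OR '1'='1' --",                # SQL injection probe
        "<img src=x onerror=alert(1)>",   # XSS probe
        "http://attacker.example/probe",  # out-of-band (attacker-server) probe
    ],
}

def payloads_for(param):
    """`param` follows an OpenAPI-style shape: {"name": ..., "schema": {"type": ...}}."""
    ptype = param.get("schema", {}).get("type", "string")
    return PAYLOADS_BY_TYPE.get(ptype, PAYLOADS_BY_TYPE["string"])
```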

RD So let's talk about how this affects Gen AI, because a lot of those models are accessible by API now, and if you have Gen AI applications, you're sending requests and responses, and that's another attack vector. What's the new version of sanitizing your inputs when you have payloads processed by generative AI? 

KM I would say that a really difficult thing to solve is the data that's output and whether the person who's seeing it should be able to access it. With role-based access control, engineering probably shouldn't have access to M&A data. Sales probably shouldn't have access to code if you have internal AI. So access control for internal AI is a total nightmare. Testing for prompt injection is still a new space, very difficult, but I would say the bigger challenge is roles and data sensitivity: should the person who's getting the response be seeing this data? In my opinion, the best way to handle that is using an LLM to categorize and assess the responses from the API and ask, should this person be seeing this based on their role? So that's something we're working on as well. 
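A rough sketch of the role-versus-response check Kinnaird describes, with a simple keyword classifier standing in for the LLM he proposes. The roles, categories, and keywords here are all hypothetical.

```python
# Which response categories each role may see (hypothetical policy).
ROLE_ALLOWED = {
    "engineering": {"code", "general"},
    "sales": {"crm", "general"},
}
# Keyword rules standing in for an LLM-based classifier.
CATEGORY_KEYWORDS = {
    "code": ["def ", "import ", "function"],
    "crm": ["account", "pipeline", "quota"],
}

def categorize(text):
    """Assign the response to the first category whose keywords appear."""
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text.lower() for k in keywords):
            return category
    return "general"

def release(text, role):
    """Return the response only if the role may see its category, else None."""
    if categorize(text) in ROLE_ALLOWED.get(role, set()):
        return text
    return None  # redact: the caller should log this and return a refusal
```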

BP So when I think about the domain of security and the conversations we've had recently, it feels like API security is the topic that comes up most. And Ryan mentioned that the web used to be a pretty Wild West place but feels increasingly maybe locked down and sanitized because people are working through third party cloud providers that supply them with a lot of ready-made security. Do you see APIs as being sort of the attack vector of choice or a growing area? Is that because of our increasing reliance on it for microservices or Gen AI? What is it about APIs specifically that makes them an important focus for you and maybe industry-wide something that is emerging as one of the top areas for security threats?

KM So APIs are huge. Because of CI/CD and cloud, and also with generative AI, we're able to ship applications and APIs faster and faster, so there are more APIs being generated, often way more than websites. There was a study by TechTarget’s analyst arm about a year or two ago that found that for 75% of orgs, there are 24 APIs for every web app, and that's only going to get bigger. With a web app, you're able to go through and crawl and discover the forms and inputs, but with APIs, especially when you have all these terribly written Flask apps on the back end or in your microservices, maybe not performing validation or not getting the scrutiny that your external APIs get, it's more and more likely that they're going to be full of holes, and that's what we're seeing. It's valuable to not just throw a WAF on top of your externally-facing API; you've got to take a unit testing approach to your security. Break it up into components, look at all these different APIs that are just microservices on the back end, and then test those for security issues. So, a defense-in-depth approach to security testing.

RD I think an API is basically a direct line to whatever software you're running, and those are discoverable by simply checking out the web app in Chrome's dev tools and asking, “What are you giving me? What can I play with?” It must be a festival for hackers like yourself.

KM Definitely. And you have to think about it that way. If I can find it with a tool pretty quickly just through an automated scan, think about all the different people who are sitting out there on the internet with oodles of time to do nothing else other than test your web app or API, or if there's an APT that has gotten access within your org, and for orgs of a certain size, you have to assume that that has already happened, that you're already breached, that maybe a nation-state has some kind of foothold into your infrastructure and sometimes you don't know about it. Especially in that case, you have to protect your APIs and test them thoroughly, because you can't just assume that the attackers will never get in. And you have to test those things that aren't getting the external scrutiny. 

RD APIs are a geopolitical issue now, huh? 

KM Yeah. 

BP It's interesting, I realized we're starting up a new podcast with one of our senior staff developers where he just chats with other developers, kind of like a mock job interview but it's playful and not stressful, and the guy was talking about coming into the world of software not as a developer like yourself, but as a photographer, then working at the Apple Store, then working at the Genius Bar, learning his way onto the web, and then discussing, as you were sort of saying, how amazing things are today where anybody with a little bit of low code/no code can get something running in a Flask app and an API and they're talking to other services and they're fetching data. And so for the beginner developer, for the small company, the ability to whip up internal APIs and connect with external ones is amazing. On the other hand, to your point, that creates this spiderweb of potential vulnerabilities. So very interesting, double-edged sword as always. 

KM For sure, for sure. 

[music plays]

BP All right, everybody. It is that time of the show. Let's shout out someone who came on Stack Overflow, contributed a little knowledge or curiosity to the community, and in doing so, helped us all. A Populist Badge was awarded two days ago to Cecil Curry: “In Python how can one tell if a module comes from a C extension?” Cecil earned a Populist Badge, which is when your answer is so good that it gets significantly more votes than the accepted answer. And we've helped over 7,000 folks with this one, so appreciate it, Cecil, and congrats on your badge. As always, I am Ben Popper. I'm the Director of Content here at Stack Overflow. Find me on X @BenPopper, or if you want to come on the program and chat with us, email us, podcast@stackoverflow.com. Let us know who we should bring on as guests or what you want to hear us talk about. And if you enjoyed today's conversation, do me a favor, subscribe and leave us a rating and a review. 

RD I'm Ryan Donovan. I edit the blog here at Stack Overflow. You can find it at stackoverflow.blog. And if you want to contribute a little knowledge to me and the podcast, whether it's corrections or a heads-up of new stuff, you can find me on LinkedIn.

KM I'm Kinnaird McQuade, CTO here at NightVision. You can find me on LinkedIn or Twitter. I believe my username is @KMcQuade3. And if you want to learn more about NightVision, you can reach out to me directly or visit our website, nightvision.net. Always happy to talk shop. 

BP Terrific. All right, everybody. Keep your APIs clean, thanks for listening, and we will talk to you soon.

[outro music plays]