The Stack Overflow Podcast

Why everyone should be an AppSec specialist

Episode Transcription

[intro music plays]

Ryan Donovan Hello everybody, and welcome to the Stack Overflow Podcast, a place to talk about all things software and technology. I'm here today with Laura Bell Main, CEO and AppSec specialist at SafeStack.io, and today we're going to talk about how everyone should be an AppSec specialist and how she's trying to make that happen. Laura, welcome to the podcast. 

Laura Bell Main Thank you so much for having me. It's great to be here. 

RD So we always like to start these off by talking about how people got involved in software and technology. 

LB Well I'd love to say that I always knew I wanted to be in technology, but I didn't. I thought I wanted to be Ally McBeal from that lawyer show from many, many, many years ago and I wanted to go and do languages. But as most of us do, I went on a bit of a different adventure and I ended up being a COBOL developer at age 17, as you do. Since then I've done all sorts of things– 15-20 years as a software engineer myself, real-time radiation monitoring software, contributing PHP dev, all sorts of bits and pieces. And then later I shifted over into security and ended up as a penetration tester for a number of years. And now I'm this weird hybrid; I live between the two worlds. So part of me writes code and is excited about this future of amazing technology we're building, and the other one finds the bugs and is horrified. So it's a complex life. 

RD How do you find mixing your dev and your security brain works in practice? 

LB It's something you really have to work at, because I think if you spend a lot of time in security it's quite natural to feel quite risk averse and quite reluctant to do new things because risk is everywhere, and when you are responsible for that risk, it can feel like, “Well, if I screw this up then maybe I won't have a job next week and that's not ideal.” And something I've practiced over time is really understanding the severity of risk and the impact and the reality of the context I'm building software in, because there's always problems. Even if we're not talking about security, every piece of code we write has a potential to have bugs in it. It has flaws, we do things wrong, we have a bad day, and security is just another layer of that. And if you are overwhelmed by it, overwhelmed by the potential that you could fail or it could be a problem, then you can actually compromise the innovation part. And we don't build software to do security, we do security to help us build amazing software. So it's something I have to keep in check and it's something that I try and help others understand so that we can see the vulnerabilities and we can embrace security, but not at the cost of the amazing technology we want to build.

RD Right. And I think a lot of developers haven't typically focused on security, and I always hear stories of CS students writing something and then their professor going in, putting in bad input, breaking the software, and being like, “You've got to keep going.” Is there a way we can start building security into the code education process?

LB Oh, I really hope so. So funnily enough, there was only one mention of security when I did my college degree, and that was many years ago. We don't talk about that because we're all old and gray-haired. But one guest lecturer, and this was in the very early 2000s, taught us about an attack called Smurfing, which was a very old school denial of service attack that at the time was being done against Amazon a lot. And essentially you would send bad traffic at hosts but you would say, “Actually, I'm Amazon,” and so all of these hosts would reply and say, “Hey, there's a problem,” but they're all sending their traffic to Amazon. Now I remember distinctly sitting in that lecture theater with my best friend at the time and turning around to her and going, “This internet thing is pretty cool but this is pretty scary stuff. Do you think the Internet's going to have police one day?” And she looked at me as if I'd lost the plot entirely, but that was the only mention in a four-year degree. Now, the reason I got into security was because I was an enterprise Java developer at the time and I was very good at just finding bugs in software. Perhaps there was a QA streak in me I didn't know I had. And I'd been telling my boss, and my boss came up to me one day and he said, “Laura, we love you. You're nice and all, but you've got to stop finding all these bugs. This is really annoying us. Stop, please stop.” They were like, “You can either go work somewhere else, we'll give you a great reference, or there's this team that literally works in the basement. They do security and they all find bugs all day long. You'd love them.” And I accidentally ended up there. 
I would love to get to a point where security is something that is taught to everyone, whether it's in college or in a tertiary education or in a bootcamp, so that we start out at the beginnings of our career with this, that we don't just migrate there if we happen to take that path and meet the right person. 
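The reflection trick Laura describes can be sketched as a toy simulation. This is traffic accounting only, not real networking code, and the hostnames and numbers are made up for illustration:

```python
from collections import Counter

def smurf_simulation(attacker_requests, reflector_hosts, victim="amazon.example"):
    """Toy model of a Smurf-style reflection attack: the attacker spoofs
    the victim's address as the source, so every reflector host addresses
    its reply to the victim rather than back to the attacker."""
    traffic = Counter()
    for _ in range(attacker_requests):
        for host in reflector_hosts:
            # Each host believes the request came from the victim,
            # so its "hey, there's a problem" reply goes to the victim.
            traffic[victim] += 1
    return traffic

reflectors = [f"host-{i}.example" for i in range(50)]
replies = smurf_simulation(attacker_requests=10, reflector_hosts=reflectors)
print(replies["amazon.example"])  # 10 spoofed requests become 500 replies
```

The point of the sketch is the amplification: the attacker emits a little traffic and the victim absorbs a multiple of it, one reply per reflector per request.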

RD Yeah, everybody talks about shifting security left. This is shifting it way, way, way left. 

LB Way left, absolutely. I have two daughters and that's the ultimate shift left. I have problems with my 10-year-old who is bypassing parental controls on every device in the house at the moment. So if we can do it in our household, I have absolutely no doubt we can do it as an entire community of software engineers. 

RD Hoisted by your own petard. 

LB Oh, it's awful. It really is awful. Children, who knew? 

RD So we were talking earlier about an initiative to make everybody an AppSec developer. Why do you think we have it separated right now? 

LB I think security came late to the party. We started doing security much, much later than we built the systems themselves. And the systems were initially built in closed networks, so the idea of somebody external causing you harm really wasn't considered; it didn't need to be. And then later on we also believed that people were quite trustworthy. Well, humans have never been trustworthy; we've remembered that with time. The security folks who came in didn't come from software backgrounds originally. They came from networking and from the border defense world. And that's cool, it links to the origin of how our tools were built. But now as we've changed, as our systems are more decentralized, more distributed, as our architectures have changed, as our usage has changed, all of the systems we build now, most of them are –let's face it– 24/7 systems. Historically, they would've been 9 till 5, and if somebody had a problem at 7 PM, everyone was like, “Well, we'll deal with it tomorrow.” As all of this has changed, the need for the folks to be leading security from inside the team has become more important. And we don't just need them inside the team, we need them to understand how the software is built, how it works, how it's intended to work, how it could work on a bad day, and those skills don't naturally live outside the team. So we're in this migration now from being separate because of our origins, to really now bringing it together as we realize that much like observability and performance and scaling and usability and accessibility have all become core parts of software engineering, security is now one of those. And so the time has come for it to be alongside its peers rather than a separate team, a separate event. 

RD It was all just so much simpler when everybody was just writing code and burning it on a disc and shipping it through the mail. 

LB It's some nice nostalgia. I don't know if it was simpler, but it was definitely different.

RD You don't think it was simpler? Do you think there were equivalent security dangers back then? 

LB There have always been security dangers because there have always been people. So security doesn't exist because of the technology we're building. It exists because of the way we as people use it. I'm going to age myself really horribly here, so sorry, audience. My first computer system at home was an Amiga 500 games console. So it had a keyboard and a little floppy disk drive. We weren't cool enough to have Nintendos or ZX Spectrums, we had this Amiga. And my first game that I really remember playing was the original Street Fighter II, and it came on 10 floppy disks and you had to switch the disks between characters– those were the days. But what was also happening at the side was a massive piracy industry, because games were expensive. And so even in my small town –and I'm from a very small town in the UK that's famous really for car theft and teenage pregnancy, those are pretty much the only reasons it's notable– you would go to somebody a few streets away and in their garage they'd figured out how to copy floppy disks. And so you would go with your pack of 10 floppy disks and they would copy them for you. There were also cheat codes built into them. People were modding the games. They were looking at how they worked and doing different things, and this was just gaming. Every system that we've ever built as people, whether it was a paper-based system, whether it was physical, whether it was computing, we have always found ways to use it in ways it wasn't intended, to get what we wanted, whether it was money or some sort of access or gold or physical things, whatever that was. And so I think we're at an inevitable point right now where it's not that it has gotten worse, it's just that we have more technology that is more accessible to more of us now than there has ever been before. And so the scale of the same problem has just grown. 

RD More people, more problems, huh? 

LB Yeah, who knew? 

RD How do you suggest we make everybody pay attention to application security? How do we get 30 million AppSec developers? 

LB Well, so we can't make them. I know that. We've tried for years to have the big stick of compliance. But let's talk about what I'm trying to achieve. So I have this company, SafeStack. We do secure development education. But what we try to do is we're a for-profit with purpose and we reinvest our time and our money into initiatives that try and upskill large swaths of the population. So for example, we give training to every new graduate in software development in New Zealand and Australia free of charge. Now what we're trying to do at a bigger scale from mid-August is what we're calling OneHourAppSec. Now if you look at a sprint right now, let's say you do a standard two-week sprint. That's 40 hours a week, so about 80 hours total. So what we're asking folks is to just give one hour per sprint, so one hour out of 80 to do security. And to make that really easy for folks, because we know that making it easy helps, we are releasing every two weeks a little newsletter with some how-to videos and some guides, with a topic for that sprint, so that wherever you are, whether you're in a tiny two-person nonprofit or a large team somewhere else, whether you're in Taipei or Rio or you're in New York, you can get into some AppSec and do a little bit every sprint to make us more secure. And the idea behind this is if there's 30 million software developers in the world right now, and it grows at a rate of about 1.2 million a year, and if all of us did one hour of security every two weeks, then actually a lot of the low-hanging fruit, a lot of the common problems that we're seeing, a lot of those basic practices could be put in place. 
And that could be a huge shift for us and a shift towards making security be from something that you have to get a specialist team or it's outside of our control, and it brings it into something we can control as engineers, something that we can own and be really proud of because it becomes part of being an exceptional engineer– caring about security.

RD What is the low-hanging fruit? If somebody doesn't have any AppSec background and is just looking at this and being like, “Oh, geez. Okay, I probably should know about security, probably should figure out what it is.” What's the easiest thing they can start on for their first hour? 

LB There are quite a few, but I'm going to tell you a little bit about brains first because it's really important before I tell you the low-hanging fruit. Our brains, as we know from how social networks are built, are dopamine junkies. And dopamine loves to hang out in novel, exciting problems. So as engineers, we know this because we love working on new features and innovation, but you put us on something we've been working on for six years and we're like, “Eh, this isn't so much fun.” Now I need you to know that about your brain before you kick off doing AppSec because a lot of the low-hanging fruit are not sexy, innovative problems. They're low-hanging fruit, they're the equivalent of putting on your socks in the morning, so your dopamine response is not going to trigger for these. So we're going to embrace some unsexy problems to start with, some things you could do with your first hour. So the first thing that I would always do is make a list of all of the repos, all of the projects you have in your team. Now that sounds really non-technical and it is, but it's a really essential starting point because you've got code that you are actively working on that's super easy to start changing security on because you're there in it day to day, but you've also got projects that you haven't touched in ages. They're deployed, they're doing their job, wonderful. But that means they're never being built. They're never going to be subject to any of those automatic things we put in later. So we need to know where all the code is we want to protect. The next thing you can do, if you're on to your second thing: we've really got to talk about passwords. I love you all dearly, I love myself, but I think we can all admit that there's at least one password in our life that is probably old enough now to start school. And if that's the case for you or there's a password in your life that you know is used for more things, attackers are fundamentally very lazy. 
They are objective-focused, so, “I want to get to my goal.” It's not about doing SQL injection or proving that they're the best 99% of the time, it's about just getting the job done. And if we're making it easy by having poor quality passwords or putting our passwords and secrets inside our source code repos, we're making it really easy for folk. So these are the kinds of things that we can do in just an hour a sprint and it really doesn't take much effort. If you were to block out an hour and say, “Okay, I'm going to change a few passwords,” it's not going to take you the whole hour. So you can fill in your hour with a couple of conference talks, you could read that paper you've been meaning to read. An hour can go quite a long way, and suddenly over the course of a year that's 26 sprints, 26 hours. You're going to have achieved a lot. And if everyone in your team does something, that's a huge change for you.
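Those first-hour tasks lend themselves to small scripts. As one hedged sketch, a repo inventory can be paired with a crude check for hard-coded secrets. The regex patterns here are illustrative only; real scanners such as gitleaks or trufflehog use far richer rule sets:

```python
import re
from pathlib import Path

# Illustrative patterns only; real secret scanners use far richer rules.
SECRET_PATTERNS = [
    re.compile(r'(?i)(password|passwd|pwd)\s*[:=]\s*["\'][^"\']+["\']'),
    re.compile(r'(?i)(api[_-]?key|secret)\s*[:=]\s*["\'][^"\']+["\']'),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # the shape of an AWS access key ID
]

def looks_like_secret(line):
    """True if a line of source matches any of the crude secret patterns."""
    return any(p.search(line) for p in SECRET_PATTERNS)

def scan_tree(root):
    """Walk a source tree and flag lines that look like hard-coded secrets."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue  # unreadable file, skip it
        for lineno, line in enumerate(lines, start=1):
            if looks_like_secret(line):
                findings.append((str(path), lineno, line.strip()))
    return findings
```

Even a crude pass like this over the repo list from your first hour surfaces the passwords and keys that should move into a secrets manager.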

RD I think that's a really important point that these hackers are often very lazy. I've read some security research stuff where they broke into a system by using a corrupted video file or they faked an IO port on a hypervisor and I'm like, “Just check your passwords. Just update your software.”

LB Yeah, the clever folks at Google, particularly in Project Zero, they talk about that 80/20 rule where 20% of the attacks we see are these very novel, deep technical attacks, undoubtedly, but 80% of them use known vulnerabilities for which patches already exist but haven't been applied, or they're guessing passwords, or they're exploiting what we would call relatively straightforward exploit vectors or attack vectors. And so I accept that it's impossible to protect software from all attacks. There's always going to be something that gets through, but I want to make the attackers really sweat for it. If I'm going to get attacked, I want them crying at the end of this going, “They made me work for that.” So that's what we want to aim to do. It's not about being perfect or 100% secure. It's about saying, “I'm not going to let them in the easy way. I'm going to really make them work for it.”

RD So when you were a penetration tester, what was your sort of go-to starting point and what was the most you had to work to break in?

LB Oh, that's a really good question. So pen testing is a lot less glamorous than the internet makes it out to be. So those folks who are listening at home going, “I want to be a pen tester one day,” don't do that, it's a lot of report writing. But all pen tests, almost all of them start out in the same place, and that's with reconnaissance. And that's the same thing that you could actually do for your own applications. And it's research. So I would actually go read the job listings for an organization. What technologies are they hiring for at the moment? I'd go look at the conference talks they'd done last year. Go read their website. In that first stage of reconnaissance you're not touching anything. It's all open source information, it's not even on your systems. The second stage is looking at the systems themselves, so running scanners or looking at the metadata of files on your website, looking at anything that's been exposed and seeing what information it could have. That helps you narrow down your focus to where the valuable things are, what kind of technology you're looking at attacking, and what pathways you might take to that. Now remember, as engineers we write in all sorts of languages all the time. As a pen tester, you might be testing a .NET application one day and a Java application the next day and Laravel the next. And so you are always having to refresh yourself on what's going on in those languages, where am I, what do I need to do here. And so you actually make it easier for attackers when you forget to patch everything, because we don't have to learn the new versions. We're like, “Oh, it's old Java. Cool. Awesome. Done that one before. We've got a bag of tricks for that.” So the easy ones are the ones with the unpatched systems. 
An unpatched WordPress is the clichéd example, but equally, if any of you are still –and you have my sympathies if you are– if you're still rocking the old school enterprise Java, popping a WAR file on a server, those are terrible. It's really awful. ColdFusion folks equally, I feel your pain. I know that's still out there. We love to believe that we're all cutting edge now and we all just do JavaScript but it's just not true. The most interesting were the ones where they allowed the scope to be wide. Now a real attacker is objective-focused, as I've mentioned; they're not going to only attack your main web application, they're going to go through any avenue to get there and that includes your marketing site, that includes everything. In a penetration test, often you will scope it down just to one particular system because we only have a finite amount of budget and time. So the best tests I've ever been involved in allowed that breadth. So things like, can you change what is showing on a TV channel on a commercial TV network? Or can you interrupt operations at a casino, all of these kinds of things. Those are a lot of fun but they are a lot of work, so a lot of organizations who want to get to testing at that level are actually at that earlier stage right now. So you are going to get there one day and you're going to get really exciting attacks against you, but right now let's deal with the low-hanging fruit and make the simple ones fail.
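That second reconnaissance stage can be partly mechanized as a self-check. A small hedged sketch of what to look for once you have a set of response headers (the header names are common version-leaking ones, the sample values are made up, and this kind of probing belongs only on systems you own or are authorized to test):

```python
# Headers that commonly disclose stack or version details. Checking your
# own responses for these is a cheap self-reconnaissance exercise; only
# probe systems you own or are authorized to test.
REVEALING_HEADERS = ("server", "x-powered-by", "x-aspnet-version")

def version_disclosures(headers):
    """Return the response headers that leak version numbers, i.e. the
    ones that tell an attacker which bag of tricks to reach for."""
    return {
        name: value
        for name, value in headers.items()
        if name.lower() in REVEALING_HEADERS and any(c.isdigit() for c in value)
    }

sample = {"Server": "Apache/2.4.49", "Content-Type": "text/html"}
print(version_disclosures(sample))  # {'Server': 'Apache/2.4.49'}
```

A bare product name like `Server: cloudflare` is flagged as clean here; the digit check is a crude heuristic for "this header names a specific version an attacker can look up".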

RD That's interesting. I know taking over the TV channel is a sort of movie trope. Were you able to change something on the broadcast? 

LB The sad thing about that one is, yes we could, but not for the reasons that all the movies seem to think. In that one, the hinges on the gate to control the broadcasting station were installed backwards, so you could actually get into a very, very secured area with a screwdriver. And then you can physically touch equipment, and if you can physically touch equipment, then you can do whatever you like. So the moral of this story is, as complex as your application security controls are, if you install hinges backwards and you can unscrew them from the other side, you're always going to have a problem.

RD Again, people are the problem. 

LB We're also the solution, and that's the nice thing of it. We've always, even since we were cave people, wanted something we couldn't have or that wasn't ours, and we've always taken steps to protect those things. We just need to remember that we play both roles. 

RD So obviously security doesn't depend on just the code. There's the infrastructure, the delivery that are the typical ops and SRE duties. How can they get a little more security-minded too? 

LB Well, I love the folks in SRE. SRE is such a superpower for a team when it's done well. But it's also like catnip to attackers because if you're an attacker, you want to be efficient. So unless you are particularly singularly focused on an organization, you're probably trying to find vulnerabilities in a technology that is used by many organizations and then trying to find an organization that uses it. Now when you build a custom application, say you are a bank for example, you built your custom banking application. Cool, that's very custom to you and your setup. Unless you've taken a huge third party white label component and repurposed it, mostly it's custom to you. But there's elements in your infrastructure that are publicly exposed that are not unique to you, so things like using AWS or Azure or using a CI/CD pipeline of some description, your version control system, your API gateway. Now, while those components are sophisticated and they've been built by big complex teams and there's an expectation of protection on them, if you as an attacker are able to find a flaw in one of those shared large components, whether that's a poor quality authentication at its simplest or a vulnerability in the software itself, then those components give you phenomenal power. So you don't need to attack the web application for the bank itself if you can get access to its CI/CD pipeline or to its API gateway because you are in such a privileged position at that point that you can do what you like with the traffic. Most things are coming through you anyway. And so as SRE, we have almost inherited some of that network level security responsibility, but without the same equipment as we used to do it. It used to all be firewalls and actual access control lists, that kind of thing, but now we're putting a lot of those kinds of controls into those API gateways and those CI/CD pipelines. 
So we have to kind of remember that responsibility, remember to keep our accounts limited so we've only got access when we need it for short periods of time. I'm a big fan of, for example, ephemeral passwords where you don't have passwords for these systems until the moment you need it and then it expires out quickly. So putting those kinds of practices in place provides this extra layer in these very visible, very high profile, and very tempting targets for our attackers. 
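The ephemeral-credentials idea can be sketched in a few lines. This is the concept only; a real system would delegate minting, distribution, and expiry to a secrets manager or a cloud provider's short-lived-token service rather than rolling its own:

```python
import secrets
import time

class EphemeralCredential:
    """A credential minted on demand that becomes useless after its TTL.
    Sketch of the concept only; real systems use a secrets manager."""

    def __init__(self, ttl_seconds=300):
        # High-entropy token, generated at the moment of need,
        # never stored long term.
        self.token = secrets.token_urlsafe(32)
        self.expires_at = time.monotonic() + ttl_seconds

    def is_valid(self):
        # Once the TTL passes, the credential is dead weight to a thief.
        return time.monotonic() < self.expires_at

cred = EphemeralCredential(ttl_seconds=60)
assert cred.is_valid()  # usable now, worthless a few minutes from now
```

The design point is that a stolen credential has a shelf life measured in minutes, so an attacker sitting on a leaked token usually finds it already expired.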

RD So obviously everybody's talking about AI these days. We recently made our own AI announcements. Can you speculate a little bit? How do you think AI will change the security landscape in the future? 

LB I think it's an exciting/terrifying time for security folk with the– I'm not going to call it the birth of AI because I was silly enough to do a degree in AI in the early 2000s; there just weren't jobs in it then– but I would say the resurgence and this new wave, this new generation. Now there's a few things that I think are very important. Firstly, the speed of adoption is very high. And security, if you look at all the guidance, things like the OWASP Top 10 that we're familiar with as being guidance for how we secure applications, they've been built over 15 years and they're retroactive. So we look backwards and we go, “Hey, this thing has happened. We should do this in the future.” Now we don't have a long history of AI to go back and look at, and it's moving so quickly that we don't have the time for a very long feedback loop, so we're going to have to get used to doing our advice a lot more dynamically and being okay with going, “Hey, actually things have changed. We've learned things, we need to adjust our course.” So rather than us having these institutional standards and controls, us being much more dynamic and working much, much closer with our engineering teams to make sure that we're really understanding what's being built. I think the second side of this is how we do threat assessment. Now threat assessment is how we understand how somebody might attack a system. Now in our conventional systems, when you write code you are setting the full algorithm. You say, “If I insert input here, it's going to do X, Y, and Z, and this output is going to come out over here.” And that’s a simplistic view of it but we can rationalize and understand the linear pathway through our code. Now with AI systems, particularly where you're using somebody else's models or you are training a model that somebody else has done the algorithm for, there's an evolutionary aspect to those. 
You cannot categorically say at any time that each time you put the same input in it's going to do the same steps and get you the same output. That's the nature of the beast, which means that we aren't able to do static threat assessment the same way we have before. We can't say, “It's always going to do this,” and that creates a conundrum for security because how do we give people assurance that we're able to understand the potential outcomes of a piece of software and how it might be misused when the software itself is changing so rapidly or its outputs are unpredictable or evolving because of the dataset it's been trained on. So these are what I call very exciting challenges. I'm really genuinely excited that we don't know how to do this because that's a really, really good thing for security– to be in a position where there is a new technology that is incredibly powerful that is changing the world and we can be involved from day one, standing side by side with our engineers and saying, “We don't know. Let's figure this out together.” It's so powerful because when we come in as security people and we say, “Hey, we know better than you. Hey, engineering team, you should do this thing. Your baby is ugly and you should feel bad,” that's not a particularly positive relationship. But if we're coming together right now, security folk and engineering folk, and going, “Whew, this is exciting and hard. Let's do this together,” that's an amazing opportunity. 

RD And that's where the dopamine is. 

LB It is indeed. I love my brain. If that's the tasty treat it wants, let's do it.

[music plays]

RD Well, that is the end of our show today. As we do in a lot of these episodes, I want to shout out a community member who gave us a good question. Today's shout out goes to Stellar Question Badge winner UmAnusorn for the question, “Example of when we should use run, let, apply, also and with on Kotlin.” So if you were wondering how to use these keywords in Kotlin, check it out. We'll have it in the show notes. I'm Ryan Donovan. I edit the blog here at Stack Overflow, stackoverflow.blog. If you like this episode, please leave a rating and review. It really helps. 

LB Awesome. And thanks for having me today here, Ryan. If anyone wants to catch up on what we're doing with OneHourAppSec or to check out our community edition of training, which is free training for your team, so no strings, no gimmicks, no tricks, you can find out more at SafeStack.io and you can find me online in all the various locations as Lady Nerd. So I hope I see you in the community and doing your one hour of application security. 

RD Well, thank you very much, Laura, and we'll talk to you next time.

[outro music plays]