The Stack Overflow Podcast

Monitoring data quality with Bigeye

Episode Summary

Bigeye cofounders Kyle Kirwan (CEO) and Egor Gryaznov (CTO) join the home team to discuss their data observability platform, what it’s like to go from coworkers to cofounders, and the surprising value of boring technology.

Episode Notes

Bigeye is a data observability platform that helps teams measure, improve, and communicate data quality clearly at any scale. Explore more on their YouTube channel.

Bigeye cofounders Kyle Kirwan and Egor Gryaznov met at Uber, where Kyle worked on data and Egor was a staff engineer.

Kyle and Egor made a clean break with Uber before founding Bigeye, eager to avoid even the appearance of an Anthony Levandowski-like situation. If you’re not familiar with the ex-Google engineer sentenced to prison for stealing trade secrets (and later pardoned by Trump), catch up here.

Learn how to save your energy for innovation by choosing boring technology.

Connect with Kyle on LinkedIn.

Connect with Egor on LinkedIn.

Compiler is an original podcast from Red Hat discussing tech topics big, small, and strange, like: What are tech hiring managers actually looking for? And do you have to know how to code to get started in open source? Listen to Compiler anywhere you find your podcasts, or visit https://link.chtbl.com/compiler?sid=podcast.stack.overflow

Episode Transcription

[intro music plays]

Ben Popper Compiler is an original podcast from Red Hat, discussing tech topics, big, small, and strange alike. What are tech hiring managers actually looking for? And do you have to know how to code to get started in open source? Listen to Compiler anywhere you find podcasts, or visit redhat.com/compiler. 

BP Hello, everybody. Welcome back to the Stack Overflow Podcast, a place to talk all things software and technology. I am your host, Ben Popper, Director of Content here at Stack Overflow, joined as I often am by my collaborator, Cassidy Williams. Hi, Cassidy. 

Cassidy Williams Hello! I'm excited to be here. 

BP I know. You've lost your voice a little so apologies in advance.

CW A little bit so I'll try not to be too whispery. 

BP Okay. Big day of meetings yesterday. So today we're going to be talking to two folks from a company called Bigeye. I was perusing their site earlier today and a lot of it has to do with data monitoring and assessing data health for clients. It's interesting because this just came up. I was on a call with a client who's one of the biggest names in television, and we were talking about their AI and ML models and how they decide what to recommend to people. And one of the things they were stressing was that you won't know if the recommendations are good or if it's working unless you really understand what your data is and whether you're looking at the right metrics. Maybe they called it clean or robust or trustworthy, but in some measure, health. So I think that's a really interesting problem a lot of folks are facing. There's so much to measure, and once it goes into the AI/ML it can be kind of a black box, so you've really got to understand: garbage in, garbage out. Am I feeding this machine the right stuff? All right, so without further ado, Kyle and Egor from Bigeye, welcome to the Stack Overflow Podcast. 

Kyle Kirwan Thanks, Ben. 

Egor Gryaznov Glad to be here. 

BP So the first thing we always ask folks to do is just give us a quick flyover, date yourself a little. How'd you get into the world of programming/software/technology, and what landed you together at Bigeye?

KK Maybe I can let Egor introduce himself. I think he's got a pretty fun story of getting into data so maybe we can start there. 

EG Yeah, I can definitely get started. Thanks for having us, Ben and Cassidy. So I'm Egor Gryaznov. I'm the co-founder and CTO at Bigeye. I am a software engineer by training, but I got into data through Hadoop. My first job out of college was actually at an enterprise software company. They did analytics for call centers, and part of that was needing to get with the times in terms of big data analytics, so: let's move our analytics stack into Hadoop. And so I wrote raw Java MapReduce jobs on Hadoop 0.2 or whatever it was back in the day. And then Cloudera released Impala and it was like, “Wow, SQL on Hadoop, so cool!” And obviously it just took off from there. From there I got into data warehousing. I did it at a company called One Kings Lane. They were an e-commerce company back in the day. I think they're still around, but as a subsidiary of Bed Bath & Beyond now. I was the first data engineer there and built out the warehouse for them. I learned a lot about BI and ETL and data modeling and working with the business, and understanding what marketing analysts want and what product analysts want. And from there I went to Uber in late 2014 and was one of the first data engineers there, set up the data warehouse at Uber. It was a year and a half of just nonstop building the data team and platform for a while there. And I met Kyle along the way in 2015, where I was the data engineer and he was the data scientist, and I was wondering why Kyle had to write these very long and expensive queries on the warehouse. My job was to make them more optimized, and we've been friends ever since. And I did a lot of work on different analytics projects at Uber. Most of my time was spent on experiment analytics, so A/B tests and analyzing A/B tests. For perspective, if you are in any major city and you open the Uber app, you are probably simultaneously in a couple hundred experiments, and that's everything from pricing to what services are available to you, to how many cars you see on the map, and so on and so forth. And we built a platform to do analytics automatically for that, and obviously data quality was a big problem for us, because if the data was wrong, the analytics was wrong, and our data scientists got mad, the analysts got mad, everyone got upset. And we built a lot of tooling internally at Uber, and Kyle can definitely talk more to that, in order to manage the scope and scale of data there. We wanted to take a lot of those lessons and bring them to the rest of the market, and save data engineers from reinventing the wheel themselves by giving them something that's already tried and true, so they can spend their time on the more interesting things they want to do.

CW Cool. 

BP Gotcha. So the two of you met when Kyle was costing you money, when you had to rewrite his stuff, make it less expensive. 

KK And pain and sleep. Egor was managing the data warehouse at the time so if I backed up a queue or caused performance issues it was not just the company cash that was a problem, it was also keeping Egor up late at night. So he had a very personal investment in getting those pipelines working better. 

BP Gotcha. Kyle, briefly tell us a little bit about your background and then from there we can jump into some more technical and company-oriented questions. 

KK Sure. I studied industrial engineering in school, so not computer science, notably. I actually think this is a recurring theme, especially in data circles, as people come from physics or econometrics or statistics or industrial engineering and not from computer science a lot of the time. I think that's had a big effect on the way the data industry's evolved. But so, I come from industrial engineering in school, originally from Florida. And my first job in tech was while I was still in school: I interned at a company called Grooveshark. So if you were into streaming music before Spotify. 

CW I was going to say, that's an old one. Yeah, I remember that. 

BP Yeah, rings a bell.

KK Yeah. So there was Rdio back in the day, there was Grooveshark, there was a whole bunch of them. So I worked there as an analyst and I was writing Hive jobs to pull data off the data warehouse about listener statistics. A lot of that was used for marketing. Grooveshark was very heavily ad-supported, so I was doing analytics on listener patterns and how many track listens we got and things like that for our marketing team to do ad sales. I moved to San Francisco in 2013. A friend of mine was working at a company called Disqus at the time. He told me to buy a one-way flight out there and see what it was like. I'd never been to California before; I'd only been out of Florida once in my life up to that point, actually. But I sold my gaming computer to a friend. I had a full tower dual-GPU gaming computer, and sold that to a buddy for the flight over there. I took a one-way flight to San Francisco and then just pounded the pavement and applied for jobs, and wound up a couple months later as one of the first people working on data at Uber. Maybe the company was about 200 people at the time, so I think there were four people doing data science and three people that were sort of early data engineers. They were managing our Postgres read replicas, and I was the first person on that team querying the read replicas and pulling basic stats: how many trips are people taking, and when they try to sign up, where do they tend to drop off? Big surprise, it was the credit card screen. But that was still useful information back then, because that's how early-stage it was.

BP I could have told you that, but you could validate it.  

KK Yeah. I mean it's still good to know even if you have a hunch. So that was kind of how I got into tech ‘for real’ at a company that was growing quickly in SF, not that Grooveshark wasn't real tech or anything like that, but I was an intern, not quite the same. So yeah, that was sort of my entry into the world of data. 

CW That's such a leap of faith. That's awesome. 

KK It was a fun time. It was a lot of trying to stretch a small amount of cash that was in the bank and there was a lot of ramen and sometimes I would go out for a burrito in the Mission and that was a treat. So yeah, that was a time I remember very fondly. 

BP Very cool. So was there some moment of inspiration with the two of you having met at Uber and working there together where you felt like, you were kind of saying this, you were learning things about data health, about how to work between a data scientist and someone managing a warehouse, you sort of had a moment where you thought we could build this better, or a tool you built internally that you thought, “We could spin this out.” What was the genesis to go from meeting each other there to deciding to form a company together?

KK Initially I think we made friends by sort of challenging each other. I don't think we were immediate friends. Maybe you can talk about that, and I can talk a little bit about the tools.

EG Yeah. So actually, Kyle and I met at Uber. Even before I was fixing his pipelines, when I came in and started setting up the warehouse with my team I realized that Uber just had a big text box to run queries against the warehouse and that's it. No other controls, no BI, no nothing, just freeform, anyone in the company sits down, writes SQL into a text box, and the warehouse better run it.

CW Wow, my gosh. 

EG Yeah, that's how we felt. 

CW What a wild west the internet is sometimes. 

EG It is pretty impressive. There are a lot of interesting stories around the limitations of that model. They came up pretty hard and fast, but I decided to teach an internal class at Uber and I was like, “Here's how you write SQL better and here's how you can optimize the performance, because you can't just throw anything at this warehouse. The warehouse is a Ferrari and you can't just turn the key and slam on the gas pedal.” And that's when Kyle attended that seminar and he's like, “Oh, this guy is going to teach me about SQL? Well, I'm going to ask him the hard questions about SQL.” And so he asked me a couple of hard questions and we had a good back and forth, and for one question I was like, “I don't know how to gap-fill dates in SQL. That's an interesting challenge you pose, let me get back to you.” And then a few days later I was like, “I read the docs. Here's how you would do it.” And Kyle was like, “Okay, cool, I like you. You're a smart guy.” From there, the genesis of the company just turned into, “Okay, well, what do other data teams do? What tools do they need?” I had to teach a course. That obviously doesn't scale. How do we fix that? So we built a lot of tooling to prevent bad queries at Uber, to pretty much scale that same knowledge of, “We know this will be a bad query, we're just not going to run it.” And then, I'm going to jump ahead a little bit, but talking to other people in the space and hearing about the tooling that other companies built, everyone sort of builds the same thing. I know we're in the data quality space, so let's take that as the basic example. Every single data engineering team will get to a certain size where they will build a tool that takes a SQL query, runs it on a schedule, and then sends you a message if the query either fails or returns more than zero records. This is a tool that everyone reinvents the wheel on, everybody. And every team will start there, and then they're like, “Okay, well now anyone can write these queries, but now they have to be maintained, and now we have to maintain the notifications, and now we want to do anomaly detection rather than constant thresholds,” and the next thing you know, you have a team of six people supporting a big Python script doing all of this work. And that was sort of how we went from, “Okay, well, at Uber we needed to scale our own knowledge and our own understanding of how to solve these business-level problems.” And once we started listening and going to meetups, talking to other teams, talking to other folks in the data space, we realized everyone's solving the same problem, everyone has the same problems. I mean, this is why dev tools exist, right? Because all developers have the same problems. Like, I need my version control, I need my CI/CD pipeline, and those problems are solved with dev tools. But data teams have historically just not had the same sort of access to tooling, and everyone would build it themselves. And so we saw the corollary there, and we realized that it was a good time to build tools for data teams.
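For readers who want it concrete, here is a minimal sketch of the tool Egor describes (run a SQL query on a schedule, send a message if it fails or returns more than zero records). It assumes a SQLite file standing in for the warehouse, a hypothetical send_alert function standing in for Slack or PagerDuty, and a sleep loop standing in for cron; none of these names come from Bigeye.

```python
import sqlite3
import time

# Hypothetical check: any order with a NULL customer_id counts as a bad record.
CHECK_QUERY = "SELECT order_id FROM orders WHERE customer_id IS NULL"

def send_alert(message: str) -> None:
    # Stand-in for Slack/PagerDuty/email; wire up your team's notifier here.
    print(f"[ALERT] {message}")

def run_check(db_path: str) -> None:
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(CHECK_QUERY).fetchall()
    except sqlite3.Error as exc:
        send_alert(f"check failed to run: {exc}")  # the query itself errored
        return
    finally:
        conn.close()
    if rows:                                       # more than zero records
        send_alert(f"check returned {len(rows)} bad records")

if __name__ == "__main__":
    while True:          # in practice the schedule lives in cron or Airflow
        run_check("warehouse.db")
        time.sleep(3600) # hourly
```

Everything beyond this script (maintaining the queries, routing the notifications, swapping constant thresholds for anomaly detection) is exactly the part Egor says grows into a big Python script with a six-person team behind it.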

KK And I think in particular, in the beginning we didn't initially set out to do data quality specifically, actually. There were, I want to say, four or five distinct tools or underlying microservices that my team had to build out: data catalog, data lineage, freshness tracking, quality testing, incident management. We had an internal data incident management product. There was a product for making announcements about changes. So for example, you have a table and you're going to remove a column from the table. You may be impacting 50 other people at the company. You may be impacting 300 people at the company. You don't know who they are. So there were tools for: pick a column, I'm going to deprecate this column, trace the lineage graph and show me everybody downstream from me who's queried that column or any of its descendants in the last 90 days. That way I can file a message to just those individuals and say, “Hey, two weeks from now I'm going to drop this column. If you have a problem with that, here's a comment box.” And they can just comment directly back and say, “Hey, please don't take this away, I need this,” or, “Can you delay it by another couple weeks?” or that type of thing. So there was a constellation of these small tools. The catalog was a little less small, it was more fully featured, but all those things combined helped us deal with the number of tables in the warehouse and in the lake that we were dealing with, and the number of users. There were about 3,000 people a week in 2017 writing a query, creating a dashboard, building a new pipeline, et cetera, and to coordinate 3,000 people that don't know each other, obviously you've got to lean on tools pretty hard at that point. So when Egor and I left, the idea was just to work on data tools, that was kind of it. So the question was, “Okay, based on what we saw, what should we build? Which of those tools that we had experience learning as we built them would be most useful out in the world for everybody else?” So we started talking to data engineers at some startups, some larger companies, and we just basically asked them: what's most annoying or painful or difficult, and can you stack-rank those things for me? And quality problems, broken pipelines, not knowing that a dashboard had no data in it for two days, if it wasn't number one, it was pretty often at least number two or three in that stack rank. So that's what led us to start doing what we're doing now. 

CW Cool. And minor pivot on this– because you both worked together, you kind of knew that you vibed, you got along in the workplace. Is it a very similar type of relationship when you become co-founders together? Or did you have to have certain conversations and check certain boxes before actually working together? Egor made a face. He’s thinking, “Ugh, I regret this.”

KK I think that's critical. I don't know if I would do this again with somebody that I had not already, A, worked with, and B, been okay working with and being friends with. Because I feel like when you're co-founders you kind of have to be both at the same time a little bit. So if you haven't worked with the other person, that seems risky because you don't know what they're going to be like. And if you're unable to be friends with the other person... I mean, at the end of the week Egor and I hop on Zoom and we have a beer together. And if you can't do that because you can't tolerate each other as friends, that feels pretty rough too. 

BP So you mentioned doing this stack rank of stuff to figure it out. Did you build the MVP while you were still at Uber? Did you leave and go out and raise venture funding? What was the MVP, how'd you get started? 

KK I’ll let Egor talk about the MVP. I will say that a big concern of mine, I mean, anybody who was following Uber in the news around that time knew about Anthony Levandowski. 

CW It was a little chaotic around then. 

KK That was the situation, right? Now what we're doing has basically nothing to do with Uber's core business or IP, that type of thing. So even if it's highly unlikely that they would object to us building a company that leveraged what we learned while we were there, I still wanted to be pretty careful about not even having the appearance that we were sort of directly airlifting anything out of the company. So no, we both definitely left and started from scratch. That was important to me just to be sure that that wasn't going to be a problem. 

BP Gotcha. And for background for folks who don't know, Anthony Levandowski was an engineer who worked on the AI self-driving side who then went to a competitor and was sort of accused of taking some proprietary IP with him. Did I get that roughly right? 

KK Yeah, roughly. 

BP Roughly. Folks can look it up. I'll put a link in the show notes for a better description. All right. So then Egor onto you. So you made a clean break, I got that. Then you had to go and build an MVP. What'd you do? Let's hear it.

EG I think the clean break part of it is also interesting, and it goes back to Cassidy's point about whether it's different working with each other versus being co-founders. I will be vague about this just because it would take me hours to tell the whole story, but you have to have the difficult conversations up front: this is what I expect out of a business, this is what I personally want, this is what I want to do professionally. And you need to get on the same page with the rest of your founding team, otherwise things just don't work out. I mean, this is also why it's so hard to hire early engineers, because you have to make sure they professionally and personally want things that can be provided by your company. If it's, “Well, I want to make half a million dollars a year in cash and work on fun, esoteric programming languages,” it's probably not the right environment for you. So we had all those conversations. We put money in a bank; Kyle and I just wrote a check each. We split the company 50/50 and we said, “Great, this is the company fund. We will pay ourselves a nominal salary out of the company fund. This will last us for a year, and if at the end of the year we have no more money in the bank, then we call it a day. We tried, maybe we should just go back to not being founders.” Luckily, we raised money before the end of that through venture funding. We raised our seed round in late 2019. So we got started in April 2019, and until then we mostly said, “Well, let's start building something and showing it to people.” So we just sat together on Zoom, in WeWorks at the time. We started remote, we're still remote, and we would just work on the product and go and talk to people. That's all we did for the first six months of the company. You're either building something and trying it out, or you're talking to potential customers, or you're talking to investors. And I think that was the only thing the two of us did for six months. And by August, we had a little prototype on my laptop. I literally just carried my laptop to a potential customer and said, “Please look at my product. It's very cool, it will solve your problem.” And then the customer would say, “Well, it doesn't do X, Y, and Z. I don't like it.” And then we would go back and say, “Okay, they don't like X, Y, and Z. Let's change X, Y, and Z, and then we'll email them a week later saying we changed X, Y, and Z. Please look at our product again.” And we did that a couple of times. 

KK I will say though, the core concept that was present in the MVP is still present in the application today, which I think is a good sign. The basic idea was: run a query on the warehouse, get a value back, put it in a time series, do something with the time series. That basically was the initial MVP. There was no anomaly detection there, certainly no data lineage, no metadata monitoring. It wasn't SaaS. It just ran on a local machine. But the basic idea of: run a query on a table in a warehouse, fetch a statistic back, create a time series out of it, and then send a notification when the time series is not what you expect, that's been in there since the beginning. So I'm excited to see that that's survived for two, three years now. 
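As a rough illustration only, here is that core loop in miniature, in the spirit of the MVP Kyle describes: run a query, fetch a statistic, append it to a time series, notify on a constant threshold (no anomaly detection, just like the MVP). The table name, metric, and bounds are all hypothetical.

```python
import sqlite3
import time

METRIC_QUERY = "SELECT COUNT(*) FROM trips"  # hypothetical table and statistic
LOWER, UPPER = 1_000, 50_000                 # constant thresholds, as in the MVP

history: list[tuple[float, int]] = []        # the time series: (timestamp, value)

def collect_and_check(db_path: str) -> None:
    conn = sqlite3.connect(db_path)
    try:
        (value,) = conn.execute(METRIC_QUERY).fetchone()
    finally:
        conn.close()
    history.append((time.time(), value))     # fetch a statistic, extend the series
    if not (LOWER <= value <= UPPER):        # "not what you expect": notify
        print(f"[NOTIFY] trips row count {value} is outside [{LOWER}, {UPPER}]")
```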

BP So you were walking around with this laptop. You knew that there was product-market fit in there somewhere, and later found out that even the kernel was good. You didn't have to sell out to make money. So talk to me about, based on your experience at previous startups or at Uber and your discussions with each other, what was the tech stack you chose? Is it still the same one you use? And which architecture decisions do you regret, and which are you glad you made?

EG I'll start with the language conversation because everyone always asks for some reason what language we write in. I am a Java guy. I like strongly typed languages, I think Java's ecosystem is mature, there are build tools. Everything just kind of works and there's a library for everything. I don't have to reinvent the wheel on anything. So we wrote the whole back end in Java. The one thing that I do regret is that we didn't really have any front end experience so we kind of just figured it out using Bootstrap and just HTML files. 

CW Classic. That's very backend developer energy there. 

EG And to make it worse, Cassidy, we actually used Mustache to generate the HTML files and just did server side rendering of Mustache files. And so we literally had Mustache generating not just the HTML, but also JavaScript at times and then it was shoving that whole thing into a web browser. And speaking of architectural decisions I regret, just hire a front end engineer. I should have just either learned React or hired a front end engineer because it took us a year and a half to unwind that. By the time we had a proper engineering team and were ready to actually migrate all the old pages into React we had so much logic and cruft there that it just took a long, long, long time. 

CW Yeah. That sounds like a very hairy rewrite. Gosh. Interesting, but chaotic. 

EG Yeah, chaotic. That said, Java as a backend helps, because JDBC is everywhere and any database talks JDBC, so it's very easy for us to build abstractions on top of: talk to the database and get something back. And that was a great architectural decision. It's just SQL. A database is a database, it's going to talk JDBC. Sure, there are going to be some exceptions to the rule and some methods aren't supported, but for the most part, it just works. And then from an infrastructure perspective, there's a great talk. Actually, I don't know how many people know it. It's called “Choose Boring Technology.” That is one of the core principles. 
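Bigeye's backend does this through JDBC in Java; to keep the sketches in this transcript in one language, here is the same "a database is a database" idea in Python, where the DB-API standard plays the role JDBC plays in Java: drivers differ, but connect, cursor, execute, and fetch look the same everywhere. The connection strings and table names below are hypothetical.

```python
import sqlite3

def fetch_one_value(connect, sql: str):
    """Run a single-value query against any DB-API-compliant driver."""
    conn = connect()
    try:
        cur = conn.cursor()
        cur.execute(sql)
        (value,) = cur.fetchone()
        return value
    finally:
        conn.close()

# Same abstraction, different drivers; only the connect callable changes:
count = fetch_one_value(lambda: sqlite3.connect("warehouse.db"),
                        "SELECT COUNT(*) FROM sqlite_master")
# import psycopg2
# count = fetch_one_value(lambda: psycopg2.connect("dbname=warehouse"),
#                         "SELECT COUNT(*) FROM orders")
```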

CW I don't know it, but I've read articles about it and it's a smart concept.

EG I'll send it in the chat. We can post it in the notes, but the notion there is: you as an engineering organization only have so many innovation tokens to use, and you want to use them on something that is actually a value-add to your business and organization rather than on something merely interesting. And so I subscribe to that. I also feel like I over-index a little bit on that because, like I mentioned earlier, Uber built everything in-house themselves, and I thought that 80% of the time that was the wrong decision. And so we run a Java backend, a React front end, MySQL servers behind everything, and we use RabbitMQ for our queuing. Very proven, boring technologies power the whole thing. We have Docker containers running on Amazon, nothing innovative, nothing new here. But it works, and it helps us focus on the things that matter to us, which is building interesting, cool things into the product. 
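As a small taste of the boring-but-proven queuing piece, here is a sketch that publishes a metric message to RabbitMQ with the pika client library. The broker address, queue name, and payload are assumptions for illustration, not Bigeye's actual setup.

```python
import json
import pika  # assumes a RabbitMQ broker running on localhost

def publish_metric(metric: dict) -> None:
    # Boring, proven queuing: declare a durable queue and publish a message.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="metrics", durable=True)
    channel.basic_publish(
        exchange="",
        routing_key="metrics",
        body=json.dumps(metric),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()

publish_metric({"table": "orders", "metric": "row_count", "value": 12345})
```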

CW Yeah. It makes your ideas stand out because you're not being bogged down by like, “What if we did this really cool thing, but there's no questions on Stack Overflow about it because only like five people use it so far, but it's neat!”

BP Yeah. You did all your reinventing the wheel when you were at Uber building the tools that people needed. So now you can just use the things that work and solve the problems people have. 

KK Without describing it in too much specificity, because I did really appreciate that it was built and that it did work, there was a service that was fairly important in the data tools stack that we worked with. And it's not a language that nobody's heard of, but it's definitely not one of the common ones. And we got to a point where on my team of 12, at one point four people worked on this service. They all knew that language, et cetera, so it wasn't a problem. People change teams, people leave the company, et cetera, and after a year or two there was one person left on the team who still knew the language that the service was written in and that became a problem. So this engineer came to me and they were like, “Hey, by the way, I need some time in the roadmap over the next month or so, because the service is having some issues and I'm the only one on the team who knows how to work on it.” And I was like, “Well, that's not good for the team.” So yeah, sometimes getting too creative or too optimized can bite you later for sure. 

CW Yeah, it reminds me, I did a client project once where, in addition to web apps and everything, they wanted to build a Roku app. And the creator of Roku made their own programming language called BrightScript that is only used on Roku. And it was so unnecessary. They could have used anything else, anything else. And I had to learn this whole new language just to build this one application, and then when it was done they didn't know how to maintain it, because it was a language nobody else knew. It's not worth it.

BP Interesting. All right, so I'll take us towards the exit here. Could you maybe quickly define for me, first in layman's terms, and then for folks listening who might be interested because they work with data themselves or might be clients: what are the key things that you're measuring when it comes to data health? I know you talked about running a time series and looking for anomalies, but what have you found to be the most salient things as folks go from data warehouses to data lakes to data oceans? We're all drowning in data. What are the most salient things you've found over the time you've been in business, and how do you think that will evolve as you build out from here?

KK Yeah, that's a great question. Before talking about the specifics, a lot of people jump to, what attributes of data are you actually tracking to identify problems, or how are you doing anomaly detection? And those are certainly fun questions, but I think there's kind of a boring one that you have to get out of the way first, which is: let's say that we have a magic way of identifying a problem in a data pipeline, and I can point you to the table that's having the problem, et cetera. How do we actually measure where we're having problems, when we're having problems, how frequently, and what's the impact of those problems? I think that's the higher-level question that often comes second. And the way that we approach that question is we just, again, borrow a boring concept, which is SLAs. We've had service level agreements... I mean, they even predate software, right? Weren't they used in telecom or something like that a long time ago? So we've had service level agreements as a concept for multiple decades at this point. So instead of reinventing the wheel around a data quality score, a data health score, some abstract notion of, is the data good or not? And if it's a 98 then it's good, and if it's a 94 then it's not good enough, but nobody knows exactly what the difference between a 94 and a 98 is. So instead of that, we try to leverage service level agreements and ask instead, “What defines good enough for a table or for a pipeline?” So maybe you have some null values in a column, maybe you have some duplicate IDs, but at the end of the day, if you're using that data as training data for a model, and having a quarter of a percent duplicate IDs or null values just doesn't materially impact your model, and you know that, then you don't need to check that there are exactly zero duplicates. You just need to check that it's within tolerance. And so what we try to help customers answer first is, “How do I create a service level agreement for the data that's feeding something the business cares about?” So: the dashboard that the C-suite looks at, a machine learning model that's used inside the product, maybe it makes recommendations inside the product to your users. What is the definition of ‘good enough’ for those applications at the end of the pipeline, basically? And then you work backwards from there. And that may yield things like, “Well, we need to know that we don't have any duplicate user IDs, or that we don't have any nulls.” Maybe we're looking at a product name, and we know our model's going to recommend products by name, so the product name shouldn't be null. Or we know that there are 16 distinct product names, and if we see that double to 32 overnight, then you know something's wrong. So those are the types of things that we're going to track in the data. I think we now have over 70 different built-in attributes that we can track. I think the other big learning we had is that not all tracking needs to happen at the same granularity at the same time. A lot of teams prefer to just roll out really, really basic stuff. Freshness: is the data updating on time? Yes or no, super simple. And they want that everywhere and they want it immediately. They don't want to configure that, they just want it tracked everywhere. Or, how many rows are in the table? Super basic. And then for the stuff that's important, they want to go deeper. 
That's when they want nulls, duplicates, averages, standard deviations, social security number formats, state codes, whatever. And then there's yet another layer on top of that, beyond that, which is super, super specific business logic: “I need to join these two tables, and if a column value in the first table is X, then it needs to be Y in the other column in the other table.” So that was another big learning for us: teams need to go progressively from the most basic stuff everywhere, to more detailed stuff on a limited amount of data, and then to hyper-specific, very custom stuff on an even smaller subset. 
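Here is a hedged sketch of what those tolerance-based, SLA-style checks might look like; the table, column, and thresholds are hypothetical, with the quarter-of-a-percent figure borrowed from Kyle's example.

```python
import sqlite3

# Hypothetical SLA for a table feeding a model: small amounts of bad data are
# tolerated, per the point that "within tolerance" beats "exactly zero".
MAX_NULL_FRACTION = 0.0025    # a quarter of a percent
MAX_DISTINCT_GROWTH = 2.0     # alarm if distinct product names double overnight

def check_sla(conn: sqlite3.Connection, prev_distinct: int) -> list[str]:
    violations = []
    total = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
    nulls = conn.execute(
        "SELECT COUNT(*) FROM products WHERE product_name IS NULL"
    ).fetchone()[0]
    distinct = conn.execute(
        "SELECT COUNT(DISTINCT product_name) FROM products"
    ).fetchone()[0]

    if total and nulls / total > MAX_NULL_FRACTION:
        violations.append(f"null fraction {nulls / total:.4%} is over tolerance")
    if prev_distinct and distinct / prev_distinct >= MAX_DISTINCT_GROWTH:
        violations.append(f"distinct names jumped {prev_distinct} -> {distinct}")
    return violations
```

The same progression Kyle describes falls out naturally: the row-count and freshness queries run everywhere with no configuration, checks like these run on the important tables, and fully custom join logic is reserved for the smallest, most critical subset.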

BP Great. It's funny, Cassidy encouraged me to get into the world of 3D printing. One of my sons is now like a tabletop miniature gamer, so printing crude plastic objects actually has a lot of value. So I was downloading some software. It's the first time in a while I guess I've used community-driven software and man, there's a lot of null out there. That's all I'm going to say. Not going to name any names.

[music plays]

BP All right. Well, thank you so much for coming on. I will shout out the winner of a lifeboat badge– somebody who came on Stack Overflow and saved a little knowledge from the dustbin of history. I've been looking for the past few days but there's no new lifeboats, so when that happens I go over to the inquisitive badge. Somebody who asked a well-received question on 30 separate days and has maintained a positive question record. So thanks to Funzo, awarded yesterday for asking so many good questions. I am Ben Popper, I'm the Director of Content here at Stack Overflow. You can always find me on Twitter @BenPopper. Email us with questions or suggestions, podcast@stackoverflow.com. And if you like the show, leave us a rating and a review. It really helps. 

CW My name is Cassidy Williams. You can find me @Cassidoo on most things. I do developer experience at Remote and OSS Capital. 

KK This is Kyle. I'm the CEO and co-founder at Bigeye. And you can find me online in most places @KyleJamesKirwan. 

EG And I'm Egor. Co-founder and CTO at Bigeye. You can find me on LinkedIn, Egor Gryaznov. It's a tough one for most folks. I am primarily on LinkedIn. Sadly, haven't been on Twitter in years at this point. 

CW It’s not sad.

BP It’s okay. You’re not missing anything.

EG And you can find out more about Bigeye by going to bigeye.com. 

BP All right, everybody. Thanks for listening and we will talk to you soon. 

CW Bye!

[outro music plays]