The Stack Overflow Podcast

There is no golden path anymore: Engineering practices are being rewritten

Episode Summary

How do leaders ensure alignment, autonomy, and productivity as engineering practices continue to evolve?

Episode Notes

In this episode of Leaders of Code, Ben Matthews, Senior Director of Engineering at Stack Overflow, and Loïc Houssier, CTO at Superhuman, dive into how engineering teams can navigate paradigm shifts in a world of constant technological change. They discuss the importance of leadership in an ever-shifting industry and highlight the concept of aligned autonomy as a way to empower teams and increase organizational velocity.

Episode Transcription

Eira May: Hey everyone. Welcome to the Stack Overflow podcast. Today we have another episode of Leaders of Code, where we chat with tech leaders about the work they're doing, their challenges, their teams, their customers, their use of AI, and so much more. My name is Eira May. I am the B2B editor here at Stack Overflow and I am joined today by Ben Matthews, who is our Senior Director of Engineering. Ben, I think you've been on the show before. Welcome back. 

Ben Matthews: Thank you, very happy to be here. 

Eira May: And our guest today is Loïc Houssier. He is the head of engineering at Superhuman, which is an AI-native email app. Welcome to the show, Loïc. 

Loïc Houssier: Thanks for having me. Super excited to be here. 

Eira May: Amazing. So I wanted to just kind of start off by asking you both a little bit about how you kind of got into the world of software and technology.

Ben Matthews: Sure, my name is Ben Matthews. I'm the Senior Director of Engineering here at Stack Overflow. And, like, my journey through technology has been through ups and downs. I've always had a fascination with it of just building things, whether it came from Legos to construction. Then when I found code, that was also a cool thing, but something I always sort of saw as a hobby as opposed to, like, kind of what I wanted to do every day. I always had this impression of coders go off into a corner and program and then at the end of the day they let you out and then you can be around people again. And you know, I always enjoyed working with people and just building things. But when I actually got in the industry and I saw everything that I could do and what you could be like, this is where I found a passion of working with people, helping them build things, and very happy and very lucky I've got to lean into that for my career. 

Loïc Houssier: That's pretty cool, Ben. Like, on my side, like, I started- I mean, it was a long time ago. I have some gray hair, I would say, so I can obviously talk about it. My first interaction with code was, I was like in middle school, and I got this Amstrad CPC 6128. So for the old people that are listening to this podcast, I would say, fond memories. And you were basically, like, ordering these newspapers, at the end of which you had some tutorials that you could replicate on your- on this machine to do, like, some pretty cool stuff. That was my first interaction. But then in college, I would say I fell in love with math. So I was not a CS graduate. I basically did math until my master's, and in the master's, basically, I tried to do some applied math for security. So a bunch of cryptography, these cool algorithms, where indeed you have both the math and the theory, but you need to make it work and prove in the real world that it's working. So that was my first step with, like, C, and pointers, and algorithms, and optimization, and memory management, and all of that. So this is how I started. And, as you said, you had your ups and downs, Ben. I had my ups and downs, and I had my left and right as well. I spent, of course, most of my career in the tech industry, but I was lucky to spend two years outside of the software industry as well. So kind of like my left and right, which in terms of, like, leadership, that was something. Like, I learned a lot about it, and maybe we will be able to talk about it. 

Ben Matthews: And I always love hearing people's backgrounds, like, what is that journey of how you got into tech? Because it sounds like so many people have different journeys, and especially people that have changed industries and changed professions and moved into tech, they have, like, a lot of valuable context. What are your thoughts on how you get those people to see things the same way with all of those different backgrounds? What are some of the ways you get your engineers to align around, like, a message of what success is? 

Loïc Houssier: I feel, I mean- I come with a math background, and kind of like the purpose of math is to detect patterns in life and try to model, or provide models that can be, like, a good approximation of the reality that you see on the ground. I feel that being in, like, tech leadership is all about the same. It's helping people to kind of, like, take a step back, and like, you know what, the situation you're facing today might not be that different from the situation you had a couple of years ago. And helping people to make those connections, helping people understand that, oh, there's maybe something there. There's maybe a pattern. There's maybe something that I do, maybe not in an optimized way, but if I take that step back, I can learn from it, and I can learn from others with their different experience. Like, holy cow, what we're facing today, sure, I was in the B2C world or in the B2E world, or maybe working for, like, core tech, but if you take a step back, most of our problems are very similar. So yeah. To your question, I think this is a good portion of my job: to help people get unstuck from being too grounded and too close to the reality of their problem and take that step back. Something that I'm trying to do more and more with my people. 

Ben Matthews: Now, and I love that point of view, that people with all those different backgrounds can- like, you can learn from each other to enhance your point of view. It's not even trying to get them all aligned to one as much as blending all of these different approaches and having them learn from each other to build, like, new points of view. I'd love to hear more about that, like, the face-to-face chances that you do have for people to talk about those problems, to try to be innovative around those problems. How do you lean into that with those opportunities you have? 

Loïc Houssier: So for one, you need to create those opportunities, and it's not always easy. We are all moving fast. We have, like, pressure from the market, from the competition, from the business. We need to move fast. And when you need to move fast, your main focus is execution, and it's hard to create those opportunities. And at Superhuman, we are a fully remote company, so everyone is both remote and distributed. So we have people from San Francisco, where I am, to Patagonia in Argentina, up to Halifax, Nova Scotia, Canada, and even one person in London right now. So, like, the range in terms of time zones is pretty wide, and if you only focus on execution, you can miss that personal touch. You miss those close relationships. So we try to have those offsites regularly to build bonds. So you need to create the space. You need to be really deliberate and intentional, and understand, especially, as we started talking about, with execution in startups and all, that when you're not in the same space, information flows in a different way. Trust is built at a different pace. So you need a bit more, like, intentionality. So you need to be intentional, and that offsite was, like, a great example. We just had, I would say, this people-manager offsite where we aligned on the strategy of the company. We aligned on the values and how we should work. Because at the end of the day, when you get that alignment, you can create and rely on their autonomy, because everyone knows, understands, the goals of the company, the way we work, who we are as leaders, and how we can project that to the rest of the organization. When you get that alignment, it's easier to get that autonomy. So we call that, I mean, it's nothing new, but, like, the concept of aligned autonomy. So using those offsites to build bonds and create that alignment increases the velocity of the organization. 

Ben Matthews: I'm totally with you on those people meeting together. I think of our own offsites at Stack that we've done, and just because of a changed, let's say, landscape, or a different venue, different ways to connect, so many innovative ideas have come out. I can think of, like, three or four off the top of my head from our last offsite, things where it's like, that's a great idea. And it's not even just so much that we were meeting face to face, which was valuable, because, you know, we even got stuck in an office sometimes. It was just changing the venue, changing the paradigm of how we interact. And so I would actually love to hear, like, I think as you said, what autonomy? The-

Loïc Houssier: Aligned autonomy.

Ben Matthews: Aligned autonomy. Yeah. Being able to answer that question, could you give an example or two of, like, a time it's really helped across your organization, where now that those people leaders have that aligned autonomy, it's helped your teams go faster? 

Loïc Houssier: I would- a good example, like, maybe switching to, I would say, the world of AI. I would say when I started at Superhuman, we were starting to talk about, like, ChatGPT, Claude Code, and all of that, like, all those tools coming together. But Superhuman has a team that is made of a bunch of old timers. So you build some habits, you build some assumptions, budget, ability to bring new tools, whatever that is. And one thing that we've done here at Superhuman is like, okay, this is a new world. We don't know the best practices yet. We don't know which are the tools that we'll be using. We know it'll be disruptive to some extent. What can we do with that? And part of this aligned autonomy was like, okay, what is the alignment part? What would be helpful for an IC, an engineer out of Brazil? What would be useful coming from the top, to some extent, from the leadership, to bring that clarity that will help that person move faster with those tools? And it was coming back to two things. One is, like, removing the red tape, making really, I would say, clear assumptions, in terms of like, hey, you know what? You can try out anything you want. So that was the first thing. Don't think too much about compliance. Don't think too much; we'll expedite the process, turnaround of 24 hours. You ask compliance, we commit to review, whether it is, like, SOC 2 Type 2, like, whatever things that compliance wants to check, but removing that red tape, like, it will be easy. That was the first alignment. We will make it easy for you guys to jump on whatever tools you want to try. No limit whatsoever. There's no preferred actor just yet, or one that we want to have; try anything that makes sense. And the last piece was, you know what, we'll take the hit on the budget. You want to try 3, 4, 5 of those tools, like, on a monthly basis, and you need to buy a subscription and everything? You know what? It's fine. Just do it. 
So we've created those budget allocations, making sure that people feel enabled to try things out. Sometimes you're like, ah, that might be cool, but it's a bit expensive, and I'm already using, whatever, like, Copilot, or we're already using that, maybe it's too much for the company. And that's why I was insisting on the fact that we have, like, old timers, and you get used to some of the practices that were in place, and sometimes they just stick and you don't challenge them anymore. So, like, setting that alignment and saying, no, no, no, no, no. This is a new world. The rules are changing, so be wild. And you're totally entitled to be wild. And it was interesting because, of course, the first quarter it was, like, a bunch of ad hoc things and adoption; not everything was controlled, but we created this sense of urgency and this sense of enablement. So pretty much everyone jumped on something, some people using ChatGPT, some people using, I would say, a new IDE, whatever. I'm not here to name things, but different ways. And once we created that environment, now it was the time to say, okay, let's go back to, like, much more alignment. You had the autonomy. Let's align now. 

Ben Matthews: I'd love to hear how that AI adoption has changed, like, those workflows day to day, 'cause I almost feel like the AI landscape is changing day to day. The capabilities are, like, it's the wild west of things happening all the time. So how do you kind of build a new normal and let that change and enhance your engineering practices?

Loïc Houssier: I mean, it's also, like, a new paradigm. So it's a new paradigm, or, like, new tools, for ICs. But even us as leaders, like, I would say, we are managing people. The way we were optimizing and making people move faster in the past was by standardizing a bit, providing a golden path, so that you can optimize your process and tools for, like, the common case, and sure, some people can have some flexibility and everything, but you knew that maybe you had, like, a favorite IDE, a favorite CI/CD platform, I would say, or whatever. And our way to optimize was to standardize a bit, work on the guardrails associated with it. Now the problem is, those tools, they change every freaking week. And you were totally right with that, and you said something important: some are just, like, an addition to your current flow. Maybe it's embedded in your IDE, maybe, like, you're using VS Code, and that's great. This is within the same experience and same flow that you had. Some people might develop mostly through, like, the terminal, using tools like Zed or Wok, I would say. Bring that into your flow. But more and more, we can see that the flow is changing. Like, for example, OpenAI Codex is, like, a different type of flow. You are asking for some, I would say, weird request to this agent framework, and it's coming back with a PR. Like, you don't even have to look at GitHub. You just ask, like, hey, fix this bug that I've seen about, like, memory management in, I would say, that area, and it's coming back with a PR. And all of a sudden, and in this situation we have some developers doing that now, they basically ask a bunch of requests in the morning. They don't care about it, it's fully async. Then they focus on their core project, and at noon they take a break. 
They go back to those, like, agentic frameworks that are coming back with some PRs, and now you are reviewing PRs more than you are actively working on things, and the workflow is different. The way to think about it is different. The way you organize your day is different. So right now, it's way too soon, I believe, to understand what the new flows will become. So what we've done is just to capture some of the best practices. So we have an AI guild, kind of, like, following a bit the Spotify model of guilds. We had guilds in the past, and still have, like, for front end and backend, so that we have, like, some consistency across the different parts. We created an AI guild with representatives of each platform, because it's not that one tool will work for everyone. Like, ask my iOS engineer, like, I would say, it's different. It's different. So we have best practices, we have, like, best models that are working per platform, and we're trying to identify that, compile it, and keep it up to date, and the AI guild, I would say, that's their job to some extent: to work on the identification of best practices, communication of those best practices, and also track adoption and understand, I would say, why some teams are not adopting, what are the specificities, or, like, subtleties, of their world or their workflows that make things different. Long story short, we are pretty intentional in driving the change. 
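The "dispatch in the morning, review at noon" rhythm described above can be sketched roughly as follows. This is a toy illustration, not anyone's actual tooling: `stub_agent` stands in for a hypothetical coding agent that turns a bug report into a pull request, and all names and tasks are made up.

```python
# Rough sketch of the morning-dispatch / noon-review rhythm.
# `stub_agent` is a stand-in for a hypothetical coding agent; a real one
# would clone the repo, make the change, and open a PR asynchronously.

from dataclasses import dataclass


@dataclass
class PullRequest:
    task: str
    diff: str


def stub_agent(task: str) -> PullRequest:
    # Fabricate a placeholder diff for illustration only.
    return PullRequest(task=task, diff=f"placeholder fix for: {task}")


def morning_dispatch(tasks, agent=stub_agent):
    # Fire off every request before starting focused work; conceptually
    # these run async in the background, though this sketch runs inline.
    return [agent(t) for t in tasks]


def noon_review(prs):
    # The engineer's job shifts from writing code to reviewing it:
    # keep only the PRs that actually came back with a change.
    return [pr for pr in prs if pr.diff]


prs = morning_dispatch([
    "memory leak in the sync worker",
    "off-by-one in pagination",
])
print(len(noon_review(prs)))  # 2
```

The point of the shape, as in the conversation, is that the human's time concentrates in the review step rather than in authoring each change.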

Ben Matthews: Yeah, we have an AI guild here at Stack as well, facing the same challenges and same benefits as well. People from multiple disciplines all, like, posting, like, this sounds really exciting, and then they jump to other areas: this is really exciting. Even, in a lovable way, we have some curmudgeons saying, like, oh no, we should never do this. And that's a great balance to see. And everyone's having their own sort of rubric of, like, this is a great place for us to use AI, this is probably not where we want to use it.

Loïc Houssier: Not yet.

Ben Matthews: Not yet. Well, hopefully, yeah, hopefully it keeps evolving and growing, especially as, kind of, like, we formalize how that data comes in and how we can keep improving it and make it, like, that cyclical agreement between content generation and then content delivery. That's a whole other podcast right there. But yeah, I love that AI guild. We were lucky at Stack to also have the same empowerment from our leadership, where we tried to empower them, like, try something, do it, give it a shot, break it, put it back together, you know, and just see how it behaves, and change your paradigm. Try to get out of that comfort zone and see where it can help. And in some places we saw, especially around, like, documentation, some test generation, those were some easy wins for us to really get some things going. And then we're trying to find other areas of the code, what can it rewrite for us reliably, and trying to find that balance of, like, well, we really needed human context here that we weren't able to give the AI, but in some other areas, it actually gave us some new interesting ways to approach some problems. It's a very fascinating balance that is changing all the time, but trying to find that balance, it's a fun adventure of being in kind of this new frontier. 

Loïc Houssier: Yeah, and so, it's interesting, like, one thing that AI has really been helpful with is, like, those side projects that you never had the time to do, like, that are not for production or for your customers, but, like, kind of like those helpers, or, like, those data crunchers, where you're like, "Ah, if I could do this script and everything, like, I don't have the time," and now you can basically do them, like, as a side project. So we had a lot of, like, quick wins like this on all this mundane work, which is pretty crazy. And one thing that- 

Ben Matthews: Not even just for engineers.

Loïc Houssier: Totally.

Ben Matthews: It's opened a door to so many other people just have like a quick Runup app to something help them along. Like that's opened so many doors and shortened that learning curve of those quick little side apps. 

Loïc Houssier: Yeah, no, totally. And one thing that we found, like, really useful in our case: we are pretty, like, design-oriented at Superhuman, so we spend a lot of time thinking about, like, the user flow, the user interactions, because we want the product to feel right. The reality is, to feel right, you need to try the feature, because if you have a Figma and everything, like, that's great, but you lack the feeling of using the product, and tools like v0, for example, allow our PMs and designers to basically ship a working version of the user interaction. So now you're able to say, like, “Ah, no, it feels clunky. Looks great on paper, but, like, it feels clunky,” so you don't have to implement it. Now you have those tools that are doing, like, a pretty good POC that looks great and you can try it out. So that's another, like, area that was pretty impactful for us, especially.

Ben Matthews: I love that. The POC part is where I think it really helps, to explain some features. An example that came to mind was a feature we're hoping to announce kind of quickly at Stack, but it came from a place not within engineering, from someone else, like, I have this idea on how to do it, and they just had trouble explaining it. So they were able to spin up just a quick example, like, "Let me just show you really quick. This isn't fully featured, this is not ready to go, you know, in front of our millions of users, but I can explain the idea much better visually by just giving you the interactions." And it was so powerful, just like, "Oh, I totally see what you mean now," and that was so much more persuasive.

Loïc Houssier: No, 100%, 100%. I think the interactions between designers, PMs, EMs, or, like, tech leads, whatever your triad is, like, all those tools accelerate all of that in a pretty aggressive way. But that's also something that is interesting, because seeing all this progress, all of a sudden your CEO, your board, like, they're like, "Hey, guys, now we can do it twice as fast." And you have, like, the expectations that are growing as well. So we are kind of, like, always catching up. You have, like, all those CEOs that are bragging about the fact that 95% of the code is generated with AI, like, I would put quotes on this as well, but you see a bunch of people bragging to some extent, or, like, maybe inflating a bit the numbers to show that they are AI natives, and this is creating an environment where you have also some sort of peer pressure to compare, like, what you're doing and the value and the improvement that you see in your team, and the reality of that impact, and this sort of, like, qualitative feedback that you get from the different networks. I'd be curious to understand, Ben, like, how you've been facing that.

Ben Matthews: At Stack, we're bringing in a whole lot of tools, and in how we use them, we still have a position I'm proud of and want to keep, in that there's a human in the loop for anything that goes out. I think, in some places, there are obviously things that we don't use that for, like number crunching, organizing, sorting data, things that we do internally that may not be as production-facing, we don't have as much of a stress on. But we view AI tools as accelerating our engineers, not replacing some of the tasks that they do. So I think that's kind of where we put our line: whatever you were doing, especially, like, those tedious, redundant tasks that you have to do, like, 20 times, the sorting, AI fits wonderfully there, but we still have the human that is doing those PR checks, that is checking there, that is monitoring the build pipeline. But what AI has really helped us do is, how can we get alerted to changes or anomalies really quickly when we're monitoring, or during observability, how can we kind of consolidate pieces of code. Identifying those opportunities and then kind of helping give us shortcuts on how to do it has really been helpful, especially on a couple teams where we had some longer timelines that got shortened simply because it kind of did some of this boring rote work that we would have to do to spin up new sites or new features. That's where it's really been leaning in for us and helping in delivery and stability a lot.
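As a concrete illustration of the "get alerted to anomalies really quickly" idea, one common approach is to flag a new observation that sits several standard deviations from a trailing window. The threshold, window, and latency numbers below are made up for the example; real monitoring stacks use more robust variants of the same idea.

```python
# Minimal sketch of sigma-based anomaly flagging on a monitored metric.
# Window contents, metric, and the k=3 threshold are illustrative.

from statistics import mean, stdev


def is_anomaly(history, value, k=3.0):
    """Return True if `value` deviates more than k sigma from `history`."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu  # flat history: any change stands out
    return abs(value - mu) > k * sigma


# Imagine a trailing window of request latencies in milliseconds.
latencies_ms = [120, 118, 125, 121, 119, 123, 122, 120]
print(is_anomaly(latencies_ms, 124))  # False: within normal range
print(is_anomaly(latencies_ms, 400))  # True: worth paging someone
```

The value, as in the conversation, is that the machine watches the stream constantly and the human only steps in when something crosses the line.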


Loïc Houssier: Yeah, one thing that I see as being, like, relatively impactful is the onboarding, and the ability to basically ask questions. Like, you are a new engineer in a new code area that you don't know exactly, you need to interact with, like, an API endpoint, you don't understand, like, lots of dependencies, it's not your language of choice, but you're coming in and you want to ramp up. And the ability to ask, whether it is Claude Code or, like, Copilot: how is this working, tell me more, like, I want to understand the differences and, I would say, connections between those libraries and everything, like, help me with that, I would say, early step. That was something that was mentioned by one of my engineers during one of those reviews, like, the cognitive load that you have when you jump into, like, a new piece of software because you're helping a team or whatever. Usually, you spend a lot of time just trying to understand the lay of the land, and this is, like, dramatically improved with those tools, because they guide you. It's as if you have someone that knows the code already that helps you: "Look here, because this is where we do that, and this is where we do this, and we're using this framework to, let's say, bring everything together." I'm like, "Huh, okay, now I get it," and getting to that "now I get it" from days to, like, one hour, using the right tool and asking the right questions. So that was, like, pretty impressive.

Ben Matthews: To shift it a little bit to maybe some more traditional things in the context of AI: how has measuring the performance of an engineering team had to change based on, like, kind of the new things? Like, how do you judge a high-performing team, and have AI tools changed your thoughts on that at all?

Loïc Houssier: I mean, there's always a part of, like, qualitative and quantitative. I wish someone would come up with, like, one quantitative KPI that I can look at and it will tell you- - 100%. - I would say, the reality. I'm still using, I would say, one specific KPI on aggregate that is interesting for the trend rather than for the raw number, which is, like, the number of PRs per engineer per week. We have those DORA metrics and, like, SPACE metrics and everything, but I find this one to be like a bandwidth throughput, and it's only a bandwidth throughput, so it's not touching on the outcome, I would say the real stuff, and it's something that you can trick, because number of PRs, okay, I could do, like, a bunch of stacked PRs and, like, inflate that thing artificially. But still, when you talk about it with people and you explain to your organization that that's something that is interesting, don't trick it, it's just here to understand if there are some teams that are facing more difficulties. Like, maybe it's just local, maybe it's because your stack is more complex, maybe you're doing, like, a big refactor; there's always some reason, and that's context you need to provide. But having this KPI, to me, was always good, like, trying to understand the number of PRs per engineer per week, and it's normalized: you grow, you have more people, you normalize by the number of people in your organization, and it gives you, like, an idea of the throughput. And, qualitatively, when you look back, you always have this feeling that, "Oh, damn, this month, like, they were on fire," and you can always correlate those. So I found that one to be pretty interesting, and with the rise of AI, we've seen that number getting, like, higher. So, like, the raw throughput being better doesn't mean that the outcome for the end users is better. 
Like, we need to measure that in a different way, but this raw number is getting better with AI, and we complement that with some qualitative analysis. We do, like, a monthly survey asking our people, "Is it better for you? Like, is AI helping you? Yes, no. Are you using it every day? Yes, no." And we can see definitely that there was some skepticism nine months ago, and now people are like, "No, no, no, no, it's part of my job, where I am." And right now, like, in the last survey that we did in April, on average, I think that people are now saying, qualitatively, but in their own estimation, that they probably gained, like, 20% of their time, like, an increase of productivity of 20%. - Oh wow. - Some people going as far, depending on where they are in the project, as 40%, and I was mentioning this case of, like, you're new to the project: with all those tools, your ramp-up is getting, like, so much faster. So, on aggregate, right now, and we have, like, mostly senior people in our organization, the understanding and, like, acceptance is, like, 20%; it's probably a bit higher than that, but 20% is a conservative perspective on the productivity gain with AI.
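The normalization behind the KPI described above is simple arithmetic: divide raw weekly PR counts by headcount, so the trend stays comparable as the team grows. A minimal sketch, with made-up counts and team sizes:

```python
# Sketch of the PRs-per-engineer-per-week KPI: normalize raw weekly
# merge counts by headcount so the trend survives team growth.
# All numbers are invented for illustration.

def prs_per_engineer_per_week(weekly_pr_counts, headcount):
    """Normalize raw weekly PR counts by the number of engineers."""
    return [round(prs / headcount, 2) for prs in weekly_pr_counts]


# Same raw throughput, different team sizes: the normalized series is
# what makes the number comparable quarter over quarter.
small_team = prs_per_engineer_per_week([30, 33, 27, 36], headcount=6)
grown_team = prs_per_engineer_per_week([30, 33, 27, 36], headcount=10)
print(small_team)  # [5.0, 5.5, 4.5, 6.0]
print(grown_team)  # [3.0, 3.3, 2.7, 3.6]
```

As the conversation stresses, the normalized number is a trend indicator and a conversation starter, not an outcome measure, and it can be gamed (for example, with stacked PRs).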

Ben Matthews: You were speaking my language there on so many points. When we can find that one metric to judge it, like, we can all retire, we don't need engineering leaders. And I agree, I think numbers are important, but they're indicators, they're, like, points, variables to help you make a decision on how a team is doing. Like, whether it's velocity or PRs, like, what's that classic saying: tell me how you measure me, and I'll tell you how I'll behave. And that's really how they can learn. But I do try to reassure them, like, this number is just for us to try to find patterns. Using velocity as an example, if there's a team where for some reason, like, the number of stories or whatever drops, we look at, like, is it the type of stories, was someone just out sick? Let's just look at, like, what can we glean from that? Or when it really shoots up, like, okay, are they just really good at this area? What did we do differently that really enabled them to do things faster? So we try to use it as, like, an indicator of, "Hey, that's worth looking at closer."

Loïc Houssier: Yeah, to me, it's a great conversation starter. Like, you have this KPI trending up or down, and you bring it to the team and say, "Hey, folks, last week we were pretty down, what's the context? Help me understand." And, like, you know, not only in a bad way, but just more like, let me understand. "Oh, we had this big refactor because we moved from, I don't know, CoffeeScript to whatever. So it's taking a bunch of time and it's usually one massive PR." So yes, you got one PR this week, end of discussion. That's great. That's it. That's the context that I need. I would say this is a conversation starter to understand, like, the lay of the land, and it differs per team. But at the same time, when you have, like, a board, when you have, like, a CEO, you need to give them something to, I would say, help them understand how the team is evolving, how the team is facing AI, is it improving or not.

Ben Matthews: And I think they're trying to use that to maybe replace other shortcomings. As you said, it can accelerate engineers, but that doesn't solve all your problems. I think a mistake I see some leaders making is, like, either they're not getting the right product-market fit, they're not really hitting what the customers want, and they think, like, well, we'll throw AI at it and then we'll build things faster. And, like, well, if you're still not going in the right direction, going in the wrong direction faster might even make your problem worse. So it doesn't really solve a lot of leadership or product-fit gaps, and I think a lot of people are viewing AI that way, like, we'll just give our engineers AI and then a lot of our problems go away. And I think it's an extremely valuable tool. It's another tool in the tool belt, but you also have to give them the right direction, the right ideas, and the right enablement to still build the right product, the right fit.

Loïc Houssier: Yes. And you need the right profiles to look at it. You need the right seniority, I would say, to use it efficiently. Like, when you have millions of users, or, like, millions of, I would say, transactions per, maybe not per second, but, like, you have a platform that is under a huge amount of load, you don't want, like, a magic PR to be just barely approved by your junior engineer and go to production. Like, no freaking way. No freaking way. Or we are not there yet. We're not there yet. No, we're not. And we'll be there at some point, but still, it's a great tool, it's still something that is changing the way we work. How do we manage that? Like, as an engineer, you are an engineer, you have 10 years of experience, you were not born with it. Like, I was mentioning that I was born with my Amstrad CPC 6128, but between that and when I graduated with my master's, the change was somewhat linear. Like, it was not a step-function change. What I got from those, like, early tutorials and the way I was working, sure, more depth, I was, like, closer to the metal, and, like, the way the processor is working and everything, sure. But it was not a paradigm shift. Like, I was always thinking this way. Now we are living a paradigm shift. How do you adapt to it? You need to understand, like, for your own career, how to adapt to it, and that's tricky, because, like, I was discussing with someone else last week and we were talking about, like, those engineers that are getting out of college. They were basically in a world where what they learned in the first year as freshmen was totally different in the second year because of the paradigm shift, and, like, in the course of their studies, they've seen, like, AI being cool, they discovered, like, ChatGPT that can answer, I would say, some questions pretty fast. They understand that, "Oh, damn, now we can build with LLMs. Oh, damn, now there's agents. 
Oh, damn, now there's MCP. Oh, damn." Like, every time, there's a paradigm shift. So they're growing with this brand plasticity as like the normal while we were I would say building with the assumptions that you have time to learn because things weren't, I would say, I mean, we moved from pure JavaScript, to jQuery, to Angular, to React. I mean, it's not that much of a change if you think about it. And it took, I would say, it took like, what, 20 years to get there?

Ben Matthews: Yeah, you might be right. Especially the JavaScript-to-jQuery jump. How revolutionary was jQuery at the time? "I can do all of this now without having to write all this JavaScript?"

Loïc Houssier: It's just another level of abstraction. So for my brain at least, it was like, okay, it's cool, it's simpler, but it was not a paradigm shift. What do you think?

Ben Matthews: And I would say a little bit, the paradigm shift I think was how welcoming and how it enabled new people to get right into making interactions and things on a webpage. I agree, the capabilities weren't different, it's not as if the DOM interaction changed with JavaScript or anything. But how many people could just say like just .animate on something now and move things around, when before this, were long JavaScript functions that 

Loïc Houssier: Oh, that's fair.

Ben Matthews: Had to adapt to each browser, 'cause IE6 was still around and causing headaches all around. So in terms of what we can do on the web, no, it didn't change. But in terms of how many people could do things faster within web development, I do feel like there was a big shift there.
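[Editor's note: to make the point above concrete, here is an illustrative sketch, not jQuery's actual source, of the kind of hand-rolled interpolation loop that a single `.animate()` call replaced. The helper names, the 13ms tick, and the pre-IE6-quirks shape of the code are all assumptions made up for illustration.]

```javascript
// The jQuery one-liner (shown for reference):
//   $("#box").animate({ left: 200 }, 400);

// A hand-rolled sketch of roughly what that one line replaced:
// compute per-frame positions yourself and poke them into el.style.
function lerp(start, end, t) {
  // Linear interpolation: t in [0, 1] maps start..end.
  return start + (end - start) * t;
}

function animateLeft(el, targetPx, durationMs) {
  const startPx = parseFloat(el.style.left) || 0;
  const t0 = Date.now();
  const timer = setInterval(() => {
    const t = Math.min((Date.now() - t0) / durationMs, 1);
    el.style.left = lerp(startPx, targetPx, t) + "px";
    if (t === 1) clearInterval(timer); // done: stop the frame loop
  }, 13); // ~75fps tick, the kind of magic number these functions hard-coded
}
```

And this sketch doesn't even include the per-browser branches (opacity filters, box-model differences) that real pre-jQuery code carried for IE6.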

Loïc Houssier: Yeah, my point was more about what it changes for you as an IC in terms of skills management, like

Ben Matthews: mm-hmm. 

Loïc Houssier: Like, jQuery is coming, and you don't have to jump on it and learn the new pattern right away. Right now, it's super easy to be left behind. You cannot just wait one quarter to understand what's happening; you need to stay up to date. And I think this is where brain plasticity is important, because we had more time back then. With jQuery, you could take a year to learn it and that's fine for your career. If you wait too long on those AI tools, getting a new job might be tricky, because we are all thinking, and I'm pretty sure it's the case for you at Stack as well, how do you bring AI fluency into the interview process? Because more and more, you want people who are able to use those tools in a way that makes them proficient.

Ben Matthews: So that will be part of the interview process.

Loïc Houssier: Those are great questions. 

Ben Matthews: Tell me, what's next for Superhuman? What are you most excited about?

Loïc Houssier: Oh, it starts with a problem I feel we need to solve. As I was mentioning, we put a heavy emphasis on quality: our users feel great, everything works, everything is fast, everything is correct. So we're creating a feeling of delight and trust between the user and the product, the way you think about Apple devices: it just works. We might debate the new design framework they shared during the (indistinct), but long story short, that "it just feels right" approach is being tested by the rise of AI and LLM features. You're switching from a perceived quality that we own, that we can QA for as much time as we want, so what ships is great and qualitative, to a world where part of the perceived quality depends on how the user uses it. If you prompt poorly, garbage in, garbage out: the quality of the answer you get depends heavily on the quality of your question and your ability to ask it. So when Superhuman allows users to search their email and ask questions about their email and the context they have, if they ask a poorly written question, the result will be poor. One thing that is really important for us is to work on this perceived quality and avoid having our users complain about our quality when it was garbage in. How do you control the garbage in so that you don't get garbage out? That's something I'm especially passionate about. It's more about UX than the typical AI KPIs, precision and recall and all of that. It's mostly about the user interaction and how you guide them into asking the right question. And the other piece is, there's a lot of mutation in this market. We are not only an email tool; we are a productivity tool.
Our users save, on average, three hours a week; that's the amount of time we win back for them. So we are a productivity tool more than we are a communication tool, and there's a lot happening in productivity right now, a lot of different interactions. With LLMs, it's easy to say, "Oh, these are your actions," so everyone is kind of working in this world of productivity. We have a conviction: we solved email. What is the next thing we can solve? So the future will be about that: understanding this evolution of productivity, and understanding the perceived-quality issue that AI introduces. And we think we're probably the best placed to solve those issues as elegantly as possible.
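[Editor's note: the "control the garbage in" idea above could, as one hypothetical UX pattern, take the form of a pre-flight check that nudges vague queries toward specificity before they ever reach the LLM. Everything below, the function name, the heuristics, the hint wording, is invented for illustration and is not Superhuman's actual implementation.]

```javascript
// Hypothetical "garbage-in" guardrail: score a user's email-search query
// for vagueness and, instead of answering poorly, suggest refinements.
function preflightQuery(query) {
  const words = query.trim().split(/\s+/).filter(Boolean);
  const hints = [];
  if (words.length < 3) {
    hints.push("Try adding more detail, e.g. a sender or a topic.");
  }
  // Illustrative heuristic: specific queries tend to scope by person or time.
  if (!/\b(from|about|last|since|before|after)\b/i.test(query)) {
    hints.push("Mention a person or a time range to narrow the search.");
  }
  // ok === true means the query goes straight to the LLM;
  // otherwise the UI surfaces the hints before submitting.
  return { ok: hints.length === 0, hints };
}
```

The design point is that the check runs in the UI, so the user gets guidance ("ask a more specific question") rather than a low-quality answer they would blame on the product.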

(upbeat music)

Eira May: Thank you for listening. If you have questions or suggestions for guests that you would like to speak with, if you have suggestions for the kind of topics you would like us to cover, you can always email us. My email address is emay@stackoverflow.com. You can also find me on LinkedIn.

Ben Matthews: So I'm Ben Matthews, Senior Director of Engineering at Stack Overflow, where we're focused on human-generated content. We're really excited about some upcoming releases around enabling knowledge solutions and community products. You can find me @BenMatthews on LinkedIn or @BenMatthews on Bluesky.

Loïc Houssier: I mean, I would just suggest that people who are not using Superhuman try it out. It's a nice product. It feels great, and it creates a sort of positive dopamine effect every day when you hit inbox zero. You can reach me on LinkedIn. But yeah, thanks for having me, guys. That was nice.

(bright electronic music)