The Stack Overflow Podcast

How we’re making Stack Overflow more accessible

Episode Summary

Ryan talks with Stack Overflow front-end developers Giamir Buoncristiani and Dan Cormier about how and why Stack Overflow established a more systematic, sustainable approach to improving the accessibility of our sites and products.

Episode Notes

Read Dan’s blog post about the process of making Stack Overflow more accessible.

We followed the Web Content Accessibility Guidelines (WCAG), with a few exceptions. For example, we chose to measure color contrast using the Accessible Perceptual Contrast Algorithm (APCA).

We quantified the accessibility of our products using the Axe accessibility testing engine.

Our accessibility dashboard helps our internal teams and the community track the accessibility of our products: Stacks (our design system), the public platform (Stack Overflow and all Stack Exchange sites), and Stack Overflow for Teams (including Stack Overflow for Teams Enterprise products). 

We also implemented robust accessibility testing and made those rules open-source in a comprehensive package you can find here.

Shoutout to user Beejor for an excellent answer to the question “What is the largest safe UDP packet size on the internet?”

Episode Transcription

[intro music plays]

Ryan Donovan Ready to integrate Speech AI into your apps? Join over 200,000 developers using AssemblyAI's leading speech-to-text API. Get 100 free hours at assemblyai.com/stackoverflow and start building today. 

RD Welcome to the Stack Overflow Podcast, a place to talk all things software and technology. I'm Ryan Donovan, Editor of the blog here at Stack Overflow, and I'm joined today by two of my coworkers in the engineering department: Dan Cormier, Senior Front End Developer, and returning to the podcast, Giamir Buoncristiani, a Staff Front End Developer. Today we're going to talk about how we got better at accessibility, how we stopped reacting to issues, and came up with an overall accessibility plan. So welcome to the show, gentlemen. 

Dan Cormier Thanks. Good to be here. 

Giamir Buoncristiani Thanks. Likewise. 

RD So, obviously we've been thinking about accessibility before this. Why is it something that's important to us? 

DC There's the basic moral argument, which is that it's great for people to be able to use our products regardless of what their abilities are. We have a core value in our company to be flexible and inclusive, and I think that extends to our users and it extends to our products. And I personally like thinking about the fact that I wouldn't have a career if not for Stack Overflow existing, quite frankly. I don't have any motor issues or vision issues or anything, but there are people in those situations that wouldn't have the benefit I had if our products weren't accessible. And then there are the business concerns. We have people buying our products and having their employees use them, and we definitely want them to be able to use everything. And then there are also legal requirements across the world for the web, which potentially apply anywhere you can use our products.

RD So what was our previous approach to accessibility? 

GB I joined Stack Overflow around two years ago, and I remember pretty clearly how it was when I joined. We were quite reactive. That's probably the way to describe it. We got reports of issues here and there, sometimes from clients, sometimes from our community as well, and we were always postponing fixing those issues until each quarter, when we would have an enterprise release and we would say, “Okay, maybe we want to also fix some accessibility issues.” The core problem was that for us engineers and designers, people that are builders, but also for our leadership, we didn't really have a way to measure the accessibility status of our products. And that, I feel, disincentivized people from fixing stuff, because you would just do some hotfix here and there and you wouldn't see any KPI or anything moving. You'd do those fixes and then you still wouldn't know, “But are we doing good or not?”

RD Yeah, there's no definition of good. 

GB Yeah. So that kind of describes the state we were in when I joined two years ago, and I think we've come a long way since then.

DC Most definitely.

RD The way you talk about it almost sounds like a form of tech debt, like accessibility debt. 

DC Absolutely, a hundred percent. Even before Giamir joined, I'm ashamed to say, we didn't devote focus to accessibility. We would get the occasional external audit, say from a big company that uses Teams, and they would have expectations, and we would go, “We need to make accessibility fixes so they see progress here. What can we do? Let's do it quickly. Okay, cool.” Or we might only make fixes when we noticed something in the course of other work. We'd go, “Oh, this anchor should really be a button,” and then we'd fix that. But it was super piecemeal, super scattershot, and frankly we didn't have any smart prioritization. It was just, “What can we grab out of the air? Okay. We made a little fix. Moving on.”

RD So you decided to have a sort of overall plan and philosophy and bake it into the architecture. So how did you go about doing that?

DC I think first we had to establish what we were targeting, because we didn't have any North Star goal. We had assumptions. The de facto standard is WCAG, the Web Content Accessibility Guidelines, but that was never codified. We never said to the company, “This is what we expect.” So we created an ADR, an architectural decision record, that details what the expectations are around accessibility. Along the way we figured out some issues. For instance, we use the color orange a lot. It's our brand color, and orange does not play well with WCAG's color contrast algorithm, so we had to look for alternatives, and we came across APCA, the Accessible Perceptual Contrast Algorithm. It's a great tool to check color contrast based on human perception, rather than starting from how a computer computes whether there's contrast. It makes it a little more human, so that was a good place to land.

RD That's interesting. It sounds like a significant amount of the accessibility work was actually about the colors we were putting out there. What are the sorts of accessibility issues we were trying to address with that?

GB So as Dan mentioned, our orange was unfortunately failing WCAG, but of course we couldn't just change our brand colors or anything like that. So we decided to use APCA, because it's the algorithm that will potentially land in the WCAG 3 specification. As you can imagine, it was an interesting journey, because a lot of the tools out there that tell us whether the color contrast in our UI is good or not are based on the current WCAG algorithm. In certain cases we had to start developing a little bit of our own tooling to make sure we could test these things against APCA, and then identify custom thresholds that make sense for us in testing our UI against it. This actually brought us to an interesting story where we ended up creating custom APCA rules for axe-core, which is the de facto accessibility engine used for testing accessibility in an automated fashion, so that we could use an algorithm that is different from the current WCAG one.
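
For listeners following along at home, Stack Overflow open-sourced this work as the apca-check project. As a rough sketch of the general technique (the check ID and the single Lc threshold here are made up for illustration, not the package's actual configuration), a custom check can be registered with axe-core like this:

```typescript
import axe from "axe-core";
import { calcAPCA } from "apca-w3"; // Myndex's APCA implementation

// Illustrative threshold only; real APCA guidance derives the required
// Lc from font size and weight rather than a single number.
const MIN_LC = 60;

axe.configure({
  checks: [
    {
      id: "apca-minimum-contrast", // hypothetical ID for this sketch
      evaluate(node: HTMLElement): boolean {
        const style = window.getComputedStyle(node);
        // A production rule would resolve the effective background by
        // walking up the DOM, since an element's own background is often
        // transparent. calcAPCA returns a signed Lc value (polarity
        // flips for light-on-dark), so compare the absolute value.
        const lc = Number(calcAPCA(style.color, style.backgroundColor));
        return Math.abs(lc) >= MIN_LC;
      },
    },
  ],
  rules: [
    {
      id: "apca-color-contrast",
      selector: "*",
      any: ["apca-minimum-contrast"],
    },
  ],
});
```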

RD It's interesting. It makes me think: given that orange is a tough color for contrast, I remember there was a period in the mid-2000s where every movie seemed to be orange and purple for the contrast. Is it just that orange doesn't play with anything else but purple?

DC I fixate on orange because of the context I'm in. I've got an orange chair in honor of Stack Overflow. But we use orange a lot. The really big issue with the WCAG color algorithm, as far as I'm aware, is that it overestimates the contrast of darker colors when they're adjacent to one another and underestimates the contrast of lighter colors. So white text on an orange background will usually fail, and black text on an orange background will usually pass. But when you ask people which one seems more readable, they'll usually gravitate towards the lighter text on the orange background. And I think that's also an issue with red, so we're not alone.
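
To make that concrete, here's a small comparison sketch using #f48024, commonly cited as Stack Overflow's brand orange, and Myndex's apca-w3 package; the WCAG 2.x math is written out from the spec's formulas:

```typescript
import { calcAPCA } from "apca-w3";

// WCAG 2.x relative luminance and contrast ratio, per the spec.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
}
function luminance(hex: string): number {
  const n = parseInt(hex.slice(1), 16);
  return (
    0.2126 * channel((n >> 16) & 0xff) +
    0.7152 * channel((n >> 8) & 0xff) +
    0.0722 * channel(n & 0xff)
  );
}
function wcagRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

const orange = "#f48024";

// WCAG 2.x: black on orange passes AA (~8:1), while white on orange
// fails (~2.6:1), even though many people find white text easier to read.
console.log(wcagRatio("#000000", orange).toFixed(2));
console.log(wcagRatio("#ffffff", orange).toFixed(2));

// APCA instead reports a signed Lc (lightness contrast) value; a larger
// magnitude means more perceived contrast.
console.log(calcAPCA("#ffffff", orange));
console.log(calcAPCA("#000000", orange));
```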

RD So to get that human evaluation, how did you go about doing that? 

GB As I said, we had to develop tools. The APCA color contrast algorithm was implemented by researchers, actually, I think a group of researchers, and I would say it's still a bit of a work in progress to an extent. Sometimes it's been difficult to find the right documentation and information, so we had to upskill ourselves to understand the algorithm. And once we understood it, we had to run it against our UI. We essentially had to bring this algorithm into the tools that we and our engineers use on a daily basis and test the UI to make sure that, as Dan was saying, orange text on a light background has a certain level of lightness contrast (that's the terminology used in APCA) above a certain threshold. That's pretty much what we ended up doing. So we didn't go to people and ask, “Does this actually look better or more readable?” We trusted the research of the people who developed APCA, because they've invested a lot of energy and resources in this.

RD If I remember correctly, we got some input from a community member who had some color design expertise, is that right? 

DC I've been meaning to shout him out. He goes by Myndex online, and he's been great. I think I posted something on our design system, but he found it. Our design system is open source and public, so anyone can come across it, and he came in and gave a lot of good context about APCA and what he thinks its strengths are, and gave me a lot of good resources to go read. And he's been great just popping up here and there. I also use his site, which is pretty much devoted to APCA, and I saw him on a podcast recently. He just has a lot of really interesting stuff to say. He has a unique background: he used to be in the film industry, a very visual medium, and then he started having vision issues that made him pivot.

RD So let's get to the technical details. What were the actual changes, the process? How did we make this part of the Stacks design system and the overall architecture?

DC We really started by testing and improving our accessibility in Stacks first. I do think a design system is a really good place to start, because when you make one change to one component somewhere, it can propagate out to everything that's using it. We were hyper-focused on color contrast at first. We implemented visual regression testing and, along with that, accessibility tests to check color contrast. These tools were using the axe-core library and the apca-check tool that Giamir built. They basically check for all sorts of issues automatically, and it's in our toolchain: every time we push a commit, it runs through and makes sure that we haven't introduced any new accessibility issues.
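
A per-commit check like the one Dan describes often looks something like this sketch using Playwright's axe integration (the URL and tag filter are placeholders, not Stacks' actual setup):

```typescript
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

// Runs in CI on every push; the build fails if axe finds new violations.
test("home page has no detectable accessibility violations", async ({ page }) => {
  await page.goto("http://localhost:3000/"); // placeholder URL

  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa"]) // limit to rules mapped to WCAG A/AA
    .analyze();

  expect(results.violations).toEqual([]);
});
```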

RD You're talking about the measuring and testing. Did you have to create any new tooling to test that? Did you actually have to implement the APCA algorithm?

GB No, we didn't have to implement the APCA algorithm. Fortunately, there was already a package out there, maintained, I believe, by Myndex. I hope I'm pronouncing his username correctly. So we were able to leverage that to use the algorithm. But that's just an algorithm: you pass in some data and you get some data out. What we ended up having to implement ourselves was making sure that this algorithm could run inside axe-core, which is the engine we use to test our accessibility automatically. By the way, I also want to mention that those tools are super great because they're able to catch up to 57 percent of accessibility issues on a page on average, but they're not going to give you a full picture of how your UI is doing in terms of accessibility. This is also one of the reasons why, when we created our own accessibility score for our products so that we could measure how they were doing, we used, in part, these automated tools, which run scans on a daily basis. We call them automated accessibility checks, and they produce an automated accessibility score. But then we also went through cataloging issues that those tools cannot catch, the things that are often related to the screen reader experience and that kind of stuff. Those we catalog in a dedicated, centralized place; in our case, a Jira board that then gets queried by a service we created to calculate a manual score as well. So we have an automated score and a manual score that come together to generate an overall score for a product. These tools catch that 57 percent, and then we do some manual work to make sure we also consider the stuff they're not able to catch.
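
As a back-of-the-napkin sketch of that scoring model (the field names and the manual-score formula are assumptions; the transcript only says the two scores combine into an overall one, which Dan later describes as an average):

```typescript
// Sketch of the scoring model described above. The Jira-query plumbing
// is omitted; these fields stand in for whatever the real service reads.
interface ProductAccessibility {
  automatedScore: number; // 0-100, from the daily axe-core scans
  openManualIssues: number; // cataloged issues the tools can't catch
  totalCatalogedIssues: number; // all manually cataloged issues, open or fixed
}

// Assumed formula: share of cataloged issues that have been fixed.
function manualScore(p: ProductAccessibility): number {
  if (p.totalCatalogedIssues === 0) return 100; // no known manual issues
  const fixed = p.totalCatalogedIssues - p.openManualIssues;
  return (fixed / p.totalCatalogedIssues) * 100;
}

// Dan describes the overall score as an average of the two.
function overallScore(p: ProductAccessibility): number {
  return (p.automatedScore + manualScore(p)) / 2;
}

console.log(
  overallScore({ automatedScore: 96, openManualIssues: 3, totalCatalogedIssues: 30 })
); // 93
```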

RD You said that they each catch a certain percentage. How do you know what 100 percent is?

DC You don't. It's a common mantra in accessibility circles that there's no such thing as something being 100 percent accessible. I think a lot of people look at accessibility and think it means screen readers, right? Done. But accessibility can mean someone has cognitive disabilities that make something harder to use. It can be auditory, so if you have video content, you might need captions or transcripts. There are all sorts of disabilities that you have to consider. It extends even beyond that little core set; you can extend accessibility to bandwidth. Can someone access your websites when they're on a super constricted network? So there's no place where we hit a hundred percent, but we do have to have some sort of measurement. Ultimately it becomes a little bit arbitrary, where hypothetically we could hit a hundred in our measurement, but there's always more that you can do.

GB I think even if, hypothetically, one of our products had a score of 100, that wouldn't mean the product doesn't have any accessibility issues. It would mean there are no known accessibility issues. Maybe one day somebody figures out, “Oh, okay, there is actually this other issue.” Then we would catalog that issue, and it would affect the manual score, pushing the score down from 100 to something lower. So that's what 100 means in the score we created: no known accessibility issues. But there might still be ones we don't know about.

RD Right. But we maintain our own catalog of potential accessibility issues. 

DC We have this Jira board that Giamir mentioned where we manually catalog anything that we come across. A good example would be, let's say there's an error message that comes up after you input something into a form. An automated check is never going to find that, not practically, at least. You might be able to build some tooling, but it'd be really complex to maintain. A person's going to come across that though and go, “There's an error message and it's not announced to screen readers.” And we put those issues in the Jira board and then we'll have this manual score. But if we got rid of them all, then we'd have a hundred percent manual score but we'd know there's still more out there. So since our score is a combination, an average of manual and automated, we just have to keep that in mind and stay vigilant. 
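
For the form-error example, the typical fix is an ARIA live region, which is what makes the message audible to screen reader users; a minimal sketch:

```typescript
// Screen readers announce content inserted into a live region.
// role="alert" is an assertive live region, so the error is read out
// as soon as it appears.
function showFormError(form: HTMLFormElement, message: string): void {
  let region = form.querySelector<HTMLElement>("[role='alert']");
  if (!region) {
    region = document.createElement("div");
    region.setAttribute("role", "alert");
    form.prepend(region);
  }
  // Setting textContent after the region exists in the DOM triggers the
  // announcement; a visually styled error with no live region stays silent.
  region.textContent = message;
}
```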

GB Absolutely. In the end, the score helps us demonstrate that we are making progress, and also keeps our engineering department motivated to an extent. I don't want to jump ahead, but generally the way we discuss the accessibility issues on this board is through a group we created. We call them ‘Accessibility Champions,’ a group of people that meet every other week, go through the board, and talk about these issues and what we can do. So certainly the work we've done has helped us fix the concrete things, but there is also a cultural aspect of making sure you take the opportunity to upskill your organization around these topics. And there is no better way to do that than going through a concrete example, saying, “Okay, this is an issue,” and talking about it with others so that they can then upskill their own teammates as well. We hope to have a cascade effect from this initial group of champions advocating for accessibility in their respective teams.

RD So it definitely sounds like accessibility is never finished. Besides the Accessibility Champions, what sorts of processes and reviews are there? Do you have accessibility sprints lined up, things like that, for the future?

GB We don't have accessibility sprints lined up. What we're trying to do is shift accessibility left in our product development life cycle as much as possible. So apart from doing some initial fixes and pushing our score to a good state, now that we've done that as a design system team, we're in a phase where we're trying to make sure that engineers, and designers as well, in the product teams can do their part. For example, we just launched what we call Accessibility Bites for the organization. You can think of it as a short five-minute video where we go through a common accessibility issue that we have encountered often on our website and educate engineers on how they can remediate that specific issue. Another thing we've done recently to make sure our accessibility score doesn't drop over time is to push those metrics, this score, to our regular telemetry system. With the score in our telemetry system, we were able to treat it as a regular service level indicator, like latency for a request would be in the regular engineering world. That means we were able to establish a service level objective and say, “Hey, for this product, the service level objective is a score of 94.” And we have alarms in place so that if someone is developing a UI and introduces an accessibility regression, the score drops and we treat that as an incident. These are some of the things we've put in place to make sure that accessibility stays front of mind for our engineers, and also as a reminder to our leadership.
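
In code terms, treating the score as a service level indicator might look something like this sketch (the telemetry interface, metric name, and alerting hook are hypothetical; the transcript doesn't name the actual monitoring stack):

```typescript
// Hypothetical telemetry interface; substitute your metrics client
// (StatsD, Prometheus, Datadog, etc.).
interface Telemetry {
  gauge(metric: string, value: number, tags?: Record<string, string>): void;
}

const SLO = 94; // example objective from the conversation

function reportAccessibilityScore(
  telemetry: Telemetry,
  product: string,
  score: number
): void {
  // Emit the score like any other SLI, e.g. request latency.
  telemetry.gauge("accessibility.overall_score", score, { product });

  // In practice the alarm lives in the monitoring system, not app code;
  // this just illustrates the SLO comparison that triggers an incident.
  if (score < SLO) {
    console.warn(
      `[incident] ${product} accessibility score ${score} is below SLO ${SLO}`
    );
  }
}
```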

DC I think broadly it was education. Accessibility is not something that follows a direct line of logic. It's not exactly like programming; a lot of times you have to learn a lot of arbitrary stuff, and a lot of times you can just do manual testing: turn on a screen reader, tab through, and see how things operate. So to remove some of that predilection to just kind of shoo accessibility away because you don't really understand it, we've tried to share knowledge around a lot and do a lot of PR reviews and pair programming. And then we also have tests that come in, so a test failure or an alarm goes off when accessibility scores dip below a certain number. And going back, we did a big push to resolve a lot of accessibility issues across a bunch of our products and lead by example. Stacks, the design system, is by no means perfect in this regard, but we do try to consider accessibility really heavily there because, A, there's this benefit for any product that uses it, and B, it's an example to cite. When someone's building something else, they can look at Stacks, see how Stacks does this, and have a little more confidence that they're building something in an accessible way.

RD Well, I know we appreciate the knowledge sharing aspect of this. That's what Stack Overflow is all about. And I've appreciated being educated on this myself. Every time I forget alt text, somebody calls me out on meta about it. I appreciate you, community. What are the things that you've learned or you wish other people would learn about accessibility? 

DC Broadly, just that it's really important. It really matters. It's the sort of thing that I think is a complete afterthought for a lot of people. It was for me years ago, until a former coworker started to lose his vision. When I heard about that, it made me reconsider and recontextualize the process of developing a UI: you can go through it and ask, “How would he deal with using Stack Overflow?” He's also a programmer, so he's using Stack Overflow, so there's a direct line between how our products operate and how he benefits. So just caring about it is a huge start. And I think using semantic HTML is going to get you 75 percent of the way there: using the elements that actually reflect what they're doing, what their purpose is on the page, the structure, because a lot of assistive technologies expect certain elements to do certain things. Basically, don't make a button out of a div unless you're really confident that it should be a div. If it's a link, use an anchor, stuff like that. There are a million little bits we could mention, but those are the things that come to mind right away.
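
To make the div-versus-button point concrete, here's roughly what it takes to make a div behave like a button for keyboard and screen reader users, versus just using the real element (a sketch, not a recommendation to do the former):

```typescript
// The hard way: a div needs a role, keyboard focus, and key handling
// that the browser gives a <button> for free.
const fakeButton = document.createElement("div");
fakeButton.setAttribute("role", "button");
fakeButton.tabIndex = 0; // make it keyboard-focusable
fakeButton.textContent = "Submit";
fakeButton.addEventListener("click", submit);
fakeButton.addEventListener("keydown", (e) => {
  // Native buttons activate on Enter and Space; we have to replicate that.
  if (e.key === "Enter" || e.key === " ") {
    e.preventDefault();
    submit();
  }
});

// The easy way: semantic HTML, accessible by default.
const realButton = document.createElement("button");
realButton.textContent = "Submit";
realButton.addEventListener("click", submit);

function submit(): void {
  console.log("submitted");
}
```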

RD No pile of divs then. 

DC Use divs when they're appropriate. 

RD Giamir, how about you? 

GB For me overall, I feel very privileged that we got this opportunity to spend time improving the accessibility of our products, because one thing I certainly learned is that there are a lot of awesome people out there, often doing work for free, working on the specification. I want to give a shout-out to the WCAG working group, because I feel their documentation has improved so much over the years. I started being an engineer 10 years ago or so, and the WCAG documentation wasn't as clear and as well-defined as it is today. The documentation on what to do to make something accessible has improved massively, and this helped us a lot in our work and also in educating other people, because there's already a lot of very well-maintained documentation that we can send people to and read for ourselves. So in general, I'm really grateful that the state of the accessibility documentation is much better today. It can always be improved, of course, but it's pretty cool, and as I said, I'm very grateful to the community. And I feel that in our small bubble we were also able to contribute a little bit back, because, as Dan mentioned, we open-sourced the apca-check project, which is essentially those APCA rules that can run in the axe-core engine. And our design system is also open source, so all the work we've done in Stacks, which is basically the Lego bricks of stackoverflow.com, can be seen by anybody. Our documentation can be seen by anybody, so it's our way to give back a bit to the community.

[music plays]

RD It's that time again. We're at the end of the show, and I'd like to shout out somebody who came on Stack Overflow and dropped a little knowledge. Today we're shouting out a Great Answer Badge winner: somebody whose answer earned a score of a hundred or more. Today it's awarded to Beejor, who dropped an answer on “What is the largest safe UDP packet size on the internet?” So if you're curious, there's a great answer waiting for you. I'm Ryan Donovan. I edit the blog here at Stack Overflow. You can find it at stackoverflow.blog. If you like what you heard today, drop a rating and a review; it helps. And if you want to reach out to us with topics, suggestions, or feedback, you can email us at podcast@stackoverflow.com.

GB I'm Giamir. I'm a Staff Front End Engineer working on the design system team at Stack Overflow. And you can find me on X these days. My handle is @Giamir. Otherwise I have a website: Giamir.com. 

DC And I'm Dan Cormier. I am a Senior Front End Engineer also on the Stacks team with Giamir. And I'm not a big social media guy, so I'll just say look for me on GitHub. My username is Dan Cormier. 

RD Thanks, everybody. We'll talk to you next time.

[outro music plays]