Measuring accessibility is hard. Here are some of the things that make it difficult.
Lagging vs Leading
Lagging measures are mostly about testing. For example: conducting an accessibility audit. They’re good for getting a measure of how accessible the thing is now. But, things change, so this is a time-bound snapshot.
Leading measures are mostly about process. For example: accessibility-focused reviews at key points during design. They’re good for getting a measure of how accessible the team is now. But, they don’t directly tell us how accessible the thing is now.
The trade-offs are between the time-span of the measure and whether the measure focuses on the product or the team.
Automated vs Manual
Automated measures are great for picking up errors and omissions. They’re also quick to do. For example: running the axe DevTools browser extension. But, they don’t necessarily tell us about quality.
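As a tiny illustration of the errors-vs-quality gap, here is a sketch of an automated check in plain Python. It is not a real tool like axe-core, just a hypothetical example: it flags images that are missing an alt attribute entirely, but it has no way to judge whether the alt text that is present is any good.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags with no alt attribute -- the kind of omission
    automated checks catch well. Illustrative only; real tools such as
    axe-core cover far more rules."""

    def __init__(self):
        super().__init__()
        self.violations = []  # (line, column) of each offending tag

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())

checker = AltTextChecker()
checker.feed('<img src="a.png" alt="Chart"><img src="b.png">')
print(len(checker.violations))  # the second image has no alt text
```

The check instantly spots the missing attribute, but deciding whether alt="Chart" actually describes the image is exactly the quality judgement that needs a manual review.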
Manual measures are great for looking at quality. They can give you a more human-centered measure than automated checks can. For example: reviewing link and button text on a page. But, they’re slow to do.
The trade-offs are between speed and whether the measure focuses on negatives (errors) or positives (quality).
Small scope vs large scope
There are at least a few levels of scope that we can measure accessibility at: Component, Page, Product area, User Journey. The smaller the scope, the easier and faster accessibility is to measure. But, it’s less representative of the actual user experience. The larger the scope, the more human-centered the measure. But, it’s slower and more difficult to measure.
The trade-offs are between speed and UI- or human-centered-focus.
There’s a lot to consider, and there’s not really a right answer. The decision of what to measure can come down to what trade-offs you want to make around speed, sentiment, and user-centeredness.