Practical accessibility, part 1: Markup semantics & ARIA

If I’ve learned anything while coding for accessibility, it’s that I don’t know what I don’t know.

There is always something I’ve yet to encounter, a pattern I’ve not seen, a version of WCAG that covers criteria not previously on my mental checklist, an assistive technology that challenges my perceptions of using the web (hello, voice control). If you’ve spent any time wading into the waters of accessibility standards, guidance, and ad hoc advice, you may be able to relate to the feeling of information overload. Perhaps that’s why the WAI-ARIA Authoring Practices helpfully remind us that “no ARIA is better than bad ARIA”; it’s possible to do too little and too much.

But there are some absolute truths to coding for accessibility: rules and patterns that are reiterated throughout the documentation and testing practices. I’ve compiled a list that, when followed, ensures a solidly accessible and usable experience, shared in this and future “Practical accessibility” posts.

Semantic markup gets you most of the way there

Semantic HTML tags communicate a ton of information to browsers and assistive technology that’s interpreted and rendered in a fairly standard way:

  • landmarks (banner, navigation, main),
  • content hierarchy (headings, paragraphs, lists),
  • navigation and input controls (links, buttons, form elements),
  • mouse and keyboard events and states (hover, click, focus),
  • and basic visual affordance via browser-assigned styles.

For screen readers, semantic tags have implicit roles (no need to add role="button" to a <button>), and sometimes provide bonus feedback (most will helpfully read the total number of list items in a <ul>). A screen reader user would be able to find their way around a page coded with semantic HTML that renders default browser styles.
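As a minimal illustration (the links and labels here are made up), none of the elements below need any ARIA at all; each already carries an implicit role:

```html
<!-- No ARIA needed: every element has an implicit role. -->
<nav>                                   <!-- role: navigation -->
  <ul>                                  <!-- most screen readers announce "list, 3 items" -->
    <li><a href="/html">HTML</a></li>   <!-- role: link -->
    <li><a href="/css">CSS</a></li>
    <li><a href="/aria">ARIA</a></li>
  </ul>
</nav>
<button type="button">Subscribe</button> <!-- role: button; no role="button" required -->
```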

ARIA should be applied only where needed

If we could end the conversation at “just use semantic HTML,” the state of accessibility on the web would be in better shape. But that’s not realistic. Web standards and a multitude of frameworks and custom utilities support an entire universe of designs, content types, and ways to interact.

ARIA roles, states, and properties let us extend HTML-like meaning to the specific page layouts and widgets we create — those that either don’t have implicit roles and behaviors built into the markup, or have implicit roles that don’t accurately communicate their purpose.
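For instance, a hand-rolled disclosure widget has no native expanded/collapsed semantics, so ARIA supplies them. This is only a sketch; the id and copy are made up, and the script to toggle the state is omitted:

```html
<!-- Hypothetical disclosure widget: aria-expanded and aria-controls
     tell screen readers what the markup alone can't. -->
<button aria-expanded="false" aria-controls="faq-answer">
  Do I need ARIA here?
</button>
<div id="faq-answer" hidden>
  Only to expose the expanded/collapsed state; flip aria-expanded
  from script whenever the panel is shown or hidden.
</div>
```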

Create landmarks with section tags or ARIA roles

For longer than I’d like to admit, I mixed up how ARIA landmark roles and section elements work together, and only recently understood that most of the time they don’t need to work together at all. The gist is this:

  • Landmarks are ARIA roles that assign semantic meaning to the major sections of your page (e.g., role="navigation"). They’re critical for screen reader users to understand the overall layout and content, and often serve as shortcuts using screen reader-defined hotkeys.
  • Section elements — like <main>, <nav>, <header>, <aside>, to name a few — are handy tags that come with built-in landmark roles.

The simplest way forward is to structure your page with section elements, which take care of assigning ARIA landmark roles so you don’t have to.
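A typical skeleton might look like this, with each section element mapping to the landmark role noted in the comment (the “...” content is a placeholder):

```html
<body>
  <header>...</header>   <!-- landmark role: banner -->
  <nav>...</nav>         <!-- landmark role: navigation -->
  <main>...</main>       <!-- landmark role: main -->
  <aside>...</aside>     <!-- landmark role: complementary -->
  <footer>...</footer>   <!-- landmark role: contentinfo -->
</body>
```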

If you need to retroactively apply landmark roles to your page sections, you can do that with the role attribute plus a landmark value:

<div role="main">
  <h1>Practice accessible coding, pls</h1>
</div>

Or:

<div role="navigation" aria-label="Shortcut links">
  <ul>
    ...
  </ul>
</div>

(Putting role="navigation" directly on the <ul> would override its list semantics, so apply the landmark role to a wrapper instead.)

And this is helpful in righting a particular wrong:

<div class="subheading" role="heading" aria-level="2">
  This applies H2 tag semantics
</div>

Regardless of which method you use, keep in mind there can only be one of the following per page:

  • <header> (child of <body>) or role="banner" — branding and global navigation elements usually located at the top of the page
  • <main> or role="main" — content area that is the primary focus of the page

Give meaning and context to your custom widgets

Whether you build a widget as a standards-based web component, with Vue, React, or any other framework, or with a combination of HTML and JavaScript, screen readers that encounter your widget will look for clues in the rendered markup to discern what it is and tell users that they’ve encountered a search box, menu, or carousel.

Screen readers generally follow this order of precedence:

  1. Look for explicitly set ARIA attributes (roles, states, and properties); in the absence of those:
  2. Interpret any implicit roles from markup semantics (form elements, paragraphs, lists, etc.).
  3. Read any text available in the markup.

If no discernible semantics exist (think ARIA-less <div> or <span> tags), the screen reader skips straight to #3. So when you omit ARIA, you’re rolling the dice on whether users will understand that your <ul> is a menu and not simply a list. When you omit ARIA and semantic tags, you’re leaving it up to your users to figure things out (and potentially abandon your product because it doesn’t work).
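To make that concrete, here’s a hypothetical sketch of the same links with and without that information (the keyboard and focus handling a real menu needs is omitted):

```html
<!-- Announced as a plain list of links: -->
<ul>
  <li><a href="/cut">Cut</a></li>
  <li><a href="/copy">Copy</a></li>
</ul>

<!-- Announced as a menu containing menu items; the li elements
     get role="none" so only the menuitem roles are exposed: -->
<ul role="menu" aria-label="Edit actions">
  <li role="none"><a role="menuitem" href="/cut">Cut</a></li>
  <li role="none"><a role="menuitem" href="/copy">Copy</a></li>
</ul>
```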

Use ARIA patterns as a guide

Thankfully, the W3C has published many commonly used widget and composite patterns in their WAI-ARIA Authoring Practices (v1.2), which include solid markup examples with ARIA role, state, and property attributes, and expected key events (e.g., Enter to make a selection). It’s a great place to start whether you’re learning ARIA or need a bookmarked reference.

When components in your site don’t have a matching ARIA pattern to follow, remember that the patterns are meant as a guide. Sometimes a strict guide — the W3C clearly defines requirements for accessibility compliance in WCAG — but ultimately the application of ARIA is up to you, the designer/developer (the Conformance section of the ARIA spec sums this up). You can still use ARIA patterns for general direction; just look for similar outcomes: how the widget organizes or shows/hides content, whether it allows single or multiple selections, and whether it contains focusable elements. The exact pattern that you follow matters less than:

  • Using semantics and ARIA attributes to communicate purpose
  • Implementing your widgets in a consistent way to reinforce predictability
  • Testing your code with a range of inputs (mouse, keyboard) and in a screen reader (Windows ships with Narrator, and macOS and iOS ship with VoiceOver; NVDA is also free to download for Windows)
  • Soliciting feedback from actual users of assistive technology, including screen readers, specialized keyboards or mice, or voice control software.
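As an example of putting those points together, here’s a minimal sketch loosely following the APG switch pattern; the id, label, and behavior are hypothetical. Because it’s a real <button>, Enter and Space activation come for free, so the script only flips the state:

```html
<button role="switch" aria-checked="false" id="dark-mode-toggle">
  Dark mode
</button>
<script>
  // Toggle the announced on/off state on each activation.
  const toggle = document.getElementById('dark-mode-toggle');
  toggle.addEventListener('click', () => {
    const on = toggle.getAttribute('aria-checked') === 'true';
    toggle.setAttribute('aria-checked', String(!on));
  });
</script>
```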

What do you think?

Are there ways in which you’ve incorporated accessibility into your coding practices that might be helpful to share with the rest of us? Please do!
