A practical WCAG 2.2 compliance checklist organized by level (A, AA, AAA) and principle (perceivable, operable, understandable, robust). It covers every Level A and AA success criterion, plus the new AAA ones, with what each means in plain English and how to test it.
WCAG 2.2 is the current W3C recommendation for web accessibility (published October 2023), adding nine success criteria on top of WCAG 2.1. It's the de facto standard US and EU regulators reference - Section 508 aligns with it, the European Accessibility Act effectively requires it from June 2025, and ADA lawsuits lean on it even in jurisdictions where it's not codified.
Below, each success criterion is explained in plain English, organized by the four WCAG principles, with a note on how to test it. Level A is the floor. Level AA is what most laws require and what most procurement teams demand. Level AAA is aspirational - you don't need to hit it site-wide, but specific pages or flows can.
Criteria flagged NEW in 2.2 are the nine added in the 2.2 update. If your last audit was against 2.1, these are the ones to review first.
Perceivable
Users must be able to perceive the information: text alternatives, captions, contrast, adaptable content.
1.1.1 Non-text Content (A)
Every image, icon, and non-text element has a text alternative. Decorative ones are explicitly marked as such (alt="").
How to test: Automated (axe-core catches missing alts). Manual review for meaningful alt text.
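A minimal sketch of both cases (filenames and alt text are illustrative):

```html
<!-- Informative image: the alt text carries the same information -->
<img src="q3-revenue-chart.png" alt="Bar chart: Q3 revenue up 12% over Q2">

<!-- Decorative image: empty alt tells assistive tech to skip it entirely -->
<img src="divider-flourish.svg" alt="">
```

Note that omitting the alt attribute entirely is the failure axe-core flags; alt="" is a deliberate signal, not a missing value.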
1.2.1 Audio-only and Video-only (Prerecorded) (A)
Prerecorded audio has a transcript; prerecorded video without audio has a text or audio alternative.
How to test: Manual - tooling can't evaluate transcript quality.
1.2.2 Captions (Prerecorded) (A)
Prerecorded videos with audio have captions.
How to test: Manual - confirm captions exist and are accurate.
1.2.3 Audio Description or Media Alternative (Prerecorded) (A)
Prerecorded video has either audio description or a text alternative that describes what happens visually.
How to test: Manual.
1.2.4 Captions (Live) (AA)
Live audio content has captions.
How to test: Manual - sample live streams.
1.2.5 Audio Description (Prerecorded) (AA)
Audio description is provided for all prerecorded video.
How to test: Manual.
1.3.1 Info and Relationships (A)
Semantic markup conveys structure: headings are <h*>, lists are <ul>/<ol>, tables have <th>, form fields have labels.
How to test: Automated catches most. Manual review for custom components.
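For example, the same structure expressed with semantic elements rather than styled <div>s (field and table names are illustrative):

```html
<h2>Billing details</h2>

<!-- The label is programmatically tied to the input via for/id -->
<label for="email">Email address</label>
<input id="email" name="email" type="email">

<!-- Header cells use <th> so screen readers announce column context -->
<table>
  <tr><th scope="col">Plan</th><th scope="col">Price</th></tr>
  <tr><td>Free</td><td>$0</td></tr>
</table>
```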
1.3.2 Meaningful Sequence (A)
The DOM reading order matches the visual reading order.
How to test: Manual with screen reader + CSS disabled preview.
1.3.3 Sensory Characteristics (A)
Instructions don't rely on shape, size, location, or sound alone ("click the green button on the right" is not enough).
How to test: Manual content review.
1.3.4 Orientation (AA)
Content doesn't force portrait-only or landscape-only orientation, unless essential.
How to test: Manual - rotate device.
1.3.5 Identify Input Purpose (AA)
Form inputs that collect user info have autocomplete attributes matching their purpose.
How to test: Automated (axe-core checks autocomplete coverage).
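For instance (field names are illustrative; the autocomplete tokens come from the HTML spec):

```html
<label for="name">Full name</label>
<input id="name" name="name" autocomplete="name">

<label for="tel">Phone number</label>
<input id="tel" name="tel" type="tel" autocomplete="tel">

<label for="cc">Card number</label>
<input id="cc" name="cc" inputmode="numeric" autocomplete="cc-number">
```

Beyond compliance, these tokens let browsers and password managers fill the fields correctly.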
1.4.1 Use of Color (A)
Color isn't the only way to convey information (error states, links, required fields).
How to test: Manual review + color-blind simulation.
1.4.2 Audio Control (A)
Audio that plays automatically for more than 3 seconds can be paused or muted.
How to test: Manual.
1.4.3 Contrast (Minimum) (AA)
Text has a 4.5:1 contrast ratio against background (3:1 for large text).
How to test: Automated - OnChange computes this for every scanned page.
1.4.4 Resize Text (AA)
Text can be resized to 200% without loss of content or functionality.
How to test: Manual - browser zoom.
1.4.5 Images of Text (AA)
Don't use images of text unless essential (logos are fine).
How to test: Manual audit of image assets.
1.4.10 Reflow (AA)
Content reflows at 320 CSS pixels wide without horizontal scrolling (except for tables, maps, data viz).
How to test: Manual - responsive testing.
1.4.11 Non-text Contrast (AA)
UI components and graphical objects have 3:1 contrast against adjacent colors.
How to test: Automated (axe-core covers most; some custom UI needs manual checks).
1.4.12 Text Spacing (AA)
Users can override line-height, paragraph spacing, letter-spacing, and word-spacing without breaking the layout.
How to test: Manual - override via dev tools.
1.4.13 Content on Hover or Focus (AA)
Tooltips and popovers triggered by hover/focus can be dismissed, remain visible until dismissed, and stay visible when the pointer moves onto them.
How to test: Manual interaction testing.
Operable
Users must be able to operate the interface: keyboard, timing, navigation, input modalities.
2.1.1 Keyboard (A)
All functionality is reachable and operable from a keyboard alone.
How to test: Manual - tab through the page, no mouse.
2.1.2 No Keyboard Trap (A)
Keyboard focus can move away from any component.
How to test: Manual.
2.1.4 Character Key Shortcuts (A)
Single-character keyboard shortcuts can be turned off, remapped, or are only active when focus is on a specific component.
How to test: Manual.
2.2.1 Timing Adjustable (A)
Time limits can be turned off, adjusted, or extended.
How to test: Manual for timed content.
2.2.2 Pause, Stop, Hide (A)
Auto-playing, moving, or blinking content that starts on page load can be paused, stopped, or hidden.
How to test: Manual audit of carousels, animations.
2.3.1 Three Flashes or Below Threshold (A)
Nothing flashes more than 3 times per second.
How to test: Manual with flashing-content checker.
2.4.1 Bypass Blocks (A)
A "skip to content" link or equivalent is available so keyboard users can skip repeated navigation.
How to test: Manual - first tab press should reveal the skip link.
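A common pattern - the link is the first focusable element and targets the main landmark (the id and class names are illustrative; the link is typically moved off-screen with CSS until it receives focus):

```html
<body>
  <a class="skip-link" href="#main-content">Skip to content</a>
  <header><!-- logo, account menu --></header>
  <nav><!-- repeated site navigation --></nav>
  <main id="main-content">
    <!-- page content the user actually came for -->
  </main>
</body>
```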
2.4.2 Page Titled (A)
Every page has a descriptive <title>.
How to test: Automated.
2.4.3 Focus Order (A)
Tab order through interactive elements is logical and meaningful.
How to test: Manual keyboard walkthrough.
2.4.4 Link Purpose (In Context) (A)
Link text (or its surrounding context) describes where it goes. Avoid "click here".
How to test: Manual review of link list via screen reader.
2.4.5 Multiple Ways (AA)
Users can find content via multiple paths: navigation, search, sitemap.
How to test: Manual site review.
2.4.6 Headings and Labels (AA)
Headings and labels describe their topic or purpose.
How to test: Manual - skim the heading outline and form labels.
2.4.7 Focus Visible (AA)
Keyboard focus indicator is clearly visible on every interactive element.
How to test: Manual keyboard test.
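One way to satisfy this is a CSS focus ring - a sketch (the color and widths are illustrative):

```html
<style>
  /* Show a clear ring for keyboard focus.
     Never set outline: none without providing a replacement indicator. */
  a:focus-visible,
  button:focus-visible {
    outline: 3px solid #005fcc;
    outline-offset: 2px;
  }
</style>
```

Using :focus-visible rather than :focus keeps the ring for keyboard users without flashing it on every mouse click.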
2.4.11 Focus Not Obscured (Minimum) (AA, NEW in 2.2)
The focused element isn't entirely hidden by sticky headers, cookie banners, or other fixed overlays.
How to test: Manual keyboard test with sticky UI visible.
2.4.12 Focus Not Obscured (Enhanced) (AAA, NEW in 2.2)
Stricter version of 2.4.11: no part of the focused element is obscured.
How to test: Manual.
2.4.13 Focus Appearance (AAA, NEW in 2.2)
Focus indicator is at least as large as a 2px thick border around the element and has 3:1 contrast against the unfocused state.
How to test: Manual measurement with dev tools.
2.5.1 Pointer Gestures (A)
Functions that use multi-point or path-based gestures (pinch, swipe) have single-pointer alternatives.
How to test: Manual.
2.5.2 Pointer Cancellation (A)
Mouse and touch actions complete on the up-event (mouseup/touchend), not the down-event, so users can abort by sliding off before releasing.
How to test: Manual.
2.5.3 Label in Name (A)
Accessible name of a UI component includes its visible label text.
How to test: Automated (axe-core catches most mismatches).
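For example - this matters for speech-input users, who activate controls by saying the visible label:

```html
<!-- Fails: visible label is "Search", accessible name is "Submit" -->
<button aria-label="Submit">Search</button>

<!-- Passes: the accessible name contains the visible label -->
<button aria-label="Search products">Search</button>
```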
2.5.4 Motion Actuation (A)
Functionality triggered by device motion (shaking, tilting) has a conventional input alternative and can be disabled.
How to test: Manual if motion is used.
2.5.7 Dragging Movements (AA, NEW in 2.2)
Any functionality using a dragging movement has a single-pointer alternative that doesn't require dragging.
How to test: Manual.
2.5.8 Target Size (Minimum) (AA, NEW in 2.2)
Interactive targets are at least 24×24 CSS pixels, with exceptions for inline text links and essential sizing.
How to test: Automated / manual measurement.
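An inline-style sketch of the 24×24 floor (a real project would use a stylesheet class; icon-only buttons are the usual offenders):

```html
<!-- Guarantee a minimum hit area regardless of the glyph's rendered size -->
<button aria-label="Close dialog" style="min-width: 24px; min-height: 24px">×</button>
```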
Understandable
Content and operation must be understandable: readable, predictable, input assistance.
3.1.1 Language of Page (A)
The page's primary language is set via <html lang="…">.
How to test: Automated.
3.1.2 Language of Parts (AA)
Parts of the page in a different language are marked with lang attributes.
How to test: Manual.
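Both language criteria together look like this (the phrase is illustrative):

```html
<html lang="en">
  <body>
    <!-- Inline foreign-language phrases get their own lang attribute
         so screen readers switch pronunciation -->
    <p>She greeted us with <span lang="fr">bonjour</span> and sat down.</p>
  </body>
</html>
```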
3.2.1 On Focus (A)
Moving focus to an element doesn't trigger unexpected context changes (no navigating away, no form submission).
How to test: Manual.
3.2.2 On Input (A)
Changing a form input's value doesn't trigger a context change unless the user was warned.
How to test: Manual.
3.2.3 Consistent Navigation (AA)
Repeated navigation appears in the same relative order across pages.
How to test: Manual site audit.
3.2.4 Consistent Identification (AA)
Components with the same function across pages are identified consistently (same icon, same label).
How to test: Manual site audit.
3.2.6 Consistent Help (A, NEW in 2.2)
If help is available (chat, contact link, FAQ), it's in the same relative location across pages.
How to test: Manual.
3.3.1 Error Identification (A)
Form errors are identified and described in text.
How to test: Manual form testing.
3.3.2 Labels or Instructions (A)
Form inputs have labels or instructions.
How to test: Automated (axe-core catches missing labels).
3.3.3 Error Suggestion (AA)
Error messages suggest how to fix the problem.
How to test: Manual.
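A sketch covering error identification and suggestion together (the ids and copy are illustrative):

```html
<label for="dob">Date of birth</label>
<input id="dob" name="dob" aria-invalid="true" aria-describedby="dob-error">
<!-- The error is plain text, tied to the field, and says how to fix it -->
<p id="dob-error">Enter your date of birth as DD/MM/YYYY, e.g. 31/01/1990.</p>
```

The aria-describedby link means screen readers announce the error text when the field receives focus.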
3.3.4 Error Prevention (Legal, Financial, Data) (AA)
For pages handling legal, financial, or user data: allow reversal, review, or confirmation before submission.
How to test: Manual.
3.3.7 Redundant Entry (A, NEW in 2.2)
Don't ask users to re-enter information they already provided in the same process, unless essential.
How to test: Manual flow review.
3.3.8 Accessible Authentication (Minimum) (AA, NEW in 2.2)
Auth doesn't rely on solving cognitive puzzles unless an alternative is offered (biometric, password manager paste, magic link).
How to test: Manual.
3.3.9 Accessible Authentication (Enhanced) (AAA, NEW in 2.2)
Stricter version of 3.3.8 - no cognitive puzzles at all.
How to test: Manual.
Robust
Content must work across assistive technologies, now and in the future.
4.1.2 Name, Role, Value (A)
Every UI component has a programmatically determinable name, role, and value. ARIA or native semantics.
How to test: Automated + manual screen reader testing.
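Native elements give you this for free; custom widgets have to supply all of it themselves. A sketch:

```html
<!-- Native: role (button), name ("Mute"), and state (aria-pressed) all exposed -->
<button aria-pressed="false">Mute</button>

<!-- Custom: the div must declare its role, state, and keyboard focusability,
     and script must update aria-pressed and handle Enter/Space itself -->
<div role="button" tabindex="0" aria-pressed="false">Mute</div>
```

This is why the usual advice is to reach for the native element first.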
4.1.3 Status Messages (AA)
Status messages (success, error, loading) are announced to assistive tech without moving focus.
How to test: Manual screen reader testing.
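A sketch using live regions (the copy is illustrative):

```html
<!-- role="status" is a polite live region: announced without stealing focus -->
<div role="status">3 results found.</div>

<!-- role="alert" is assertive: for errors that need immediate announcement -->
<div role="alert">Payment failed. Check your card details.</div>
```

One caveat: the live-region element should already exist in the DOM before its text is injected, or many screen readers won't announce the change.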
Most of the automatable criteria above map directly to axe-core rules, which OnChange runs inside a real headless Chromium on every accessibility scan. Contrast ratios (1.4.3 / 1.4.11), missing labels (3.3.2), missing alt text (1.1.1), language attributes (3.1.1), and keyboard-reachable interactive elements (2.1.1) are all automatic.
OnChange's Free plan includes the full WCAG scanner, attested baselines, and branded reports.
Start free