On April 24, 2026, the Department of Justice's Title II rule under the Americans with Disabilities Act takes full effect. Every state and local government website and mobile app must meet WCAG 2.1 Level AA, the Web Content Accessibility Guidelines standard published by the World Wide Web Consortium. If your public library's event page can't be navigated with a keyboard, if your city's permit portal traps a screen reader in a loop, if your county health department's COVID dashboard relies on color alone to convey meaning, you're out of compliance. And the compliance clock is no longer ticking. The deadline has arrived.
This is not new. The DOJ finalized this rule in April 2024, giving the largest state and local governments two years to comply. Two years. Most haven't started.
What the Deadline Actually Requires
The scope is specific. This rule applies to Title II entities — state governments, local governments, and their departments, agencies, and instrumentalities. Public universities. County courthouses. Municipal water authorities. School districts. Transit agencies. If it receives public funding and serves the public through a website or mobile application, it's covered.
The standard is WCAG 2.1 Level AA. That's 50 success criteria across four principles: perceivable, operable, understandable, and robust. (The DOJ adopted 2.1, not the newer WCAG 2.2, as the legal standard.) It covers everything from text alternatives for images (1.1.1) to reflow on small screens (1.4.10) to status messages that assistive technology can detect (4.1.3). Larger entities, those serving populations of 50,000 or more, face the April 24, 2026 deadline. Smaller entities get until April 26, 2027.
Covered now (April 24, 2026): State and local government entities serving populations of 50,000+. This includes public universities, large city websites, county government portals, state agency sites, and large transit authorities.
Covered next year (April 26, 2027): Smaller state and local government entities serving populations under 50,000, plus special district governments of any size.
Not covered by this specific rule: Private companies, nonprofits (unless acting as government agents), and federal government websites (which fall under Section 508, a separate standard). However, Title III litigation against private businesses using WCAG as the benchmark continues to accelerate independently.
Compliance vs. Real Accessibility
Here's where it gets uncomfortable. A website can pass an automated WCAG AA scan and still be functionally unusable for the people the ADA was written to protect.
Automated testing tools — Axe, WAVE, Lighthouse — catch roughly 30–40% of accessibility issues. They're excellent at flagging missing alt text, broken heading hierarchies, and insufficient color contrast. They cannot tell you whether a form is cognitively navigable. They cannot tell you whether a screen reader user can complete a multi-step process without losing context. They cannot tell you whether a person with ADHD will abandon your site because the cognitive load of the fourth modal dialog was the one that broke their working memory.
A website can pass every automated scan and still be a locked door for the people it was built to serve.
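To make the limits of that automated layer concrete, here is a minimal sketch of a scan using axe-core driven through Playwright. It assumes the @playwright/test and @axe-core/playwright packages are installed; the URL is a placeholder, not a real portal.

```typescript
// Minimal automated WCAG scan: axe-core via Playwright.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('permit portal has no machine-detectable WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://example.gov/permits'); // placeholder URL

  const results = await new AxeBuilder({ page })
    // Limit the run to rules tagged for WCAG 2.x A and AA conformance.
    .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
    .analyze();

  // A green run means only that the machine-checkable subset passed.
  // It says nothing about whether a person can complete a task.
  expect(results.violations).toEqual([]);
});
```

Scans like this belong in every build pipeline. They are the floor, not the audit.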
I call the gap between checklist compliance and genuine usability "compliance theater." It looks like accessibility. It checks the boxes. It doesn't work. I spent two years doing enterprise-scale accessibility testing, including an engagement with Bank of New York Mellon where our team replaced a previous vendor and increased critical bug detection by 56%. The bugs we found weren't exotic edge cases. They were form fields that trapped keyboard focus, error messages that screen readers couldn't find, and workflows that technically passed automated scans but couldn't be completed by an actual human using assistive technology.
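The fixes are usually as unglamorous as the bugs. As a sketch, here is the standard repair for an error message a screen reader can't find: tie the message to its field with aria-describedby and mark it with role="alert". The function name and ID scheme are illustrative, not code from that engagement.

```typescript
// Sketch: making a validation error discoverable by assistive technology.
function showFieldError(input: HTMLInputElement, message: string): void {
  const errorId = `${input.id}-error`;

  let error = document.getElementById(errorId);
  if (!error) {
    error = document.createElement('p');
    error.id = errorId;
    // role="alert" causes screen readers to announce the text immediately.
    error.setAttribute('role', 'alert');
    input.insertAdjacentElement('afterend', error);
  }
  error.textContent = message;

  // Associate the message with the field and expose the invalid state
  // programmatically (WCAG 3.3.1 Error Identification, 4.1.2 Name, Role, Value).
  input.setAttribute('aria-describedby', errorId);
  input.setAttribute('aria-invalid', 'true');
}
```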
The two-layer problem is real. Layer 1 is the legal floor: WCAG 2.1 AA compliance. You need this. It's the minimum. Layer 2 is genuine accessibility: the experience of real people with real disabilities using your actual site to do actual tasks. Layer 2 is where automated tools stop and human testing begins. If you only have Layer 1, you have a defense in a lawsuit. If you have both layers, you have a website that works.
The AI Problem Nobody Is Auditing
There's a third layer emerging that almost no one is talking about yet. Government websites are rapidly adopting AI-powered features — chatbots for constituent services, automated form assistants, AI-generated content summaries, predictive search. These tools introduce accessibility failures that didn't exist when WCAG was written and that current auditing frameworks aren't designed to catch.
An AI chatbot that generates responses without proper ARIA live region announcements produces replies a screen reader user never hears. A form assistant that auto-fills fields without confirming with the user removes agency from people who rely on deliberate, sequential input. An AI-generated content summary that paraphrases a government regulation might be technically "accessible" in that a screen reader can read it, but if the summary introduces inaccuracy, it creates an information accessibility failure that no WCAG criterion currently covers.
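The chatbot case has a well-understood fix. Here's a minimal sketch, assuming the bot renders replies into a container the page controls; the element ID and function name are hypothetical.

```typescript
// Sketch: announcing chatbot replies via an ARIA live region.
const chatLog = document.getElementById('chat-log')!; // hypothetical container

// role="log" implies a polite live region: additions are announced when the
// screen reader is idle, without yanking focus away from what the user is doing.
chatLog.setAttribute('role', 'log');
chatLog.setAttribute('aria-live', 'polite');

function appendBotReply(text: string): void {
  const message = document.createElement('p');
  message.textContent = text;
  // Inserting content inside the live region is what triggers the announcement.
  // Replies rendered outside it are effectively silent for screen reader users.
  chatLog.appendChild(message);
}
```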
This is the gap I've been researching. When AI systems handle public information — especially cultural knowledge, government services, or health guidance — the question of who that AI is serving becomes an accessibility question. Not just "can a screen reader parse this output" but "does this output actually serve the person accurately, respectfully, and without causing harm?"
The Serving Test
I use a framework I call the Serving Test — a set of questions every piece of AI-generated or AI-mediated content should pass before it reaches a user. The core question is simple: Am I serving what was given, or generating what wasn't? When an AI system presents cultural knowledge, it should be traceable to a source. When it summarizes a regulation, the summary should be verifiable against the original. When it speaks in a voice — any voice — that voice should belong to someone who consented to its use.
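The Serving Test is a set of questions, not an API, but a sketch can show where such a gate would sit in a pipeline. Everything here is hypothetical: the AiOutput shape, the function names, and the fallback behavior are illustrations of the idea, not a published implementation.

```typescript
// Illustrative only: gating AI-mediated content on traceability.
interface AiOutput {
  text: string;           // what the model produced
  sourceUrl?: string;     // where the underlying material actually lives
  sourceExcerpt?: string; // the passage the output claims to paraphrase
}

// "Am I serving what was given, or generating what wasn't?"
// Without a source, the question can't even be asked, so the answer is no.
function passesServingTest(output: AiOutput): boolean {
  return Boolean(output.sourceUrl && output.sourceExcerpt);
}

function render(output: AiOutput): string {
  return passesServingTest(output)
    ? `${output.text}\n\nSource: ${output.sourceUrl}`
    : 'This summary could not be traced to a source. Please consult the original document.';
}
```

A production version would go further and verify the summary against the excerpt; the point of the gate is that untraceable output never reaches the user as fact.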
This connects to a larger conversation about AI ethics and cultural stewardship that I'll explore in a companion article, AI Voice Sovereignty: The Other Side of Griefbots, publishing April 24. For now, the practical point is this: if your government website uses AI, your WCAG audit should include that AI layer. And right now, almost nobody's audit does.
Start With What You Can Test Right Now
Whether you're a government web team staring at the deadline or a consultant helping clients prepare, the most useful thing you can do today is assess where you actually stand — not where your automated tool says you stand.
Free Accessibility Review Tool
I built a free, no-login WCAG review tool that walks you through the criteria that automated scanners miss. It won't replace a professional audit, but it will show you where the real gaps are — the ones between the checkboxes.
Open the Free A11y Review Tool
Use it on your highest-traffic pages first. Your homepage. Your permit or application portal. Your public meeting calendar. If you find issues (you will find issues), prioritize by impact: anything that prevents task completion is more urgent than anything cosmetic, regardless of which WCAG criterion it falls under.
When to Call a Professional
A free tool gets you oriented. It doesn't get you compliant. If any of the following are true, you need a human accessibility tester — not another automated scan:
Your site includes multi-step forms or transactional workflows (permit applications, registration systems, payment portals).
You've adopted AI-powered features (chatbots, auto-complete, content generation).
You serve a population that disproportionately includes people with disabilities (healthcare, social services, aging services, veterans affairs).
You've been through an automated audit and gotten a "passing" score but haven't tested with actual assistive technology.
You're within 30 days of the deadline and you haven't started.
Professional accessibility testing means keyboard-only navigation of every critical path. Screen reader testing across at least two platforms (typically NVDA on Windows and VoiceOver on Mac/iOS). Cognitive walkthrough by someone who understands how executive dysfunction, sensory processing differences, and motor limitations interact with interface design. And honest documentation of what passes, what fails, and what to fix first.
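The reachability half of keyboard-only testing can be scripted. Here's a sketch with Playwright that walks the tab order of a hypothetical permit form; the URL, the submit-button check, and the 50-stop budget are all placeholders for your own critical path.

```typescript
// Sketch: can a keyboard-only user reach the submit button at all?
import { test } from '@playwright/test';

test('permit form submit is reachable by keyboard alone', async ({ page }) => {
  await page.goto('https://example.gov/permits/apply'); // placeholder URL

  // Walk forward through the tab order, up to an arbitrary budget of stops.
  for (let i = 0; i < 50; i++) {
    await page.keyboard.press('Tab');
    const onSubmit = await page.evaluate(() => {
      const el = document.activeElement;
      return el instanceof HTMLButtonElement && el.type === 'submit';
    });
    if (onSubmit) return; // the keyboard path exists
  }
  throw new Error('Submit button unreachable within 50 Tab stops');
});
```

A script like this proves reachability and nothing more. Whether focus is visible at each stop, and whether the order makes sense to a human, still requires a human.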
Built by Jamie Davila
Accessibility testing professional.
Two years of enterprise-scale accessibility testing, including Bank of New York Mellon.
Co-author, "Making Agile More Inclusive" (ASQ Software Quality Professional, 2020).
Featured in The New York Times for work on inclusive engineering.
Available for WCAG 2.2 AA audits, remediation consulting, and accessibility architecture review.
$150–250/hr