A Software Tester's Guide to Web Accessibility
By Matt Obee
Assistive Technology (AT)
Software applications and hardware devices that help make digital services accessible to people with disabilities. For example, people with visual impairments might use a screen reader application, screen magnification software, a braille display or a tactile keyboard. People with reduced mobility might use voice controls or switches instead of a mouse and keyboard.
Focus
The focus refers to the part of a user interface that is currently receiving input from the keyboard. It's often highlighted with a border or outline. When a form field is focused you can type inside it and when a button or link is focused you can activate it by hitting enter or the spacebar. Controlling which elements can receive focus and in what order is an important part of designing an accessible experience.
The Accessible Rich Internet Applications specification (ARIA), developed as part of W3C's Web Accessibility Initiative (WAI), is designed to make complex web applications more accessible to people with disabilities. It defines a set of attributes that can be added to HTML elements in order to provide additional information to assistive technologies such as screen readers.
Web Content Accessibility Guidelines 2.0 (WCAG), published by the W3C Web standards organisation, is the primary international standard for making web content accessible. WCAG has also been adopted by the International Organization for Standardization, where it is referred to as ISO/IEC 40500:2012.
The Diversity of Disability
Most of the guidance and discussion around accessibility tends to focus on problems that affect visually impaired users. Blindness is a disability that people find easy to understand and the web is an inherently visual medium. Screen readers are also one of the most ubiquitous forms of assistive technology. This might make it easier for people to empathise and to recognise problems, but the big picture is actually far more complex and diverse.
Broadly speaking, disabilities can be split into four main categories: vision, hearing, motor and cognition. However, it's important to understand that a person might have more than one disability (blindness and deafness, for example) and the nature of their disability might change over time. For more examples and information about disability, take a look at An Alphabet of Accessibility Issues and How People with Disabilities Use the Web.
Explanation of Disability Categories
Disabilities affecting vision include blindness, low vision, and colour blindness. Assistive technologies used by people with visual disabilities include screen readers, like VoiceOver and JAWS (Job Access With Speech), that convert text into synthesised speech, braille displays (often used by deaf-blind people) and magnification software that increases the size of everything on screen. Blind users generally prefer a keyboard or keyboard emulator to navigate instead of a mouse. Devices that rely on touch, like smartphones and tablets, can be made accessible too.
Like visual impairments, deafness exists on a scale from mild to total. Some people with mild hearing loss might struggle to hear a conversation if there is lots of background noise while others with severe or total hearing loss might rely completely on sign language, lip reading, and text alternatives, such as transcripts and captions.
Conditions that affect movement have a wide range of causes including spinal cord injury, loss of limbs, cerebral palsy, multiple sclerosis, arthritis, Parkinson's disease and stroke. Common assistive technologies include eye tracking, voice control, adapted keyboards, switches and trackballs.
Conditions that affect thinking and comprehension include dyslexia, dyscalculia, attention deficit disorder, autistic spectrum disorder, Down syndrome and dementia.
Standards & Laws
In many countries, access to services delivered by websites is protected by legislation just like access to shops, restaurants, and transport. Unfortunately, the scope, implementation and enforcement of those laws do tend to vary.
WCAG 2.0: The Web Standard for Accessibility
Web Content Accessibility Guidelines 2.0 (WCAG), published by the W3C standards organisation, is the primary international standard for Web accessibility. WCAG 2.0 has also been adopted by the International Organization for Standardization, who have given it the snappy pseudonym ISO/IEC 40500:2012.
WCAG 2.0 has 12 guidelines that are organised under 4 broad principles: Perceivable, Operable, Understandable and Robust (you might see those principles referred to by the acronym POUR). Each of the guidelines has testable checkpoints or 'success criteria' assigned to one of three levels: A, AA or AAA. These levels are based on striking a balance between how important the checkpoint is to users and how easy it is for developers to achieve, with A being the minimum level of conformance and AAA being the highest.
WCAG 2.0 was written so that it could be applied to any web technology, present or future. Unfortunately, that can make some of its guidelines quite abstract and difficult to interpret. Three other official documents were written to complement the formal guidelines:
- How to Meet WCAG 2.0 is a customisable checklist and is probably the document that you will find most useful as a tester. Several unofficial alternatives are also available, including WebAIM's WCAG 2.0 Checklist.
- Techniques for WCAG 2.0 offers practical examples for meeting the guidelines and basic instructions for testers.
- Understanding WCAG 2.0 provides a more detailed explanation and rationale for each of the guidelines. This document also includes recommended techniques and examples of common failures that you might encounter when testing.
WCAG guidelines are used as the inspiration for accessibility law in many countries.
Laws in the United Kingdom
In the UK, the Equality Act (2010) says that a service provider must make "reasonable adjustments" so that disabled customers can have access to the same services as everyone else. Service providers must make a proactive effort to anticipate the needs of disabled users and not just make changes when and if someone complains. On the high street, that might mean installing a ramp to allow wheelchair access or providing a menu in braille as well as print. On the Web, it could mean including subtitles on videos, offering a high-contrast colour scheme, or making sure that a user can navigate with a keyboard instead of a mouse.
There is no specific standard against which a breach of the legislation would be tested and the meaning of "reasonable adjustments" will vary according to the size and resources of the business. That said, WCAG is likely to be used as the most authoritative standard in any legal action. The UK government insists that all of its own services achieve at least Level AA conformance, so that's a good benchmark.
Laws in the United States
There are several laws covering accessibility in the United States, including Sections 504 and 508 of the Rehabilitation Act (1973) and the Americans with Disabilities Act (1990). Unlike the UK, this legislation generally applies only to government services and none of it explicitly requires private businesses to make their websites accessible.
Federal government websites must comply with a set of technical guidelines defined by Section 508. The Section 508 guidelines are based on the older WCAG 1.0, which was superseded by WCAG 2.0 in 2008. Much needed updates to the Section 508 guidelines, which would bring them up-to-date with WCAG 2.0, have been delayed repeatedly over the last decade but are now expected to arrive at the end of 2016.
Essential Skills for Testers
Web accessibility, at a basic level, is very much dependent on having valid and semantic HTML markup. Testing for accessibility, diagnosing problems, and proposing solutions often involves diving into the code.
While automated tools can and should be used to validate HTML, human judgement is needed to ensure that HTML elements and attributes have been used appropriately.
The following HTML is very simple and perfectly valid but a human tester should be able to recognise that it doesn't actually describe the meaning of the content using the correct HTML elements:
<div>
  <div>Essential skills for testers</div>
  <div>Accessibility is really just <span>extreme usability</span>: making things easy to use for as many people as possible, regardless of disability. Empathy--being able to put yourself in somebody else's shoes--is the essential skill that lies at the heart of accessibility testing.</div>
</div>
Instead of using generic <div> and <span> elements for everything, the heading should be an <h2>, the paragraph should be a <p>, the <span> should be a <em> so that it is emphasised by assistive technology and the whole thing should be contained in a <section>:
<section>
  <h2>Essential skills for testers</h2>
  <p>Accessibility is really just <em>extreme usability</em>: making things easy to use for as many people as possible, regardless of disability. Empathy--being able to put yourself in somebody else's shoes--is the essential skill that lies at the heart of accessibility testing.</p>
</section>
If you aren't already familiar with HTML there are lots of free tutorials and resources available online. Mozilla's Introduction to HTML is a great place to start. If you'd prefer something more interactive, Codecademy and Treehouse both have courses on HTML. In fact, Treehouse offers a course on Accessibility as well.
While exploratory testing is a powerful technique in many situations, testing for accessibility tends to require a more methodical and analytical approach. Because accessibility is often treated as a matter of legal compliance and standards conformance, testing usually involves more planning and documentation than exploratory testers might be used to.
Conformance to WCAG can only be claimed on a page by page basis; it's not possible to say that a website conforms to WCAG at a particular level unless every page of the site has been tested. Looking at every page--particularly in large or complex websites and applications--isn't normally feasible. Instead, you'll need to identify a representative sample of pages and user journeys.
Accessibility is really just extreme usability: making things easy to use for as many people as possible, regardless of disability. Empathy--being able to put yourself in somebody else's shoes--is the essential skill that lies at the heart of accessibility testing.
The best way to build empathy is to work alongside people with disabilities. Involve them in the design and testing process, talk with them, see what problems they face, and watch them use your product.
Methods of Testing
Because accessibility is so dependent on code (HTML in particular) testers can learn a lot about the accessibility of an application by examining the code directly. Developers can also look for accessibility problems during their own code review process.
Manual Testing and Analysis
Much like an expert usability review, testers can evaluate a website against accessibility standards and best practice. This is an exploratory process, often supported by documentation like How to Meet WCAG 2.0 or WebAIM's WCAG 2.0 Checklist.
You might try using a screen reader like VoiceOver (pre-installed on Apple devices) or NVDA (free on Windows) as part of your testing. That can reveal some obvious problems and allow you to experiment with solutions, with a couple of caveats. First, if you aren't an experienced screen reader user who relies on it every day, your tests probably won't replicate real user behaviour. Second, testing with one screen reader is much like testing in one browser in that it won't be fully representative.
Accessibility testing will always require human empathy and expert judgement but there is still a place for automated checking, particularly in large front end codebases that see constant changes. It can be used by developers to catch problems while the code is being written, as a tool for monitoring standards in live environments, and to make things easier for expert human testers. Performing automated checks first is a sensible approach.
API-based services like Tenon, aXe and WAVE can be started manually from a developer or tester's machine, or plugged into a continuous integration workflow so that checks are run, for example, when a pull request is opened or when code is deployed to a staging environment.
Pa11y is a free, open source and self-hosted alternative that runs daily accessibility checks against individual pages, generating reports that can be viewed in its dashboard or downloaded. It can also be run manually from the command line for one-off checks.
Don't forget, these services can only check for the most rudimentary accessibility problems. Like any other automated checking tool, they don't have empathy or common sense and they don't understand context.
Tenon's developers make a point of acknowledging the limitations of automation:
Some accessibility best practices are either too subjective or too complex to be accurately tested with automated means. Doing so increases the risk of so-called "false positives", which can reduce the usefulness of a testing tool.
Pa11y's makers say something similar:
Pa11y can't catch all accessibility errors. It'll detect many of them, but you should be manually testing (ideally with users) as well.
Just like when testing for usability, real users should be involved in accessibility testing as early and as often as possible. While expert testers and automated tools can identify many problems, there is no substitute for observing people interact with an application in a real world setting. They are, after all, the ultimate experts.
Many charities such as RNIB, AbilityNet and Shaw Trust are able to arrange testing sessions with disabled users. These often take place in a laboratory setting or, even better, in the user's own home or workplace, with their own assistive technology.
Testing for Common Problems
As mentioned above, accessibility starts with valid HTML. It's actually a basic Level A requirement in WCAG (SC 4.1.1). Validating HTML is trivial using a tool like W3C's validation service or one of the alternatives that can validate several pages or entire websites in one go. In most cases, HTML is validated on the developer's own machine before it even reaches a tester. Keep an eye out for invalid markup in things that weren't written by a developer; WYSIWYG editors aren't always reliable.
Incorrect Heading Structure
Headings are one of the most important parts of an accessible HTML document. Not only do they help describe the meaning and structure of the content, they also provide navigational landmarks. For example, screen reader users can jump from heading to heading using keyboard shortcuts or view a list of all headings on the page to get an idea of the outline.
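That heading list can be approximated directly from the markup. Below is a rough Python sketch (standard library only; the regex assumes simple, well-formed, non-nested headings, which real tools do not need to assume):

```python
import re

def heading_outline(html):
    """Return (level, text) pairs for each h1-h6, in document order."""
    pattern = r"<h([1-6])[^>]*>(.*?)</h\1>"
    return [(int(level), re.sub(r"<[^>]+>", "", text).strip())
            for level, text in re.findall(pattern, html, re.I | re.S)]
```

Running this over a page gives roughly the same outline a screen reader user would see in their headings list.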
How to Test Headings
Problems with headings can be spotted by looking directly at the code or by using a tool that highlights obvious issues. The WAVE Evaluation Tool for Chrome and aXe for Chrome and Firefox will all highlight these issues in the browser. The integrated testing systems mentioned above should also include them in their reports, as will many Search Engine Optimisation tools.
Example: Headings That Aren't Really Headings
In this example, the headings are incorrectly marked-up as paragraphs. While CSS might be used to make them look like headings (a bigger font, a different colour etc) a screen reader or braille display still treats them as paragraphs:
<p class="main-heading">Essential skills for testers</p>
<p>Accessibility is really just extreme usability...</p>
Example: Skipping Levels
Here, the h2 heading is immediately followed by an h4 heading. The h3 has been skipped, breaking the logical hierarchy and causing confusion:
<h2>Essential skills for testers</h2>
<h4>HTML</h4>
<p>Accessibility is really just extreme usability...</p>
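A simple automated check for skipped levels might look like the Python sketch below (real tools handle nesting, templates and dynamically inserted content far more robustly):

```python
import re

def skipped_heading_levels(html):
    """Return (previous, current) pairs wherever a heading level is skipped."""
    levels = [int(l) for l in re.findall(r"<h([1-6])", html, re.I)]
    return [(a, b) for a, b in zip(levels, levels[1:]) if b > a + 1]
```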
Invisible Focus Styles
Sighted keyboard users interact with a page by moving the focus from element to element with the tab key. By default most browsers highlight the focused element with a border or outline, but this can be changed (or worse, removed completely) in CSS. Without a visible focus style the user has no easy way of telling which part of the page they are interacting with.
How to Test Focus Visibility
Some automated tools will try to check for this, but it's best tested by a human. Tab through the page from top to bottom and make sure you can clearly see where the focus is at all times. At no point should the focus disappear completely.
Note that in Safari you might need to change a setting in order for tab key navigation to work (Preferences > Advanced > Check "Press tab to highlight each item on a webpage").
Keyboard Traps
A keyboard trap occurs when a user can move focus onto an element but can't move it away again using the keyboard alone. Avoiding keyboard traps is a Level A requirement in WCAG 2.0 (SC 2.1.2).
How To Test For Keyboard Traps
Move through the entire page by pressing the tab key to move to the next element and Shift + tab to move to the previous element. If you can tab onto an element you should be able to tab away from it as well.
Form Fields Without Labels
When form fields are correctly paired with a descriptive label, assistive technology like screen readers can announce the label when the field receives focus. It also increases the clickable area for mouse and touch users.
Example: Label Without a 'for' Attribute
This incorrect example shows a text input with a "First Name" label. The purpose of the field might be clear visually but assistive technology has no way of knowing that the label describes the field.
<label>First Name</label>
<input type="text" id="firstName">
This example shows the label correctly paired with its field using a for attribute:
<label for="firstName">First Name</label>
<input type="text" id="firstName">
How to Test Labels
As well as glancing at the code, an easy way to test this is by clicking (or tapping) on the text of the label in the UI. You will see keyboard focus given to the field if it is correctly paired with its label. If it isn't, nothing will happen.
If you are testing with a screen reader, the label should be announced whenever the field receives focus.
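Checks like this can also be automated. The sketch below uses only Python's standard library; it assumes labels are paired via the for attribute and deliberately ignores wrapping labels, aria-label and aria-labelledby, all of which a real tool would also need to consider:

```python
from html.parser import HTMLParser

class LabelChecker(HTMLParser):
    """Collect label targets and form field ids from an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.labelled = set()   # ids referenced by <label for="...">
        self.fields = []        # id attribute (or None) of each form field
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and "for" in attrs:
            self.labelled.add(attrs["for"])
        elif tag in ("input", "select", "textarea"):
            self.fields.append(attrs.get("id"))

def unlabelled_fields(html):
    """Return ids (or None) of form fields with no matching label."""
    checker = LabelChecker()
    checker.feed(html)
    return [i for i in checker.fields if i is None or i not in checker.labelled]
```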
Incorrect Text Alternatives For Images
Images are completely inaccessible to non-sighted users unless a text alternative is provided. While computers can have a go at interpreting and describing images (Alt Text Bot is one example) they can't yet do it as well as humans.
How To Test Text Alternatives
All automated tools can identify img elements without alt attributes, those with an empty alt attribute and sometimes instances of worryingly short or long text alternatives. What they can't do is work out whether the text itself is appropriate and well written. That needs a human tester.
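A rough version of that first-pass check is easy to sketch in Python. This simplified version assumes double-quoted attributes, and the 125-character limit is just a common rule of thumb, not a WCAG requirement:

```python
import re

def img_alt_issues(html):
    """Flag img tags with no alt attribute or a suspiciously long one."""
    issues = []
    for tag in re.findall(r"<img\b[^>]*>", html, re.I):
        alt = re.search(r'alt\s*=\s*"([^"]*)"', tag, re.I)
        if alt is None:
            issues.append(("missing-alt", tag))
        elif len(alt.group(1)) > 125:
            issues.append(("long-alt", tag))
    return issues
```

Note that an empty alt attribute is not flagged: as described below, it's the correct markup for purely decorative images.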
There are some simple rules of thumb for images.
The text alternative needs to serve the same purpose or provide the same information as that contained in the image. It shouldn't just describe the image.
For example, the text alternative for an image containing the words "Never Beaten on Price" should repeat that phrase rather than describe the image itself:
<img src="..." alt="Never Beaten on Price">
Similarly, if an image is inside a link and there is no other descriptive text, the alt attribute should describe the target of the link.
On the other hand, if an image is purely decorative or just repeats information that is already available on the page in text, the alt attribute should be empty so that it can be ignored by assistive technology. Here, the phrase "Tried, tested and recommended" is already included in the text directly below the image so the alt attribute should be empty to avoid duplication.
<img src="..." alt="">
Insufficient Colour Contrast
People with reduced vision or with certain types of colour blindness often have difficulty reading text if it doesn't contrast sufficiently against the background. WCAG 2.0 (SC 1.4.3) requires a contrast ratio of at least 4.5:1 for normal text (3:1 for large text), which is something that can only be tested reliably with a tool.
How To Test Colour Contrast
There are a number of specialised tools that can test colour combinations against WCAG 2.0's requirements. Paciello Group's Colour Contrast Analyser for Windows and Mac can simulate common eye conditions as well as testing the contrast ratio of a given colour combination. CheckMyColours.com checks the foreground and background colour combinations of elements on a page. Many of the browser extensions mentioned above will also check for contrast issues, as will some tools used by designers.
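The ratio itself comes from the relative luminance formula defined in WCAG 2.0, so it's straightforward to compute directly. A minimal Python sketch:

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB colour, per the WCAG 2.0 definition."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        [relative_luminance(fg), relative_luminance(bg)], reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

Black text on a white background gives the maximum ratio of 21:1, comfortably above the 4.5:1 required by SC 1.4.3.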
Confusing Links
Generic link phrases like "click here" and "more information" should be avoided for a number of reasons. First, the only way to understand their purpose is to infer it from the surrounding content, which creates extra work for assistive technology users in particular. Second, that surrounding context isn't always available because of the way links are presented by tools like screen readers and braille displays. Even when a more descriptive phrase is used, reusing the same phrase for links with different destinations is likely to cause confusion if the user doesn't have immediate access to the original context. The benefits of descriptive links extend beyond accessibility, improving usability for everyone and contributing to search engine optimisation.
Example: Non-Descriptive Links
Keyboard users often navigate a page by tabbing through the available links. Screen readers and braille displays can also generate a separate list of all links on a page. If links aren't descriptive, like the "More information" examples below, they won't make sense when they appear in this list.
<a href="...">More information</a>
<a href="...">More information</a>
How to Test for Confusing Links
Many of the automated accessibility testing tools are able to identify generic link phrases and instances of the same phrase linking to different destinations. What they can't do is evaluate them from a user's perspective and decide whether or not they are accurate and easy to understand. It's not always possible to avoid duplicate link phrases without resorting to overlong verbose alternatives so human judgement is needed when testing.
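A first-pass check for generic phrases can be sketched in a few lines of Python (the phrase list here is illustrative, not exhaustive, and the regex assumes simple, non-nested anchors):

```python
import re

GENERIC_PHRASES = {"click here", "here", "more", "more information", "read more"}

def generic_links(html):
    """Return the visible text of links that use a generic phrase."""
    texts = re.findall(r"<a\b[^>]*>(.*?)</a>", html, re.I | re.S)
    stripped = [re.sub(r"<[^>]+>", "", t).strip() for t in texts]
    return [t for t in stripped if t.lower() in GENERIC_PHRASES]
```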
Hard to Read Content
Readability is important for everyone but even more so for people with visual impairments and cognitive disabilities. For users with limited vision, font size, line length, colour contrast and logical page layout are important. The same things are important for those with dyslexia, as is avoiding large blocks of text, underlined text and CAPITALISED text. For people on the autistic spectrum, abstract language, bright contrasting colours and complex layouts can be overwhelming. Animations and flashing content also pose a risk for those with photosensitive epilepsy.
Example: Information Overload
The infamous Ling's Cars is often cited as an example of poor design. Although it is surprisingly successful in pulling off some clever psychological tricks to generate sales, there are some obvious accessibility issues. Lots of information is thrown at the user all at once, much of it animated and accompanied by bright colours and sound effects, with very little white space to break it up.
How To Test Readability
Several algorithms can be used to measure the readability of text including Gunning Fog, Flesch Reading Ease, and Flesch-Kincaid. These measure things like the number of sentences in a paragraph, number of words in each sentence, and the number of syllables in words. Juicy Studio's Readability Test and the Readable service from WebpageFX can analyse a web page and give a score using each of those algorithms.
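Flesch Reading Ease, for example, is just a simple formula over those counts. A minimal sketch (counts are passed in directly, because reliably splitting sentences and counting syllables is a harder problem than the formula itself):

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher scores (up to roughly 100) mean easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
```

A passage of 100 words split across 5 sentences with 130 syllables scores about 76, which is usually described as "fairly easy" to read.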
The Trace Center's Photosensitive Epilepsy Analysis Tool (PEAT) is a free application for Windows computers that can test animations and video for flashing patterns that are at risk of causing seizures.
Beyond these basic tests, readability is something that must be judged by people, either expert testers or preferably end users.
The Continued Challenges of Accessibility
It's impossible to cover all aspects of web accessibility in a single article, not to mention the closely related world of mobile apps, which presents its own challenges. We've taken a brief look at the different types of disabilities, examined the standards and laws that apply to accessibility, explored some of the key testing skills and techniques and learned how we as testers can check for some of the most common accessibility problems.
Perhaps the most important thing to take away is that accessibility must be included as part of the design process and not just awkwardly bolted on at the end of the project. As Michael Larsen said in his presentation at Agile Testing Days 2015, inclusive design is about making things accessible from the start with the minimum amount of retrospective change. Our role as testers isn't just to catch problems at the end of development, but to educate our colleagues and to speak up for the user.
About Matt Obee
Matt is a designer, developer and software tester with a passion for UX and Accessibility. Having spent a decade at digital retail specialist Red Ant, he's now a Senior Software Tester and a member of the UX/UI group at Holiday Extras. Matt also built TestNote.