Using Automatic Testing to Predict Accessibility Results

This post is based on a scientific publication presented at HCII 2009, Is it possible to predict manual web accessibility results using automatic results?, written by Carlos Casado, Loïc Martinez and Morten Goodwin Olsen, which addresses measuring accessibility. The intention of the paper was to see to what extent a correlation exists between manual and automatic accessibility assessment, and to what extent automatic evaluation results could be used to predict manual results.
There is no clearly defined distinction between usability and accessibility, and accessibility is often seen as part of usability. Any usability or accessibility testing is challenging due to the diversity of users: what is usable or accessible for one user may present itself as a barrier for another. Because of this, even with extensive manual testing, you cannot claim that a web site is accessible and barrier-free for all users. However, test sets intended to cover accessibility exist, such as the Unified Web Evaluation Methodology (UWEM). UWEM presents 141 accessibility tests. Most of these need to be applied manually, but 26 can be run completely automatically. An outline of the relation between usability, accessibility, manual and automatic testing can be seen in the figure below.
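To make the idea of an automatic score concrete, here is a minimal sketch of a UWEM-style failure rate for a single page. The test names and counts are invented for illustration, and UWEM's actual aggregation across tests and pages is more involved than this:

```python
# Sketch of a per-page failure-rate score in the spirit of UWEM.
# A test can be applied several times on a page (e.g. once per image);
# the score is the fraction of applications that failed.

def page_score(results):
    """results: dict mapping test id -> (failures, applications).
    Returns a value between 0 (no barriers detected) and 1
    (every applied test failed)."""
    failures = sum(f for f, _ in results.values())
    applied = sum(a for _, a in results.values())
    return failures / applied if applied else 0.0

# Hypothetical automatic results for one page:
results = {
    "img-alt": (3, 10),        # 3 of 10 images lack a text alternative
    "table-headers": (0, 2),   # both data tables have headers
    "frame-title": (1, 1),     # the single frame lacks a title
}
print(page_score(results))  # 4 failures out of 13 applications
```

A score like this can be computed fully automatically for the machine-testable subset of the tests, which is what makes whole-site evaluation feasible.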

Two approaches for testing accessibility are actively used today: manual and automatic accessibility testing.

Automatic evaluation of accessibility:

  • is quick and systematic,
  • enables almost instant evaluation,
  • can provide accessibility results for complete web sites,
  • can apply only a small subset of the accessibility tests – it cannot apply tests which rely upon human judgment.

Manual evaluation of accessibility:

  • can apply all of the presented accessibility tests,
  • is time consuming compared to automatic evaluation,
  • requires more tools, such as several web browsers, assistive technologies and different configurations including screen resolutions,
  • is hard to make repeatable for the tests which rely on human judgment. As an example, an accessible web page should have clear and simple text; however, what is perceived as clear and simple may vary between experts and is therefore not repeatable.

Casado et al. addressed to what extent the results from automatic evaluation of a web site can be used as an approximation of manual results. Using UWEM, they evaluated 30 web pages both manually (141 tests per page) and automatically (23 tests per page). From these results, using simple regression, they found that in 73% of the cases the UWEM score from manual accessibility testing could be predicted, within a 95% confidence interval, based only on the UWEM score from automatic testing.
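The prediction idea itself is simple linear regression: fit the manually obtained scores against the automatic ones, then use the fitted line to estimate the manual score for new pages. The sketch below uses ordinary least squares on invented score pairs, not the data from the paper:

```python
# Sketch: predict manual UWEM scores from automatic ones via
# ordinary least squares (y = a*x + b). The score pairs below are
# made up for illustration only.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

automatic = [0.05, 0.10, 0.20, 0.30, 0.40]  # hypothetical automatic scores
manual    = [0.12, 0.18, 0.33, 0.45, 0.55]  # hypothetical manual scores

a, b = fit_line(automatic, manual)

def predict(x):
    """Estimated manual score for a page with automatic score x."""
    return a * x + b

print(round(predict(0.25), 3))
```

In the paper the fit is good enough that, for most pages, the cheap automatic score alone places the manual score within the stated confidence interval.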
