Two approaches to accessibility testing are in active use today: manual and automatic evaluation.
Automatic evaluation of accessibility
- is quick and systematic,
- enables almost instant evaluation,
- can provide accessibility results for complete web sites,
- can apply only a small subset of the accessibility tests; it cannot apply tests that rely upon human judgment.
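A typical example of a test that a machine can run unaided is checking that every image carries an alternative text. The sketch below is illustrative only and is not tied to any particular evaluation tool; it uses Python's standard-library HTML parser to flag `img` elements with no `alt` attribute.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the positions of <img> tags that lack an alt attribute,
    one of the few accessibility tests that needs no human judgment."""

    def __init__(self):
        super().__init__()
        self.violations = []  # (line, column) of each offending <img>

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        if tag == "img" and dict(attrs).get("alt") is None:
            self.violations.append(self.getpos())

# Hypothetical page fragment: the first image is missing alt text.
page = '<html><body><img src="logo.png"><img src="x.png" alt="Logo"></body></html>'
checker = AltTextChecker()
checker.feed(page)
print(len(checker.violations))  # number of images missing alt text
```

Note that this check only detects a *missing* attribute; whether an existing alt text actually describes the image meaningfully still requires human judgment, which is exactly the limitation noted above.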
Manual evaluation of accessibility
- can apply all presented accessibility tests,
- is time consuming compared to automatic evaluation,
- requires more tools, such as several web browsers, assistive technologies, and different configurations including screen resolution,
- makes it hard to produce repeatable results for tests that rely on human judgment. As an example, an accessible web page should have clear and simple text; however, what is perceived as clear and simple may vary between experts, and such a test is therefore not repeatable.
Casado et al. addressed to what extent the results from automatic evaluation of a web site can be used as an approximation of manual results. Using UWEM, they evaluated 30 web pages both manually (141 tests per page) and automatically (23 tests per page). From these results, using simple regression, they found that the UWEM score from manual accessibility testing could in 73% of the cases be predicted within a 95% confidence interval based only on the UWEM score from automatic testing.
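The prediction step above amounts to fitting a simple linear regression of the manual UWEM score on the automatic one. The sketch below illustrates the idea with made-up score pairs; these numbers are NOT the study's data, and the fitted coefficients are illustrative only.

```python
# Hypothetical (automatic, manual) UWEM score pairs for eight pages.
auto   = [0.10, 0.15, 0.22, 0.30, 0.35, 0.41, 0.50, 0.58]
manual = [0.18, 0.21, 0.30, 0.37, 0.44, 0.49, 0.60, 0.66]

n = len(auto)
mean_a = sum(auto) / n
mean_m = sum(manual) / n

# Ordinary least squares: slope = cov(auto, manual) / var(auto).
slope = (sum((a - mean_a) * (m - mean_m) for a, m in zip(auto, manual))
         / sum((a - mean_a) ** 2 for a in auto))
intercept = mean_m - slope * mean_a

def predict(automatic_score):
    """Estimate the manual UWEM score from an automatic score."""
    return intercept + slope * automatic_score

print(predict(0.25))  # predicted manual score for a new page
```

With real data, one would also compute a confidence interval around each prediction, which is what the 73%-within-95%-CI figure refers to.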