Most Common Barriers in Public Web Sites

June 18, 2009

Most Common Barriers

Here we present the most common barriers found on public European web sites.

(1) Invalid or deprecated (x)HTML and/or CSS

was detected in 82% of the evaluated web pages. (x)HTML and CSS are the most widely used technologies for web pages. The latest versions of these technologies are built with accessibility in mind. This means that assistive technologies can more easily and successfully present the web page content when the latest (x)HTML and/or CSS is used correctly.

(2) Graphical elements without textual alternative

occurred in 63% of the evaluated pages. An example of this is images without alternative text, which causes challenges for people with visual impairments who are unable to see the pictures. Any information conveyed in an image is lost to these users whenever a textual alternative is missing.
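Checks of this kind are easy to automate. As a minimal illustration (a sketch using only Python's standard library, not the eGovMon tool itself), the following flags `img` elements that lack an `alt` attribute entirely:

```python
# Illustrative sketch: flag <img> elements with no alt attribute at all,
# using only Python's standard-library HTML parser.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> tag that has no alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "(no src)"))

checker = MissingAltChecker()
checker.feed('<p><img src="chart.png"><img src="logo.png" alt="Company logo"></p>')
print(checker.missing)  # only the first image lacks a textual alternative
```

A real checker also has to judge whether the alternative text is meaningful (e.g. not just the file name), which is exactly the part that still requires human judgment.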

(3) Form elements without labels

occurred in 62% of the evaluated pages. An example of this is a search button not correctly marked as ”search”. The fact that a web site is searchable is sometimes conveyed only by the context around the search field, such as a nearby magnifying glass. People with visual impairments or dyslexia often have the web page text read aloud by screen readers, and are unable to see the magnifying glass. If a button is not clearly marked as a search button, there is no way of knowing that it is intended for searching the web site.
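A simple automated approximation of this test (again an illustrative sketch, not the actual UWEM implementation) is to collect the `for` attributes of all `label` elements and report form controls whose `id` is never referenced:

```python
# Illustrative sketch: find form controls that no <label for="..."> points to,
# using only Python's standard-library HTML parser.
from html.parser import HTMLParser

class UnlabelledInputChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.input_ids = []      # ids of form controls, in document order
        self.label_fors = set()  # ids referenced by <label for="...">

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("input", "select", "textarea"):
            self.input_ids.append(attrs.get("id"))
        elif tag == "label" and "for" in attrs:
            self.label_fors.add(attrs["for"])

    def unlabelled(self):
        # Controls without an id, or with an id no label refers to.
        return [i for i in self.input_ids if i not in self.label_fors]

checker = UnlabelledInputChecker()
checker.feed('<label for="q">Search</label><input id="q"><input id="color">')
print(checker.unlabelled())  # the "color" field has no associated label
```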

(4) Links with the same title but different target

occurred in 32% of the evaluated pages. Links on web pages often do not describe their target pages well. A typical example is links with the text ”read more”, which says nothing about what the link is actually linking to. Links should instead be more descriptive, such as ”read more about the economic crisis”. For fast and efficient navigation, some accessibility tools present all links within a web page to the user. However, if every link has the text ”read more”, presenting all links to the user is useless since it is impossible to know what information each link points to.
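This barrier is also straightforward to detect mechanically. The sketch below (illustrative only) groups links by their visible text and flags any text that points at more than one target:

```python
# Illustrative sketch: flag link texts that point to more than one target,
# which makes a screen reader's "list all links" view ambiguous.
from collections import defaultdict
from html.parser import HTMLParser

class AmbiguousLinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.targets = defaultdict(set)  # link text -> set of hrefs
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:  # only collect text inside an anchor
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.targets["".join(self._text).strip()].add(self._href)
            self._href = None

    def ambiguous(self):
        return {text for text, hrefs in self.targets.items() if len(hrefs) > 1}

checker = AmbiguousLinkChecker()
checker.feed('<a href="/economy">read more</a><a href="/sports">read more</a>'
             '<a href="/about">about us</a>')
print(checker.ambiguous())  # "read more" points to two different pages
```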

(5) Mouse required

occurred in 15% of the evaluated pages. Web sites which require the use of a mouse cause problems for people with motor impairments, who often have challenges using such devices. An example is a web site with menu items which can only be accessed by clicking with a mouse. Often, people with motor impairments are not able to use such web sites at all.

Survey

This survey has been carried out using the eGovMon tool for measuring accessibility and was first published in Journal of Ph.D. of Papers in Technology and Science, 2008 at Aalborg University as How Accessible is the Public European Web by Morten Goodwin Olsen. Note that this is an internal journal for Aalborg University. The journal itself is not available online.


Using Automatic Testing to Predict Accessibility Results

June 17, 2009
This post is based on a scientific publication at HCII 2009, Is it possible to predict manual web accessibility results using automatic results?, written by Carlos Casado, Loïc Martinez and Morten Goodwin Olsen, which addresses measuring accessibility. The intention of the paper was to see to what extent there exists a correlation between manual and automatic accessibility assessment, and to what extent automatic evaluation results can be used to predict manual results.
There is no clearly defined distinction between usability and accessibility, and accessibility is often seen as part of usability. Any usability or accessibility testing is challenging due to the diversity of users: what is usable or accessible for one user may present itself as a barrier for another. Because of this, even with extensive manual testing, you cannot claim that a web site is accessible and barrier free for all users. However, test sets intended to cover accessibility exist, such as the Unified Web Evaluation Methodology (UWEM). UWEM presents 141 accessibility tests. Most of these need to be applied manually, but 26 can be run completely automatically. An outline of the relation between usability, accessibility, manual and automatic testing can be seen in the figure below.
(Figure: relation between usability, accessibility, manual and automatic testing)

Two approaches for testing accessibility are actively used today: manual and automatic accessibility testing.

Automatic evaluation of accessibility:

  • is quick and systematic,
  • enables almost instant evaluation,
  • can provide accessibility results for complete web sites,
  • can apply only a small subset of the accessibility tests – it cannot apply tests which rely upon human judgment.

Manual evaluation of accessibility:

  • can apply all presented accessibility tests,
  • is time consuming compared to automatic evaluation,
  • needs more tools, such as several web browsers, assistive technologies, and configurations including screen resolution,
  • makes it hard to produce repeatable results for the tests which rely on human judgment. As an example, an accessible web page should have clear and simple text. However, what is perceived as clear and simple may vary between different experts and is therefore not repeatable.

Casado et al. addressed to what extent the results from automatic evaluation of a web site can be used as an approximation of manual results. Using UWEM, Casado et al. evaluated 30 web pages both manually (141 tests per page) and automatically (23 tests per page). From these results, using simple regression, they found that the manual UWEM score could be predicted in 73% of the cases, within a 95% confidence interval, based only on the automatic UWEM score.
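The paper's data set is not reproduced here, but the regression step itself is just one-variable ordinary least squares. The sketch below uses invented scores, purely to illustrate fitting a line from automatic scores to manual scores and then predicting a manual score for a new page:

```python
# Illustrative sketch only: the scores below are invented, not the data from
# Casado et al. It shows the kind of one-variable least-squares regression
# used to predict a manual UWEM score from an automatic one.
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Invented (automatic score, manual score) pairs for a handful of pages.
auto = [0.10, 0.20, 0.30, 0.40, 0.50]
manual = [0.15, 0.28, 0.37, 0.52, 0.61]

a, b = fit_line(auto, manual)
predicted = a + b * 0.25  # predicted manual score for an automatic score of 0.25
```

The confidence-interval check in the paper then asks how often the true manual score falls within the prediction interval around such a fitted line.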


State of the Art Impact Indicators – Practical View

June 10, 2009

In this post we give an overview of some practical surveys assessing impact. In contrast to the scientific approach addressed previously, these are surveys which have been conducted in practice and for which results exist.

This post is based on the Impact Indicators: State of the Art survey presented at the eGovernment Monitoring Network workshop by Morten Goodwin Olsen and Annika Nietzio, and on the eGovMon wiki on impact.

Deloitte and Indigov

Source: Study on the Measurement of User Satisfaction and Impact in the EU 27

Deloitte and Indigov have an annual survey addressing impact on all levels of society (from country to municipality). It provides the possibility to get both a holistic and a detailed overview of the data.
The Deloitte and Indigov survey covers both citizens and businesses, and its notion of impact includes saving time, being more flexible, simplifications, saving money, better control, more transparency and better quality.

Accenture

Source: Accenture Public Service Research and Insight

Accenture assesses the national governments. The survey is carried out each year with a new focus; in 2008, the focus was on creating and sharing responsibility for better outcomes. It includes a quantitative approach in addition to addressing real-life experiences, and investigates elements of citizen satisfaction, governments' ability to achieve desired outcomes, and government strategies.

eGovernment Practice Group of the World Bank 2007

Source: Impact Assessment Study of E-Government Projects in India

The World Bank has done an assessment study of eGovernment in India, with a focus on corruption reduction, on levels from country to municipality. It includes impact for citizens (cost, service, quality and governance), agencies and society.

Impacts of Internet Use on Public Administration: A Case Study of the Brazilian Tax Administration (2005)

Source:  Impacts of Internet use on Public Administration: A Case Study of the Brazilian Tax Administration

The survey investigates the impact of the introduction of online tax administration in Brazil (2005). It looks at impact both from the view of the taxpayers and of the tax administration. They define the impact of online tax administration as the number of tax returns filed online over the total number of tax returns. In contrast to the other surveys, this depends on self-assessment.


State of the Art Impact Indicators – Scientific Approach

June 9, 2009

Impact is defined as a forceful consequence; a strong effect. E-government policies, projects and services may have impact on the economy, society and administration. Impact is in most cases seen as a positive effect, such as increased efficiency, participation and/or effectiveness. However, negative impact also exists, such as reduction of staff, which clearly has a negative effect for the people affected.

Impact is challenging to assess, as the actual impact is often unknown until the policy, project or service has been put into effect, or is too complex to measure.

This post is based on the Impact Indicators: State of the Art survey presented at the eGovernment Monitoring Network workshop by Morten Goodwin Olsen and Annika Nietzio, and on the eGovMon wiki on impact.

The post includes a state of the art survey of scientific work and practical surveys on assessing e-government impact.

eGovernment measurement for policy makers

Source: eGovernment measurement for policy makers, Millard.

According to Millard, the overall goal of a policy is expressed in the terms of its ultimate impacts. These will normally not be expressed as eGovernment objectives, but rather as societal objectives to which successful eGovernment development should contribute, such as:

  • economic productivity,
  • economic growth,
  • jobs,
  • competitiveness,
  • local and regional developments,
  • environmental improvement and sustainable development,
  • inclusion,
  • democracy, participation and citizenship,
  • quality of life / happiness,
  • increased justice and security and
  • universal human rights and peace.

Understanding and Measuring eGovernment: International Benchmarking Studies

Source: Heeks, Benchmarking eGovernment 2006
According to Heeks, the focus on eGovernment activities evolves from readiness to availability. From this it evolves to uptake and finally impact. Heeks claims that impact includes efficiency, effectiveness and awareness.

He further claims that impact should be measured as benefits for the citizen, financial benefits and back office changes.
Finally, he recommends that greater use of survey methods is needed to assess e-government outputs and impact.
(Figure: eGovernment focus evolving from readiness, via availability and uptake, to impact)

Measuring eGovernment Impact

Source:  Measuring e-Government Impact: Existing practices and shortcomings (Peters et al, 2004)

According to Peters et al., efficient e-government measurement needs to: take into account the back-office situation, establish a relationship between resources and results, and include the situation at different levels.

Benchlearning for eGovernment Measurements

(Figure: Overview of the eGep framework)


Assessing Web Accessibility Online

June 8, 2009

Several tools for automatic assessment of web accessibility exist online. Note that automatic checking of accessibility cannot detect all possible barriers. Thus, automatic evaluation can be used to find barriers on web sites/web pages and to show that a web site is inaccessible. However, automatic evaluation alone cannot be used to claim conformance to web accessibility guidelines.

Despite this, studies have shown that results from automatic accessibility evaluation can be used to predict manual results.

In addition to the accessibility of traditional web pages ((X)HTML and CSS), other document formats, such as PDF, may similarly be more or less accessible. Note that one of the criteria of WCAG 1.0 was that W3C formats were used (such as (X)HTML and CSS, and e.g. not PDF). However, WCAG 2.0, which was launched at the end of 2008, no longer has such restrictions.

Web Pages – (X)HTML and corresponding CSS:

PDFs:

Open Document Format (ODF):


Survey on Benchmarking E-government

June 3, 2009

Together with a colleague, I wrote a survey on benchmarking e-government for ICDS 2009, which is relevant for this blog.

Benchmarking e-government – comparative review of three international benchmarking studies.