Towards Automated eGovernment Monitoring

September 26, 2011

Morten Goodwin’s Ph.D. thesis, with the title Towards Automated eGovernment Monitoring, is now available online.

Illustration photo of digital government

eGovernment solutions promise to deliver a number of benefits, including increased citizen participation. To make sure that these services work as intended, better measurements are needed. However, finding suitable approaches to distinguish good eGovernment services from those which need improvement is difficult. Today, many surveys measuring the availability and quality of eGovernment services are carried out at local, national and international levels.

Because the majority of the methodologies and corresponding tests rely on human judgment, eGovernment benchmarking is mostly carried out manually by expert testers. These tasks are error prone and time consuming, which in practice means that most eGovernment surveys either focus on a specific topic or a small geographical area, or evaluate a small sample, such as a few web pages per country. Due to the substantial resources needed, large scale surveys assessing government web sites are predominantly carried out by big organizations. Further, for most surveys neither the methodologies nor the detailed results are publicly available, which prevents efficient use of the survey results for practical improvements.

This thesis focuses on automatic and open approaches to measure government web sites.

The thesis uses the collaboratively developed eGovMon application as a basis for testing, and presents corresponding methods and reference implementations for deterministic accessibility testing based on the Unified Web Evaluation Methodology (UWEM). It addresses to what extent web sites are accessible for people with special needs and disabilities. This enables large scale web accessibility testing, on demand testing of single web sites and web pages, as well as testing for accessibility barriers in PDF documents.
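
To give a flavour of what a deterministic test of this kind can look like, here is a minimal, hypothetical Python sketch of a UWEM-style check for images without alternative text. It is not the eGovMon reference implementation; the class and variable names are my own.

    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        """Collects positions of <img> tags that lack an alt attribute."""
        def __init__(self):
            super().__init__()
            self.violations = []

        def handle_starttag(self, tag, attrs):
            if tag == "img" and "alt" not in dict(attrs):
                # Record (line, offset) of the offending tag.
                self.violations.append(self.getpos())

    checker = MissingAltChecker()
    checker.feed('<p><img src="logo.png"></p><img src="map.png" alt="City map">')
    print(checker.violations)  # one entry: the <img> without an alt attribute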

Further, the thesis extends the accessibility testing framework by introducing classification algorithms to detect accessibility barriers. This method supplements and partly replaces tests that are typically carried out manually. Based on training data from municipality web sites, the reference implementation indicates whether alternative texts, which are intended to describe the image content to people who are unable to see the images, are inaccessible. The introduced classification algorithms reach an accuracy of 90%.

Most eGovernment surveys include whether governments have specific services and information available online. This thesis presents service location as an information retrieval problem which can be addressed by automatic algorithms. It addresses the problem with an innovative colony-inspired classification algorithm called the lost sheep. The lost sheep automatically locates services on web sites and indicates whether they can be found by a real user. The algorithm is thoroughly tested in synthetic environments and shown to perform well on realistic tasks of locating services related to transparency. It outperforms all comparable algorithms, both with increased accuracy and a reduced number of downloaded pages.
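
The post does not go into the details of the algorithm, but the underlying idea of steering a crawl towards a target service while downloading as few pages as possible can be illustrated with a plain best-first search over a site's link structure. The Python sketch below is a deliberately simplified, hypothetical illustration and is not the lost sheep algorithm itself; the function names, the toy link graph and the keyword scoring are all my own.

    import heapq

    def locate_service(start, links, anchor_texts, keywords, max_pages=20):
        """Best-first search over a site's link graph: follow the most
        promising links first, scored by keyword matches in the anchor text."""
        def score(page):
            text = anchor_texts.get(page, "").lower()
            return -sum(text.count(k) for k in keywords)   # negated: heapq is a min-heap

        frontier, visited, downloaded = [(score(start), start)], set(), 0
        while frontier and downloaded < max_pages:
            _, page = heapq.heappop(frontier)
            if page in visited:
                continue
            visited.add(page)
            downloaded += 1                                 # one simulated page download
            if all(k in anchor_texts.get(page, "").lower() for k in keywords):
                return page, downloaded                     # candidate service page found
            for link in links.get(page, []):
                heapq.heappush(frontier, (score(link), link))
        return None, downloaded

    # Toy example: locating a "building permit" service on a municipality site.
    graph = {"/": ["/news", "/services"], "/services": ["/services/building-permit"]}
    texts = {"/": "home", "/news": "news archive", "/services": "services",
             "/services/building-permit": "building permit application"}
    print(locate_service("/", graph, texts, ["building", "permit"]))
    # ('/services/building-permit', 4)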

The results from the automatic testing approaches in this thesis can either be used directly or, for more in-depth accessibility analysis, be used to prioritize which web sites and tests should be part of a manual evaluation.

This thesis also analyses and compares results from automatic and manual accessibility evaluations. It shows that when the aim of the accessibility benchmarking is to produce a representative accessibility score of a web site, for example for comparing or ranking web sites, automatic testing is in most cases sufficient.

The thesis further presents results gathered by the reference implementations and correlates the results with social factors. The results indicate that, in Norway, national government web sites are much more accessible than regional and local government web sites. They further show that countries with established accessibility laws and regulations have much more accessible web sites. In contrast, countries that have signed the UN Convention on the Rights of Persons with Disabilities do not show the same increase in accessibility. The results also indicate that even though countries with financial wealth have the most accessible web sites, it is possible to make web sites accessible for all in countries with fewer financial resources as well.

Full disclosure: I am the author of the thesis.


Global Web Accessibility

April 7, 2011

Cover of Journal of Information Technology and Politics

A scientific publication titled Global Web Accessibility Analysis of National Government Portals and Ministry Web Sites (Morten Goodwin, Deniz Susar, Annika Nietzio, Mikael Snaprud, Christian S. Jensen) was recently published.

The publication presents a web accessibility benchmarking methodology, and uses this methodology to present a survey of the accessibility of public web sites in the 192 United Nations Member States. It further identifies common properties of Member States that have accessible and inaccessible web sites, and shows that implementing anti-disability discrimination laws is highly beneficial for the accessibility of web sites, while signing the UN Convention on the Rights of Persons with Disabilities has had no such effect yet. The article also demonstrates that, despite the commonly held assumption to the contrary, mature, high-quality web sites are more accessible than lower quality ones. Moreover, web accessibility conformance claims by web site owners are generally exaggerated.

The countries with web sites that receive the best accessibility scores are:

  1. Germany
  2. Portugal
  3. Spain

The survey also shows that the economy of a country influences the accessibility of its web sites: not surprisingly, wealthy countries have more accessible web sites than poor countries. However, the study shows that accessibility laws have more impact than financial status. Thus, it is not necessarily costly to make web sites accessible. It is, however, important to have well-established accessibility laws which are actively followed up.

(Full disclosure: I am a co-author of the paper)
Morten Goodwin


Fix the web

March 8, 2011

An innovative crowd sourcing service for improving web accessibility, called Fix the Web, has recently been launched. The crowd sourcing approach works so that any person with a special need or a disability can report barriers he or she encounters on public web sites. Fix the Web then works as a pool of reported barriers.

Subsequently, volunteers look at the reported problems and propose solutions. When a solution is available, it is communicated to the appropriate web site owner. The owners not only become aware of a barrier on their web sites but also receive a potential fix. This way, everyone benefits.

Their primary goal is to raise awareness of web accessibility. Their secondary goal is to get acknowledgement from web site owners and actual web site improvements. Let's hope people use this service, that the crowd sourcing approach catches on, and that both goals are reached, making the web a better place for everyone.

Morten Goodwin


Universal Design: Is it Accessible?

January 10, 2011

“Universal Design: Is it Accessible?” is an interesting and controversial paper by Jane Bringolf, published in The RIT Journal of Plurality and Diversity in Design.

In the paper, Bringolf argues that universal design sometimes fails to meet its own principles. For example, despite being far from universal, concepts such as accessibility and disability are often used to describe universal design. Bringolf further argues that this is partly why universal design is understood as a disability product rather than something made for all users.

The author claims that with legislation focused on people with disabilities, the benefits for all are lost. Instead, the focus for designers and developers becomes meeting the regulations. This requires designers and developers to think in terms of designing specially for disabled people when developing new products, which is exactly what universal design is trying to prevent.

The author controversially claims that neither legislation nor further research is the solution to a more universally designed world. To avoid people with disabilities becoming just another legal problem for designers, the author wants to re-brand universal design.


Automatically finding inaccessible alternative texts in web pages

September 24, 2010

A publication on Automatic Checking of Alternative Texts on Web Pages (Olsen, Snaprud, Nietzio) was recently published.

Often, alternative texts for images, maps or audio files are generated by web publishing software or not properly provided by the editors. For humans it is relatively straightforward to see which alternative texts have been generated automatically, as the texts in no way describe the corresponding image. Examples include texts such as "Image1", texts which resemble filenames such as "image12.png", or "insert alternative text here".

The proper method for adding images to a document is that the editor uploads an image for an article and can, or must, provide an alternative text in the CMS.

There are, however, several improper methods which result in inaccessible, automatically generated alternative texts:

  • The editor uploads an image and uses the default alternative text.
  • The editor uploads an image for an article and the CMS generates some (often strange) alternative text.
  • The editor uploads an image but has no possibility to write an alternative text.

Following are some examples of alternative texts (image source: Wikipedia):

A picture of a dog eating with a correct alternative text: Golden Retriever Eating

Correct alternative text "Golden Retriever Eating". HTML: <img alt="Golden Retriever Eating" ... />

A picture of a dog eating with a wrong alternative text: image12.png

Wrong alternative text "image12.png". HTML: <img alt="image12.png" ... />

For people who cannot see non-textual content, alternative texts are crucial for understanding and using the content, and automatically generated alternative texts may impose web accessibility barriers. Most automatic accessibility checkers only detect the existence of alternative texts. The above mentioned texts, which do not describe the corresponding image well and are thus not considered accessible, will not be detected.

The paper introduces a pattern recognition approach for automatic detection of alternative texts that may impose a barrier. The introduced algorithms reach an accuracy of more than 90%, which should hopefully be a step towards improving the usefulness of automatic accessibility checking. Additionally, it could provide useful input to manual accessibility checking.
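
The paper itself uses trained classifiers; as a rough, hypothetical illustration of the kind of surface features such a classifier can pick up on, the Python sketch below flags alternative texts that look like filenames or placeholder strings. The regular expressions and the list of default texts are my own guesses, not the features from the paper.

    import re

    # Illustrative surface features; not the feature set used in the paper.
    FILENAME_RE = re.compile(r"^\w+\.(png|jpe?g|gif|bmp|svg)$", re.IGNORECASE)
    PLACEHOLDER_RE = re.compile(r"^(image|img|picture)\d*$", re.IGNORECASE)
    DEFAULT_TEXTS = {"insert alternative text here", "alt", "alternative text"}

    def looks_inaccessible(alt_text):
        """Flag alternative texts that are likely auto-generated or placeholders."""
        text = alt_text.strip().lower()
        return (not text
                or FILENAME_RE.match(text) is not None     # e.g. "image12.png"
                or PLACEHOLDER_RE.match(text) is not None  # e.g. "Image1"
                or text in DEFAULT_TEXTS)

    for alt in ["Golden Retriever Eating", "image12.png", "Image1",
                "insert alternative text here"]:
        print(f"{alt!r}: {'barrier' if looks_inaccessible(alt) else 'ok'}")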

(Full disclosure: I’m a co-author of the paper)


Remaining challenges of measuring the accessibility of web sites according to WCAG 2.0

August 11, 2010

The Web Content Accessibility Guidelines (WCAG) 1.0 were launched in 1999 and followed up by WCAG 2.0 in 2008. These guidelines have been the de facto standard for how to make web sites accessible to all people, including people with special needs.

Accessibility Sign

During the nine year period from 1999 to 2008, many measurement methodologies for WCAG 1.0 were created. Furthermore, many national and international surveys have benchmarked the accessibility of public web sites according to WCAG 1.0. Since WCAG 2.0 differs from WCAG 1.0 in significant ways, the existing measurement methodologies cannot easily be translated to WCAG 2.0. Thus, very few applications for evaluation according to WCAG 2.0 have been produced. Only two tools claiming to be WCAG 2.0 compliant are known to the authors: AChecker and TAW. The details of these tools are not known.

A paper titled Evaluating Conformance to WCAG 2.0: Open Challenges (Alonso, Fuertes, Gonzalez, Martínez) presents the remaining challenges of measuring the accessibility of public web sites according to WCAG 2.0. The authors identify the main challenges of measuring web site accessibility in accordance with WCAG 2.0, based on lessons learned from applying WCAG 2.0 tests in practice with university students.

The paper identifies the challenges below. The described challenges are, in the authors' experience, unclear parts of WCAG 2.0, which often means that testers need to interpret the texts and decide how they should be understood. This can easily lead to inconsistency, as different testers may understand the texts differently.

Accessibility supported Technologies

WCAG 2.0 states that only accessibility supported technologies can be relied upon for accessibility, and further that a technology is accessibility supported only when users' assistive technology will work with it. Since no list of supported technologies is provided, nor any formal way to measure whether a technology is supported, this causes a challenge: there is no established method for saying that using one technology is accessibility supported while using another is not.

Testability of Success Criteria

WCAG 2.0 consists of testable techniques. A technique is testable if it can be tested either by machine or by human judgment. It is believed that around 80% of the criteria are testable by humans. However, the authors show that some of the descriptions of the techniques cause confusion. For example, in the sentence "the test sequence of elements should be meaningful", it is not evident what is meant by the word meaningful. What is understood as a "meaningful sequence of elements" by one person may not be meaningful to others. This is likely to cause confusion, which leads to inconsistency in testing results.

Openness of Techniques and Failures

WCAG 2.0 is divided into separate documents: the guidelines and the techniques. The guidelines are stationary and technology independent. In contrast, the techniques are a living document which is updated as technology evolves. This makes it possible to update WCAG 2.0 with hands-on techniques as the technologies used on the web evolve. One challenge is that W3C updates the techniques document for non-proprietary technologies only. This means that there will be no techniques collected by W3C for proprietary software, such as Adobe Flash, and thus no techniques from W3C on how to make Adobe Flash content accessible.

Aggregation of Partial Results

How to present data from successful techniques and common failures has not been defined by W3C. WCAG 2.0 identifies two types of criteria an element can match:

  • Positive: Elements which meet the criteria of a successful technique. Any element which uses a successful technique is known to be accessible.
  • Negative: Elements which match a common failure. Any element which uses a common failure is known to be inaccessible.

Successful techniques and common failures are, however, not opposite measures. Not following a successful technique does not mean that a barrier exists, and avoiding a common failure does not necessarily mean that the element is accessible. Therefore, elements which match neither the successful techniques nor the common failures fall into an unknown state and can be claimed to be neither accessible nor inaccessible.

How to present data from a web page containing both common failures and successful techniques is not clear.
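
To make the aggregation problem concrete, here is a small, hypothetical Python sketch (my own simplification, not a proposal from W3C or the paper) in which every checked element ends up in one of three states. Collapsing the three counts into a single score forces a decision about how the unknown elements should be weighted, which is exactly what is currently left undefined.

    from collections import Counter
    from enum import Enum

    class Outcome(Enum):
        PASS = "matches a successful technique"     # known accessible
        FAIL = "matches a common failure"           # known inaccessible
        UNKNOWN = "matches neither"                 # cannot be claimed either way

    def aggregate(outcomes):
        """Summarise per-element outcomes for one web page.

        Returning all three counts keeps the ambiguity visible; any single
        score (e.g. fail / (pass + fail)) silently decides how UNKNOWN counts.
        """
        counts = Counter(outcomes)
        return {o.name: counts.get(o, 0) for o in Outcome}

    page = [Outcome.PASS, Outcome.UNKNOWN, Outcome.FAIL, Outcome.UNKNOWN]
    print(aggregate(page))   # {'PASS': 1, 'FAIL': 1, 'UNKNOWN': 2}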

Recommendations

The authors further present some recommendations for measuring web accessibility according to WCAG 2.0. The recommendations are as follows:

  • Accessibility-supported technologies should be clearly defined, and a methodology to identify whether a technology is accessibility supported or not should be established.
  • More experiments on the testability of the techniques, failures and success criteria are needed. This should be a step towards creating a common understanding of how the tests should be interpreted.
  • W3C should define how test results from successful use of techniques, common failures, and not applicable tests should be aggregated and presented as a single result.

A collaborative approach for improving local government web sites

July 30, 2010

A publication on how to facilitate collaboration between local governments and vendors, entitled Accessibility of eGovernment web sites: Towards a collaborative retrofitting approach (Nietzio, Olsen, Eibegger, Snaprud), has recently been published.

Changing a local government web site is often a long process which normally involves vendors, editors and specialists in local regulations and legal enforcement. Results from benchmarking studies are often good facilitators, but the results alone are of limited use when it comes to updates in practice. This is especially true if the web site updates are relatively small, such as removing accessibility barriers. Thus, the paper presents an approach for rapid accessibility updates of government web sites. The approach uses benchmarking results together with forums and online checkers.

Collaboration process between municipalities, vendors and eGovMon: vendors and municipalities collaborate through the eGovMon forum and through physical discussions, and eGovMon organizes workshops and seminars for vendors and municipalities respectively.

The approach, visualised in the figure above, is applied to a group of Norwegian municipalities that want to improve the accessibility of their web sites.

Accessibility benchmarking often fails to have an impact. This may be because of the following reasons:

  • The results are not detailed enough to be used for implementation purposes.
  • It is not clear in which part of the publication chain the problem is located (in the CMS or introduced by the editor).
  • The maintainers do not have the technical knowledge to fix the problem.
  • The barriers are fixed in a one-off effort, but no quality process is in place to detect whether newly added content is inaccessible.
  • The benchmarking is carried out as a one-off study so that progress cannot be evaluated.

The presented approach includes three areas:

  1. Regular benchmarking reports: Bi-monthly benchmarking reports of all municipality web sites. In these reports, the editors of the local web sites can see how any web site updates affect accessibility.
  2. Online accessibility checkers: An interactive environment where editors and developers can instantly check their web pages and web sites. This allows developers to incrementally remove accessibility barriers. (Blog post on Web Accessibility Checking)
  3. Online forum: Often, it is clear where in the production chain an accessibility barrier is located. For example, when the logo of a web site is missing an alternative text, this is likely to be a problem caused by the CMS. However, if an individual image in a document is missing an alternative text, it could be because the editor did not provide one. Such ambiguities can lead to situations where editors blame the CMS for accessibility problems, while the vendors claim that the editors are not using the CMS correctly. In the forum, editors can ask how a specific barrier in a given CMS should be fixed, and the vendors can reply.

This approach allows local web site editors to use eGovernment benchmarking results together with an online forum to fix accessibility issues on their web sites. Furthermore, the editors learn which issues they cannot fix themselves and which have to be addressed through updates of the CMS software or web site template. Even though this collaborative concept was applied to web accessibility barriers, it may be useful for other areas of local eGovernment as well.

(Full disclosure: I’m a co-author of the paper)