Automatically finding inaccessible alternative texts in web pages

September 24, 2010

A paper on Automatic Checking of Alternative Texts on Web Pages (Olsen, Snaprud, Nietzio) was recently published.

Alternative texts for images, image maps, or audio files are often generated by web publishing software or not properly provided by the editors. For humans it is relatively straightforward to see which alternative texts have been generated automatically, as such texts do not describe the corresponding image at all. Examples include texts such as "Image1", texts which resemble filenames such as "image12.png", or "insert alternative text here".

The proper workflow for adding images to a document is that an editor uploads an image for an article and can, or indeed must, provide an alternative text in the CMS.

There are, however, several improper workflows which result in inaccessible, automatically generated alternative texts:

  • The editor uploads an image and uses the default alternative text.
  • The editor uploads an image for an article and the CMS generates some (often strange) alternative text.
  • The editor uploads an image but has no possibility to write an alternative text.

Following are some examples of automatically generated alternative texts (image source: Wikipedia).

A picture of a dog eating with a correct alternative text: Golden Retriever Eating

Correct alternative text "Golden Retriever Eating". HTML: <img alt="Golden Retriever Eating" ... />

A picture of a dog eating with a wrong alternative text: image12.png

Wrong alternative text "image12.png". HTML: <img alt="image12.png" ... />


For people who cannot see non-textual content, alternative texts are crucial for understanding and using the content, and automatically generated alternative texts may impose web accessibility barriers. Most automatic accessibility checkers only detect the existence of alternative texts. Texts like the ones above, which do not describe the corresponding image well and are thus not considered accessible, will not be detected.

The paper introduces a pattern recognition approach for automatic detection of alternative texts that may impose a barrier. The introduced algorithms reach an accuracy of more than 90%, which should be a step towards improving the usefulness of automatic accessibility checking. Additionally, it could be useful input to manual accessibility checking.
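As an illustration of the pattern-based idea, a few simple heuristics can already flag many auto-generated texts. The patterns below are assumptions drawn from the examples above, not the actual feature set used in the paper:

```python
import re

# Illustrative heuristics for auto-generated alternative texts.
# These regexes are assumptions based on the examples in this post,
# not the pattern recognition features from the paper.
SUSPICIOUS_PATTERNS = [
    re.compile(r"^\s*$"),                               # empty or whitespace only
    re.compile(r"\.(png|jpe?g|gif|bmp|svg)$", re.I),    # looks like a filename
    re.compile(r"^image\d*$", re.I),                    # "Image1", "image12", ...
    re.compile(r"insert (alternative|alt) text", re.I), # editor placeholder
]

def is_suspicious(alt_text: str) -> bool:
    """Return True if the alt text matches a known auto-generated pattern."""
    return any(p.search(alt_text.strip()) for p in SUSPICIOUS_PATTERNS)

print(is_suspicious("image12.png"))              # True
print(is_suspicious("Golden Retriever Eating"))  # False
```

A real checker would of course combine many more signals (text length, overlap with the file name, language statistics) than these few regexes.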

(Full disclosure: I’m a co-author of the paper)

Remaining challenges of measuring the accessibility of web sites according to WCAG 2.0

August 11, 2010

The Web Content Accessibility Guidelines (WCAG) 1.0 were launched in 1999 and followed up by WCAG 2.0 in 2008. These guidelines have been the de facto standard for how to make web sites accessible to all people, including people with special needs.

Accessibility Sign

During the nine-year period from 1999 to 2008, many measurement methodologies for WCAG 1.0 were created. Furthermore, many national and international surveys have benchmarked the accessibility of public web sites according to WCAG 1.0. Since WCAG 2.0 differs from WCAG 1.0 in significant ways, the existing measurement methodologies cannot easily be translated to WCAG 2.0. Thus, very few applications for evaluation according to WCAG 2.0 have been produced. Only two tools claiming to be WCAG 2.0 compliant are known to the authors: AChecker and TAW. The details of these tools are not known.

A paper titled Evaluating Conformance to WCAG 2.0: Open Challenges (Alonso, Fuertes, Gonzalez, Martínez) presents the remaining challenges of measuring the accessibility of public web sites according to WCAG 2.0. In this paper, the authors identify the main challenges of measuring accessibility of web sites in accordance with WCAG 2.0. The lessons were learned by having university students apply WCAG 2.0 tests in practice.

The paper identifies the following challenges. The described challenges are, in the authors' experience, unclear parts of WCAG 2.0, which often means that testers need to interpret the texts and decide how they should be understood. This can easily lead to inconsistency among testers, as they may understand the texts differently.

Accessibility supported Technologies

WCAG 2.0 states that only accessibility-supported technologies can be relied upon for accessibility. It further states that a technology is accessibility supported only when users' assistive technologies will work with it. Since no list of supported technologies is provided, nor any formal way to measure whether a technology is supported or not, this causes a challenge. There is no established method for saying that using one technology is accessibility supported, while using another is not.

Testability of Success Criteria

WCAG 2.0 consists of testable techniques. A technique is testable if it can be tested either by machine or by human judgment. It is believed that around 80% of the criteria are testable by humans. However, the authors show that some of the descriptions of the techniques cause confusion. For example, in the sentence "the sequence of elements should be meaningful", it is not evident what is meant by the word meaningful. What is understood as a "meaningful sequence of elements" by one person may not be meaningful to others. This is likely to cause confusion, which leads to inconsistency in the testing results.

Openness of Techniques and Failures

WCAG 2.0 is divided into separate documents: the guidelines and the techniques. The guidelines are stationary and technology independent. In contrast, the techniques are a living document which is updated as technology evolves. This makes it possible to update WCAG 2.0 with hands-on techniques as the technologies used on the web evolve. One challenge is that W3C updates the techniques document for non-proprietary technologies only. This means that W3C collects no techniques for proprietary software, such as Adobe Flash. Thus, there will be no techniques from W3C on how to make Adobe Flash accessible.

Aggregation of Partial Results

How to present data from successful techniques and common failures has not been specified by W3C. WCAG 2.0 identifies two types of criteria an element can match:

  • Positive: Elements which meet the criteria of successful techniques. Any element which uses a successful technique is known to be accessible.
  • Negative: Elements which match a common failure. Any element which matches a common failure is known to be inaccessible.

Successful techniques and common failures are not opposite measures. Not following a successful technique does not mean that a barrier exists. Similarly, avoiding a common failure does not necessarily mean that the element is accessible. Therefore, elements which match neither a successful technique nor a common failure fall into an unknown state and can be claimed to be neither accessible nor inaccessible.

How to present data from a web page with both common failures and successful techniques is not clear.
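The three-state logic above can be sketched as follows. The element representation and the toy rule sets are illustrative assumptions, not part of WCAG 2.0 or the paper:

```python
# Sketch of the three-state outcome: an element matching a common failure
# is inaccessible ("fail"), one matching a successful technique is
# accessible ("pass"), and anything else is in an unknown state.

def classify(element, successful_techniques, common_failures):
    """Map one element to 'pass', 'fail' or 'unknown'."""
    if any(check(element) for check in common_failures):
        return "fail"     # matches a common failure -> barrier exists
    if any(check(element) for check in successful_techniques):
        return "pass"     # matches a successful technique -> accessible
    return "unknown"      # neither: no claim can be made either way

# Toy rules (assumptions): an image passes if it has a non-empty alt
# attribute, and fails if the alt text is exactly its file name.
successful = [lambda e: bool(e.get("alt", "").strip())]
failures   = [lambda e: e.get("alt") == e.get("src")]

elements = [
    {"tag": "img", "src": "logo.png", "alt": "Company logo"},
    {"tag": "img", "src": "image12.png", "alt": "image12.png"},
    {"tag": "img", "src": "divider.png"},  # no alt at all
]
results = [classify(e, successful, failures) for e in elements]
print(results)  # ['pass', 'fail', 'unknown']
```

The open question raised by the paper is exactly how such per-element pass/fail/unknown results should be aggregated into a single score for a page.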


The authors further present some recommendations for measuring web accessibility according to WCAG 2.0. The recommendations are as follows:

  • Accessibility-supported technologies should be clearly defined, and a methodology to identify whether a technology is accessibility supported or not should be established.
  • More experiments are needed for the testability of the techniques, failures and success criteria. This should be a step towards creating a common understanding of how the tests should be interpreted.
  • W3C should define how test results from successful use of techniques, common failures, and not-applicable checks should be aggregated and presented as a single result.

Is financial wealth leading to high quality government services?

August 6, 2010

It is natural to assume that financial wealth leads to better government. It is further reasonable to expect that wealthy countries have higher-quality e-government services compared to countries with less financial wealth. But how much do finances alone influence the quality of e-government services? This short study gives a peek at how finances affect e-government services.

UN E-government 2010 report

In this study, the data used for the quality of e-government services is the E–Government Development Index (e-readiness score) from the United Nations E-Government Survey 2010. Thus, it is directly assumed that a government with high-quality e-government services will receive a high score, and vice versa. The remaining data used is from the World Bank Data Catalog.

The following figure presents a box plot of the differences in the E–Government Development Index between developing and developed countries. The plot shows that developing countries have an average score of 0.4, while developed countries have an average score of about 0.7. Furthermore, all developing countries have scores less than 0.7, while all developed countries have a score higher than 0.5. Thus, based on the United Nations E–Government Development Index, there is, not surprisingly, a significant difference between e-government services in developing and developed countries.


E-readiness score for developing versus developed countries.

Thus, the quality is clearly dependent on finances, but how much of the quality of e-government services is influenced by finances alone?

The development of government services is a complex process shaped by many factors. There exists no general conclusion about which factors influence the quality of government services. It is, however, possible to determine to what extent data on the financial situation of a country can be used to predict the e-readiness score.

The following graph plots the E–Government Development Index against GNI per capita. The graph also includes a regression, which can be used to estimate the E–Government Development Index from GNI per capita alone.

A dotplot showing the trends between E-readiness and GNI per capita.

E-readiness versus GNI per capita

The trends in the data are clearly visible. The regression is shown as the black line, the mean response as a green dashed line, and the prediction interval as a blue dashed line.

The regression line (black) shows the relationship between the E–Government Development Index and GNI per capita. If no correlation existed between the two data sets, the line would be completely horizontal. The regression line can be used to predict the E–Government Development Index using only GNI per capita. The graph shows that the relationship is not linear, but more complex.
The mean response interval (green dashed lines) gives the estimated mean of the data.
The prediction interval (blue dashed lines) indicates where future data is expected to be located (similar to a confidence interval).

The data shows that the mean response interval and the prediction interval change as GNI per capita increases. Generally, we are more certain of the prediction when these intervals are small. From this we can draw the following conclusion: it is relatively easy to predict the e-readiness score when a country has a low GNI per capita. In contrast, predicting the e-readiness score from GNI per capita alone for wealthy countries is a lot less precise. That is, lack of finances generally means low-quality services, while wealth alone is not sufficient to ensure quality in e-government.
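A minimal sketch of this kind of analysis is shown below, using synthetic data in place of the UN and World Bank figures (which are not reproduced here) and a large-sample t value of 1.96 for the 95% prediction interval:

```python
import numpy as np

# Synthetic stand-in data: scores rise with log(GNI per capita) plus noise.
# The real analysis used the UN 2010 survey and World Bank data.
rng = np.random.default_rng(0)
gni = rng.uniform(500, 60000, 80)                 # GNI per capita (made up)
score = 0.1 * np.log(gni) - 0.4 + rng.normal(0, 0.05, 80)

x = np.log(gni)                                   # log transform: the trend is not linear in GNI
n = len(x)
slope, intercept = np.polyfit(x, score, 1)        # ordinary least squares fit
resid = score - (slope * x + intercept)
s = np.sqrt(np.sum(resid**2) / (n - 2))           # residual standard error
sxx = np.sum((x - x.mean()) ** 2)

def prediction_interval(gni_value, t=1.96):
    """Rough 95% prediction interval for the score at a given GNI per capita."""
    x0 = np.log(gni_value)
    yhat = slope * x0 + intercept
    half = t * s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)
    return yhat - half, yhat + half

print(prediction_interval(1000))   # band around the fitted line at GNI 1000
```

The interval widens at the edges of the observed GNI range, which is the effect described above: predictions for countries far from the bulk of the data are less precise.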

A collaborative approach for improving local government web sites

July 30, 2010

A publication on how to facilitate collaboration between local governments and vendors, entitled Accessibility of eGovernment web sites: Towards a collaborative retrofitting approach (Nietzio, Olsen, Eibegger, Snaprud), has recently been published.

Changing a local government web site is often a long process which normally involves vendors, editors, and specialists in local regulations and legal enforcement. Results from benchmarking studies are often good facilitators, but the results alone are of limited use when it comes to updates in practice. This is especially true if the web site updates are relatively small, such as removing accessibility barriers. Thus, the paper presents an approach for rapid accessibility updates of government web sites. The approach uses benchmarking results together with forums and online checkers.

Collaborative process between municipalities, vendors and eGovMon. Vendors and municipalities collaborate through the eGovMon forum and through physical discussions. eGovMon organizes workshops and seminars for vendors and municipalities respectively.

Collaboration process between municipalities, vendors and eGovMon

The approach, visualised in the figure above, is applied to a group of Norwegian municipalities who want to improve the accessibility of their web sites.

Accessibility benchmarking often fails to have an impact. This may be for the following reasons:

  • The results are not detailed enough to be used for implementation purposes.
  • It is not clear in which part of the publication chain the problem is located (in the CMS, or introduced by the editor).
  • The maintainers do not have the technical knowledge to fix the problem.
  • The barriers are fixed in a one-off effort, but there is no quality process in place to detect whether newly added content is inaccessible.
  • The benchmarking is carried out as a one-off study, so that progress cannot be evaluated.

The presented approach includes three areas:

  1. Regular benchmarking reports: Bi-monthly benchmarking reports for all municipality web sites. In these reports the editors of the local web sites can see how any web site updates affect accessibility.
  2. Online accessibility checkers: An interactive environment where editors and developers can instantly check their web pages and web sites. This allows developers to incrementally remove accessibility barriers. (Blog post on Web Accessibility Checking)
  3. Online forum: Often, it is not clear where in the production chain an accessibility barrier is located. For example, when the logo of a web site is missing an alternative text, this is likely to be a problem caused by the CMS. However, if an individual image in a document is missing an alternative text, it could be because the editor did not provide one. Such ambiguities could lead to a situation where editors blame the CMS for accessibility problems, while the vendors claim that the editors are not using the CMS correctly. In the forum, editors can ask how a specific barrier should be fixed for a given CMS, and the vendors can reply.

This approach allows local web site editors to use e-government benchmarking results together with an online forum to fix accessibility issues on their web sites. Furthermore, the editors learn which issues they cannot fix themselves, but which instead require updates to the CMS software or web site template. Even though this collaborative concept was applied to web accessibility barriers, it may be useful for other areas of local e-government as well.

(Full disclosure: I’m a co-author of the paper)

Fighting corruption with e-Government

July 1, 2010

A very interesting study called E-government as an anti-corruption strategy showed that establishing e-Government reduces corruption. This should not be a surprise to anyone working with e-Government, since it is commonly believed that the introduction of e-government diminishes the contact between corrupt officials and citizens, as well as increases transparency and accountability. Unfortunately, hard evidence for these claims has been lacking (United Nations Development Programme, Fighting Corruption with e-Government Applications – APDIP e-NOTE 8, 2006).

The study is innovative in that it uses a statistical approach to examine trends between e-Government and anti-corruption. Most other papers presenting quantitative data in the area do not use a statistical approach, which makes it harder to trust their results.

In this publication, the author inspected, in a statistically sound way, the changes in corruption, using the control of corruption index presented by the World Bank, versus the changes in e-Government, using data from a Global e-Government Survey.

Unfortunately, for the OECD countries the author was not able to find any clear trends. This could be explained by there being less corruption in the OECD countries (compared to non-OECD countries), which means that the OECD countries had less to gain, in terms of anti-corruption, by introducing e-Government. Note that this is not evidence for the absence of reduced corruption due to e-Government in OECD countries, just that the trends are not clearly visible in the data.

However, for the non-OECD countries there are clear trends in the examined data. The results strongly imply that the introduction of e-Government has led to a significant reduction in corruption, supporting the view that e-Government is a very useful tool for reducing corruption on a global scale.

Weighing Indices in the UN E-Government Survey

May 14, 2010

The United Nations E-government Survey index is a weighted combination of three indices:

  • Web Measure, which represents the sophistication level of online citizen services.
  • Human Capital, which represents the education level of a country. This index is itself weighted, with two-thirds of the weight on adult literacy and one-third on enrollment.
  • ICT Infrastructure, which represents the infrastructure of a country. This is again a weighted average including the number of computers per person, telephone lines, mobile phones, etc.

These three are all weighted equally, each contributing 1/3 to the score, which means that formally the e-readiness is as follows:

E-readiness = 1/3 × Web Measure + 1/3 × Human Capital + 1/3 × ICT Infrastructure

An interesting question that follows is what happens if we assign other weights to these indices. For example, if we change the weights, can we also change the ranking a country gets?

Using Monaco as an example: it was ranked as member state number 112 in the UN e-readiness survey 2010. However, by adjusting the weights of the three indices, we can change the ranking of Monaco from 112 up to 25, or down to 184.
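The effect can be sketched with made-up index scores (the real 2010 survey figures are not reproduced here):

```python
# Illustrative sketch of re-ranking under different index weights.
# The three country score sets below are made up, not actual survey data.
countries = {
    "A": {"web": 0.9, "human": 0.5, "ict": 0.4},
    "B": {"web": 0.4, "human": 0.9, "ict": 0.6},
    "C": {"web": 0.5, "human": 0.6, "ict": 0.9},
}

def rank(weights):
    """Return country names sorted by weighted e-readiness score, best first."""
    w_web, w_human, w_ict = weights
    score = lambda c: w_web * c["web"] + w_human * c["human"] + w_ict * c["ict"]
    return sorted(countries, key=lambda name: score(countries[name]), reverse=True)

print(rank((1/3, 1/3, 1/3)))  # equal weighting      -> ['C', 'B', 'A']
print(rank((0.8, 0.1, 0.1)))  # web-heavy weighting  -> ['A', 'C', 'B']
```

Simply shifting weight towards Web Measure moves country A from last to first, which is the same kind of sensitivity the Monaco example shows.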

In the following plot, possible combinations from 10% up to 80% for each of the three indices are plotted, together with the corresponding ranking of Monaco.

Ranking of Monaco when weighting the indices Web Measure, Human Capital and ICT Infrastructure differently

Similarly, the following graph shows how the top five member states, according to the e-readiness ranking in the 2010 survey, would rank if different weights were used.

Ranking of the top five countries with different weights

(Note that for reasons of clarity some weightings have deliberately been removed.)

The question which naturally arises is:

Why does the current e-readiness index use equal weights, and is this any more correct than any other weighting?

Thanks to Deniz Susar for input on this idea.

Web Accessibility Checking

April 22, 2010

A new version of the eAccessibility Checker has been launched by the eGovMon project.

The tool checks how accessible web pages and web sites are for people with special needs. This new release focuses on being understandable both for content providers and web developers. People no longer need to be web accessibility experts to find out the accessibility status of a web page and how to improve it.
The tool also includes an accurate presentation of the code ((X)HTML and CSS) which creates barriers, as well as good and bad examples of web accessibility.

Can you make your web site accessible and earn the "Checked by eGovMon" logo?