United Nations Global E-government survey 2012

March 1, 2012

The United Nations Global E-government Survey 2012 is now available. This sixth UN E-government survey focuses on E-government for the People:

UN E-government Survey 2012

Executive summary:
Progress in online service delivery continues in most countries around the world. The United Nations E-Government Survey 2012 finds that many have put in place e-government initiatives and information and communication technologies applications for the people to further enhance public sector efficiencies and streamline governance systems to support sustainable development. Among the e-government leaders, innovative technology solutions have gained special recognition as the means to revitalize lagging economic and social sectors.

The overall conclusion that emerges from the 2012 Survey in today’s recessionary world climate is that while it is important to continue with service delivery, governments must increasingly begin to rethink in terms of e-government – and e-governance – placing greater emphasis on institutional linkages between and among the tiered government structures in a bid to create synergy for inclusive sustainable development. An important
aspect of this approach is to widen the scope of e-government for a transformative role of the government towards cohesive, coordinated, and integrated processes and institutions through which such sustainable development takes place.

Please see the official site for more details.


Towards Automated eGovernment Monitoring

September 26, 2011

Morten Goodwin’s Ph.D. thesis, with the title Towards Automated eGovernment Monitoring, is now available online.

Illustration photo of digital government

eGovernment solutions promise to deliver a number of benefits, including increased citizen participation. To make sure that these services work as intended, better measurements are needed. However, finding suitable approaches to distinguish good eGovernment services from those that need improvement is difficult. Many surveys measuring the availability and quality of eGovernment services are carried out today at the local, national and international levels.

Because the majority of the methodologies and corresponding tests rely on human judgment, eGovernment benchmarking is mostly carried out manually by expert testers. These tasks are error-prone and time-consuming, which in practice means that most eGovernment surveys either focus on a specific topic or a small geographical area, or evaluate a small sample, such as a few web pages per country. Due to the substantial resources needed, large-scale surveys assessing government web sites are predominantly carried out by large organizations. Further, for most surveys neither the methodologies nor the detailed results are publicly available, which prevents efficient use of the survey results for practical improvements.

This thesis focuses on automatic and open approaches to measure government web sites.

The thesis uses the collaboratively developed eGovMon application as a basis for testing, and presents corresponding methods and reference implementations for deterministic accessibility testing based on the Unified Web Evaluation Methodology (UWEM), which addresses to what extent web sites are accessible to people with special needs and disabilities. This enables large-scale web accessibility testing, on-demand testing of single web sites and web pages, as well as testing for accessibility barriers in PDF documents.
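
To give a flavour of what such a deterministic test can look like, below is a minimal sketch that flags images without alternative text on a single page. It is an illustration only, not the eGovMon/UWEM reference implementation, and the URL is a placeholder.

    # Minimal sketch of a deterministic accessibility check: report <img> elements
    # that lack an alt attribute. Illustrative only, not the eGovMon/UWEM code.
    import requests
    from bs4 import BeautifulSoup

    def images_missing_alt(url):
        """Return the src of every <img> on the page that has no alt attribute."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        return [img.get("src", "<no src>")
                for img in soup.find_all("img")
                if not img.has_attr("alt")]

    if __name__ == "__main__":
        barriers = images_missing_alt("https://www.example.org")  # placeholder site
        print(f"{len(barriers)} image(s) without alternative text")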

Further, the thesis extends the accessibility testing framework by introducing classification algorithms to detect accessibility barriers. This method supplements and partly replaces tests that are typically carried out manually. Based on training data from municipality web sites, the reference implementation suggests whether alternative texts, which are intended to describe the image content to people who are unable to see the images, are inaccessible. The introduced classification algorithms reach an accuracy of 90%.
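
Purely as an illustration of the idea (not the classifier, features or training data used in the thesis), a text classifier over alternative texts could be sketched along these lines, with made-up labelled examples:

    # Rough sketch of classifying alternative texts as descriptive or not.
    # The labelled examples and the feature choice are purely hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    alt_texts = ["Mayor opening the new library", "img_0042.jpg",
                 "Map of the municipality with bus routes", "spacer",
                 "logo", "Children playing in the school yard"]
    labels = [1, 0, 1, 0, 0, 1]          # 1 = descriptive, 0 = likely a barrier

    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
        MultinomialNB())
    model.fit(alt_texts, labels)

    print(model.predict(["DSC00123.JPG", "Town hall seen from the main square"]))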

Most eGovernment surveys include whether governments have specific services and information available online. This thesis presents service location as an information retrieval problem which can be addressed by automatic algorithms. It solves the problem with an innovative colony-inspired classification algorithm called the lost sheep, which automatically locates services on web sites and indicates whether they can be found by a real user. The algorithm is thoroughly tested in synthetic environments and shown to perform well on realistic tasks, such as locating services related to transparency. It outperforms all comparable algorithms, both in accuracy and in the number of pages downloaded.
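
The lost sheep algorithm itself is described in the thesis; as a loose illustration of classifier-guided service location, a naive best-first crawler that scores links by their anchor text could look like the sketch below. The keywords, scoring and page limit are assumptions made for the example, not the actual algorithm.

    # Naive best-first crawl sketch: follow the most promising links first,
    # judged by anchor text, and stop when a page looks like the sought service.
    # Illustrative only; this is not the lost sheep algorithm from the thesis.
    import heapq
    from urllib.parse import urljoin
    import requests
    from bs4 import BeautifulSoup

    TARGET_WORDS = {"budget", "postal journal", "tender"}   # hypothetical target terms

    def link_score(anchor_text):
        """Score a link by how many target words its anchor text mentions."""
        text = anchor_text.lower()
        return sum(word in text for word in TARGET_WORDS)

    def locate_service(start_url, max_pages=20):
        frontier = [(0, start_url)]                  # min-heap of (-score, url)
        seen = set()
        while frontier and len(seen) < max_pages:
            _, url = heapq.heappop(frontier)
            if url in seen:
                continue
            seen.add(url)
            soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
            page_text = soup.get_text(" ").lower()
            if any(word in page_text for word in TARGET_WORDS):
                return url                           # candidate service page found
            for a in soup.find_all("a", href=True):
                link = urljoin(url, a["href"])
                if link not in seen:
                    heapq.heappush(frontier, (-link_score(a.get_text()), link))
        return None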

The results from the automatic testing approaches in this thesis can either be used directly or, for more in-depth accessibility analysis, be used to prioritize which web sites and tests should be part of a manual evaluation.

This thesis also analyses and compares results from automatic and manual accessibility evaluations. It shows that when the aim of the accessibility benchmarking is to produce a representative accessibility score of a web site, for example for comparing or ranking web sites, automatic testing is in most cases sufficient.

The thesis further presents results gathered by the reference implementations and correlates the results with social factors. The results indicate that web sites of national governments are much more accessible than regional and local government web sites in Norway. They further show that countries with established accessibility laws and regulations have much more accessible web sites. In contrast, countries that have signed the UN Convention on the Rights of Persons with Disabilities do not show the same increase in accessibility. The results also indicate that even though financially wealthy countries have the most accessible web sites, it is possible to make web sites accessible to all in countries with smaller financial resources as well.
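
A hedged sketch of how such correlations could be computed is shown below; the file and column names are hypothetical and the thesis data is not reproduced here.

    # Sketch: relate per-country accessibility scores to social indicators.
    # File and column names are made up for illustration.
    import pandas as pd

    df = pd.read_csv("accessibility_by_country.csv")
    # hypothetical columns: country, accessibility_score,
    #                       gni_per_capita, has_accessibility_law

    print(df.groupby("has_accessibility_law")["accessibility_score"].mean())
    print(df["accessibility_score"].corr(df["gni_per_capita"], method="spearman"))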

Full disclosure: I am the author of the thesis.


Global Web Accessibility

April 7, 2011

Cover of Journal of Information Technology and Politics

A scientific publication titled Global Web Accessibility Analysis of National Government Portals and Ministry Web Sites (Morten Goodwin, Deniz Susar, Annika Nietzio, Mikael Snaprud, Christian S. Jensen) was recently published.

The publication presents a web accessibility benchmarking methodology and uses it to survey the accessibility of public web sites in the 192 United Nations Member States. It further identifies common properties of Member States that have accessible and inaccessible Web sites and shows that implementing anti-disability discrimination laws is highly beneficial for the accessibility of Web sites, while signing the UN Convention on the Rights and Dignity of Persons with Disabilities has had no such effect yet. The article also demonstrates that, despite the commonly held assumption to the contrary, mature, high-quality Web sites are more accessible than lower quality ones. Moreover, Web accessibility conformance claims by Web site owners are generally exaggerated.

The countries with web sites that receive the best accessibility scores are:

  1. Germany
  2. Portugal
  3. Spain

The survey also shows that the economy of a country influences the accessibility of its web sites: not surprisingly, wealthy countries have more accessible web sites than poor countries. However, the study shows that accessibility laws have more impact than financial status alone. Thus, it is not necessarily costly to make web sites accessible. It is, however, important to have well-established accessibility laws which are actively followed up.

(Full disclosure: I am a co-author of the paper)
Morten Goodwin


Digitizing Public Services in Europe: Putting Ambition into Action

February 23, 2011

A report entitled Digitizing Public Services in Europe: Putting Ambition into Action was recently released by Capgemini for the European Commission. The report takes the pulse of eGovernment in Europe and is the ninth measurement of digital services of its kind.

The main focus of the report is how well digital services meet the European i2010 action plan: how efficient the available eGovernment services are, whether they provide easy access to online services for all citizens, implement high-impact services, and strengthen participation and democracy.

The presented data shows that Ireland, Malta, Austria and Portugal rank highest among the European countries on online sophistication.

Current and future challenges

Even though the report recognizes that the 20 basic services are available in almost all evaluated countries, it shows that online sophistication levels differ significantly between national and regional levels. Not surprisingly, national online services score better than regional services, and online services in cities score better than those in non-urban areas. A conclusion to be drawn from this is that even though eGovernment is mature at the national level in Europe, much work remains at the regional level.

Further challenges include take-up and impact. Even though the services exist, only 42% of individuals aged 16 to 74 use the Internet for interaction with public authorities. Another challenge is efficient trans-European interoperability.

Morten Goodwin


Remaining challenges of measuring the accessibility of web sites according to WCAG 2.0

August 11, 2010

The Web Content Accessibility Guidelines (WCAG 1.0) were launched in 1999 and followed up by WCAG 2.0 in 2008. These guidelines have been the de facto standard for how to make web sites accessible to all people, including people with special needs.

Accessibility Sign

During the nine-year period from 1999 to 2008, many measurement methodologies for WCAG 1.0 were created. Furthermore, many national and international surveys have benchmarked the accessibility of public web sites according to WCAG 1.0. Since WCAG 2.0 differs from WCAG 1.0 in significant ways, the existing measurement methodologies cannot easily be translated to WCAG 2.0. Thus, very few applications for evaluation according to WCAG 2.0 have been produced. Only two tools claiming to be WCAG 2.0 compliant are known to the authors: AChecker and TAW. The details of these tools are not known.

A paper titled Evaluating Conformance to WCAG 2.0: Open Challenges (Alonso, Fuertes, Gonzalez, Martínez) presents the remaining challenges of measuring the accessibility of public web sites according to WCAG 2.0. In the paper, the authors identify the main challenges of measuring web site accessibility in accordance with WCAG 2.0, based on lessons learned from university students applying WCAG 2.0 tests in practice.

The paper identifies the following challenges. In the authors' experience, these are unclear parts of WCAG 2.0, which often means that testers need to interpret the texts and decide how they should be understood. This can easily lead to inconsistency, as different testers may understand the texts differently.

Accessibility-Supported Technologies

WCAG 2.0 states that only accessibility-supported technologies can be relied upon for accessibility, and that a technology is accessibility supported only when users' assistive technologies will work with it. Since no list of supported technologies is provided, nor any formal way to measure whether a technology is supported, this causes a challenge: there is no established method for saying that using one technology is accessibility supported while using another is not.

Testability of Success Criteria

WCAG 2.0 consists of testable techniques. A technique is testable if it can be tested either by machine or by human judgment. It is believed that around 80% of the criteria are testable by humans. However, the authors show that some of the descriptions of the testing techniques cause confusion. For example, in the sentence “the test sequence of elements should be meaningful”, it is not evident what is meant by the word meaningful. What is understood as a “meaningful sequence of elements” by one person may not be meaningful to others. This is likely to cause confusion, which leads to inconsistency in testing results.

Openness of Techniques and Failures

WCAG 2.0 is divided into separate documents: the guidelines and the techniques. The guidelines are stable and technology independent. In contrast, the techniques form a living document which is updated as technology evolves. This makes it possible to update WCAG 2.0 with hands-on techniques as the technologies used on the web evolve. One challenge is that W3C updates the techniques document for non-proprietary technologies only. This means that no techniques will be collected by W3C for proprietary technologies such as Adobe Flash, and thus there will be no techniques from W3C on how to make Adobe Flash accessible.

Aggregation of Partial Results

How to present data from successful techniques and common failures has not been specified by W3C. WCAG 2.0 identifies two types of criteria an element can match:

  • Positive: Elements which meet the criteria of a successful technique. Any element which uses a successful technique is known to be accessible.
  • Negative: Elements which match a common failure. Any element which contains a common failure is known to be inaccessible.

However, successful techniques and common failures are not opposite measures. Not following a successful technique does not mean that a barrier exists, and avoiding a common failure does not necessarily mean that the element is accessible. Therefore, elements which match neither a successful technique nor a common failure fall into an unknown state and can be claimed to be neither accessible nor inaccessible.

How to present data from a web page containing both common failures and successful techniques is not clear.
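
To make the aggregation problem concrete, here is a small sketch of how a page-level summary could report the three states explicitly. The names and the simple counting are illustrative assumptions, not a W3C-defined aggregation.

    # Sketch: each evaluated element ends up in one of three states, and the
    # page summary has to report the unknown elements explicitly.
    from collections import Counter
    from enum import Enum

    class Outcome(Enum):
        PASS = "matches a successful technique"      # known to be accessible
        FAIL = "matches a common failure"            # known to be inaccessible
        UNKNOWN = "matches neither"                  # cannot be judged either way

    def summarise(outcomes):
        counts = Counter(outcomes)
        return {"pass": counts[Outcome.PASS],
                "fail": counts[Outcome.FAIL],
                "unknown": counts[Outcome.UNKNOWN]}  # reported, not hidden

    print(summarise([Outcome.PASS, Outcome.UNKNOWN, Outcome.FAIL, Outcome.UNKNOWN]))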

Recommendations

The authors further present some recommendations for measuring web accessibility according to WCAG 2.0. The recommendations are as follows:

  • Accessibility-supported technologies should be clearly defined, and a methodology for identifying whether a technology is accessibility supported should be established.
  • More experiments are needed on the testability of the techniques, failures and success criteria. This should be a step towards creating a common understanding of how the tests should be interpreted.
  • W3C should define how test results from successful use of techniques, common failures, and non-applicable tests should be aggregated and presented as a single result.

Is financial wealth leading to high quality government services?

August 6, 2010

It is natural to assume that financial wealth leads to better government. It is further reasonable to expect that wealthy countries have higher quality e-government services than countries with less financial wealth. But how much do finances alone influence the quality of e-government services? This short study gives a peek at how finances affect e-government services.

UN E-government 2010 report

In this study, the data used for the quality of e-government services is the E–Government Development Index (E-readiness score) from the United Nations E-Government Survey 2010. Thus, it is directly assumed that a government with high-quality e-government services will receive a high score, and vice versa. The remaining data is from the World Bank Data Catalog.

The following figure presents a box plot of the differences in the E–Government Development Index between developing and developed countries. The plot shows that developing countries have an average score of 0.4, while developed countries have an average score of about 0.7. Furthermore, all developing countries have scores below 0.7, while all developed countries have scores above 0.5. Thus, based on the United Nations E–Government Development Index, there is, not surprisingly, a significant difference between e-government services in developing and developed countries.


E-readiness score versus developing and developed countries.

Thus, the quality clearly depends on finances, but how much of the quality of e-government services is influenced by finances alone?

The development of government services is a complex process shaped by many factors, and there is no general conclusion about which factors influence the quality of government services. It is, however, possible to determine to what extent data on the financial situation of a country can be used to predict the e-readiness score.

The following graph plots the E–Government Development Index against GNI per capita. The graph also includes a regression, which can be used to estimate the E–Government Development Index from the GNI per capita alone.

A dotplot showing the trends between E-readiness and GNI per capita.

E-readiness versus GNI per capita

The trends in the data are clearly visible. The regression is shown as the black line, the mean response as a green dashed line, and the prediction interval as the blue dashed line.

The regression line (black) shows the relationship between the E–Government Development Index and GNI per capita. If no correlation existed between the two data sets, the line would be completely horizontal. The regression line can be used to predict the E–Government Development Index using only the GNI per capita. The graph shows that the relationship is not linear, but more complex.
The mean response interval (green dashed line) gives the estimated mean of the data.
The prediction interval (blue dashed line) gives the range where future data points are expected to be located (similar to a confidence interval).
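
As a rough sketch of how such a regression with mean response and prediction intervals could be computed (the file name, column names and log transform are assumptions made for the example, not necessarily what was used for the figure above):

    # Sketch: regress the E-Government Development Index on GNI per capita and
    # obtain mean response and prediction intervals. Data columns are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("un_egov_2010.csv")               # columns: country, egdi, gni_per_capita
    X = sm.add_constant(np.log(df["gni_per_capita"]))  # log transform: relationship is not linear
    fit = sm.OLS(df["egdi"], X).fit()

    frame = fit.get_prediction(X).summary_frame(alpha=0.05)
    print(frame[["mean", "mean_ci_lower", "mean_ci_upper",   # mean response interval
                 "obs_ci_lower", "obs_ci_upper"]].head())    # prediction interval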

The data shows that the mean response interval and the prediction interval change as the GNI per capita increases. Generally, we are more certain of the prediction when these intervals are small. From this we can draw the following conclusion: it is relatively easy to predict the E-readiness score when a country has a low GNI per capita, while predicting the E-readiness score from GNI per capita alone for wealthy countries is much less precise. In other words, lack of finances generally means low-quality services, while wealth alone is not sufficient to ensure quality in e-government.


A collaborative approach for improving local government web sites

July 30, 2010

A publication on how to facilitate collaboration between local government and vendors entitled Accessibility of eGovernment web sites: Towards a collaborative retrofitting approach (Nietzio, Olsen, Eibegger, Snaprud) has recently been published.

Changing a local government web site is often a long process which normally involves vendors, editors and specialists in local regulations and legal enforcement. Results from benchmarking studies are often good facilitators, but the results alone are of limited use when it comes to updates in practice. This is especially true if the web site updates are relatively small, such as removing accessibility barriers. Thus, the paper presents an approach for rapid accessibility updates of government web sites, using benchmarking results together with forums and online checkers.

Collaborative process between municipalities, vendors and eGovMon. Vendors and municipalities collaborate through the eGovMon forum and through physical discussions. eGovMon organizes workshops and seminars for vendors and municipalities respectively.

Collaboration process between municipalities, vendors and eGovMon

The approach, visualised in the figure above, is applied to a group of Norwegian municipalities that want to improve the accessibility of their web sites.

Accessibility benchmarking often fails to have an impact. This may be due to the following reasons:

  • The results are not detailed enough to be used for implementation purposes.
  • It is not clear in which part of the publication chain the problem is located (in the CMS, or introduced by the editor).
  • The maintainers do not have the technical knowledge to fix the problem.
  • The barriers are fixed in a one-off effort, but no quality process is in place to detect whether newly added content is inaccessible.
  • The benchmarking is carried out as a one-off study so that progress cannot be evaluated.

The presented approach includes three areas:

  1. Regular benchmarking reports: Bi-monthly benchmarking reports of all municipality web sites. In these reports the editors of the local web sites can see how any web site updates affect accessibility.
  2. Online accessibility checkers: An interactive environment where editors and developers can instantly check their web pages and web sites. This allows developers to incrementally remove accessibility barriers. (Blog post on Web Accessibility Checking)
  3. Online forum: It is not always clear where in the production chain an accessibility barrier is located. For example, when the logo of a web site is missing an alternative text, this is likely to be a problem caused by the CMS; but if an individual image in a document is missing an alternative text, it could be because the editor did not provide one. Such ambiguity can lead to situations where editors blame the CMS for accessibility problems, while the vendors claim that the editors are not using the CMS correctly. In the forum, editors can ask how a specific barrier should be fixed for a given CMS, and the vendors can reply.

This approach allows local web site editors to use e-government benchmarking results together with an online forum to fix accessibility issues with their web sites. Furthermore, the editors learn which issues they cannot fix themselves and which instead require updates to the CMS software or web site template. Even though this collaborative concept was applied to web accessibility barriers, it may be useful for other areas of local e-government as well.

(Full disclosure: I’m a co-author of the paper)