eGovMoNet
eGovernment Monitor Network
Project no.: CIP 224998
Template to describe and evaluate the user satisfaction & impact measurement practice 1.1
Deliverable Number: D 1.5
Version: 1.2
Date: 2009-02-04
Editors: Mikael Snaprud
Authors: Kim V. Andersen, Lasse Berntzen, Miriam Braskova, Rudolph Brynn, Luca Caldarelli, Rune Halvorsen, Xavier Heymans, Stephen Jenner, Evika Karamagioli, Bernadett Köteles, Anton Lavrin, Barbara Lorincz, Nacho Madrid, Christine Mahieu, Chiara Mancini, Jeremy Millard, Annika Nietzio, Uros Pivk, Roberto Pizzicannella, Michela Pollone, Peter Röthig, Jenny Rowley, Agata Sawicka, Mikael Snaprud, Christophe Strobbe, Christian Thomsen, Andrea Velazquez, Eric Velleman, Eleni Vergi, Charlie Wallin, Patrick Wauters and Diane Whitehouse.
Dissemination Level: Public
Status: Release Candidate
License:
This work is licensed under the Creative Commons Attribution-ShareAlike License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/2.5/ or send a letter to Creative Commons, 543 Howard Street, 5th Floor, San Francisco, California, 94105, USA.
This document consists of 20 pages, including this cover (2 pages).
Abstract
This template is intended to support the description of eGovernment measurement methods in order to share current practices and the experience of deploying them. Using the template, the features the methods have in common and the differences among them can be identified, supporting steps towards converging practices in a collaborative process.
More comparable eGovernment measurement results can support the search for good practices across a larger set of measured cases.
Version Control
Version / Status / Date / Change / Editors
0.1 / DRAFT / 2008-08-04 / First draft / Mikael Snaprud and Andrea Velazquez
0.2 / DRAFT / 2008-08-20 / Draft with comments from partners: Stephen Jenner, Chiara Mancini, Christine Mahieu, Miriam Braskova, Anton Lavrin. / Mikael Snaprud and Morten Goodwin Olsen
0.3 / DRAFT / 2008-09-01 / Draft with comments from partners: Peter Röthig, Evika Karamagioli. Some sections needing more clarification are kept in yellow. / Mikael Snaprud and Andrea Velazquez
0.4 / DRAFT / 2008-09-16 / Draft with comments from partners: Nacho Madrid, Annika Nietzio and Jenny Rowley / Mikael Snaprud and Andrea Velazquez
1.0 / RC / 2008-09-16 / Polish to produce release candidate / Andrea Velazquez
1.1 / DRAFT / 2009-01-27 / Update according to comments discussed in the Copenhagen meeting, including those from Jeremy Millard, Rune Halvorsen and Lasse Berntzen. / Mikael Snaprud and Andrea Velazquez
1.2 / RC / 2009-02-04 / Update according to comments from Annika Nietzio / Mikael Snaprud and Andrea Velazquez
Table of Contents
Introduction
General Information
Evaluation of implementations
Context
A. General properties of the measurement methodology
B. Intended use of measurement results
1. Who are the intended users of the measurement results?
2. What are the measurements for?
C. Deployment properties
1. Who carries out the measurement?
2. How is the measurement carried out?
3. Measurement coverage: subset or exhaustive sampling
4. Where to measure?
5. What to test - Measurement subject?
6. How to test?
7. When to test?
8. Score calculation and statistics used
9. Reporting of the evaluation results
10. How to interpret the results
D. Experience from using the methodology
E. Maintenance of Measurement Method
F. Other important properties not included in the template
APPENDIX A
APPENDIX B
Introduction
While as many as 15 Member States may already have adopted an eGovernment measurement framework, there is no consistent way to describe these frameworks.[1] The objective of developing a template is to facilitate the comparison of descriptions of eGovernment measurement frameworks, including the methods and tools used, the practices followed, and the implementation stage and challenges.
The template will be used to describe and share current practices and the experience of deploying them. In this way, the common features and the differences among the methods can be identified in a collaborative way, supporting convergence of practices.
The template will be improved by incorporating the main issues discovered when it is used for descriptions and comparisons in the network meetings. These results will be actively disseminated among relevant stakeholders, also beyond the network consortium. In this way, eGovMoNet will contribute to the coordination and harmonization of national measurement initiatives.[2]
When filling in the form below, please provide references (links) to further information on the method properties and, where available, examples such as questionnaires or lists of tests.
We may also try to cover what is useful for service providers to measure.
General Information
- Name and country/countries/organisation(s) of the measurement methodology.
- Does the measurement cover evaluation of implemented solutions and/or support strategic decisions related to eGovernment?
- What is the purpose of making measurements?
· To deliver better services?
· To improve the efficiency of governments?
· To save money?
Evaluation of implementations
- Examples of services that can be covered by the measurement methodology. See also Appendix B.
- Support for strategic decisions; please give examples, e.g. the selection of projects to invest in.
- Nature of the evaluation result. Please indicate the outcome of the measurement, such as an estimated amount saved, a citizen take-up ratio, a score for the level of a measured value, or some other form of result.
Context
Please include an overview setting out the context for the impact or user satisfaction measurement.
- Limitations of the scope of the methodology.
- Can it, for example, be used to determine the user satisfaction of both citizens and businesses?
- Is the method only applicable to a certain group of citizens (e.g. those who have internet access)?
- How are eGovernment service users involved?
- Is there a channel in place for suggesting improvements to services?
A separate template should, if necessary, be completed for each eGovernment service covered (and the template should identify what that service is).
The template should report which eGovernment services are covered by the measurement methodology.
A. General properties of the measurement methodology
A.1. Repeatability – is there a way in the methodology to quantify to what extent two independent measurements of the same measurement object will lead to the same result? (A sketch of one way to quantify this is given at the end of this section.)
A.2. Independence of size – to what extent can the measurement method cope with measuring eGovernment applications of different sizes? Does the method have a scale factor that allows comparison of different results? For example, is there a way to measure both a small service for a garage building permit and a service for a shopping centre building permit with the same methodology?
A.3. Estimated cost of using the methodology
· Monetary, e.g. license fees for needed software or proprietary parts of the methodology.
· Resources needed to carry out measurements in terms of person hours.
· Other resources needed to carry out the measurement.
· Note that the cost may also depend on whether the measurements can be carried out within the organisation or whether a third party has to be commissioned.
A.4. Cost effectiveness – the relation between how much the measurement costs and what impact it has.
A.5. Prerequisites to carry out and use the measurement
· involvement and motivation needed
· understandability for the audience: is there any evidence that the results from the measurements can be communicated efficiently?
A.6. Stability of the methodology over time, to allow comparison of measurement results from different points in time.
A.7. Accuracy – does the method give any indication of how accurate the measurement results are?
A.8. Degree of standardization – does the method refer to established standards, such as the WAI guidelines from W3C?
A.9. Number and type of administrations using the method, preferably with a list of them.
A.10. Multidimensionality (multi or uni)? See details in Appendix A.
A.11. Cause-and-effect analysis as a basis for the indicators; relevance: is there evidence that the methodology actually measures e.g. user satisfaction?
A.12. The role of the person in charge of using the measurement results within the government agency responsible for the measured subject.
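To illustrate A.1, here is a minimal sketch, in Python, of one way to quantify repeatability, assuming the methodology yields a numeric score per measured object. The scores and the agreement measure (mean absolute difference) are illustrative assumptions, not part of any particular methodology.

    # Minimal sketch (assumption: each measurement run yields one numeric
    # score in [0, 1] per measured object, e.g. per web page or service).
    from statistics import mean

    def repeatability(run_a, run_b):
        # Mean absolute difference between two runs: 0.0 means perfect agreement.
        assert len(run_a) == len(run_b), "runs must cover the same sample"
        return mean(abs(a - b) for a, b in zip(run_a, run_b))

    run_1 = [0.80, 0.75, 0.90, 0.60]  # hypothetical scores, first independent run
    run_2 = [0.82, 0.70, 0.90, 0.65]  # hypothetical scores, second independent run
    print(f"mean absolute difference: {repeatability(run_1, run_2):.3f}")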
B. Intended use of measurement results
B.1. Is the methodology designed to promote targeted change? If so, please specify at which level:
· Implementation
· Organisational procedures
· Policy
B.2. Is the measurement intended to support strategic decisions, e.g. the selection of projects to invest in?
1. Who are the intended users of the measurement results?
Please select among the given target groups and indicate the expected use of the measurement results.
B.1.1. Policy makers, e.g. to facilitate the evaluation of a given policy.
B.1.2. Service providers, e.g. to provide evaluation services.
B.1.3. Software vendors, e.g. to support product profiling and comparisons.
B.1.4. Developers, e.g. to support quality assurance of an online service.
B.1.5. Web site owners, e.g. to support quality assurance.
B.1.6. Researchers, e.g. to identify good practice.
B.1.7. NGOs, e.g. to monitor given properties of interest such as accessibility or transparency.
B.1.8. Other; please specify.
2. What are the measurements for?
B.2.1. Identify good practice.
B.2.2. Inform and learn (Benchlearning).
B.2.3. Measure impact of policies.
B.2.4. Assess current status.
B.2.5. Strategic or operational, e.g. inclusion, efficiency, etc.
B.2.6. Justify expenditure.
B.2.7. Determine user satisfaction: to identify users' specific needs in order to customize the services to fit those needs in the future; to obtain better knowledge of citizens' attitudes, opinions, expectations, habits, perceptions and satisfaction levels with the delivery of public services; to identify users' perceptions of the quality of the services provided, the perceived institutional reputation and credibility, etc.
B.2.8. Evaluate current status in order to
· take corrective actions
· identify bottlenecks
· prioritize improvements
· support continuous improvement
B.2.9. Marketing.
B.2.10. Community building.
B.2.11. NGOs.
C. Deployment properties
The description of the following properties should enable a practitioner to understand the basics of how the methodology is to be used.
1. Who carries out the measurement?
C.1.1. Independence of measurement subject
· Does the methodology require independence between those who carry out measurement and the measurement subject (Third party organisation carries out measurement)?
C.1.2. Number of internal staff members involved in the measurement for each public institution/service.
C.1.3. Number of external persons involved in the measurement for each public institution/service.
C.1.4. Training requirements for evaluators (do they need any certification?)
2. How is the measurement carried out?
C.2.1. Automatic; please refer to the tool(s) used.
C.2.2. Manual.
· Feedback from eGovernment service users.
· Experts.
C.2.3. A combination of automatic and manual, please specify.
C.2.4. Online; please refer to the method(s) used, e.g. online questionnaires, crawler technologies, etc. (A minimal sketch of an automatic online check follows this list.)
C.2.5. Offline; please refer to the method(s) used, e.g. phone calls, face-to-face interviews or focus groups.
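As an illustration of C.2.1/C.2.4 (automatic, online measurement), the sketch below fetches one page and runs a single illustrative check: counting images without alt text, loosely in the spirit of accessibility testing. The URL and the chosen check are hypothetical examples, not prescribed by the template.

    # Minimal sketch of an automatic online check using only the Python
    # standard library. The URL and the chosen check are illustrative.
    from urllib.request import urlopen
    from html.parser import HTMLParser

    class AltTextChecker(HTMLParser):
        # Counts <img> tags and how many of them lack a non-empty alt attribute.
        def __init__(self):
            super().__init__()
            self.images = 0
            self.missing_alt = 0

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                self.images += 1
                if not dict(attrs).get("alt"):
                    self.missing_alt += 1

    html = urlopen("http://example.org/").read().decode("utf-8", errors="replace")
    checker = AltTextChecker()
    checker.feed(html)
    print(f"{checker.missing_alt} of {checker.images} images lack alt text")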
3. Measurement coverage: subset or exhaustive sampling
For example, are all pages of the website evaluated, or are all citizens of e.g. a municipality interviewed? If not, how is the sampling carried out? (A sketch of an accuracy-based stop criterion is given after this list.)
C.3.1. Manual / Automatic sampling.
C.3.2. Which sampling approaches are used?
· Sequence of steps in scenarios for end users, e.g. an online service to apply for a kindergarten place.
· Stop criteria based on achieved accuracy of measurement result.
· Other stop criterion to end the sampling.
C.3.3. How are interviewees/respondents for a survey/questionnaire selected?
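As an illustration of a stop criterion based on achieved accuracy (C.3.2), the sketch below draws samples until the 95% confidence interval of the mean score is narrower than a target margin. The simulated population, the margin and the minimum sample size are illustrative assumptions, not part of the template.

    # Minimal sketch: sample until the 95% confidence interval of the mean
    # score is narrower than the target margin. All numbers are illustrative.
    import random
    from statistics import mean, stdev

    random.seed(1)
    population = [random.random() for _ in range(10_000)]  # hypothetical scores

    target_margin = 0.05          # stop once the mean is known to +/- 0.05
    sample = []
    for score in random.sample(population, len(population)):
        sample.append(score)
        if len(sample) >= 30:     # minimum sample size before testing the criterion
            margin = 1.96 * stdev(sample) / len(sample) ** 0.5
            if margin < target_margin:
                break
    print(f"sampled {len(sample)} items, mean {mean(sample):.2f} +/- {margin:.3f}")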
4. Where to measure?
C.4.1. Scope: in which constituency is the method used (organisation, country, region, etc.)?
5. What to test - Measurement subject?
C.5.1. Web sites:
· Static or informational web pages, PDFs, or documents
· Downloadable documents or forms (including forms the user must send in or e-mail)
· Search engines or functions providing searches on static web pages or documents
· Intranet services or applications
· Forms submitted online requesting information
· Submitting a complaint, or a similar function
C.5.2. eGovernment web services, such as those covered in the CapGemini 20 services report or the Top of the Web – User Satisfaction and Usage Survey of eGovernment services. Please provide a link to the service.
C.5.3. Public Access Terminals.
C.5.4. Further channels like RSS, SMS etc. (e.g. are there other options to use the service for a citizen who can't / doesn't want to use the eGovernment channel?)
C.5.5. Users: multi-perspective consideration, single expert consideration or web service users.
C.5.6. Enterprises
C.5.7. Projects
C.5.8. Project proposals
C.5.9. Project beneficiaries
C.5.10. Case study: ePractice
C.5.11. Public procurement process
C.5.12. Technical back office
· Log from back office software, such as web logs
· Re-use of input from users, to avoid users needing to access several web sites and repeatedly fill out form fields with the same information.
· Other subjects?
6. How to test?
C.6.1. Definition of tests, e.g. http://www.wabcluster.org/uwem1_2/UWEM_1_2_TESTS.pdf
C.6.2. Definition of questionnaires and their questions, preferably a reference to the actual questions.
C.6.3. Procedure for selective deployment of tests / questions e.g. depending on the outcome of previous results.
7. When to test?
C.7.1. Once / repeated
· Is the measurement carried out before the event (e.g. implementation of an eGov service), while it is being used, or after it has been used (ex ante,[3] in between, or ex post[4])?
C.7.2. Measurement frequency (periodicity)
C.7.3. Timing of measurement: external trigger (event-driven), e.g. a new service, a complaint, an error, elections, tax declaration.
C.7.4. Time frame
· duration of measurement: one/all
· time span to cover
C.7.5. What part of the value chain is being measured?
· Inputs
· Outputs
· Impact of the services
8. Score calculation and statistics used
C.8.1. Indicator
· Qualitative
· Quantitative.
· Please indicate the nature of the quantitative information, such as financial (e.g. a value in euros), a count (e.g. the number of submitted forms), a score computed according to a given calculation, or some other number.
C.8.2. How to compute the score.
C.8.3. How to aggregate scores. (A sketch of a simple weighted aggregation is given at the end of this section.)
C.8.4. Statistical properties.
C.8.5. Comparisons, e.g. time series or needs and wants/actual perception.
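As an illustration of C.8.2 and C.8.3, the sketch below computes a weighted score per service and then aggregates across services with an unweighted mean. The criteria, weights and services are hypothetical; an actual methodology would define its own calculation.

    # Minimal sketch of score computation (per service) and aggregation
    # (across services). Criteria, weights and services are hypothetical.
    criterion_weights = {"availability": 0.3, "usability": 0.4, "satisfaction": 0.3}

    def service_score(results):
        # Weighted average of per-criterion scores, each in [0, 1].
        return sum(criterion_weights[c] * v for c, v in results.items())

    services = {
        "building_permit": {"availability": 0.9, "usability": 0.6, "satisfaction": 0.7},
        "tax_declaration": {"availability": 0.8, "usability": 0.8, "satisfaction": 0.9},
    }
    scores = {name: service_score(r) for name, r in services.items()}
    overall = sum(scores.values()) / len(scores)  # unweighted aggregate
    print(scores, f"overall: {overall:.2f}")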
9. Reporting of the evaluation results
C.9.1. Scorecards, classifications, etc.