CESM: a Perceived Software Quality Assessment Tool

COMPUTER TECHNOLOGY INSTITUTE, 1999
TECHNICAL REPORT No. TR99/08/01

Abstract

This paper presents CESM, a tool that can easily be integrated into any quality department's toolkit. It also briefly discusses the external measurement philosophy, the techniques used for perceived software quality assessment and the overall software measurement methodology. Although the paper presents the CESM tool in some detail, emphasis is placed on how the tool has been integrated within our quality department's toolkit and, above all, on how CESM or similar tools could be used by any software measurement team. The aim of this paper is therefore twofold: first, to present the CESM tool and, second, to offer an external measurement approach that could be adopted by any quality assurance team. It also aims to demonstrate how this tool, or similar tools that may be designed and produced in the future, can aid along the difficult road of assessing and improving software quality.
1. Introduction
This paper presents the CESM tool. However, it is not limited to the presentation of this tool; it also touches on the wider aspects of software quality assessment and measurement methodology. CESM is a working acronym for ‘Collecting External Software Measurements’, and the tool does exactly what its name implies: it collects external software quality measurements aimed at aiding software quality assessment.
CESM is platform–independent and based on the World Wide Web. It uses a multipurpose, multilingual questionnaire that is structured around either the ISO 9126 [1] decomposition of software quality characteristics or the factors of the Factor–Criteria–Metrics model [2]. Using this weighted questionnaire, it facilitates a perceived software quality survey for a specific product. The survey is multilingual and is conducted over the Web. The tool offers the author (the person responsible for conducting the survey) the ability to revise the questionnaire and to change each characteristic’s weight, to handle users’ logins and responses, to store the results in a database and to interoperate with the internal measurement tools used for statistical analysis and data presentation.
Section 2 presents the use of surveys for collecting data to assess software quality. Section 3 presents the overall philosophy of using such a tool within a quality measurement framework. Section 4 presents CESM and its functionality. Finally, section 5 concludes the paper and discusses future goals and open problems.
2. Using Surveys for Perceived Software Quality Assessment
CESM is a soft measurement collection tool. ‘Soft measures’ are defined as measures of what Jones [3] calls ‘soft data’. Such data relate to areas in which human opinions must be evaluated and, since human opinions vary, absolute precision cannot be achieved for soft data. With respect to Stevens’ [4] measurement scales, soft measurements are initially on an ordinal scale but, as will be shown in the following sections, they can be mapped to interval or ratio scales.
In contrast, hard measurements (measures of things that can be quantified without subjectivity) are, in most cases, on a ratio scale. However, the problem with hard measurements is the difficulty of interpreting the results and their inability to lead directly to conclusions about higher–abstraction factors of software quality. An obvious example of this problem is the design of a hard metric to measure usability, which is one of the six ISO 9126 quality characteristics and one of the eleven Factor–Criteria–Metrics quality factors.
Furthermore, as our previous research indicates [5], success in hard internal measures does not a priori guarantee success in fulfilling customers’ demand for quality. Programs that score well on hard measures may not receive the same acknowledgement from customers (as indicated by soft measurements).
A large number of hard metrics have been proposed and a good collection of such metrics can be found in the literature, from the earlier books [6] to the latest [7] and the specialised ones [8]. Moreover, many new hard metrics are proposed every year, mainly aimed at satisfying needs arising from new programming styles and techniques; the massive adoption of object–oriented programming, for instance, stirred research into new metrics for object–oriented code. For the majority of these metrics, tools have been proposed to automate the measurements, and more will continue to appear every year.
Although the research area of hard measures and tools is very active, the same activity has not emerged in the area of soft measures. Probably the main reason for this is that the measurable elements are somewhat hidden and the measuring method is based on subjective data. However, the collection of soft data is not only encouraged by most software quality methodologies but also required by the vast majority of standards: ISO 9001 [9], the Capability Maturity Model [10] and the Baldrige Award [11] all emphasise the collection of soft data.
Such soft measurements are collected by means of surveys. As argued by Kaplan [12], surveys allow one to focus on just the issues of interest since they offer complete control of the questions being asked. Furthermore, surveys are quantifiable and therefore are not only indicators in themselves, but also allow the application of more sophisticated analysis techniques appropriate to organisations with higher levels of quality maturity.
Moreover, the use of soft measures is essential not only because the results can be used immediately within a quality programme but also because, without them, one cannot rely on hard measurements alone: soft measurements must be used to calibrate the hard metrics tools. The arguments presented above do not suggest abandoning hard measures, or using only soft measures; indeed, it was in our labs that one of the pioneering tools, ‘Athena’ [13], was designed and that hard measurement methodologies were proposed and applied [14]. Our suggestion is to conduct both hard and soft measurements and to measure soft data in as rigorous, systematic and automated a manner as possible. The CESM tool facilitates the use of soft measures and the collection of such data, and thereby aids in addressing software quality globally.
3. The CESM Tool’s Philosophy
CESM uses a set of customer perceived quality measurement techniques [15] in order to effectively measure soft data and to minimise subjectivity and errors.
The questionnaire’s author has the choice of selecting either the QWCO (Qualifications Weighed Customer Opinion) technique, which is computed using the formula shown in equation (1), or the QWCODS (Qualifications Weighed Customer Opinion with Double Safeguards) technique, which is computed using the formula shown in equation (2).
\[
\mathrm{QWCO} \;=\; \frac{\sum_{i=1}^{n} E_i\, O_i}{\sum_{i=1}^{n} E_i} \qquad (1)
\]

\[
\mathrm{QWCODS} \;=\; \frac{\sum_{i=1}^{n} P_i\, \dfrac{S_i}{S_T}\, E_i\, O_i}{\sum_{i=1}^{n} P_i\, \dfrac{S_i}{S_T}\, E_i} \qquad (2)
\]
The aim of these techniques is to weight customers’ opinions according to their qualifications. To achieve this, ‘Oi’ is the normalised score of the opinion of customer ‘i’, ‘Ei’ measures the qualifications of customer ‘i’, and ‘n’ is the number of customers who participated in the survey. In order to detect errors, we use a number of safeguards embedded in the questionnaires. A safeguard is defined as a question placed inside the questionnaire in order to check the correctness of the responses.
In equation (2), ‘Si’ is the number of safeguards that customer ‘i’ has answered correctly, ‘ST’ is the total number of safeguards, and ‘Pi’ is a boolean variable which is zero when even a single error has been detected by these safeguards while measuring the qualifications of customer ‘i’.
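To make the two weighting schemes concrete, the following Python sketch computes QWCO and QWCODS as given in equations (1) and (2); the function names and the example data are our own illustrative assumptions and do not form part of the CESM tool.

```python
from typing import List

def qwco(opinions: List[float], qualifications: List[float]) -> float:
    # Qualifications Weighed Customer Opinion (equation (1)): each customer's
    # normalised opinion O_i is weighted by his qualification score E_i.
    numerator = sum(e * o for e, o in zip(qualifications, opinions))
    return numerator / sum(qualifications)

def qwcods(opinions: List[float], qualifications: List[float],
           safeguards_correct: List[int], safeguards_total: int,
           no_error_flags: List[int]) -> float:
    # Qualifications Weighed Customer Opinion with Double Safeguards (equation (2)):
    # customer i's weight is further scaled by the fraction of safeguards answered
    # correctly (S_i / S_T) and zeroed (P_i = 0) whenever the safeguards detected an
    # error while measuring his qualifications.
    weights = [p * (s / safeguards_total) * e
               for p, s, e in zip(no_error_flags, safeguards_correct, qualifications)]
    numerator = sum(w * o for w, o in zip(weights, opinions))
    return numerator / sum(weights)

# Hypothetical data for three customers: normalised opinions, qualification scores,
# safeguards answered correctly out of 4, and the P_i flags.
print(qwco([0.8, 0.6, 0.9], [2.0, 1.0, 3.0]))
print(qwcods([0.8, 0.6, 0.9], [2.0, 1.0, 3.0], [4, 3, 4], 4, [1, 1, 0]))
```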
The CESM tool has been integrated within our quality toolkit. Hard measurements were conducted using the previously mentioned Athena metrics environment and analysed using the QSUP environment [16]. Soft measurements were conducted periodically using the CESM tool and their results were used to predict [17] the customers’ belief revisions and to calibrate the hard measurement tools.
Figure 1: The CESM Architecture
The frequency of soft measurement collection is determined by analysing the previous results stored in the CESM database. Using belief revision techniques [18], we are then able to estimate the appropriate time to collect a new soft data sample.
4. The CESM Tool
The CESM tool comprises three components:
· QD (Questionnaire Designer), an application with which the author designs the questionnaire for each survey;
· WP (Web Page), the Web page of each questionnaire, which can be accessed by the customers participating in a survey;
· QSA (Questionnaire Statistical Analyser), an application which provides the author with the statistical results of the surveys.
Other basic parts of CESM are a database, CESM–DB, where the questionnaires and the customers’ answers are stored, and a Web server, where the Web pages of the questionnaires are published. Figure 1 illustrates the basic architecture of CESM.
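The division of responsibilities among these components could be outlined as follows; this is a minimal Python sketch with hypothetical interface and method names, not CESM’s actual code, intended only to record which component reads from or writes to the CESM–DB.

```python
from typing import List, Protocol

class CesmDb(Protocol):
    # Stand-in for the CESM-DB: stores questionnaires and customers' answers.
    def save_questionnaire(self, questionnaire: dict) -> str: ...
    def load_questionnaire(self, questionnaire_id: str) -> dict: ...
    def save_answers(self, questionnaire_id: str, customer_id: str, answers: List[float]) -> None: ...
    def load_answers(self, questionnaire_id: str) -> List[List[float]]: ...

class QuestionnaireDesigner:
    # QD: runs on the author's PC and writes questionnaires into the database.
    def __init__(self, db: CesmDb) -> None:
        self.db = db

class WebPage:
    # WP: published on the Web server; reads the questionnaire from the database
    # and writes the customers' answers back into it.
    def __init__(self, db: CesmDb) -> None:
        self.db = db

class QuestionnaireStatisticalAnalyser:
    # QSA: runs on the author's PC and reads the stored answers for statistical analysis.
    def __init__(self, db: CesmDb) -> None:
        self.db = db
```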
4.1 Questionnaire Designer
The QD is a ‘wizard’–like front–end application with which the author responsible for the survey designs a questionnaire for a specific product. This component is installed only on the author’s PC and is directly connected to the CESM database. Although the QD interface is currently available only in Greek and English, questionnaires can be designed in any language, provided that the author declares the language of the questionnaire.
The first step when setting up a new questionnaire in QD is to specify which user–oriented quality characteristics, derived from ISO 9126 (or from the quality factors of the FCM model), will be dealt with. The questions of the questionnaire must be clustered into groups according to the quality characteristic to which they refer (figure 2). Each group of questions can be given a different weight, depending on the emphasis the author places on the corresponding quality characteristic.
The second step is to compose the questions for each group and to define the weight of each question within its group, according to its significance. The questions must be in multiple–choice format, so as to guide the customer towards predefined responses that can be ordered on interval scales (with choice bars, percentage estimations, etc.). Finally, the third step is to determine the multiple–choice responses to each question and to define their values.
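The questionnaire structure that the author builds in these three steps could be modelled roughly as follows; the class and field names are hypothetical, intended only to illustrate the groups, weights and response values described above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Response:
    text: str          # predefined multiple-choice answer shown to the customer
    value: float       # value assigned to this response by the author

@dataclass
class Question:
    text: str
    weight: float      # significance of the question within its group
    responses: List[Response] = field(default_factory=list)

@dataclass
class CharacteristicGroup:
    characteristic: str  # e.g. an ISO 9126 characteristic such as "Usability"
    weight: float        # emphasis the author places on this characteristic
    questions: List[Question] = field(default_factory=list)

@dataclass
class Questionnaire:
    product: str
    language: str
    groups: List[CharacteristicGroup] = field(default_factory=list)
```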
CESM allows a questionnaire already stored for one product to be reused for other products of the software company, as long as it is clearly declared for which product each customer is answering. Furthermore, a questionnaire stored in the database can easily be retrieved in order to be revised, or to have its characteristics’ weights, its questions’ weights or its responses’ values changed. Finally, the QD supports a cloning function, so that an already stored questionnaire can be republished.
Figure 2: A form from CESM Questionnaire Designer
4.2 Web Page
After a questionnaire has been designed, it is automatically published to the software company’s Web server. The scripts responsible for producing the questionnaire’s Web page are platform–independent, i.e. they are portable to any kind of Web server and the page they produce can be accessed by any Web browser. The customers of the software product are then informed of the existence of the questionnaire and are requested to answer it, so that their opinion of the product’s quality can be measured.
In order to access the Web page of a questionnaire, each customer’s identity must be authenticated, not only to handle logins and responses but also to prevent the questionnaire from being answered by unauthorised persons. After a successful login, the appropriate data are drawn from the database (i.e. the questions of each group and their predefined responses) and the questionnaire page is automatically created.
The customer is requested to choose one response for every question. Apart from the predefined responses, which are presented in descending order, every question is accompanied by a blank field where the customer can give his own answer if none of the suggested responses satisfies him. In this case he must also rate his answer by choosing one of the predefined responses. An example of a questionnaire’s Web page is illustrated in figure 3.
Figure 3: An example of a Web page
After all the questions have been answered, the login information, the access date and the responses given are stored in the database. When a survey is conducted to measure the quality of a specific product, each customer can answer the relevant questionnaire only once. If the author decides to repeat the survey after a justifiable period of time, he must explicitly grant the customer permission to access the questionnaire’s Web page more than once.
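The answering flow described in this section, from authentication to the storage of responses and the one–answer–per–survey rule, could be sketched as follows; the function names and the in–memory stand–ins for the CESM–DB tables are assumptions made for illustration, since the paper does not specify the actual implementation.

```python
from datetime import date
from typing import Dict, List, Tuple

# In-memory stand-ins for the relevant CESM-DB tables (hypothetical structure).
answers_db: Dict[Tuple[str, str], dict] = {}  # (customer_id, questionnaire_id) -> stored record
repeat_permissions: set = set()               # pairs explicitly allowed by the author to answer again

def authenticate(customer_id: str, password: str) -> bool:
    # Placeholder check: CESM's actual authentication mechanism is not described in detail.
    return bool(customer_id and password)

def submit_answers(customer_id: str, password: str,
                   questionnaire_id: str, responses: List[float]) -> bool:
    # Authenticate the customer, enforce the one-answer-per-survey rule and store the
    # responses together with the access date.
    if not authenticate(customer_id, password):
        return False
    key = (customer_id, questionnaire_id)
    if key in answers_db and key not in repeat_permissions:
        return False  # each customer may answer a given questionnaire only once
    answers_db[key] = {"access_date": date.today().isoformat(), "responses": responses}
    repeat_permissions.discard(key)  # a granted permission is consumed by the repeated answer
    return True
```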
4.3 Questionnaire Statistical Analyser
The third component of CESM is the QSA, an application installed only on the author’s PC which provides him with the statistical results of the surveys. It is directly connected to the CESM database, from which the customers’ responses are retrieved. When using the QSA, the author must first choose a product for which a survey has already been conducted, the questionnaire that was used and the time period during which the survey was conducted (in the case of multiple surveys with the same questionnaire for the same product).
The basic functions implemented in QSA are the computation of the formulas shown in equations (3) and (4), as well as of QWCO and QWCODS. The formula GOCi (Group Opinion of Customer ‘i’) measures the opinion of a single customer ‘i’ on the quality of the product, according to the user–oriented software quality characteristics. In equation (3), ‘m’ is the number of questions for this quality characteristic in the questionnaire, ‘Qj’ is the weight given to question ‘j’ and ‘Vj’ is the value of the response that the customer selected.
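Assuming that the group score is normalised by the total weight of the questions in the group, a plausible form of GOCi, based only on the definitions above, would be:

\[
\mathrm{GOC}_i \;=\; \frac{\sum_{j=1}^{m} Q_j\, V_j}{\sum_{j=1}^{m} Q_j}
\]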