Chapter 15
Product Metrics for Software
CHAPTER OVERVIEW AND COMMENTS
This chapter discusses the use of software measurement and metrics as a means of helping to assess the quality of software engineering work products. Most of the metrics discussed in this chapter are not difficult to compute.
15.1 Software Quality
This section defines software quality as:
Conformance to the explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed S/W.
This implies the existence of a set of standards used by the developer and customer expectations that a product will work well. Conformance to implicit requirements (e.g. ease of use and reliable performance) is what sets software engineering apart from simply writing programs that work most of the time. Several sets of software quality factors are described.
The definition serves to emphasize three important points:
- S/W reqs are the foundation from which quality is measured. Lack of conformance to reqs is lack of quality.
- Specified standards define a set of development criteria that guide the manner in which S/W is engineered. If the criteria are not followed, lack of quality will almost surely result.
- There is a set of implicit reqs that often goes unmentioned. If S/W conforms to its explicit reqs but fails to meet implicit reqs, S/W quality is suspect.
15.1.1 McCall’s Quality Factors
McCall’s quality factors were proposed in the early 1970s. They are as valid today as they were then. It’s likely that software built to conform to these factors will exhibit high quality well into the 21st century, even if there are dramatic changes in technology.
The factors that affect S/W quality can be categorized in two broad groups:
- factors that can be directly measured (e.g., defects uncovered during testing)
- factors that can be measured only indirectly (e.g., usability and maintainability)
McCall’s S/W quality factors focus on three important aspects of a S/W product:
- Its operational characteristics
- Its ability to undergo change
- Its adaptability to new environments
Referring to these factors, McCall and his colleagues provide the following descriptions:
Correctness: The extent to which a program satisfies its specs and fulfills the customer’s mission objectives.
Reliability: The extent to which a program can be expected to perform its intended function with required precision.
Efficiency: The amount of computing resources and code required by a program to perform its function.
Integrity: The extent to which access to S/W or data by unauthorized persons can be controlled.
Usability: The effort required to learn, operate, prepare input for, and interpret output of a program.
Maintainability: The effort required to locate and fix errors in a program.
Flexibility: The effort required to modify an operational program.
Testability: The effort required to test a program to ensure that it performs its intended function.
Portability: The effort required to transfer the program from one hardware and/or software system environment to another.
Reusability: The extent to which a program can be reused in other applications; this is related to the packaging and scope of the functions that the program performs.
Interoperability: The effort required to couple one system to another.
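Most of these factors cannot be measured directly; in practice a factor is graded indirectly by combining scores from lower-level metrics with weights, Fq = c1*m1 + c2*m2 + ... + cn*mn. The Python sketch below is a minimal, hedged illustration of that weighted-sum grading; the function name, weights, and scores are assumptions for illustration, not values from the text.

```python
# A minimal sketch, assuming the weighted-sum scheme commonly associated with
# McCall's factors: a factor grade F_q is computed as sum(c_n * m_n), where the
# c_n are weights (regression coefficients) and the m_n are scores for the
# low-level metrics that affect the factor. Weights and scores are illustrative.

def quality_factor_grade(weights, metric_scores):
    """Combine low-level metric scores into a single quality factor grade."""
    return sum(c * m for c, m in zip(weights, metric_scores))

# Hypothetical example: grade maintainability from three metrics scored 0..10,
# with weights that sum to 1.
print(round(quality_factor_grade([0.5, 0.3, 0.2], [7, 8, 6]), 2))  # 7.1
```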
15.1.2 ISO 9126 Quality Factors
ISO 9126 is an international standard for the evaluation of software quality.
- Functionality - A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.
- Suitability
- Accuracy
- Interoperability
- Compliance
- Security
- Reliability - A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
- Maturity
- Recoverability
- Usability - A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
- Learnability
- Understandability
- Operability
- Efficiency - A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
- Time Behavior
- Resource Behavior
- Maintainability - A set of attributes that bear on the effort needed to make specified modifications.
- Stability
- Analyzability
- Changeability
- Testability
- Portability - A set of attributes that bear on the ability of software to be transferred from one environment to another.
- Installability
- Replaceability
- Adaptability
15.2 A Framework for Technical Software Metrics
General principles for selecting product measures and metrics are discussed in this section. The generic measurement process activities parallel the scientific method taught in natural science classes (formulation, collection, analysis, interpretation, feedback).
If the measurement process is too time-consuming, no data will ever be collected during the development process. Metrics should be easy to compute or developers will not take the time to compute them.
The tricky part is that, in addition to being easy to compute, the metrics must be perceived as important for predicting whether product quality can be improved.
15.2.1 Measures, Metrics and Indicators
- A measure provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process.
- The IEEE glossary defines a metric as “a quantitative measure of the degree to which a system, component, or process possesses a given attribute.”
- An indicator is a metric or combination of metrics that provides insight into the software process, a software project, or the product itself.
15.2.3 Measurement Principles
- The objectives of measurement should be established before data collection begins;
- Each technical metric should be defined in an unambiguous manner;
- Metrics should be derived based on a theory that is valid for the domain of application (e.g., metrics for design should draw upon basic design concepts and principles and attempt to provide an indication of the presence of an attribute that is deemed desirable);
- Metrics should be tailored to best accommodate specific products and processes.
Measurement Process
- Formulation. The derivation of software measures and metrics appropriate for the representation of the software that is being considered.
- Collection. The mechanism used to accumulate data required to derive the formulated metrics.
- Analysis. The computation of metrics and the application of mathematical tools.
- Interpretation. The evaluation of metrics results in an effort to gain insight into the quality of the representation.
- Feedback. Recommendations derived from the interpretation of product metrics transmitted to the software team.
S/W metrics will be useful only if they are characterized effectively and validated so that their worth is proven.
- A metric should have desirable mathematical properties.
- When a metric represents a S/W characteristic that increases when positive traits occur or decreases when undesirable traits are encountered, the value of the metric should increase or decrease in the same manner.
- Each metric should be validated empirically in a wide variety of contexts before being published or used to make decisions.
15.2.4 Goal-Oriented Software Measurement
- The Goal/Question/Metric Paradigm
- (1) establish an explicit measurement goal that is specific to the process activity or product characteristic that is to be assessed
- (2) define a set of questions that must be answered in order to achieve the goal, and
- (3) identify well-formulated metrics that help to answer these questions.
A goal definition template can be used to define each measurement goal.
- Goal definition template
- Analyze {the name of activity or attribute to be measured}
- for the purpose of {the overall objective of the analysis}
- with respect to {the aspect of the activity or attribute that is considered}
- from the viewpoint of {the people who have an interest in the measurement}
- in the context of {the environment in which the measurement takes place}.
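To make the template concrete, the sketch below records one hypothetical Goal/Question/Metric instance as a plain data structure; the project, questions, and metrics named in it are illustrative assumptions, not examples from the text.

```python
# A minimal sketch of one Goal/Question/Metric record, filled in with a
# hypothetical example (project, questions, and metrics are illustrative).

gqm_record = {
    "goal": {
        "analyze": "the analysis (requirements) model of a small web application",
        "for the purpose of": "evaluating the completeness of the model",
        "with respect to": "defects uncovered during requirements review",
        "from the viewpoint of": "the software engineers performing the review",
        "in the context of": "a three-person in-house development project",
    },
    "questions": [
        "Are all user-visible functions represented in the model?",
        "How many requirements are ambiguous or unverifiable as stated?",
    ],
    "metrics": [
        "functions in the model vs. functions named in the requirements list",
        "count of requirements flagged as ambiguous during review",
    ],
}

# Each question should trace back to the goal, and each metric should help
# answer at least one question.
for question in gqm_record["questions"]:
    print("Q:", question)
```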
15.2.5 The Attributes of Effective S/W Metrics
- Simple and computable. It should be relatively easy to learn how to derive the metric, and its computation should not demand inordinate effort or time
- Empirically and intuitively persuasive. The metric should satisfy the engineer’s intuitive notions about the product attribute under consideration
- Consistent and objective. The metric should always yield results that are unambiguous.
- Consistent in its use of units and dimensions. The mathematical computation of the metric should use measures that do not lead to bizarre combinations of units.
- Programming language independent. Metrics should be based on the analysis model, the design model, or the structure of the program itself.
- An effective mechanism for quality feedback. That is, the metric should provide a software engineer with information that can lead to a higher quality end product
15.3 Metrics for the Analysis Model
Collection and Analysis Principles
- Whenever possible, data collection and analysis should be automated;
- Valid statistical techniques should be applied to establish relationships between internal product attributes and external quality characteristics;
- Interpretative guidelines and recommendations should be established for each metric
Analysis Metrics
- Function-based metrics: use the function point (FP) as a normalizing factor or as a measure of the “size” of the specification. FP can be used to:
- Estimate the cost required to design, code, and test
- Predict the number of errors that will be encountered during testing
- Forecast the number of components and/or the number of projected source lines in the implemented system.
- Specification metrics: used as an indication of quality by measuring number of requirements by type
- The function point metric (FP), first proposed by Albrecht [ALB79], can be used effectively as a means for measuring the functionality delivered by a system.
- Function points are derived using an empirical relationship based on countable (direct) measures of software's information domain and assessments of software complexity
- Information domain values are defined in the following manner:
- number of external inputs (EIs)
- number of external outputs (EOs)
- number of external inquiries (EQs)
- number of internal logical files (ILFs)
- number of external interface files (EIFs)
Computing Function Points
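The standard computation is FP = count_total x [0.65 + 0.01 x sum(Fi)], where count_total is the weighted sum of the five information domain counts and F1..F14 are value adjustment factors, each rated 0 (no influence) to 5 (essential). The sketch below is a minimal illustration that assumes the commonly used "average" complexity weights; the example counts are hypothetical.

```python
# A minimal sketch of the function point computation, assuming the commonly
# used "average" complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7) and
# fourteen value adjustment factors F1..F14, each rated 0 (no influence)
# to 5 (essential). The example counts below are hypothetical.

AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, adjustment_factors):
    """counts: raw information domain counts keyed by EI, EO, EQ, ILF, EIF."""
    count_total = sum(w * counts.get(k, 0) for k, w in AVERAGE_WEIGHTS.items())
    return count_total * (0.65 + 0.01 * sum(adjustment_factors))

# Example: 3 external inputs, 2 outputs, 2 inquiries, 1 internal logical file,
# 4 external interface files, all adjustment factors rated "average" (3).
fp = function_points({"EI": 3, "EO": 2, "EQ": 2, "ILF": 1, "EIF": 4}, [3] * 14)
print(round(fp, 2))  # 72.76
```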
15.4 Metrics for the Design Model
Design metrics for computer S/W, like all other S/W metrics, are not perfect. And yet, design without measurement is an unacceptable alternative.
15.4.1 Architectural Design Metrics
- Structural complexity = g(fan-out), where fan-out is defined as the number of modules immediately subordinate to module i, that is, the number of modules that are directly invoked by module i. Fan-in is defined as the number of modules that directly invoke module i.
- Data complexity = f(input & output variables, fan-out); provides an indication of the complexity of the internal interface for a module i.
- System complexity = h(structural & data complexity); defined as the sum of structural and data complexity (see the sketch below).
HK metric: architectural complexity as a function of fan-in and fan-out
Morphology metrics: a function of the number of modules and the number of interfaces between modules
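A minimal sketch of these measures follows, assuming the specific forms often cited for them: structural complexity S(i) = fout(i)^2, data complexity D(i) = v(i) / (fout(i) + 1) where v(i) is the number of input and output variables for module i, system complexity C(i) = S(i) + D(i), and, for the HK metric, HKM(i) = length(i) x [fin(i) x fout(i)]^2. These forms and the example values are stated here as assumptions for illustration.

```python
# A sketch of module-level architectural design metrics, assuming the forms
# noted above: S(i) = fan_out**2, D(i) = v / (fan_out + 1), C(i) = S(i) + D(i),
# and HKM(i) = length * (fan_in * fan_out)**2. Values below are hypothetical.

def structural_complexity(fan_out):
    return fan_out ** 2

def data_complexity(num_io_variables, fan_out):
    return num_io_variables / (fan_out + 1)

def system_complexity(fan_out, num_io_variables):
    return structural_complexity(fan_out) + data_complexity(num_io_variables, fan_out)

def hk_metric(length, fan_in, fan_out):
    # length is a size measure for module i (e.g., lines of code)
    return length * (fan_in * fan_out) ** 2

# Example: a module with fan-out 3, fan-in 2, 7 input/output variables, 120 LOC.
print(system_complexity(fan_out=3, num_io_variables=7))  # 9 + 1.75 = 10.75
print(hk_metric(length=120, fan_in=2, fan_out=3))        # 120 * 36 = 4320
```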
15.4.2 Metrics for OO Design
Whitmire [WHI97] describes nine distinct and measurable characteristics of an OO design:
Size
Size is defined in terms of four views: population, volume, length, and functionality
Complexity
How classes of an OO design are interrelated to one another
Coupling
The physical connections between elements of the OO design
Sufficiency
“the degree to which an abstraction possesses the features required of it, or the degree to which a design component possesses features in its abstraction, from the point of view of the current application.”
Completeness
An indirect implication about the degree to which the abstraction or design component can be reused.
Cohesion
The degree to which all operations work together to achieve a single, well-defined purpose
Primitiveness
Applied to both operations and classes, the degree to which an operation is atomic
Similarity
The degree to which two or more classes are similar in terms of their structure, function, behavior, or purpose
Volatility
Measures the likelihood that a change will occur
15.4.3 Class-Oriented Metrics--The CK Metrics Suite
Weighted methods per class (WMC): The number of methods and their complexity are a reasonable indicator of the amount of effort required to implement and test a class.
Depth of the inheritance tree (DIT): The maximum length from the node to the root of the tree.
Number of children(NOC): The subclasses that are immediately subordinate to a class in the class hierarchy are termed its children.
Coupling between object classes (CBO): is the number of collaborations listed for a class on its CRC card. Keep CBO low.
Response for a class (RFC): is a set of methods that can potentially be executed in response to a message received by an object of that class. RFC is the number of methods in the response set. Keep RFC low.
Lack of cohesion in methods (LCOM): is the number of methods that access one or more of the same attributes. Keep LCOM low.
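As a hedged illustration of how two of these metrics can be computed, the sketch below derives DIT and NOC from a simple parent map, assuming single inheritance; the class names and hierarchy are hypothetical.

```python
# A minimal sketch computing DIT and NOC from a parent map
# (class name -> parent class name, None for a root), assuming single
# inheritance. The hierarchy below is hypothetical.

def dit(cls, parent):
    """Depth of the inheritance tree: length of the path from cls to the root."""
    depth = 0
    while parent.get(cls) is not None:
        cls = parent[cls]
        depth += 1
    return depth

def noc(cls, parent):
    """Number of children: immediate subclasses of cls."""
    return sum(1 for child, p in parent.items() if p == cls)

parents = {
    "Sensor": None,
    "MotionSensor": "Sensor",
    "SmokeSensor": "Sensor",
    "WirelessMotionSensor": "MotionSensor",
}
print(dit("WirelessMotionSensor", parents))  # 2
print(noc("Sensor", parents))                # 2
```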