Evaluation of software designs

Abstract

This paper presents a study of the different phases and stages of software design evaluation.

In software design evaluation there are two types of approach:

-Tool-based approach

-Process-based approach

We study both of these approaches here, but concentrate mainly on the tool-based approach.

The tool-based approach uses subjective evaluations as input to tool analysis. These approaches, or their combination, are expected to improve software design and promote organizational learning about software design.

In the process-based approach, developers study and improve their system's structure at fixed intervals.

We discuss both of these approaches in this paper and present details of their functionality and application.

Introduction

The past decade has seen a dramatic change in the breakdown of projects involving hardware and software, both with respect to functionality and to economic considerations.

For some large applications, the software can exceed 75% of the total system cost. As the price of computer hardware continues to fall, larger and more complex computer systems become economically feasible; hence the ability to design large, complex software systems of high quality at minimum cost is essential. The increasing demand for low-cost, high-quality software can be satisfied by identifying possible problem areas in the early part of system development. This in turn means measuring the quality of the software product in the infancy of its lifecycle.

It is generally accepted that measuring the quality of software is becoming increasingly important. Unfortunately, most of the work in this area to date has centred around the source program. This has the disadvantage that it emphasizes only one aspect of the entire lifecycle (the lifecycle being described as a sequence of steps, beginning with the requirement specification, design, coding and testing phases, through to the maintenance phase).

Measurement of quality should consider each of these phases and, in particular, emphasis should be placed on the early phases such as requirements and design. Monitoring the quality of these two phases has been shown to provide significant improvements in software quality and a significant decrease in development costs [2]. Design measurement is desirable because it allows you to capture important aspects of the product early in the lifecycle. Studies have concluded that many of the problems associated with software can be detected before testing begins. From studies of design methodologies [3], we find that 90% of the problems found in the testing phase could have been found in earlier stages.

Much work in software quality is centred around quality metrics. It was therefore decided that a set of metrics should be investigated to evaluate quality at the design stage.

Software design metrics

Design metrics fall into two categories:

Product metrics:

Derived from design representations, these can be used to predict the extent of a future activity in a software project, as well as to assess the quality of the design in its own right. Product metrics can be further divided into network, stability and information flow metrics.

Process metrics:

Metrics derived from the various activities that make up the design phase. They include effort and timescale metrics, and fault and change metrics. These are normally used for error detection, measuring the time spent at each phase of development, measuring cost, etc. When they are recorded on a unit basis, they can also be used for unit quality control.

Of the two types, product metrics are the most suitable for evaluating software design quality, and so these are discussed further.

Network metrics

These metrics, sometimes referred to as call graph metrics, are based on the shape of the calling hierarchy within the software system. One such complexity metric is based on measuring how far a design deviates from a tree structure with neither common calls to modules nor common access to a database. The theory on which this metric is based is that both common calls and common database access increase the coupling between the modules.
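The idea of deviation from a tree structure can be sketched very simply: a calling hierarchy that is a pure tree over n modules has exactly n - 1 call edges, so every extra edge corresponds to a common call that increases coupling. The function and example call graph below are illustrative assumptions, not taken from any specific metric definition.

```python
# Hypothetical sketch: measure how far a call graph deviates from a pure tree.
# A tree of n modules has exactly n - 1 call edges, so each extra edge is a
# "common call" that increases coupling between modules.

def tree_impurity(calls):
    """calls: set of (caller, callee) edges in the calling hierarchy."""
    modules = {m for edge in calls for m in edge}
    n, e = len(modules), len(calls)
    return max(0, e - (n - 1))  # edges beyond what a tree would need

calls = {
    ("main", "parse"), ("main", "report"),
    ("parse", "util"), ("report", "util"),  # 'util' is commonly called
}
print(tree_impurity(calls))  # 4 modules, 4 edges -> impurity 1
```

A design whose impurity is zero is a strict hierarchy; higher values indicate more shared modules or shared data access.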

Stability metrics

Stability metrics are based on the resistance to change that occurs in a software system during maintenance. The principle behind this type of metric is that a poor system is one where a change to one module has a high probability of giving rise to changes in other modules, which in turn have a high probability of giving rise to further changes in yet other modules. The work is an expansion of an earlier metric that relied on subjective estimation of the effect that a change to one module had on another.

This early work has since been refined. Design stability measures can now be obtained at any point in the design process, allowing examination of the program early in its lifecycle for possible maintenance problems. Design stability measurement requires a more in-depth analysis of the interfaces of modules and an account of the 'ripple effect' that follows from program modifications (the stability of the program). The potential 'ripple effect' of a module is defined as the total number of assumptions made about it by the other modules that invoke it, share global data or files with it, or are invoked by it.

During program maintenance, if changes are made that affect these assumptions, a 'ripple effect' may occur through the program, requiring additional costly changes. It is possible to calculate the 'ripple effect' consequent on modifying the module.

The design stability of a piece of software is calculated on the basis of the total potential 'ripple effect' of all its modules. This approach allows the calculation of design stability measures at any point in the design process. Areas of the program with poor stability can then be redesigned to improve the situation.
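The calculation described above can be sketched as follows. The assumption counts, module names, and the particular stability formula (the reciprocal of one plus the total ripple effect) are illustrative assumptions; a real tool would derive the counts from module interfaces and shared-data analysis.

```python
# Hypothetical sketch of design-stability measurement based on the potential
# ripple effect. Assumption counts here are invented for illustration.

def potential_ripple(module, assumptions):
    """Sum the assumptions other modules make about `module` through calls,
    shared global data or files, or being invoked by it."""
    return sum(n for (other, target), n in assumptions.items()
               if target == module)

def design_stability(modules, assumptions):
    """A lower total potential ripple effect means a more stable design."""
    total = sum(potential_ripple(m, assumptions) for m in modules)
    return 1.0 / (1.0 + total)

assumptions = {               # (assuming_module, assumed_about): count
    ("billing", "db_layer"): 3,
    ("reports", "db_layer"): 2,
    ("ui", "billing"): 1,
}
print(potential_ripple("db_layer", assumptions))  # 5
```

Here `db_layer` carries the largest potential ripple effect, so it is the first candidate for redesign.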

Evaluation of Software Design Quality

Here we discuss the development of an approach for software design quality improvement. This approach utilizes software developers' subjective opinions on software design. The solution should bring forward the software developers' tacit knowledge of good or poor software design. What different developers consider good or poor design will likely differ. However, being aware of these differences should allow the organization to define good software design. At the same time, it should help less skilled programmers to produce software with better design.

Currently there are two viable solutions to this issue, namely:

  • The process approach and
  • The tool-based approach.

The process approach assumes that an iterative and incremental process is used to develop the software. In this approach, every time the developers see code that is in need of refactoring, they make a note in a technical debt list. With a technical debt list the organization can track the parts of the software that need refactoring. After each iteration the developers go through these items to see what the actual problem is, how it came about, and how they plan to fix it to make the design better. The study of poorly structured pieces of code and the design improvement ideas allows less skilled developers to learn about good software design.
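A technical debt list of the kind described above needs only a handful of fields per item. The structure below is a minimal sketch; the field names and example content are illustrative assumptions, not taken from any specific tool or process.

```python
# A minimal sketch of a technical debt list as described in the process
# approach. Field names and the example item are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DebtItem:
    location: str        # file or module that needs refactoring
    problem: str         # what the actual problem is
    cause: str = ""      # how it came about
    fix_plan: str = ""   # how the team plans to improve the design
    resolved: bool = False

debt_list = [DebtItem("order/pricing.py", "duplicated discount logic")]

# After an iteration, the team reviews and annotates each open item.
for item in debt_list:
    if not item.resolved:
        item.cause = "copy-paste under deadline pressure"
        item.fix_plan = "extract a shared discount-policy module"

print(len([i for i in debt_list if not i.resolved]))  # 1 open item
```

Recording the cause and the fix plan, not just the location, is what turns the list into the learning vehicle the text describes.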

The tool-based approach utilizes the developers' opinions on software design as input to a tool. Developers first subjectively evaluate software elements as good or bad design. The tool then analyzes each of these software elements. After a sufficient number of subjectively evaluated software elements have been analyzed with the tool, it should be possible to create heuristic rules.
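One simple way to derive such a heuristic rule is to pair each developer label with a measured metric and search for the threshold that best separates the good elements from the bad ones. The metric, the sample values, and the single-threshold rule below are all illustrative assumptions; a real tool would combine several metrics.

```python
# Hypothetical sketch of the tool-based approach: combine developers'
# good/bad labels with a measured metric and derive a threshold rule.
# The coupling values and labels are invented for illustration.

def learn_threshold(samples):
    """samples: list of (metric_value, is_good_design). Returns the cutoff
    that best separates good from bad elements (lower metric = better)."""
    best_cut, best_hits = None, -1
    for cut, _ in samples:
        hits = sum((value <= cut) == good for value, good in samples)
        if hits > best_hits:
            best_cut, best_hits = cut, hits
    return best_cut

# Subjective evaluations: (coupling metric, developer judged it good design)
samples = [(2, True), (3, True), (4, True), (9, False), (12, False)]
cut = learn_threshold(samples)
print(cut)  # 4: elements with coupling <= 4 were judged good design
```

The resulting rule ("flag elements with coupling above 4") makes the developers' tacit judgment explicit and reusable by the rest of the organization.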

With the tool-based approach, an automated aid helps even expert personnel to raise their level of performance in development.

The tool we discuss here is called SELECTOR [1], a decision support system. It selects among alternative design decisions, hence the name.

Overview of SELECTOR [1]

1. By prompting the user as to the effect each attribute has on the choice of the final product, the system evaluates each overall solution, generates a figure of merit for it, and orders the potential solutions from most favorable to least favorable.

2. Prototyping is used to provide the additional information that is often needed to make a decision. SELECTOR guides the manager in developing appropriate prototypes. Using techniques from decision theory:

a) The risk associated with each standard solution is evaluated.

b) Attributes which should be tested by a prototyping experiment to provide the most information are indicated.

c) The potential payoff from using the prototype can be estimated.

d) The maximal amount to spend on the prototype can be computed.

3. The system can be used to allow the manager to try a series of "what if" scenarios. The manager can repeatedly enter a series of assumptions in order to determine their effect on alternative design strategies. This might provide additional data before a complex, expensive implementation or prototype is undertaken.
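Step 1 above can be sketched as a weighted figure of merit: the user's attribute weights are combined with each solution's scaled attribute values, and the solutions are ranked. The attribute names, weights, and values below are illustrative assumptions, not SELECTOR's actual internals.

```python
# A sketch of SELECTOR's step 1: combine user-supplied attribute weights
# with scaled attribute values (each in [0, 1]) into a figure of merit,
# then rank candidate solutions. All numbers are invented for illustration.

def figure_of_merit(weights, scaled_values):
    """Weighted sum of scaled attribute values."""
    return sum(weights[a] * scaled_values[a] for a in weights)

weights = {"functionality": 0.5, "cost": 0.3, "schedule": 0.2}
solutions = {
    "design_x": {"functionality": 0.9, "cost": 0.4, "schedule": 0.8},
    "design_y": {"functionality": 0.7, "cost": 0.9, "schedule": 0.6},
}
ranked = sorted(solutions,
                key=lambda s: figure_of_merit(weights, solutions[s]),
                reverse=True)
print(ranked)  # most favorable solution first
```

The "what if" scenarios of step 3 amount to re-running this ranking with different weights or attribute estimates and observing whether the ordering changes.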

Given a specification, how does one choose an appropriate design which meets that specification? The study of formal methods and program verification only partially addresses this issue. We certainly want to produce correct programs. However, correct functionality is only one attribute our system must have. We need to schedule the development so that the product is built within our budget and within our available time frame, and does not use more computing resources than we wish to allocate for the task. How do we make such decisions?

We consider two cases for this problem. In the first, the manager knows the relevant information about the trade-offs and the relative importance of the various attributes of the solutions. We have developed an evaluation measure, called the performance level, that allows a manager to choose from among several solutions when the relative desirabilities of the attribute values are known. We call this the certainty case. We then extend the model to include the more realistic case where the effects of each decision are not exactly known, but we can give a probabilistic estimation of the various possibilities. We call this the uncertainty case. The following subsections briefly describe each model.

A. Decisions under Certainty:

Let X be the functionality of a program x. The program x is correct with respect to specification B if and only if X is a subset of B. We extend this model to include other attributes as well, since these other attributes are often concerned with non-functional characteristics such as resource usage, schedules and performance. Now assume that our specifications are vectors of attributes, with the functionality as one of the elements of the vector. For example, X and Y are vectors of attributes that specify alternative solutions to specification B. Let S be a vector of objective functions, with domain the set of specification attributes and range [0..1]. We call S_i a scaling function; it gives the degree to which a given attribute meets its goal. We state that X solves (S, B) if, for all i, S_i(X_i) ≥ S_i(B_i).

We extend our previous definition of correctness to the following.

Design X is viable with respect to specification B and scaling function vector S if and only if X solves (S, B).
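The certainty model can be read directly as code: each scaling function maps a raw attribute value into [0, 1], and a design is viable when every scaled attribute meets or exceeds the scaled goal from the specification. The attributes and the particular scaling functions below are illustrative assumptions.

```python
# A minimal reading of the certainty model. S maps each attribute to [0, 1];
# a design X solves (S, B) when every scaled attribute of X meets or exceeds
# the scaled goal from specification B. Attributes and scaling functions
# here are illustrative assumptions.

def solves(S, X, B):
    """S: dict attr -> scaling function; X, B: dict attr -> raw value."""
    return all(S[a](X[a]) >= S[a](B[a]) for a in S)

S = {
    "functionality": lambda covered: covered / 100.0,    # % of spec covered
    "cost": lambda dollars: max(0.0, 1.0 - dollars / 1e6),
}
B = {"functionality": 90, "cost": 500_000}   # the specification's goals
X = {"functionality": 95, "cost": 400_000}   # a candidate design

print(solves(S, X, B))  # True: X is viable with respect to B and S
```

A candidate that falls short on even one scaled attribute (say, 80% functionality coverage) is not viable under this definition.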

B. Decisions under Uncertainty:

We have assumed that the relative importance of each attribute is known a priori. However, we rarely know this with certainty. We therefore consider the following model, based upon aspects of economic decision theory.

We can present the performance level as a matrix PL, where element PL_{i,j} is the payoff for solution i under state j. For example, assume that we have two potential solutions x1 and x2, and three potential states of nature st1, st2 and st3; these give the six possible payoffs in the matrix.
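The two-solution, three-state example can be written out concretely. One standard way to choose under uncertainty, once probabilities for the states are estimated, is to pick the solution with the highest expected payoff; the payoffs and probabilities below are invented for illustration.

```python
# A sketch of the uncertainty model: a 2x3 performance-level matrix PL where
# PL[i][j] is the payoff of solution i under state of nature j. Payoffs and
# state probabilities are invented for illustration.

PL = [
    [0.8, 0.5, 0.2],   # solution x1 under states st1, st2, st3
    [0.6, 0.6, 0.5],   # solution x2
]
p = [0.5, 0.3, 0.2]    # estimated probabilities of the three states

def expected_payoff(row, probs):
    """Probability-weighted payoff of one solution across all states."""
    return sum(v * q for v, q in zip(row, probs))

best = max(range(len(PL)), key=lambda i: expected_payoff(PL[i], p))
print(best)  # index of the solution with the highest expected payoff
```

Changing the probability estimates and re-running the comparison is exactly the kind of "what if" analysis SELECTOR supports.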

Conclusion

More research should be undertaken in the measurement of software design, adopting different design methodologies and using industrial software data. In this study, data complexity and control flow were used to measure the quality of the program. In addition to these two metrics, software can be said to have other aspects of quality, such as maintainability and reliability, which can be affected by the quality of the design. Therefore, further research into these two aspects of quality is recommended. This study deals with perhaps one of the more sensitive areas of software quality and has shed some light on the problems faced in this type of research.

References:

[1] M. V. Mäntylä, "Developing New Approaches for Software Design Quality Improvement Based on Subjective Evaluations."

[2] W. Chou and J. L. Anderson Jr., "Test Software Evaluation Using Data Logging."

[3] S. Cardenas-Garcia and M. V. Zelkowitz, "A management tool for evaluation of software designs."