Frequently asked questions in Software Engineering

(Testing)

  • Is there a difference between an error, a fault, and a failure?

The preferred term for an error in requirements, design, or code is “error” or “defect.” The manifestation of a defect during the operation of the software system is called a fault. A fault that causes the software system to fail to meet one of its requirements is called a failure.

  • Is there a difference between verification and validation?

Verification, or testing, determines whether the products of a given phase of the software development cycle fulfill the requirements established during the previous phase. Verification answers the question “Am I building the product right?”

Validation determines the correctness of the final program or software with respect to the user’s needs and requirements. Validation answers the question “Am I building the right product?”

  • What is the purpose of testing?

Testing is the execution of a program or partial program with known inputs, where the outputs are both predicted and observed, for the purpose of finding faults or deviations from the requirements. Although testing will flush out errors, this is just one of its purposes. The other is to increase trust in the system.

Testing can only detect the presence of errors, not the absence of them; therefore, it can never be known when all errors have been detected. Instead, testing must increase faith in the system, even though it may still contain undetected faults, by ensuring that the software meets its requirements.

  • What is unit level testing?

Several methods can be used to test individual modules or units. These techniques can be used by the unit author (sometimes called desk checking) and by the independent test team to exercise each unit in the system. They can also be applied to subsystems (collections of modules related to the same function). The techniques discussed here include black box and white box testing.

  • What is Black-Box testing?

In black box testing, only inputs and outputs of the unit are considered; how the outputs are generated based on a particular set of inputs is ignored. Such a technique, being independent of the implementation of the module, can be applied to any number of modules with the same functionality.
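
As a minimal sketch (the leap_year() function and its expected values are hypothetical examples), a black-box test supplies known inputs, predicts the outputs, and compares them with the observed results without looking at the implementation:

```python
# Black-box test sketch using Python's unittest: only input/output behaviour
# is checked; how leap_year() computes its result is deliberately ignored.
import unittest

def leap_year(year: int) -> bool:
    """Unit under test (hypothetical example)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearBlackBoxTest(unittest.TestCase):
    def test_known_input_output_pairs(self):
        # Expected outputs are predicted in advance and compared with observed results.
        cases = {2000: True, 1900: False, 2024: True, 2023: False}
        for year, expected in cases.items():
            self.assertEqual(leap_year(year), expected)

if __name__ == "__main__":
    unittest.main()
```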

  • What is White-Box testing?

White box testing (sometimes called clear or glass box testing) seeks to test the structure of the underlying code. For this reason it is also called structural testing.
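
A minimal structural sketch, assuming a hypothetical classify() function: the tests are derived from the code itself so that every branch is executed at least once (branch coverage can be checked with a tool such as coverage.py):

```python
# White-box (structural) test sketch: one test per branch of the unit under test.
def classify(balance: float) -> str:
    if balance < 0:        # branch 1
        return "overdrawn"
    elif balance == 0:     # branch 2
        return "empty"
    else:                  # branch 3
        return "in credit"

def test_negative_balance():   # exercises branch 1
    assert classify(-10.0) == "overdrawn"

def test_zero_balance():       # exercises branch 2
    assert classify(0.0) == "empty"

def test_positive_balance():   # exercises branch 3
    assert classify(25.0) == "in credit"
```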

  • What is equivalence class testing?

Equivalence class testing involves partitioning the space of possible test inputs to a code unit (or group of units) into classes whose members are expected to be handled in the same way, and then selecting a representative input from each class.
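
As a sketch (the ticket_price() function and its class boundaries are hypothetical), the input domain below is partitioned into four equivalence classes and one representative value is exercised per class:

```python
# Equivalence class testing sketch: inputs in the same class should be handled
# identically, so one representative per class is enough for these tests.
def ticket_price(age: int) -> float:
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 13:
        return 5.0   # child fare
    if age < 65:
        return 10.0  # adult fare
    return 7.0       # senior fare

# One representative input per equivalence class.
REPRESENTATIVES = {
    "invalid": -1,  # age < 0
    "child": 8,     # 0 <= age < 13
    "adult": 30,    # 13 <= age < 65
    "senior": 70,   # age >= 65
}

def test_valid_classes():
    assert ticket_price(REPRESENTATIVES["child"]) == 5.0
    assert ticket_price(REPRESENTATIVES["adult"]) == 10.0
    assert ticket_price(REPRESENTATIVES["senior"]) == 7.0

def test_invalid_class_is_rejected():
    try:
        ticket_price(REPRESENTATIVES["invalid"])
        assert False, "expected ValueError for the invalid class"
    except ValueError:
        pass
```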

  • What is incremental integration testing?

This is a strategy that partitions the system in some way to reduce the amount of code exercised at any one time. Incremental integration strategies include:

• top-down testing

• bottom-up testing

• other kinds of system partitioning

In practice, most integration involves a combination of these strategies.
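
As a rough illustration of the top-down flavour (all names here are hypothetical), the sketch below integrates an upper-level build_invoice() routine before its lower-level compute_tax() dependency exists by replacing the missing unit with a stub; a bottom-up approach would instead test compute_tax() first and call it from a test driver.

```python
# Top-down incremental integration sketch: the lower-level unit is stubbed
# with unittest.mock so the upper layer can be integrated and tested first.
from unittest.mock import patch

def compute_tax(amount: float) -> float:
    """Lower-level unit, assumed not yet implemented."""
    raise NotImplementedError

def build_invoice(amount: float) -> dict:
    """Upper-level unit being integrated."""
    tax = compute_tax(amount)
    return {"net": amount, "tax": tax, "gross": amount + tax}

def test_build_invoice_with_stubbed_tax():
    # Replace the missing lower layer with a canned response for this test.
    with patch(f"{__name__}.compute_tax", return_value=2.0):
        assert build_invoice(10.0) == {"net": 10.0, "tax": 2.0, "gross": 12.0}
```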

  • Why is testing object-oriented code different from testing other types of code?

First, the components to be tested are object classes that are instantiated as objects. Because interactions among objects have a coarser grain than individual function calls, system integration testing approaches have to be used. But the problem is further complicated because there is no obvious “top” to the system for top-down integration and testing.

  • What are the levels of testing in object-oriented testing?

The three testing levels are:

  • object classes
  • clusters of cooperating objects
  • the complete object-oriented system

  • How are object classes tested?

Inheritance makes it more difficult to design object class tests, as the information to be tested is not localized. Object class testing can be achieved by:

• testing all methods associated with an object

• setting and interrogating all object attributes

• exercising the object in all possible states

Note that methods can be tested using any of the black or white box testing techniques discussed for unit testing.
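
A minimal sketch of those three activities, assuming a hypothetical Account class with open, frozen, and closed states:

```python
# Object class testing sketch: exercise every method, set and interrogate the
# attributes, and drive the object through each of its states.
import unittest

class Account:
    def __init__(self, owner: str):
        self.owner = owner
        self.balance = 0.0
        self.state = "open"

    def deposit(self, amount: float) -> None:
        if self.state != "open":
            raise RuntimeError("account is not open")
        self.balance += amount

    def freeze(self) -> None:
        self.state = "frozen"

    def close(self) -> None:
        self.state = "closed"

class AccountClassTest(unittest.TestCase):
    def test_methods_and_attributes(self):
        acct = Account("alice")
        self.assertEqual(acct.owner, "alice")   # interrogate attributes
        acct.deposit(50.0)                      # exercise each method
        self.assertEqual(acct.balance, 50.0)

    def test_all_states(self):
        acct = Account("bob")
        acct.freeze()
        self.assertEqual(acct.state, "frozen")  # frozen state reached
        acct.close()
        self.assertEqual(acct.state, "closed")  # closed state reached
        with self.assertRaises(RuntimeError):
            acct.deposit(1.0)                   # deposits rejected when not open

if __name__ == "__main__":
    unittest.main()
```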

(Evaluating Quality)

What are the main differences between McCall’s model and the ISO 9126 model?

• Quality factors in McCall’s model are called characteristics in the ISO 9126 model.

• ISO 9126 focuses on characteristics visible to users, whereas McCall’s model also emphasizes internal quality (e.g., reusability and modularity, which matter mainly to developers).

• There is no consensus about what counts as a top-level factor and what is a lower-level quality attribute or subcharacteristic (testability, for example).

(Reviews)

There are many objectives attached to the various review methods.

  1. List the different review methods.
  • Formal Design Review
  • Peer Reviews (Inspection, Walkthrough)
  • Expert Opinions
  2. List the objectives for each method.

  3. What is the common objective for all methods?
  • Detect errors

(Software Metrics)

  1. What are some motivations for measurement?

The key to controlling anything is measurement. Software is no different in this regard, but the question arises, “What aspects of software can be measured?” Metrics can be used in software engineering in several ways. First, certain metrics can be used during software requirements development to assist in cost estimation. Another useful application for metrics is benchmarking. For example, if a company has a set of successful systems, then computing metrics for those systems yields a set of desirable and measurable characteristics to seek out or compare against in future systems.

Most metrics can also be used for testing in the sense of measuring the desirable properties of the software and setting limits on the bounds of those criteria. Or they can be used during the testing phase and for debugging purposes to help focus on likely sources of errors. Of course, metrics can be used to track project progress. In fact, some companies reward employees based on the amount of software developed per day as measured by some of the metrics to be discussed (e.g., delivered source instructions, function points, or lines of code).

  2. So what kinds of things can we measure in software?

We can measure many things. Typical candidates include:

• lines of code

• code paths

• defect rates

• change rates

• elapsed project time

• budget expended

  3. What are the disadvantages of the LOC metric?

One of the main disadvantages of using lines of source code as a metric is that it can only be measured after the code has been written. While lines of code can be estimated, this approach is far less accurate than measuring the code after it has been written. Another criticism of the LOC metric is that it does not take into account the complexity of the software involved.
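
To make the metric concrete, here is a rough sketch of a physical LOC counter for Python source files (counting non-blank, non-comment lines); note that two files with identical counts can differ enormously in complexity, which is exactly the criticism above:

```python
# Naive physical LOC counter: blank lines and full-line comments are skipped.
def count_loc(path: str) -> int:
    loc = 0
    with open(path, encoding="utf-8") as src:
        for line in src:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                loc += 1
    return loc

if __name__ == "__main__":
    # Example: count the lines of this script itself.
    print(count_loc(__file__))
```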

  4. What is McCabe’s metric?

To attempt to measure software complexity, McCabe [1976] introduced a metric, cyclomatic complexity, to measure program flow-of-control. This concept fits well with procedural programming but not necessarily with object-oriented programming, though there are adaptations for use with the latter. In any case, this metric has two primary uses:

• to indicate escalating complexity in a module as it is coded, thereby helping coders determine the “size” of their modules

• to determine the upper bound on the number of tests that must be designed and executed

  5. How does McCabe’s metric measure software complexity?

Cyclomatic complexity is based on determining the number of linearly independent paths through a program module, the premise being that complexity increases with this number while reliability decreases.
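
A common shortcut computes it as the number of decision points plus one (equivalently, V(G) = E − N + 2 for a connected control-flow graph with E edges and N nodes). The sketch below applies the decision-point shortcut to Python source; it counts only a few decision constructs, so treat it as an approximation rather than a full implementation (dedicated tools such as radon handle the general case):

```python
# Rough cyclomatic complexity estimate: V(G) ~ number of decision points + 1.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1

EXAMPLE = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""

print(cyclomatic_complexity(EXAMPLE))  # 3: two decisions (if, elif) plus 1
```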

  6. What are function points?

Function points (FPs) were introduced in the late 1970s as an alternative to metrics based on simple source line count. The basis of FPs is that as more powerful programming languages are developed, the number of source lines necessary to perform a given function decreases. Paradoxically, however, the cost per LOC measure indicated a reduction in productivity, as the fixed costs of software production were largely unchanged.

The solution to this effort estimation paradox is to measure the functionality of software via the number of interfaces between modules and subsystems in programs or systems. A big advantage of the FP metric is that it can be calculated before any coding occurs.

  7. What are the primary drivers for FPs?

The FP count for a module, subsystem, or system is driven by the following five characteristics (a small worked example follows the list):

  • Number of inputs to the application (I)
  • Number of outputs (O)
  • Number of user inquiries (Q)
  • Number of files used (F)
  • Number of external interfaces (X)
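
As a sketch, the unadjusted FP count can be formed as a weighted sum of the five drivers above. The weights used below (4, 5, 4, 10, 7) are the commonly quoted “average” Albrecht-style weights and are an assumption here; full FP counting assigns simple/average/complex weights per item and then applies a value adjustment factor:

```python
# Unadjusted function point sketch using assumed "average" weights per driver.
WEIGHTS = {"I": 4, "O": 5, "Q": 4, "F": 10, "X": 7}

def unadjusted_fp(I: int, O: int, Q: int, F: int, X: int) -> int:
    counts = {"I": I, "O": O, "Q": Q, "F": F, "X": X}
    return sum(WEIGHTS[key] * counts[key] for key in WEIGHTS)

# Example: 8 inputs, 12 outputs, 4 inquiries, 3 files, 2 external interfaces.
print(unadjusted_fp(I=8, O=12, Q=4, F=3, X=2))  # 32 + 60 + 16 + 30 + 14 = 152
```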