Configure and Test Server

Overview

You should already know about confirming server specifications and building a server. This resource will help you to configure and test a server within an information technology environment.

In this topic you will learn how to:

  • configure server hardware
  • configure server software
  • plan and manage testing
  • diagnose and resolve faults and problems

This topic contains:

  • reading notes
  • activities
  • references
  • topic quiz

As you work through the reading notes you will be directed to activities that will help you practise what you are learning. The topic also includes references to aid further learning and a topic quiz to check your understanding.

Reading notes

Configure server

Configuring server hardware and software means setting up the way the hardware and software operates to suit the IT environment and organisational or user requirements.

Generally, server hardware is configured before the server operating system is installed or afterwards, if hardware components in an operating server are being changed or added. Software may be configured when installed, as part of the installation process, or afterwards, if a default installation has been performed.

Some specific considerations for configuring server hardware and software follow.

Server hardware configuration

Server hardware configurations will depend on what components make up the server. Configurations you may need to consider include those for

  • storage
  • boot sequences
  • specific devices
  • redundant components.

Storage

Options like a hardware redundant array of independent disks (RAID), a system that uses multiple hard drives to share or replicate data among the drives, are configured independently of the operating system. You may need to configure RAID options and logical volumes. If you are using remote storage, the special adapter cards involved may also need configuration.

Boot sequences

A boot sequence is the set of operations the computer performs when it is switched on to load an operating system. Usually you have the option to select a boot order, such as network, CD or a particular hard disk. Intel Wired for Management (WfM) options may also need to be set.

Specific device configurations

Settings such as the addresses for the small computer system interface (SCSI), a standard interface and command set for transferring data between devices on both internal and external computer buses, may need to be set on older SCSI devices. With current server hardware, bus, port, interrupt request (IRQ) and other settings are usually determined automatically for you. There may be external devices (for example, tape drives) that require hardware configuration to connect to the main server hardware.

Redundant components

Hardware such as standby power supplies or network adaptors may need configuration. You may need to consult the hardware manufacturer or vendor for information and configuration instructions.

Server software configuration

Configurations for server software depend on the purpose or function of the server. Generally, a server may be configured for one or more of the following roles:

  • An application server, which runs specific software applications for end users, such as a server running a central Oracle database that is accessed by users across an organisation.
  • A storage server, which provides a central storage place for data that can be accessed by computer users around a network.
  • A network services server, which provides specific services such as printing, user authentication and authorisation, dynamic host configuration protocol (DHCP) and domain name system (DNS).

Configuration for each of the above roles will be different and will depend on the client’s IT environment and client requirements.

Server items to be configured

Generally, the following items will need to be configured on a server:

  • Network settings, which include the network protocol to be used, network addressing, the server name and network adaptor settings.
  • Services, which include enabling and configuring specific services to run on the server, such as setting the server to run dynamic host configuration protocol (DHCP) and domain name system (DNS) services for an organisation.
  • Authentication, which involves setting how users of the server will be identified. This may involve setting up local user accounts with passwords on the server or setting the server to authenticate users via some other mechanism.
  • Authorisation, which is setting up which authenticated users are permitted to access and use the server, such as allocating user permissions to access data storage or server applications or programs.
  • Environment settings and policies, which are settings for the server to operate as required or settings dictated by organisational policy. A data backup schedule for the server is an example of an environment setting. Policy settings are used to enforce organisational policies and may include disabling certain functions or enforcing a particular setting on end-user computers, such as stopping a non-administrative user from logging on at the server console or forcing users to change their password every 30 days.

All server operating systems have the above configuration options, but the processes to set them will vary. Generally, configurations will be carried out using a graphical user interface (GUI) configuration program that is provided as part of the server operating system.
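
As an illustration only, the items above can be gathered into a simple record and sanity-checked before they are applied with the operating system's own configuration tools. The sketch below is hypothetical Python; the field names and checks are assumptions, not any particular operating system's interface.

    from dataclasses import dataclass, field

    @dataclass
    class ServerConfig:
        # Network settings: server name, addressing, protocol details.
        server_name: str
        ip_address: str
        # Services to enable on the server (for example DHCP, DNS).
        services: list = field(default_factory=list)
        # Authentication: how users of the server are identified.
        auth_method: str = "local"
        # Authorisation: which users may access which resources.
        permissions: dict = field(default_factory=dict)
        # Environment settings and policies.
        backup_schedule: str = "daily"
        password_max_age_days: int = 30

    def check(cfg):
        """Return a list of obvious configuration problems."""
        problems = []
        if not cfg.server_name:
            problems.append("server name is not set")
        if cfg.ip_address.count(".") != 3:
            problems.append("IPv4 address looks malformed")
        return problems

    cfg = ServerConfig(server_name="filesrv01", ip_address="192.168.1.10",
                       services=["dhcp", "dns"])
    print(check(cfg) or "configuration looks plausible")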

Define the scope of testing

Integrated tests are performed during a server development project. Stress and load testing of an integrated platform examines the interoperability of software and hardware elements under conditions more extreme than expected. Testing ensures that the performance levels at which the server fails to operate are acceptable. To perform the tests, the software is broken down into functional components, each made up of at least two interrelated elements, and a test is performed on the interoperability of each component.

When complete, the testing process should verify that all the tests performed support the acceptance by the user of the totally integrated product.

Tasks performed during stress and load testing of an integrated platform include

  • establishing testing acceptance criteria and procedures
  • performing test events
  • diagnosing test results
  • resolving software defects.

Test events are designed to establish operational levels at which the new server starts to fail and to measure how it performs under overloaded conditions. The failure and performance levels are compared with the acceptance criteria and are either accepted or rejected.

Test events aim to reveal failures such as

  • total system crashes
  • bottlenecks in interfaces between components
  • data corruption
  • process overloading
  • performance degradation below a usable level.

Planning tests

The purpose of planning the testing process is to identify, conduct and review test events. In large or complex systems it is not possible to install a whole system and examine the impact of high volumes of data. The number of different operational parameters would be astronomical. Integrated testing breaks the product down into more manageable components. Stress and load testing is applied to each of these components. The results from the test process should confirm the operation of the total product running under extreme conditions.

The planning process accesses user documentation and identifies the hardware and software components of the product. For each component, individual modules are examined to determine the types of stress and load tests that can be performed. These then become the test events. The details for each test are documented in the test requirement. This document lists procedures and test conditions.

Documents used for planning include the

  • project plan, which states the objectives of the product and the operational environment
  • test plan, which details the scheduling and resources for all test events
  • functional specifications, which contain technical details of software modules.

Figure 1: Test planning (flow chart): break the integrated platform into interoperable components → determine performance levels → establish acceptance criteria for each test → prepare procedures → review test results → test valid? If YES, prepare the test environment; if NO, return to breaking the integrated platform into components.

Determining test objectives, scope and tests

Objectives

The general objective of the testing process is to demonstrate to the user that the developed software will perform as expected when experiencing stress and load conditions. Objectives are stated in terms of software functionality and performance conditions. This information can be found in the project plan for the developed software. An example of an objective could be 'To confirm the payments subsystem will not crash when more than 20,000 transactions are processed.'

Scope

The scope of the testing process defines the extent of the tests performed. Performing every possible test, even for simple products, can be complex. The term integrated tends to suggest every single element that can be identified, which can be extensive, especially if every operating system library and hardware chip is included. The scope narrows the focus to only those elements necessary for testing, stating which items are included and which are excluded. An example could be 'tests will exclude all system libraries, except those scripting engines accessed by the HTTP server.'

Test methodologies

Testing processes can be performed on the complete integrated platform or on a logical collection of components that represent the integrated platform. Testing of components is usually performed in large, complex software projects where observations are difficult to make on the integrated platform. This form of testing is known as incremental testing.

The full integrated test is performed by assembling all the developed modules after they have been unit tested. It is usually the easiest to perform, as it does not require customising components to simulate being part of the whole solution.

Incremental tests can be performed using either a top-down or a bottom-up methodology. In the top-down approach, testing starts with the highest-level components and lower-level components are added one at a time and tested. In the bottom-up approach, the lowest-level components are tested first and higher-level components are added one at a time. Incremental testing requires additional utility programs (stubs and drivers) and modifications to components to simulate working as part of a fully integrated product.

The aim of incremental testing is to perform stress and load testing on low-level components and then perform integrated tests as components are added. When all components have passed tests and been added, it is inferred that the integrated platform will perform as expected under stress and load conditions.
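
A minimal sketch of the bottom-up idea, assuming a hypothetical two-component system: the low-level component is tested on its own first, then the next component is added and the pair is tested together.

    # Bottom-up incremental testing sketch (hypothetical components).
    def tax(amount):                 # low-level component
        return round(amount * 0.10, 2)

    def invoice_total(items):        # higher-level component using tax()
        subtotal = sum(items)
        return subtotal + tax(subtotal)

    # Step 1: test the low-level component on its own.
    assert tax(100.0) == 10.0

    # Step 2: add the next component and test the pair together.
    assert invoice_total([40.0, 60.0]) == 110.0
    print("incremental tests passed")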

Test construction

Tests are constructed by applying stress and load conditions to the identified components of the integrated platform. Stress and load conditions are experienced when a high volume of transactions is processed over a concentrated period of time. For example, a module updating a database table would experience a load if it added 10,000 records in five minutes. However, the level of stress and load need not be excessive: load and stress are usually applied at a level above the expected maximum performance that is acceptable to the user.
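
A load of that kind can be applied and timed directly. The sketch below uses Python's built-in sqlite3 module to insert 10,000 rows and checks that the elapsed time stays inside the five-minute window; the table and figures are hypothetical.

    import sqlite3, time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

    start = time.perf_counter()
    for i in range(10_000):
        conn.execute("INSERT INTO orders VALUES (?, ?)", (i, 19.95))
    conn.commit()
    elapsed = time.perf_counter() - start

    print(f"inserted 10,000 rows in {elapsed:.1f} s")
    assert elapsed <= 300, "load test failed: exceeded the 5-minute window"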

Some common test methodologies include

  • bash testing, which throws everything possible at the new software in an attempt to crash the system or cause components to fail
  • multi-user testing, which simulates simultaneous access to the software by more users than the expected level (sketched below).
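
A sketch of a multi-user test, built around a hypothetical request handler: a thread pool simulates more simultaneous users than the handler's connection limit and counts how many are refused.

    from concurrent.futures import ThreadPoolExecutor
    import threading, time

    lock = threading.Lock()
    active = 0
    MAX_CONNECTIONS = 50              # hypothetical server limit

    def handle_request(user_id):
        """Stand-in for the software under test."""
        global active
        with lock:
            if active >= MAX_CONNECTIONS:
                raise RuntimeError("connection refused")
            active += 1
        try:
            time.sleep(0.05)          # pretend to do some work
            return f"ok:{user_id}"
        finally:
            with lock:
                active -= 1

    # Expected level is 50 concurrent users; test with 80.
    with ThreadPoolExecutor(max_workers=80) as pool:
        futures = [pool.submit(handle_request, u) for u in range(80)]
    refusals = sum(1 for f in futures if f.exception())
    print(f"{refusals} of 80 simulated users were refused")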

Test documentation

When objectives, scope and tests have been identified, they are used to prepare the requirements documentation for each test.

Test requirements include

  • test objectives: what the test is supposed to test
  • acceptance criteria: conditions for the test passing or failing
  • test environment: conditions under which the test is to be performed
  • roles and responsibilities
  • test script: steps to be performed during the test
  • results: procedures for processing and authorising results.

Establishing acceptance criteria

Each test event has acceptance criteria, expressed in measurable terms. The observed results of the test are compared to these criteria and evaluated as 'pass' or 'fail'.

Passing the test will indicate the software is acceptable to user requirements. The acceptance criteria are derived from the test objectives.

Each acceptance criterion can be made up of more than one condition. Where a criterion is expressed as multiple conditions, all of them have to be met to pass the test. The levels of performance are negotiated with the user and are derived from the base load.
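
A minimal sketch of a multi-condition criterion, using hypothetical measured results: the test passes only if every condition holds.

    # Observed results from a test run (hypothetical figures).
    observed = {"crashed": False, "max_response_s": 4.2, "tx_per_hour": 2150}

    # All conditions must be met for the test to pass.
    conditions = [
        not observed["crashed"],
        observed["max_response_s"] <= 5.0,
        observed["tx_per_hour"] >= 2000,
    ]
    print("PASS" if all(conditions) else "FAIL")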

Stress and load tests

Criteria state

  • what is to be performed
  • the level of performance
  • the expected result.

An example of a simple integrated test could be 'The software will not crash if the credit card payments exceed 10,000 per day.' Credit card payments are performed at a level of 10,000 per day and the software is expected not to crash. If the software crashes below the stated level, then the test fails.
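
A criterion like this can be checked mechanically. The sketch below uses a hypothetical process_payment function with a pretend capacity limit; payments are driven up to the stated level and the test fails if the software crashes below it.

    def process_payment(n):           # hypothetical payment module
        if n > 25_000:                # pretend capacity limit
            raise MemoryError("payment queue exhausted")

    level = 10_000                    # acceptance level: payments per day
    try:
        for n in range(1, level + 1):
            process_payment(n)
        print("PASS: no crash at", level, "payments")
    except Exception as exc:
        print(f"FAIL: crashed at payment {n}: {exc}")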

Failure to state acceptance criteria objectively causes ambiguity when trying to analyse test results. To state performance as large, fast, slow or poor is open to interpretation. An example of poor criteria is 'Response time is slow when large volumes of transactions are processed.' The criteria do not indicate how many transactions need to be performed or how response time is to be measured.

Integrated tests

Criteria state

  • what modules are to be integrated
  • tasks which demonstrate interoperability of modules
  • the expected result.

In the above example, payment details are entered by customers using a web page, then sent to a financial institution for authorisation before being sent to the supplier. A simple criterion would be 'Supplier receives authorised credit card payments from customer.' The modules examined are payment entry, payment authorisation and payment retrieval. The task performed is making a payment, and the expected result is that the supplier receives a valid customer payment.
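
A sketch of that integrated criterion, with the three modules reduced to hypothetical stand-ins: payment entry, authorisation and retrieval are wired together and the end-to-end result is checked.

    # Hypothetical stand-ins for the three modules under test.
    def enter_payment(card, amount):
        return {"card": card, "amount": amount}

    def authorise(payment):                 # financial institution step
        payment["authorised"] = payment["amount"] <= 5000
        return payment

    def supplier_receives(payment):         # supplier retrieval step
        return payment if payment["authorised"] else None

    # Integrated test: supplier receives an authorised customer payment.
    received = supplier_receives(authorise(enter_payment("4111...", 250.00)))
    assert received is not None and received["authorised"]
    print("integrated test passed: supplier received the payment")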

Calculate performance levels from base loads

The performance levels used in the acceptance criteria are calculated from the base load of the software. The base load is the level of performance during normal operations of the software. This is usually stated in the project brief of a project plan as the maximum performance level. The maximum performance levels are expressed in terms of transactions over a time period. Examples of levels include website hits per week, orders processed each hour, and accounts accessed per minute.

Performance levels are calculated from the maximum performance levels by generating probable scenarios. If a maximum level is expressed as 2000 transactions a day, it could be possible that most of them occur in one hour during the day, but not possible that they will all occur in five minutes.

The simplest scenario is the maximum performance level performed over a concentrated period of time. The time period should be practical for testing purposes. If a base load is 100 transactions per second, it may not be possible to accurately measure a performance level in milliseconds.

If the performance level of the test is acceptable to the user, then it is adopted into the acceptance criteria.
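
The arithmetic is straightforward. A sketch, assuming a hypothetical maximum level of 2000 transactions per day and a few probable concentration scenarios:

    MAX_PER_DAY = 2000                      # maximum performance level

    # Probable concentration scenarios: what share of the daily load
    # arrives inside a given window?
    scenarios = {
        "spread over 8 working hours": MAX_PER_DAY / 8,
        "half the load in one hour":   (MAX_PER_DAY * 0.5) / 1,
        "peak: 25% in 15 minutes":     (MAX_PER_DAY * 0.25) / 0.25,
    }
    for name, per_hour in scenarios.items():
        print(f"{name}: {per_hour:.0f} transactions per hour")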

Determining expected results

Expected results are used in the acceptance criteria. They are the outcomes expected by the user when the software is operating under stress and load performance levels. The expected results of a test are observable and measurable. They are used in comparison with the actual results of the test, to determine the outcome of the test. If the result does not match the expected result, the test fails. Expected results are usually expressed in terms of performance. Performance can be measured in terms of levels of activity, indicated by response times, transaction processing rates or system crashes.

Examples of expected results stated in terms of performance include

  • the system does not crash
  • the response time for data entry does not exceed 5 seconds
  • the time taken to process 100 orders does not exceed 35 minutes.
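
Expected results of this kind can be checked by timing the activity directly. A sketch, with a hypothetical process_order function standing in for the real module:

    import time

    def process_order(order_id):            # hypothetical order module
        time.sleep(0.001)                   # stands in for real work

    start = time.perf_counter()
    for order_id in range(100):
        process_order(order_id)
    elapsed_min = (time.perf_counter() - start) / 60

    # Expected result: 100 orders in no more than 35 minutes.
    print("PASS" if elapsed_min <= 35 else "FAIL",
          f"({elapsed_min:.2f} minutes for 100 orders)")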

Expected results are derived from walking through different scenarios using the components of a test. The purpose of walkthroughs is to identify component failures and bottlenecks. These represent the weakest links and, under testing, are critical in determining overall performance.

A component fails when it ceases to function, either crashing completely or reaching saturation point and hanging. Components can fail when they are exposed to conditions beyond their intended operating capacity. An example could be a simple client/server relationship: a server application could reject any new client connection once the maximum number of connections has been reached. The server application does not crash, but in terms of the new connection it fails to operate.

Bottlenecks are interfaces or processes that slow the whole system's performance when they become saturated. Bottlenecks are not failures: the component continues to function. Examining components, interfaces and the sequence in which the components are performed can identify problems. An example could be a program printing invoices: the software component queries the database and formats data into output at 1000 invoices per second, but the printer only operates at 20 pages per minute.
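
A bottleneck like the invoice example can be spotted with simple throughput arithmetic: the slowest stage caps the whole pipeline. A sketch using the figures above (assuming one page per invoice):

    # Throughput of each stage, converted to invoices per minute.
    stages = {
        "query and format": 1000 * 60,   # 1000 invoices per second
        "printer": 20,                   # 20 pages per minute (assumed one page per invoice)
    }
    bottleneck = min(stages, key=stages.get)
    print(f"pipeline is limited to {stages[bottleneck]} invoices per minute by the {bottleneck}")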

Establishing test procedures

The processes to perform a test are stated in the test plan, test requirements and test scripts. The test plan outlines the overall methodology adopted for the testing process. The test requirements describe the conditions, procedures and documentation for performing the test. The test scripts list the steps required for individual test activities and state the expected observable results.