MEXUS – main design ideas
Author: Lars-Ivar Sellberg
Ver: 1.3
MEXUS – main design ideas
Abstract
About the author
Introduction
Reducing development and maintenance cost of test cases
Automatic or manual testing
Choice of platform
Trivial tests
Transaction received by mexus
Transactions sent by mexus (negative testing)
Consistency check of transaction sequences received by mexus
Specifying the stimuli
The problem with the naïve approach when specifying stimuli
Use of abstraction while specifying the input
Test case environment setup
Generating the stimuli
Evaluating the result
The problem with the naïve approach when evaluating the result
Analyzing the result
Trading simulations
Why use trading simulations?
Setting up and executing simulations
Evaluating the result of a simulation
Conclusions
Appendix A. Test case example
About the author
Lars-Ivar Sellberg has worked with testing, development of automated test tools, application development, requirements and product management for a large international vendor that specializes in the development of trading system software. Lars-Ivar Sellberg holds an M.Sc. in Electrical Engineering (Royal Institute of Technology, Stockholm) and a BA in finance (University of Stockholm).
Introduction
The Mexus tool is today actively used and maintained. The number of test cases specified using Mexus exceeds 15,000, and it has also been used to run millions of transactions in trading simulation tests. At a late stage of the development of the exchange trading system it was discovered that the testing methodology in use, and above all the supporting testing tools, were inadequate. A decision was taken to replace the existing testing tool with an in-house tool named Mexus.
What caused such a radical decision at a late stage of the development effort? The main reasons were the following:
- The cost for developing and maintaining existing test cases was too high.
- The time taken to execute test cases and evaluate the results was too long.
- Coverage was too low.
The document is divided into two parts: one dealing with how Mexus supports the building and execution of test cases, and one dealing with how it supports simulation-based, or Monte Carlo, testing.
Reducing development and maintenance cost of test cases
Automatic or manual testing
In order to get decent test coverage of a complex system like a trading system, an estimated 2,000-4,000[1] test cases are needed. Since the system was still undergoing rapid development, requiring daily regression tests, manual testing was not deemed to be a viable alternative.
Choice of platform
There were two major choices that needed to be made when starting the development of mexus.
- Choice of implementation language (Mexus is in itself a sizeable application, today consisting of approximately 200,000 lines of code excluding test cases)
- Choice of language for specifying test cases.
Java was chosen as the implementation language. The main reason was the ability to do rapid development while still having reasonable performance. Another important “soft” aspect was that working with testing is often (erroneously!) considered to be boring. By choosing a language that was considered new and exciting,[2] it was easier to recruit experienced developers to the project.
Java was also chosen as the language for specifying test cases. The obvious alternative would have been to invent some sort of script language. The advantage of a script language is that it is often easier to learn than a fully-fledged programming language like Java. The drawback, however, is that such languages often have limits to what they can be used to express. Even though it is possible to extend such a language with more and more complex features as the need arises, the risk is that in the end one has developed a fully-fledged programming language. Even though this constitutes a very interesting exercise, it is hard to justify the cost.
In order to make it easier for non-expert programmers to specify test cases, Mexus test cases are specified in a framework environment. Normally each test case is represented by a Java method containing very standardized code. This means that one gets the simplicity of a script language while still retaining the possibility to use the full flexibility of Java in the rare circumstances where the normal standardized framework code is not expressive enough.
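As a purely illustrative sketch, and assuming hypothetical helper names (sendOrder and expectReject below are not the actual Mexus framework API), a standardized test case method might look roughly as follows. The script-like framework calls cover the common case, while ordinary Java constructs such as loops remain available for the rare cases where the standardized code is not expressive enough.
// Hypothetical sketch only -- sendOrder() and expectReject() are
// illustrative helper names, not the actual Mexus framework API.
public void testPriceLimitValidation() {
    // Script-like, standardized framework code covers the common case.
    sendOrder("BUY", 1000, 12.00);
    expectReject("PRICE_OUTSIDE_LIMITS");

    // The full flexibility of Java is still available when needed,
    // for example a plain loop generating a series of orders.
    for (double price = 11.00; price <= 13.00; price += 0.50) {
        sendOrder("BUY", 1000, price);
    }
}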
Trivial tests
Before describing how more complex test cases are constructed, it is suitable to describe how more trivial tests are handled. Examples of trivial tests are checks that conditions like mandatory fields and data types are honored.
The construction of test cases covering these types of trivial tests is to a very high degree automated in Mexus, thus freeing the test case designer for more important work. Manually specifying proper test cases for these trivial tests is time consuming, and such test cases are typically inherently expensive to maintain. Furthermore, they are very boring to write, which increases the risk of typos when specifying the test cases. Below is a description of the trivial tests performed by Mexus.
Note that Mexus performs these tests at a lower level, from an architectural point of view, than the test case level. Thus, even though the test cases or simulations themselves do not state anything explicitly regarding these trivial tests, Mexus will automatically apply them to each individual transaction.
It was considered vital that these types of tests are performed without the involvement of test case designers. Otherwise this category of tests can consume a disproportionate amount of time and cost.
Transaction received by mexus
All transactions received by Mexus are examined to make sure that all field values meet the mandatory-field and data-type conditions.
The code that performs these tests is automatically generated. This is facilitated by the transaction protocol being specified in XML.
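As an illustration of the idea only, and not the actual generated code, a generated check for a single field could look roughly like the sketch below, driven by the mandatory-field and data-type information found in the XML protocol specification.
// Simplified, hypothetical sketch of a generated field check. The real
// checks are generated from the XML transaction protocol specification.
static void checkField(String name, String value, boolean mandatory, String type) {
    if (mandatory && (value == null || value.isEmpty())) {
        throw new AssertionError("Mandatory field missing: " + name);
    }
    if (value != null && "INTEGER".equals(type)) {
        try {
            Long.parseLong(value);
        } catch (NumberFormatException e) {
            throw new AssertionError("Field " + name + " is not a valid integer: " + value);
        }
    }
}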
Transactions sent by mexus (negative testing)
The construction of negative tests, i.e. tests where Mexus sends in transactions that are syntactically incorrect, is also almost fully automatic, even though some manual intervention by a test case designer is needed.
The manual intervention consists of specifying a syntactically and semantically correct transaction. Automatically generated code will then mutate that transaction into syntactically incorrect transactions.
The reason the manual intervention is needed is that the XML specification does not contain enough information to determine what a semantically correct transaction looks like; it only provides information about syntax.
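The mutation idea can be sketched as follows. This is a simplified, hypothetical illustration; the code generated by Mexus works on the actual transaction protocol rather than on a generic field map. Starting from the one correct transaction supplied by the test case designer, each field is in turn removed or given a value of the wrong data type, and every resulting mutant is sent in and expected to be rejected.
import java.util.*;

// Hypothetical sketch of mutation-based negative testing.
static List<Map<String, String>> mutate(Map<String, String> correctTransaction,
                                        Set<String> mandatoryFields) {
    List<Map<String, String>> mutants = new ArrayList<>();
    for (String field : mandatoryFields) {
        // Mutation 1: remove a mandatory field.
        Map<String, String> missing = new HashMap<>(correctTransaction);
        missing.remove(field);
        mutants.add(missing);

        // Mutation 2: give the field a value of the wrong data type.
        Map<String, String> wrongType = new HashMap<>(correctTransaction);
        wrongType.put(field, "not-a-valid-value");
        mutants.add(wrongType);
    }
    // Each mutant is sent to the tested system and is expected to be rejected.
    return mutants;
}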
Consistency check of transaction sequences received by mexus
The sequence of transactions follows certain rules. An order insert always precedes the corresponding cancel of the same order etc. Another example is that an update message must refer to an existing order. All of these rules are automatically checked by mexus.
Our empirical results indicate that these types of checks are surprisingly efficient at finding errors when running complex simulations. An error in the matching logic of the trading system typically, sooner or later, manifests itself by breaking one of these rules. This is further discussed in coming chapters.
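As a simplified illustration, and not the actual Mexus implementation, the rule that an update or cancel must refer to an order that has previously been inserted (and not yet removed) could be checked along the following lines.
import java.util.HashSet;
import java.util.Set;

// Simplified sketch of one sequence-consistency rule: every update or cancel
// must refer to an order that has been inserted and not yet cancelled.
class OrderSequenceChecker {
    private final Set<String> liveOrders = new HashSet<>();

    void onInsert(String orderId) {
        if (!liveOrders.add(orderId)) {
            throw new AssertionError("Duplicate insert for order " + orderId);
        }
    }

    void onUpdate(String orderId) {
        if (!liveOrders.contains(orderId)) {
            throw new AssertionError("Update refers to unknown order " + orderId);
        }
    }

    void onCancel(String orderId) {
        if (!liveOrders.remove(orderId)) {
            throw new AssertionError("Cancel refers to unknown order " + orderId);
        }
    }
}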
Specifying the stimuli
The problem with the naïve approach when specifying stimuli
Most types of testing involve generating some kind of stimuli that is inserted into the tested application, followed by an evaluation of the result. In the specific case of testing an exchange trading system this would, for example, consist of sending in a transaction containing an order to buy 1000 Ericsson shares at 12 SEK and then evaluating the resulting transactions containing trades, order updates etc.
The naïve and commonly used method of supporting this in a tool for automatic regression testing is to simply allow the user to somehow specify the entire transactions that are to be sent to the tested system. A simple script language can easily be constructed to meet this task. It is then easy to implement a tool that parses the script and inserts the transactions into the tested system. In this chapter we will not deal with the evaluation process, but only with specifying the stimuli.
Very often the naïve approach described above is quite sufficient. There was however a severe problem with applying it for the testing of the exchange trading system application.
The problem was that each transaction typically consisted of 20-60 fields. A typical test case dealt with a limited part of the functionality, and in order to test that functionality maybe 5-10 fields were relevant. This does not mean that the other fields could be filled with arbitrary data. On the contrary, they must be filled with proper data that facilitates the test case; otherwise the test case could, for example, be rejected for a reason totally irrelevant to the function it aims to verify.
This means that when using the naïve method the developer of the test case spends a lot of time specifying data in transactions that is really irrelevant to the function he/she is trying to test. This leads to a higher cost for developing a test case.
Much worse than the initial higher cost of developing a test case is that the cost of maintaining these test cases rises dramatically. This is due to the fact that even a rudimentary change of the transaction format creates a need to modify existing test cases, even though the change itself does not affect the fields that are relevant to a particular test case.
Another problem with the irrelevant fields is that their proper default values very often depend on the current configuration of the system. A test case dealing with the triggering of stop-loss orders should not have to be modified depending on whether it is executed in the context of trading Ericsson or Cisco shares. This is a trivial example, but there are countless similar ones.
Absolute requirements on the new Mexus tool were the following:
- Semantic changes to the transactions must have a very limited impact, if any, on test cases not dealing with that specific functionality.
- Syntax changes must have a very limited impact, if any, on all test cases.
- Changes in system configuration must have a very limited impact, if any, on test cases not dealing with that particular configuration.
This was achieved by allowing the test case developer to choose the level of abstraction when specifying test cases. How this was done is discussed in the next section.
Use of abstraction while specifying the input
As mentioned previously, the important idea is to allow the author of the test case to choose the level of abstraction that is appropriate for his/her test case. It must be possible to specify each individual field of a certain transaction if the author deems it necessary. The author of such a test case must, however, be certain that the dramatically increased cost of developing and maintaining it is justified.
In practice the need for such test cases was found to be very low; normally less than 1% of the total number of test cases use such a detailed mode of specifying stimuli. The vast majority of test cases use a much higher level of abstraction.
A typical test case consists of the following sections:
- Test case environment setup, i.e. making sure the test case executes in an environment that can support the functionality being tested.
- Generating the stimuli, i.e. creating and sending in transactions.
- Evaluating the result, i.e. examining the transactions received by Mexus as a result of the stimuli.
The first two bullets will now be discussed. The remaining bullet will be discussed in the next chapter.
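Schematically, and with hypothetical helper names (requestOrderBook, sendOrder and expectTrade are illustrative only, not the actual Mexus API), the three sections of a typical test case could look as follows.
// Hypothetical skeleton of the three sections of a typical test case.
public void testRoundLotMatching() {
    // 1. Environment setup: request an order book with the required
    //    properties instead of hard coding one.
    OBData obData = requestOrderBook(/* properties suitable for this test */);

    // 2. Stimuli: create and send in transactions.
    sendOrder(obData, "BUY", 1000, 12.00);
    sendOrder(obData, "SELL", 1000, 12.00);

    // 3. Evaluation: examine the transactions received by Mexus
    //    as a result of the stimuli.
    expectTrade(obData, 1000, 12.00);
}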
Test case environment setup
In order for a test case to execute, it needs an environment that can support the tested functionality. A trivial example is that in order to test odd lot matching, the order book configuration used must support odd lot matching. Another example could be that certain attributes of the communication session used are present.
As previously discussed, the naïve approach of manually selecting and hard coding, for example, an order book is inadequate. Instead the test case author requests an order book with a number of properties. The system, Mexus, will then provide an order book fulfilling the selected properties.
In the following simple example the test case author requests an order book that is traded on price (not yield), has not been used previously (which affects trade statistics) and does not allow odd lot trades to create a new price.
// Get an orderbook
OBUnaryPredicate[] predicate = new OBUnaryPredicate[3];
predicate[0] = new PriceTypeReq(OBConstants.NormalPTReq); // traded on price, not yield
predicate[1] = new VirginReq(OBConstants.VirginReq);      // not used previously (trade statistics)
predicate[2] = new PaidLastDiffOlReq("N");                 // odd lot trades may not create a new price
OBData obData = obHndl.getRandomOBs(1, predicate)[0];      // Mexus picks an order book fulfilling the predicates
What happens if a suitable order book is not available? Mexus simply skips that test case and writes this fact to an error log.
Generating the stimuli
Once the correct environment is set up, the actual test case can be constructed. Once again it is important not to hardcode parameters like price or volume. A certain volume may, for example, be incompatible with the round lot size of an order book. Such problems are avoided by requesting prices and volumes from the test case environment set up earlier. The following example shows how the test author requests the round lot size and then asks for a suitable price that is compatible with the tick size of the selected order book[3]