The Test Management Summit – 6 February 2013 – FINAL Programme

09.00 / Registration, Tea/Coffee
09.50 / Welcome and Introductions
10.00 / Parallel sessions
  Waterloo Room: BA and QA Communities of Practice at Deutsche Bank - Working to improve Quality of Requirements (Michal Janczur, Deutsche Bank)
  Burton Room: How to Manage Technical Testers (Alan Richardson, Compendium Developments)
  Trafalgar 2: Where Next For Agile? (Dan Webb, BJSS)
  St. James 1: Agile Application Security Testing (Adam Brown, UK Manager, Quotium)
11.15 / Tea/Coffee Break
11.45 / Parallel sessions
  Waterloo Room: Managing Clients who Test your Patience (Emma Langman, Progression Partnership Ltd)
  Burton Room: Death by User Story (Brindusa Gabur, Independent Agilist)
  Trafalgar 2: Who is the customer of testing? (Niels Malotaux, Project Coach, N R Malotaux – Consultancy)
  St. James 1: Implementing a New Generation Testing Process (Jonathan Pheils, Transition & Quality Assurance Manager, P&O Ferries and Tim Chinchen, Solution Consultant, Compuware)
13.00 / Lunch
14.00 / Parallel sessions
  Waterloo Room: Pulling Testing into 2013 Kicking and Screaming (Paul Gerrard, Gerrard Consulting & Gojko Adzic, Neuri Ltd)
  Burton Room: You are Doing Too Much Testing (Ingo Phillipp, Tricentis)
  Trafalgar 2: Making Automation Work in an Agile World (Jonathan Pearson, Original Software)
  St. James 1: Just How Different is Mobile Performance Testing? (Thomas Ripoche, Neotys)
15.15 / Tea/Coffee Break
15.45 / Parallel sessions
  Waterloo Room: Be Agile or Do Agile (Matt Robson, Mastek and Chris Ambler, NMQA)
  Burton Room: Moving to Weekly Releases – Some Problems, Solutions and Ideas (Rob Lambert, New Voice Media)
  Trafalgar 2: How do we contract test services to get 'the best bang for the buck'? (Alan Laverack, BT Plc)
  St. James 1: True Performance Testing (Chris Thompson and Ash Gawthorp, The Test People)
17.00 / Reconvene in Nash Room
17.15 / Keynote Speaker – Paul Gerrard, Principal, Gerrard Consulting: “Leadership”
17.45 / Drinks Reception
18.30 / Dinner in Burton Room
21.00 / Close

Session Abstracts

Michal Janczur, Deutsche Bank: BA and QA Communities of Practice at Deutsche Bank - Working to improve Quality of Requirements

  • DB is an organisation of over 90,000 people with thousands in technology delivery
  • DB have promoted Communities of Practice within Technology groups
  • Collaboration across COPs should result in improved practices across the organisation
  • High-quality, timely requirements are essential to quality software delivery
  • What are the priorities and opportunities for collaboration across the BA and QA professions?

Alan Richardson, Compendium Developments: How to Manage Technical Testers
Premise: To test well, we increasingly require better technical skills.
Over the last five to six years, the companies that I have worked in have required increased technical skills and technical focus from their testers. This is partly because of the increased use of Agile techniques and a greater focus on automated testing from developers, but also because of the increasing complexity of the applications under test.
As a test manager I fully supported this increased emphasis on technical skills, and changed the way that I manage testing to make this effective. I will share my experiences and lessons learned during this session, as I hope you will too. Have you experienced this shift? Do you support it like I do, and have positive experiences and lessons learned to share? Or do you have reservations and negative experiences that we can learn from?
I hope this session brings together people who both agree and disagree with the premise, especially people prepared to share their experiences. And if you read the above premise and thought, "that doesn't happen in my environment, we don't need to increase our technical focus", then why not come along and summarise your reasoning for those of us that do? And who knows, you might find that if (and, I think, when) your company does change focus, this session has prepared you for the change. The type of questions we will consider during the session:

  • Do we have any evidence that testing now requires more technical skill?
  • What does technical testing mean?
  • Does this change the risk profile of the test process?
  • How does your recruitment process change?
  • How does this impact the role of the test manager?

Dan Webb, BJSS: Where Next For Agile?
Agile has been widely accepted by the software development and testing communities, its core principle being the stripping away of unnecessary practices that do not add tangible value.
With this in mind, is it possible to apply agile principles to other areas of the business that we interact with daily, given the constant friction of agile meeting a defined process, be it Prince2, ITIL or Waterfall? Or should we accept that Agile must become a process?

In this session we will engage in a 'fishbowl' discussion about whether lean principles can be applied elsewhere within the wider business context, or whether the time has come for agile to be adopted as a process.

Adam Brown, UK Manager, Quotium: Agile Application Security Testing
First, a discussion of Agile trends and secure development & testing techniques.

Then we’ll take a look at some statistics about recent breaches and understand what role application weaknesses play in those attacks.

We'll then go into more detail about application security vulnerabilities and the risks associated with them, and I'll show some demonstrations of exploits - you'll see actual data theft through a vulnerable application.

Finally I want to show you what we see as the best way to mitigate these risks.
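
The abstract doesn't specify which weaknesses will be demonstrated, but SQL injection is a canonical example of the kind of application vulnerability that enables data theft. As a minimal, hypothetical sketch (invented table and data, using Python's built-in sqlite3), the flaw and its standard mitigation look like this:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [("alice", "alice-secret"), ("bob", "bob-secret")])

    def lookup_vulnerable(name):
        # String concatenation lets attacker-controlled input rewrite the query.
        query = "SELECT secret FROM users WHERE name = '" + name + "'"
        return conn.execute(query).fetchall()

    def lookup_safe(name):
        # A parameterised query keeps the input as data, never as SQL.
        return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

    print(lookup_vulnerable("alice"))        # [('alice-secret',)] - intended behaviour
    print(lookup_vulnerable("' OR '1'='1"))  # every secret in the table: data theft
    print(lookup_safe("' OR '1'='1"))        # [] - the attack string matches no name

Note for testers: both versions pass 'happy path' tests with identical results; only adversarial inputs separate them, which is why security testing needs its own techniques.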

Emma Langman, Progression Partnership Ltd: Managing Clients who Test your Patience
In this session we will be looking at how to avoid testing conversations with tricky customers that push the limits of any tester or analyst's patience with their constant chasing and demands. We'll be sharing simple but powerful methods to help head this problem off at the pass. These include visual, high-level updates as well as some other effective influencing and communication skills. We'll also explore the Andon Cord metaphor to help clients understand the importance of test-driven development and project-long analysis and support. Expect a high-energy, entertaining and content-rich session.

Brindusa Gabur, Independent Agilist: Death by User Story
User Stories are a universally adopted technique for agile teams. They represent customer requirements that are flexible, support rapid change, help to mitigate risk, and enhance communication within the team and across the company. They are easy to create and refactor, and they provide a live 'overview' of the product as it unfolds. Is it just me, or does this sound like an amazing drug commercial with a lot of hidden small print?

Many companies, in their quest to become faster and better, have suffered negative consequences from their user story practice: wasted time and effort, rework and bugs and, ultimately, failure, disappointed customers and financial losses. They face resistance to change and little or no communication; frustration builds, and confidence in the use of user stories, and ultimately in their ability to be agile, is damaged.

In this session, we are going to explore the pitfalls of user stories and discuss practices and techniques for improving the shared understanding needed between business people and development teams. We are going to tackle concerns that trouble many agile teams: How do I start writing user stories? How much should be written, and by whom? How do we maintain a correct overview of the product as it unfolds? How do we ensure the consistency and quality of our product?

Niels Malotaux, Project Coach, N R Malotaux – Consultancy: Who is the customer of testing?
If I ask testers: "Who is the main customer of testing?" I get many answers, but hardly ever the answer I think is right. Once I guide testers to that answer, they usually recognise with a shock that they have been focusing on the wrong audience. How can we do a good job if our focus is in the wrong direction? In this group we'll discuss the question; I'll guide you to the answer I think is inevitable and then, if you agree, we'll discuss the consequences of this recognition. If you disagree, we can have a valuable discussion as well.

Jonathan Pheils, Transition & Quality Assurance Manager, P&O Ferries and Tim Chinchen, Solution Consultant, Compuware: Implementing a New Generation Testing Process
This session describes how a traditional travel company implemented, and now uses, a new approach to its Test and Quality management processes to improve the performance and availability of business-critical applications. Jonathan will explain how P&O Ferries:

  • managed their test processes previously, with real-life examples of the problems and challenges faced
  • used the detailed diagnostic and monitoring capabilities of Compuware dynaTrace to solve them
  • resolved disputes with third parties and informed those all-important go-live decisions

Jonathan will explain the technical and business improvements that were possible with dynaTrace and how it fits in the overall quality process.

Tim will provide an overview of the techniques that can be used to tackle the challenge of performance engineering. He will show how applying transactional analysis can improve the value of system and load testing across the development lifecycle.

Paul Gerrard, Gerrard Consulting & Gojko Adzic, Neuri Ltd: Pulling Testing into 2013 Kicking and Screaming
Gojko and Paul will conduct a dialogue that evaluates the pressures and fault-lines in our industry that we think make change inevitable. The need to deliver faster and save money are ever-present, but the opportunities for software teams to take advantage of the redistribution of testing are there for all to see. What changes do we expect to see in 2013? How do testers react to, take advantage of or lead these changes? Expect to be challenged (and entertained).

Ingo Phillipp, Tricentis: You are Doing Too Much Testing
It is common practice today for test managers to report on the progress of testing by presenting a list of existing test cases, stating the number of test cases executed and providing additional information on the execution results (green and red) of these test cases. The same results in the form of "x test cases OK, y test cases failed", however, may not accurately portray the business risks of releasing the software.

Thus, the efficiency and effectiveness of a test case portfolio cannot be gauged in terms of quantity; it must instead be measured in terms of test coverage. So far there have been no suitable methods on the market for completely determining the test coverage achieved.

The "Inner Values" concept from Tricentis offers, for the first time, a meaningful and easy-to-use method for determining the contribution of each individual test case to the overall test coverage, directly related to value and risk.

In this session we will discuss the principles and the challenges of using such a concept to effectively reduce the size of the test suite.
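
Tricentis does not publish the internals of the Inner Values method, so the following is only a generic, hypothetical sketch of the underlying idea: score each test case by the incremental, risk-weighted coverage it adds, so that redundant tests surface with a contribution of zero. All weights and test-to-requirement mappings below are invented.

    # Hypothetical sketch: rank test cases by incremental risk-weighted coverage.
    risk_weight = {"login": 5.0, "payment": 8.0, "search": 2.0, "reporting": 1.0}

    tests = {
        "TC1": {"login", "search"},
        "TC2": {"login"},               # fully redundant once TC1 is chosen
        "TC3": {"payment", "reporting"},
        "TC4": {"search", "reporting"},
    }

    covered, ranking, remaining = set(), [], dict(tests)
    while remaining:
        # Greedily pick the test adding the most not-yet-covered risk weight.
        name, reqs = max(remaining.items(),
                         key=lambda kv: sum(risk_weight[r] for r in kv[1] - covered))
        ranking.append((name, sum(risk_weight[r] for r in reqs - covered)))
        covered |= reqs
        del remaining[name]

    for name, gain in ranking:
        print(f"{name}: incremental risk coverage {gain}")
    # Tests whose incremental gain is zero (TC2 and TC4 here) add no new coverage
    # and are candidates for pruning - the "too much testing" of the title.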

Jonathan Pearson, Original Software: Making Automation Work in an Agile World

Agile developments are all about speed and adaptation, and are becoming ever more present in the application lifecycle landscape. With such short cycle times, it is increasingly important to gain a competitive advantage through timely and thorough QA processes. Yet burdening agile developments with long-winded, repetitive manual regression testing is both time consuming and resource heavy, requiring larger teams; the alternative is coverage that falls short of what is required. Typically this leads to a need for automation, which either requires specialist technical resources to create and maintain, or pulls developers off production code to join the effort of building automation.

In this session we will explore some of the pitfalls and challenges of automating effectively in an agile environment and, through the use of innovative software solutions, how that technical overhead may no longer be necessary to ensure this is done efficiently.

Thomas Ripoche, Neotys: Just How Different is Mobile Performance Testing?
Mobile usage is predicted to overtake desktop usage in the next two years. Add to this the sharp rise in the number of mobile devices and the user expectation that mobile response will be faster than desktop web access, and you have a very demanding landscape.

How do you test the performance of your websites & apps in an environment of multiple operating systems, browsers, mobile service providers and varying network conditions?

In this session Neotys will cover all of the key aspects of testing mobile applications and demonstrate how you should do this.

You will learn:

  • How mobile bandwidth and latency affect server performance and response times (see the sketch below).
  • How to test for different operating systems and browsers.
  • The effect of running a mix of Android and iOS on your server response times.
  • How you can easily test the performance of various mobile client configurations on your servers.

If you thought mobile performance testing was straightforward - think again!
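
As a rough, back-of-the-envelope illustration of that first bullet (all numbers invented), the floor on a mobile response time can be modelled as round trips plus transfer time, which is why the same page can be an order of magnitude slower over a 3G link than over Wi-Fi:

    # Rough, illustrative model of minimum response time over a mobile link.
    # Numbers are invented; real stacks (DNS, TLS, radio wake-up) add more delay.
    def min_response_time(rtt_s, payload_bytes, bandwidth_bps, round_trips=2):
        # round_trips approximates the TCP handshake plus the HTTP request/response
        return round_trips * rtt_s + (payload_bytes * 8) / bandwidth_bps

    # 3G-ish link: 150 ms RTT, 1 Mbit/s, 200 KB page  -> about 1.9 s
    print(f"3G:    {min_response_time(0.150, 200_000, 1_000_000):.2f} s")
    # Wi-Fi: 20 ms RTT, 20 Mbit/s, same page          -> about 0.12 s
    print(f"Wi-Fi: {min_response_time(0.020, 200_000, 20_000_000):.2f} s")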

Matt Robson, Mastek and Chris Ambler, NMQA: Be Agile or Do Agile
All too often the term "Agile" denotes a dogmatic method and an almost religious adherence to a particular mode of change delivery. However, some of the most effective delivery can come from mixing and matching the best tools and techniques, appropriate to the risks to be managed and mitigated, and to the constraints of the delivery environment. Equally, Agile techniques and processes can contribute greatly to other methods at a very practical level. There are as many Waterfall zealots out there as there are Agile extremists.

Understanding overall delivery context, what is to be delivered and how, and then the capability available to deploy in support of this delivery can help shape an effective testing method both in terms of cost effectiveness (“bang for buck”) and testing effectiveness, finding defects as close to the point of injection as possible.

So if “Agility” is the goal, is this slavishly following a method, or is it a state of mind, leadership culture and approach that “is” agile rather than blindly “doing” Agile?

This is intended to be a lively debate with plenty of audience participation.

Rob Lambert, New Voice Media: Moving to Weekly Releases – Some Problems, Solutions and Ideas
During this session I will present some of the problems we encountered when we moved from yearly releases to weekly releases and how we addressed some of these problems. I'll also discuss some of the ideas behind why we moved to weekly releases, the benefits it has brought us and why we need to keep challenging ourselves. We'll then have an open and lively discussion about rapid releasing, its pros and cons and some of the challenges it presents.

Alan Laverack, BT Plc: How do we contract test services to get ‘the best bang for the buck’?
Am I getting value for money from my testing? There are no problems in Production, so am I over-testing and paying too much? I have loads of testers, but why isn't the quality in Production improving? As leaders of testing, we all encounter trends within our programmes that seem hard to shift. While we have the experience to take action to make improvements, we find the end result will rarely be what we hoped. But why? This session will look at the fundamental holy grail test teams work against... their contract! We will explore how simple and how complex we can make the contract, but will we get the best bang for the buck? We will look at the differences in contracting a test team of 10 people versus one of 3,000. Can there be a nirvana where both clients and test service organisations succeed, or does one party always have to 'win'?

Chris Thompson and Ash Gawthorp, The Test People: True Performance Testing
Web performance testing is commonly based on the HTTP request/response paradigm, with a significant number of commercial and open source tools available that allow the recording and customisation of HTTP traffic so it can be played back at volume, simulating load on the server side. Performance testing of other technologies typically follows the same approach, with traffic captured and played back at the protocol layer.

An increasing number of applications either break this traditional paradigm (e.g. bi-directional WebSockets, server-to-client push technologies) or are simply becoming so complex that "scripting" at the protocol level to simulate a user of the application's interface is too time consuming and complex, adding significant risk, or is even impossible due to encryption or the use of custom binary protocols. In addition, as we see a shift back towards rich rather than thin clients, measuring and understanding performance at the user interface layer is becoming increasingly important in understanding how your application performs, not just on the server side but also from a user experience perspective.

This presentation contrasts the traditional methodology of performance testing with an alternative, whereby driving real application clients at scale enables "true" end-to-end performance to be tested and measured whilst solving the challenges posed by emerging and increasingly complex technologies. Finally, the presentation ends with how to apply a balanced, multi-faceted approach using a blend of both methodologies, ultimately reducing costs and time-to-test.
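
For context, here is a minimal sketch of the protocol-level approach the presentation questions: replaying a single HTTP request from many concurrent virtual users and reporting response-time percentiles. The target URL and volumes are placeholders. Everything the abstract highlights - WebSockets, push, encrypted or binary protocols, client-side rendering cost - is invisible at this layer.

    # Minimal protocol-level load sketch: replay one HTTP request at volume
    # and measure server-side response times. URL and volumes are placeholders.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"   # placeholder target
    USERS, REQUESTS_PER_USER = 10, 20

    def virtual_user(_):
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            timings.append(time.perf_counter() - start)
        return timings

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        samples = sorted(t for user in pool.map(virtual_user, range(USERS)) for t in user)

    print(f"requests: {len(samples)}, "
          f"median: {samples[len(samples) // 2] * 1000:.0f} ms, "
          f"95th percentile: {samples[int(len(samples) * 0.95)] * 1000:.0f} ms")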