INDUSTRY

Financial Banking

PROJECT

Implementing Process, Approach and Test Tooling in a new organisation where Non-Functional Testing (NFT) had not previously been carried out.

SYNOPSIS

Testing Performance / Fimatix - Performance Testing / Non-Functional Testing Case Study

THE BACKGROUND

The customer was a relatively new entity, having split from one of the UK’s largest banks for compliance reasons. It was obliged to pay a licence fee to continue using legacy applications, and wanted to develop its own newer, better-performing applications to support the business now and into the future.

THE CHALLENGES

Performance Testing had not previously been carried out, but it was now seen as essential, not only to prove the concept of some transformation components but also to validate performance prior to release into production.

Non-Functional Requirements (NFRs) did not exist, and although Volumetric Workload data was available, it was not coherently organised: it did not readily map to the structure of the new application platform and was therefore largely unusable.

While system design documentation was available, it was out of date, as the design had changed faster than the documentation could be maintained. New technology, especially around the storage of large volumes of data, constituted a high performance risk and posed challenges around how it could be tested. The appropriate testing tools and processes were not in place, and with tight delivery deadlines it would prove difficult to put these in place whilst moving forward with performance validation.

THE SOLUTION

A Performance Test Lead was deployed for a five-week Discovery Phase to define the scope, approach and deliverables for the Design Phase of Performance Testing. This included a review of the 12 key applications being delivered over the following 15 months, discussions and interviews with key project personnel, and an assessment of dates and timescales against the delivery model that would be used.

The activities centred on the following:

•  Understanding the architecture and design of each component, its interfaces, and how it communicated with upstream and downstream components.

•  Obtaining, analysing, organising and documenting volumetric workload information, focusing on peak processing times (a sketch of the kind of peak-rate calculation this feeds is shown after this list).

•  Planning the build and execution phases, including monitoring back-end components where no user interface existed, and generating component-level performance test plans.

•  Evaluating performance testing tools, notably Green Hat, which was used to stub environments, and OATS for front-end and messaging-based performance testing (an illustrative stubbing sketch follows this list).

•  Creating large volumes of functionally accurate test data that could be used to drive performance tests and populate databases in test environments (a data-generation sketch also follows this list).
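To illustrate the kind of analysis behind the volumetric work, the sketch below converts a peak-hour transaction volume into a per-second throughput target with a growth allowance. The application names, volumes and growth factor are invented for illustration and are not the customer's figures.

```python
# Illustrative only: application names and volumes are invented, not the
# customer's actual volumetric data.
PEAK_HOUR_VOLUMES = {
    "payments_gateway": 180_000,   # transactions in the peak hour
    "statement_service": 45_000,
    "batch_feed_ingest": 600_000,
}

GROWTH_FACTOR = 1.25       # headroom for projected growth over the delivery period
SECONDS_PER_HOUR = 3_600

def target_tps(peak_hour_volume: int, growth: float = GROWTH_FACTOR) -> float:
    """Convert a peak-hour volume into a per-second throughput target."""
    return peak_hour_volume * growth / SECONDS_PER_HOUR

for app, volume in PEAK_HOUR_VOLUMES.items():
    print(f"{app}: {target_tps(volume):.1f} tx/s target")
```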
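The stubbing activity can be pictured with the generic sketch below: a small HTTP stub that returns a canned response after a controlled delay, standing in for a downstream component that is unavailable or out of scope. This illustrates the concept only; it does not use or reflect the Green Hat or OATS APIs.

```python
# A minimal, generic HTTP stub: canned response plus controlled latency to
# mimic a downstream component. Latency and payload values are assumptions
# chosen purely for illustration.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

STUB_LATENCY_SECONDS = 0.05                       # simulated downstream response time
CANNED_RESPONSE = {"status": "ACCEPTED", "reference": "STUB-0001"}

class StubHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume the request body so the client connection behaves normally.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        time.sleep(STUB_LATENCY_SECONDS)          # hold the response to mimic real latency
        body = json.dumps(CANNED_RESPONSE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), StubHandler).serve_forever()
```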
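The data-generation work followed the same principle as the sketch below: producing synthetic but referentially consistent records in bulk so they can both populate test databases and drive realistic performance test input. Field names and volumes here are assumed purely for illustration.

```python
# A minimal sketch of bulk test-data generation: synthetic customers and
# accounts written to CSV for loading into a test database. All field names
# and volumes are invented for illustration.
import csv
import random
import string

NUM_CUSTOMERS = 100_000
ACCOUNTS_PER_CUSTOMER = (1, 3)          # each customer gets 1-3 accounts

def sort_code() -> str:
    return "-".join(f"{random.randint(0, 99):02d}" for _ in range(3))

def account_number() -> str:
    return "".join(random.choices(string.digits, k=8))

with open("customers.csv", "w", newline="") as cust_file, \
     open("accounts.csv", "w", newline="") as acct_file:
    customers = csv.writer(cust_file)
    accounts = csv.writer(acct_file)
    customers.writerow(["customer_id", "surname", "date_of_birth"])
    accounts.writerow(["account_id", "customer_id", "sort_code",
                       "account_number", "balance_pence"])

    account_id = 1
    for customer_id in range(1, NUM_CUSTOMERS + 1):
        customers.writerow([customer_id, f"Surname{customer_id}", "1980-01-01"])
        for _ in range(random.randint(*ACCOUNTS_PER_CUSTOMER)):
            # Each account references a generated customer, keeping the data
            # referentially consistent so functional validation still passes.
            accounts.writerow([account_id, customer_id, sort_code(),
                               account_number(), random.randint(0, 10_000_000)])
            account_id += 1
```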

THE OUTCOME

Large volumes of data and information were analysed to form the basis of the Performance Testing requirements, approach and scope. The Volumetric Workload model was fully documented and easily maintainable, capturing current workload volumes and allowing developing trends to be observed. This analysis and planning underpinned the performance testing for the transformation project, allowing the customer to confidently predict expected performance in production.

THIS EXAMPLE DEMONSTRATES

•  Testing Performance’s ability to take raw data, architecture diagrams and system designs and build them into a coherent approach to Performance Testing.

•  The ability to plan and deliver Performance Testing at a component level early in the Software Development Lifecycle (SDLC), before component integration had occurred.

•  The ability to structure and plan Performance Testing around clearly defined milestones and deliverables.