Performance Testing Software Systems
Analyzing Performance Test Data
Overview:
Performance testing frequently generates very large volumes of data, and that data usually requires significant analysis before findings can be made and recommendations delivered. To complicate matters, even though the volume of data is large, the number of tests conducted is typically too small for most data-reduction methods to be statistically valid. Finally, many of the statistical methods that are frequently applied to performance test data are misused or misunderstood.
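As a quick illustration of that last point, consider the common practice of summarizing response times with an arithmetic mean. The following minimal sketch, using hypothetical response-time data, shows how a single slow outlier makes the mean a misleading summary of what most users actually experienced:

```python
import statistics

# Hypothetical response times (seconds) from one test run; the last
# value is the kind of slow outlier load tests routinely produce.
response_times = [0.8, 0.9, 1.0, 1.1, 1.2, 1.0, 0.9, 1.1, 1.0, 9.5]

mean = statistics.mean(response_times)      # pulled upward by the outlier
median = statistics.median(response_times)  # what the typical user saw
p95 = statistics.quantiles(response_times, n=20)[-1]  # ~95th percentile

print(f"mean   = {mean:.2f}s")    # 1.85s -- suggests a slow system overall
print(f"median = {median:.2f}s")  # 1.00s -- the typical experience
print(f"95th   = {p95:.2f}s")     # reports the tail behavior separately
```

Reporting a median and a high percentile alongside (or instead of) the mean avoids this particular misuse; the workshop examines this and many similar pitfalls.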
This unique workshop extends the Performance Testing Software Systems heuristic approach, which focuses on mitigating risks to the business and satisfying end users in commercially driven software development environments. This approach marries the software testing insights of James Bach, Rob Sabourin, Cem Kaner, and many other members of the Context-Driven School of software testing with the performance testing insights of Alberto Savoia, Ross Collard, Roland Stens, and the rest of the WOPR (Workshop On Performance and Reliability) community. The approach has a track record of adequately mitigating business risk in time to keep pace with the commercial aspects of a project. The Microsoft patterns & practices book Performance Testing Guidance for Web Applications by J.D. Meier, Scott Barber, Carlos Farre, Prashant Bansode, and Dennis Rea complements the material presented in this workshop.
This version of the PTSS series of workshops is intended for anyone who analyzes performance test results. It focuses on how to make sense of performance test data in order to improve findings and recommendations that help achieve business objectives, reduce project risk, and avoid bad press. Further, it teaches methods for reporting performance test results visually, in ways that are less prone to misinterpretation than complex statistics the audience is unlikely to understand. Finally, this workshop provides the knowledge you need to use statistics correctly to understand the data.
This 1-day workshop is primarily offered as an on-site course, but we sometimes work with organizations that arrange to offer it publicly. Contact us for more information.
Course Objectives:
In this course, you will learn:
- How to identify criteria to analyze against
- How to quantify user perception for analysis
- Key statistical principles
- How to analyze collaboratively
- How analysis drives future test design
- How to report the analysis against business objectives
Course Outline:
Introduction
- Overview
- Why Analysis is Challenging
CORE PRINCIPLE: Criteria
- Understanding Context
- Goals
- Requirements
- Thresholds
- Constraints
- The Pain Chart
- The Oracle Problem
CORE PRINCIPLE: Statistics
- Overview
- Data Distribution
- Outliers
- Significance
- Confidence (see the sketch after this list)
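As a rough illustration of how these principles show up in practice, the sketch below summarizes a hypothetical sample of response times, flags outliers with the common 1.5 × IQR rule, and computes a simple confidence interval for the mean. The data, the outlier rule, and the normal-approximation interval are illustrative assumptions, not the workshop's prescribed techniques:

```python
import statistics

# Hypothetical response times (seconds) collected across repeated runs.
samples = [1.02, 0.97, 1.10, 1.05, 0.99, 1.08, 4.20, 1.03, 0.96, 1.07]

# Data distribution: quartile summary of the sample.
q1, q2, q3 = statistics.quantiles(samples, n=4)
print(f"Q1={q1:.2f}s  median={q2:.2f}s  Q3={q3:.2f}s")

# Outliers: the widely used 1.5 * IQR rule (one of several possible rules).
iqr = q3 - q1
outliers = [x for x in samples if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]
print(f"outliers: {outliers}")  # flags the 4.20s run for investigation

# Confidence: a 95% interval for the mean of the remaining observations,
# using the normal approximation (a t-based interval would be wider at
# this sample size, underscoring how few runs most projects really have).
clean = [x for x in samples if x not in outliers]
mean = statistics.mean(clean)
sem = statistics.stdev(clean) / len(clean) ** 0.5
print(f"mean = {mean:.2f}s, 95% CI ~ [{mean - 1.96 * sem:.2f}s, {mean + 1.96 * sem:.2f}s]")
```

Whether a flagged run is noise or a genuine finding is itself an analysis decision, which is why the Analyze section below treats outliers, significance, and repeatability together.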
CORE PRINCIPLE: Analyze
- Overview
- A Collaborative, Cross-Functional Process
- Follow-on Testing
- Feeds Re-Design
- Configurations
- Significance & Repeatability
- Trends
- Outliers
- Patterns
- Compliance
- Accuracy
- Resources & Times
- Errors & Functionality
CORE PRINCIPLE: Report
The most effective reports are:
- Timely
- Relevant
- Audience Appropriate
- Visual
- Intuitive
- Supported