This is a companion series to the User Experience, not Metrics series. It addresses what happens after those initial results are collected, the part that takes a human brain to accomplish. We will explore what the results mean and what can be done to improve them. We will take the next step beyond simply testing and explore how to identify specific, fixable issues that affect end-user experience, scalability, and confidence in our software applications.
Performance Testing and Analysis is the discipline dedicated to optimizing the most important application performance trait: user experience. In this series of articles, we will explore the performance engineering activities that lie beyond performance testing. We will examine the process by which software is iteratively tested, using Rational Suite TestStudio, and tuned with the intent of achieving desired performance, following an industry-leading performance engineering methodology that complements the Rational Unified Process. This first article introduces the high-level concepts used throughout the series and gives an overview of the articles that follow.
- Part 1: Introduction
- Part 2: A Performance Engineering Strategy
- Part 3: How Fast Is Fast Enough?
- Part 4: Accounting for User Abandonment
- Part 5: Determining the Root Cause of Script Failures
- Part 6: Interpreting Scatter Charts
- Part 7: Identifying the Critical Failure or Bottleneck
- Part 8: Modifying Tests to Focus on Failure or Bottleneck Resolution
- Part 9: Pinpointing the Architectural Tier of the Failure or Bottleneck
- Part 10: Creating a Test to Exploit the Failure or Bottleneck
- Part 11: Collaborative Tuning
- Part 12: Testing and Tuning Common Tiers
- Part 13: Testing and Tuning Load Balancers and Networks
- Part 14: Testing and Tuning Security