This is Scott's first series of articles. He starts by asking: how many times have you surfed to a web site to accomplish a task, only to give up and go to a different web site because the home page took too long to download? "46% of consumers will leave a preferred site if they experience technical or performance problems" (Juniper Communications). In other words, "If your web site is slow, your customers will go!" This is a simple concept that every Internet user is familiar with. When this happens, is your first thought, "Gee, I wonder what the throughput of the web server is?" Certainly not. Instead, you think, "Man, this is SLOW! I don't have time for this. I'll just find it somewhere else." Now consider this: what if it were YOUR web site that people were leaving because of performance?
Face it, users don't care what your throughput, bandwidth, or hits-per-second metrics prove or don't prove; they want a positive user experience. There are a variety of books on the market that discuss how to engineer maximum performance. There are even more books that focus on making a web site intuitive, graphically pleasing, and easy to navigate. The benefits of speed are discussed, but how does one truly predict and tune an application for an optimized user experience? One must test the user experience first-hand! There are two ways to accomplish this. One could release a web site straight into production, collect data, and tune the system there, with the great hope that the site doesn't crash or isn't painfully slow. The wise choice, however, is to simulate actual multi-user activity, tune the application, and repeat until the system is tuned, all before placing the site into production (a rough sketch of such a simulation follows the part list below). Sounds like a simple choice, but how does one simulate actual multi-user activity accurately? That is the question this series of articles attempts to answer.
- Part 1: Introduction
- Part 2: Modeling Individual User Delays
- Part 3: Modeling Individual User Patterns
- Part 4: Modeling Groups of Users
- Part 5: What should I time and where do I put my timers?
- Part 6: What is an outlier and how do I account for one?
- Part 7: Consolidating Test Results
- Part 8: Choosing Tests and Reporting Results to Meet Stakeholders' Needs
- Part 9: Summarizing Across Multiple Tests
- Part 10: Creating a Degradation Curve
- Part 11: Handling Authentication and Session Tracking
- Part 12: Scripting Conditional User Path Navigation
- Part 13: Working with Unrecognized Protocols
(The full series is also available as a free PDF download.)
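To make the idea of simulating multi-user activity concrete, here is a minimal sketch in Python. It is not from the series itself, and the target URL, user count, request count, and think-time range are all assumptions chosen for illustration; a real load-testing tool offers far more fidelity and control. Each simulated user fetches the page several times with a randomized pause between requests, and the script reports the response times the users actually experienced.

```python
# A minimal sketch of simulating multi-user activity before production.
# TARGET_URL, VIRTUAL_USERS, REQUESTS_PER_USER, and the think-time range
# are hypothetical values for illustration only.

import random
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://www.example.com/"   # hypothetical site under test
VIRTUAL_USERS = 10                       # concurrent simulated users
REQUESTS_PER_USER = 5                    # page requests per user

def simulate_user(user_id: int) -> list:
    """Fetch the page repeatedly, pausing like a real user between requests."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
            response.read()                      # download the full page
        timings.append(time.perf_counter() - start)
        time.sleep(random.uniform(1.0, 5.0))     # randomized "think time"
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(simulate_user, range(VIRTUAL_USERS)))
    all_times = [t for user_times in results for t in user_times]
    print(f"requests: {len(all_times)}, "
          f"avg: {sum(all_times) / len(all_times):.2f}s, "
          f"max: {max(all_times):.2f}s")
```

Even this toy example hints at the questions the series takes up: how long should the think time be (Part 2), what paths should each user follow (Parts 3 and 4), and what exactly should the timers measure (Part 5).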