
Acceptable Response Times


It feels like hardly a single day has passed in the past six years that someone hasn't asked me this question: "What is the industry standard response time for a Web page?" And in the past six years, the answer hasn't changed, not even a little bit. So if the answer hasn't changed, why am I still getting asked the question on virtually a daily basis?

That is a quote from Acceptable application response times vs. industry standard, an article by Scott Barber published by TechTarget on March 13, 2007.

As I read Scott's article, I found myself in strong agreement with every point. By the end, I realized that Scott had echoed and summarised many previous posts of mine. So I have used Scott's words as a framework to collect together references to my previous articles on the subject of performance objectives -- what they should be, and how you should set them:

1. There are no industry standards for Web page response times:

How could there be? Think about how you use the Web. How long were you willing to wait for this page to load? How long are you willing to wait to view your family's online photo album? How long are you willing to wait for your tax software to confirm that your return has been submitted successfully? Are those numbers the same when you are at home as when you are at work? How about when you are using the wireless connection in an airport?

--Scott Barber, TechTarget, March 13, 2007.

This is just what I was getting at in my recent post on Performance is Always Subjective.

2. There are some long-understood cognitive thresholds:

Your actual numbers don't really matter. The point is that no one number could possibly be the answer -- at least until Web pages start regularly having response times fewer than .25 seconds. Until then, what you are measuring is a combination of your current expectations about Web page response time and your determination to accomplish tasks via the Web.

This is because by the early 1980s cognitive psychologists had already determined that a delay of longer than one quarter of a second between an action and a response, on a computer or otherwise, would noticeably impact human performance, increasing error rate and increasing the probability of switching to a competing task. So, as far as I'm concerned, until our Web sites make it to that .25 second barrier, what matters more than agreeing on a standard is staying ahead of the expectations of our users.

--Scott Barber, TechTarget, March 13, 2007.

In 1968, Robert B. Miller of IBM studied cognitive thresholds for different levels of human attention during human-computer interaction (HCI). Computing technology may have changed since then, but people's brains still work the same way. For a few more details, see my post on The Miller Response-Time Test.
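The thresholds above lend themselves to a simple sketch. This is purely illustrative: the 0.25-second figure is the one cited in the article, while the other bands are my own rough assumptions, not values quoted from Miller's 1968 paper.

```python
# Bucket a measured response time against cognitive thresholds.
# The 0.25 s figure is from the article; the 1 s and 10 s bands are
# illustrative assumptions, not values taken from Miller's paper.
def classify_response(seconds):
    if seconds <= 0.25:
        return "below the cognitive threshold -- feels instantaneous"
    if seconds <= 1.0:
        return "noticeable delay, but the user's flow is preserved"
    if seconds <= 10.0:
        return "attention strained -- feedback such as a progress bar helps"
    return "likely abandonment or a switch to a competing task"

for t in (0.1, 0.8, 4.0, 12.0):
    print(f"{t:4.1f} s -> {classify_response(t)}")
```

The interesting point is not the exact cutoffs but that the bands are fixed by human cognition, while our expectations within them keep shifting.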

3. The so-called "8-second rule" was an over-hyped over-generalization:

For years, the most commonly quoted standard was the so-called "8-second rule." This was based on some research Nielsen Media conducted in the late 1990s, which concluded that most Internet users wouldn't give up on the task they were trying to accomplish as long as the Web site responded in 8 seconds or fewer. While that was certainly an interesting piece of research, it had nothing to do with user satisfaction nor was it ever intended as an industry standard.

--Scott Barber, TechTarget, March 13, 2007.

Actually, the basis for any such 8-second standard was even more tenuous. In fact, only half the test subjects abandoned after 8.5 seconds, and site feedback (like animated cursors or progress bars) kept them around for much longer. See The Miller Response-Time Test again.

4. An average response-time experience is not necessarily acceptable:

What it did measure was the degree to which people had come to accept that, if they wanted to accomplish a task on the Web, 8 seconds was how long it was bound to take over their 33.6 kbps modems. I can assure you that if those users had been presented with an option of one site with an 8-second response time and a competing site with a 3-second response time, they would have flocked to the 3-second site without a second thought.

--Scott Barber, TechTarget, March 13, 2007.

Customers' expectations determine their perceptions, and their expectations are continually evolving based on their experience of other sites. See WYSIWYG, or No Site is an Island.
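One way to see why an "average" experience is not necessarily an acceptable one: a mean response time can look respectable even when a sizeable minority of page views are painfully slow. A small illustration, using made-up sample timings (not data from either study):

```python
# Illustrative only: fabricated response-time samples, in seconds,
# showing how a mean can hide a slow tail that percentiles expose.
import statistics

samples = [0.8, 0.9, 1.0, 1.1, 1.2, 1.0, 0.9, 8.5, 9.0, 10.5]

mean = statistics.mean(samples)
p90 = statistics.quantiles(samples, n=10)[8]  # 90th percentile
print(f"mean = {mean:.2f} s, 90th percentile = {p90:.2f} s")
```

Here the mean is under 4 seconds, yet one user in ten is waiting far longer, which is why performance objectives are usually better stated as percentiles than as averages.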

5. A new "4-second rule" may or may not describe average experience today:

In November 2006, a new study by Akamai and Jupiter Research proposed replacing the 8-second rule, claiming that today ... four seconds is the maximum length of time an average online shopper will wait for a Web page to load before potentially abandoning a retail site. The announcement was headlined:

Akamai and Jupiter Research Identify '4 Seconds' as the New Threshold of Acceptability for Retail Web Page Response Times

Reviewing this, Scott writes:

As this claim cut the existing "rule" in half, I found it to be an intriguing finding, so I downloaded the whole report, only to find out that this "new rule" was determined by collecting 1,058 responses to the following survey question:

"Question: Typically, how long are you willing to wait for a single Web page to load before leaving the Web site? (Select one.)
A. More than 6 seconds.
B. 5-6 seconds.
C. 3-4 seconds.
D. 1-2 seconds.
E. Less than 1 second."

Clearly, this "new rule" is no more an industry standard than the Nielsen research from nearly a decade before. The Nielsen research was at least observationally accurate, if misused; this research simply demonstrates that we all learned the same rule for taking multiple choice tests in junior high school: "When you have no idea what the correct answer is, pick C; you might get lucky."

Try it yourself. Ask the person in the office next to you this question and see what his or her answer is. Then ask your guinea pig to surf the Web and find a Web page that loads in the same time bracket as his or her answer. Use your watch to see how close he or she is to estimating the load time. Do that with 10 people and see what kind of accuracy you get.

I have been doing performance testing long enough to know that Web surfers have no idea how long 4 seconds is. In fact, I promise that if someone were to sit down with those respondents and ask them to identify how many seconds various pages took to load, *most* of them would not get it right, and we would find that *most* of the wrong ones *think* a page takes longer to load than it actually does.

--Scott Barber, TechTarget, March 13, 2007.

I agree. If someone needs an overly broad and largely irrelevant generalization about Web site performance in 2007, I suppose 4 seconds is a better number than 8 seconds. For someone who remembers the old 8-second generalization, this new one is a reminder that Web technology is improving. It's like knowing that the world's population has roughly doubled from 3 billion to 6 billion since 1960 -- it's an approximation that's not very accurate today, and will be even less accurate by next year.

But that's about all it's useful for. As a standard objective, it would be better by far if the old rule would simply die and be forgotten altogether, and not be replaced by anything. Because as a performance goal, a suggested "4-second rule" is just as incorrect and misleading today as an "8-second rule" was 10 years ago.

I have not written anything about the Akamai/Jupiter Research proposal, so it's good to read Scott's article, which does a good job of debunking it.
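If you want to run Scott's watch-based experiment with something more trustworthy than a wristwatch, a few lines of standard-library Python will time a page fetch for you, which you can then compare with your guinea pig's estimate. A minimal sketch; the example URL is a placeholder, not a real endpoint:

```python
# Time a page fetch, to compare against a user's guessed load time,
# as in Scott's experiment above. Standard library only.
import time
import urllib.request

def measure_load(url):
    """Return wall-clock seconds to fetch the full response body."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()  # pull the whole body, as a browser would
    return time.perf_counter() - start

# Example (placeholder URL):
# print(f"loaded in {measure_load('http://example.com/'):.2f} s")
```

Note that this times only the raw HTML fetch, not images, scripts, or rendering, so a browser's perceived load time will be longer still; if anything, that widens the gap between what users guess and what actually happens.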

6. The relationship of expectations and experience is what determines user satisfaction:

The real question is not "What is the industry standard?" but rather "What response time will the users of my Web site or application find acceptable?" The challenge is that determining what your users are going to deem "acceptable" is both difficult and subject to significant changes over short periods of time. Software development shops don't want to do regular usability studies with groups of representative users because it is time-consuming and expensive.

For the most part, they don't have the resources or the training to conduct those usability studies even if they wanted to, which is why so many folks keep latching onto narrowly conducted anecdotal research and proclaiming a standard. The real problem is that defaulting to a faulty standard is actually more likely to lead people to develop and release Web sites that users find frustrating due to poor performance than if those same people just sat down and used the site, deciding whether performance was good enough based on how it felt.

--Scott Barber, TechTarget, March 13, 2007.

I think Scott hits the nail on the head here. People want to believe that a "rule" can supply them with a quick answer, because they don't want to do the hard work needed to come up with the right answer. But here's the problem: a 6 billion world population estimate is no help to someone whose work involves tracking population data. Similarly, if your work involves tracking customer satisfaction data, you can't rely on a standard approximation derived last year from other people's data.

For a site owner, there is simply no substitute for doing the necessary research to understand your customers' experiences, and to keep up with your competition. At UpRight Marketing, we call this customer development. See Delight, Satisfy, or Frustrate?.



Reader Comments (1)

For us, the acceptable Web page response time for content delivery depends on the target audience: a child playing PS3 online is not the same as an advertising agency viewing demo reels on animation studios' websites. These are different needs, with different deadlines for response times.

August 23, 2007 | Unregistered Commenter: dension mexico
