Collected thoughts about software and site performance ...
Web performance matters. Responsive sites can make the online experience effective, even enjoyable. A slow site can be unusable. This site is about online performance, how to achieve and maintain it, its impact on user experience, and ultimately on site effectiveness.
Entries about Articles and White Papers (17), in reverse date order:
Drucker on Effectiveness vs. Efficiency
There is surely nothing quite so useless as doing with great efficiency what should not be done at all.
-- Peter Drucker, 1963
This post highlights his incisive observation about the difference between effectiveness and efficiency. I have always found it to be especially memorable, and quoted it (twice) when discussing priorities and choices in my book about software performance. Unfortunately I got the source wrong, but thanks to Google I can now correct my mistake.
It appeared in Managing for Business Effectiveness, an article in the May/June 1963 edition of Harvard Business Review ("HBR"). You can also find it in a February 2006 HBR article -- What Executives Should Remember -- a collection of excerpts drawn from HBR articles by Drucker published between 1963 and 2004.
Five Scalability Principles
Don’t think synchronously, ...
The 12 Days of Scale-Out is a section of the MySQL site. It consists of a series of twelve articles, eleven of which are case studies describing large-scale MySQL implementations. But Day Six is a bit different -- it spells out five fundamental performance principles that apply to all application scaling efforts.
This subject is vitally important to MySQL, whose server replication and high availability features ... allow high-traffic sites to horizontally 'Scale-Out' their applications, using multiple commodity machines to form one logical database -- as opposed to 'Scaling Up', starting over with more expensive and complex hardware and database technology.
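The core idea of Scale-Out can be sketched in a few lines. The dispatcher below is a toy illustration, not a description of MySQL's own machinery: the hostnames are hypothetical, and a production deployment would route queries through a connection pool or proxy rather than a function like this. It simply shows the read/write split that makes horizontal scaling work -- writes go to one master, reads are spread across replicated copies.

```python
from itertools import cycle

# Hypothetical hosts; in a real Scale-Out deployment the replicas
# would be kept in sync by MySQL server replication.
WRITE_MASTER = "db-master"
READ_REPLICAS = ["db-replica-1", "db-replica-2", "db-replica-3"]

_replica_cycle = cycle(READ_REPLICAS)

def choose_host(sql: str) -> str:
    """Send writes to the master; spread reads across replicas round-robin."""
    is_read = sql.lstrip().upper().startswith("SELECT")
    return next(_replica_cycle) if is_read else WRITE_MASTER
```

Adding read capacity then means adding another commodity machine to `READ_REPLICAS`, rather than replacing the whole database server with bigger hardware.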
I know from first-hand experience that these claims are valid. At Keynote, my team used MySQL as the foundation for the Performance Scoreboard. In this data mart application, MySQL supports the continuous insertion of new measurements at the rate of several million per day, plus hourly aggregation into summary tables, plus the queries needed to support continually updated dashboard displays for every customer, plus any ad hoc queries generated by customers doing diagnostic investigations.
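The ingest-plus-roll-up pattern described above can be shown in miniature. The data and column shapes below are invented for illustration; in the real Scoreboard the equivalent work is done by SQL aggregation from the raw measurement tables into MySQL summary tables, which is what keeps the dashboard queries cheap.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw measurements: (timestamp, response_time_seconds).
measurements = [
    (datetime(2007, 6, 1, 9, 5), 1.2),
    (datetime(2007, 6, 1, 9, 40), 0.8),
    (datetime(2007, 6, 1, 10, 15), 2.0),
]

def hourly_summary(rows):
    """Roll raw measurements up into per-hour count and average --
    the same shape a summary table would hold."""
    buckets = defaultdict(list)
    for ts, value in rows:
        buckets[ts.replace(minute=0, second=0, microsecond=0)].append(value)
    return {
        hour: {"count": len(vals), "avg": sum(vals) / len(vals)}
        for hour, vals in buckets.items()
    }
```

Dashboards then read a handful of summary rows per hour instead of scanning millions of raw measurements.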
Latency, Bandwidth, and Station Wagons focused primarily on the limitations of network bandwidth, and the time required to transmit massive data volumes. While that is an interesting topic, and one that produces some surprising results (like the fact that FedEx is still faster than the Internet), it is not particularly relevant to the subject of Web performance, which depends on the time required to transmit many small files.
My post highlighted It's Still The Latency, Stupid, by William (Bill) Dougherty in edgeblog. Bill's title pays homage to a famous 1996 article by Stuart Cheshire about bandwidth and latency in ISP links, It's the Latency, Stupid.
Over a decade later, Bill points out, Cheshire's writings are still relevant: One concept that continues to elude many IT managers is the impact of latency on network design ... Latency, not bandwidth, is often the key to network speed, or lack thereof. This is especially true of the download speeds (or response times) of Web pages and Web-based applications. In this post I explain why, with references and examples to support my argument.
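Bill's point lends itself to a back-of-envelope model. The sketch below is illustrative only: the object count, object size, round-trip time, and bandwidth are assumed numbers, and the model ignores connection reuse, pipelining, and parallel downloads. It still captures why latency dominates when a page consists of many small files.

```python
def page_load_seconds(objects, bytes_per_object, rtt_seconds, bandwidth_bps):
    """Crude page-load model: each object costs one request/response
    round trip, plus the time to transfer its bytes over the link."""
    round_trip_cost = objects * rtt_seconds
    transfer_cost = (objects * bytes_per_object * 8) / bandwidth_bps
    return round_trip_cost + transfer_cost
```

With 40 objects of 10 KB each over a 5 Mbps link with a 100 ms round trip, the round trips cost 4.0 seconds while the transfer itself costs only 0.64 seconds. In this model, halving the latency saves 2.0 seconds; doubling the bandwidth saves just 0.32.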
Managing Response Time for Web Sites and Web Applications
Human beings don’t like to wait. We don’t like waiting in line at a store, we don’t like waiting for our food at a restaurant, and we definitely don’t like waiting for Web pages to load.
Those words open Web Page Response Time 101, an excellent article by Alberto Savoia. Although it was published in July 2001, it remains every bit as relevant and useful today. It does a really good job of explaining and summarizing two fundamental aspects of Web performance -- human behavior and site behavior.
Response Time Standards for Web Sites and Web Applications
It feels like hardly a single day has passed in the past six years that someone hasn't asked me this question: "What is the industry standard response time for a Web page?" And in the past six years, the answer hasn't changed, not even a little bit. So if the answer hasn't changed, why am I still getting asked the question on virtually a daily basis?
As I read Scott's article, I found myself in strong agreement with every point. By the end, I realized that Scott had echoed and summarized many previous posts of mine. So I have used Scott's words as a framework to collect together references to my previous articles on the subject of performance objectives -- what they should be, and how you should set them:
Most hosted Web Analytics vendors charge you according to page views -- not unreasonable since each view is a call to their server and a new record in their database. But what happens when Ajax and other rich applications eliminate the notion of a "page"?
That's from Web 2.0 Changes Web Analytics Pricing Models, a recent post by Phil Kemelor in CMP's Intelligent Enterprise Weblog. Describing how he sees Web Analytics (WA) vendors adapting to Web 2.0, Phil continues ...
Performance Management (SLM) Challenges for Web 2.0, Ajax, and Rich Internet Applications (RIAs)
Last week, TechTarget published an article by Patrick Lightbody about the performance of Web 2.0 applications. The article's technical core -- which I review below -- is a useful checklist of ten recommendations for developing and testing Web 2.0 applications with performance in mind.
For the full article, see Ten ways to improve testing, performance of Web 2.0 applications.
Because I believe in systematic performance engineering, I am always pleased when writers advocate proactive approaches to application performance. It's the only rational way to ensure acceptable performance in production applications. So it's too bad that Patrick feels the need to justify his good advice by surrounding it with an introduction and conclusion that suffer from all the worst features of Web 2.0 coverage. A few half-truths are buried in an amalgam of excessive hype, false claims, meaningless analysis, and an optimism that underestimates the real technical challenges.
What is performance testing? That seems like a silly question, doesn't it? I mean, we've all seen definitions for performance testing. We've conducted performance tests -- or been on projects where performance testing is conducted. But what is it really? And why is it that even when there seems to be obvious confusion about what performance testing is and is not, people seem hesitant to step back and ask "What do you mean when you refer to performance testing?"
I introduced the author of the paragraph above, Scott Barber, in my recent post here about Performance Engineering. The full article, What is Performance Testing?, contains some interesting observations.
As a technical writer, my aim is not to publish a stream of original thoughts, but to sift, understand, highlight, explain, connect, and amplify the thoughts of others. So I'm always scouring the Web for good raw material.
Today I have added a new Recommendations page to the site. You'll find it listed in the sidebar, in the "related sites" section, just above the blogroll.
Three Key Performance Engineering Questions
What have you got?
What do you want?
How do you get there?
Performance testing is the discipline concerned with determining and reporting the current performance of a software application under various conditions. But there comes a time after the tests are run when someone who's reviewing the results asks the deceptively simple question: So what, exactly, does all this mean? This point beyond performance testing is where the capabilities of the human brain come in handy.