Collected thoughts about software and site performance ...
Web performance matters. Responsive sites can make the online experience effective, even enjoyable; a slow site can be unusable. This site is about online performance: how to achieve and maintain it, and its impact on user experience and, ultimately, on site effectiveness.
Entries about Performance Management (27), in reverse date order:
With over 30 application performance management (APM) tool vendors offering scores of products, buyers face hundreds of confusing choices. Compounding the problem, the lack of a common taxonomy, or standard APM nomenclature, makes cross-vendor product comparisons especially challenging.
To address this challenge, NetForecast has developed an APM tools framework anyone can use to define APM requirements and map them to vendor offerings. On June 30, 2010, Peter Sevcik will describe this framework in a Webinar hosted by the Apdex Alliance ...
Service level management (SLM) is the art and science of keeping application services running properly once in production. The key to successful SLM is the ability to use metrics that are linked to the business.
Apdex (Application Performance Index) is an open standard that defines a numerical measure of user satisfaction with the performance of enterprise applications. It converts many individual measurements into a single number on a uniform scale of 0-to-1 (0 = no users satisfied, 1 = all users satisfied).
Properly implemented, Apdex enables an organization to link application performance to business needs ...
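The Apdex calculation itself is simple: given a target response-time threshold T, samples at or below T count as satisfied, samples between T and 4T as tolerating, and anything slower as frustrated; the score is (satisfied + tolerating/2) / total. A minimal sketch in Python (the function name and sample data are illustrative):

```python
def apdex(response_times, t):
    """Compute an Apdex score for a list of response times (seconds)
    against target threshold t: samples <= t are 'satisfied',
    samples <= 4*t are 'tolerating', the rest are 'frustrated'."""
    satisfied = sum(1 for r in response_times if r <= t)
    tolerating = sum(1 for r in response_times if t < r <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times)

# Example: target threshold of 2 seconds
samples = [0.8, 1.5, 2.2, 3.0, 9.5]   # 2 satisfied, 2 tolerating, 1 frustrated
score = apdex(samples, 2.0)           # (2 + 2/2) / 5 = 0.6
```

The single 0-to-1 number is what makes Apdex useful for reporting: it can be compared across applications and time periods without exposing raw measurement detail.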
Do you subscribe to email newsletters? If you're like me, you get lots of them. New ones appear in my inbox every morning. They pile up, demanding to be read. In fact, they seem to breed like rabbits, producing new offspring -- when did I express an interest in Enterprise VOIP Security Architecture issues? Sometimes in a housekeeping splurge I delete a few dozen at once, suffering a momentary twinge of anxiety at having perhaps missed something important. So usually I skim them before hitting the delete button.
TechTarget's Search Software Quality service seems to be especially prolific, but is also a regular source of interesting references -- like TheServerSide.com, the subject of a recent note. According to the site's home page:
Java Performance Management for Large-Scale Systems
There are many classes of enterprise applications that have stringent performance and scalability requirements. TheServerSide.com has assembled a collection of resources to help you better design, develop, test and manage high performance, large-scale systems - learn new and innovative approaches for performance tuning, memory management, concurrent programming, JVM clustering and more.
Managing Response Time for Web Sites and Web Applications
Human beings don’t like to wait. We don’t like waiting in line at a store, we don’t like waiting for our food at a restaurant, and we definitely don’t like waiting for Web pages to load.
Those words open Web Page Response Time 101, an excellent article by Alberto Savoia. Although it was published in July 2001, it remains every bit as relevant and useful today. It does a really good job of explaining and summarizing two fundamental aspects of Web performance -- human behavior and site behavior.
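As a rough illustration of the "site behavior" half, one can time a page fetch directly. This sketch measures only network and server time for the raw document -- it ignores browser rendering and embedded resources, which a real user's wait includes (the function name is assumed, not from the article):

```python
import time
import urllib.request

def measure_response_time(url):
    """Return the wall-clock seconds taken to fetch a URL and read
    the full response body -- a crude proxy for server plus network
    time, excluding rendering and per-resource fetches."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.monotonic() - start
```

Repeating such a measurement over time, and from several locations, is the usual first step toward separating site behavior from network behavior.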
Clustering and distributing Java applications has never been easier than it is today. As a result, writing good distributed performance tests and tuning those applications is increasingly important. Performance tuning and testing of distributed and/or clustered applications is an important skill, and many who do it can use a little help.
That paragraph introduces a new series of four posts about how to approach testing for distributed Java applications by Steve Harris of Terracotta, who blogs as DSO Guy. Steve frames his guidelines as anti-patterns -- in other words, pitfalls or "commonly-reinvented bad solutions to problems" to be avoided [see Wikipedia].
Response Time Standards for Web Sites and Web Applications
It feels like hardly a day has passed in the past six years without someone asking me this question: "What is the industry standard response time for a Web page?" And in those six years, the answer hasn't changed, not even a little bit. So if the answer hasn't changed, why am I still asked the question virtually every day?
As I read Scott's article, I found myself in strong agreement with every point. By the end, I realized that Scott had echoed and summarized many of my previous posts. So I have used Scott's words as a framework for collecting references to my earlier articles on performance objectives -- what they should be, and how you should set them:
Performance Management (SLM) Challenges for Web 2.0, Ajax, and Rich Internet Applications (RIAs)
Last week, TechTarget published an article by Patrick Lightbody about the performance of Web 2.0 applications. The article's technical core -- which I review below -- is a useful checklist of ten recommendations for developing and testing Web 2.0 applications with performance in mind.
For the full article, see Ten ways to improve testing, performance of Web 2.0 applications.
Because I believe in systematic performance engineering, I am always pleased when writers advocate proactive approaches to application performance. It's the only rational way to ensure acceptable performance in production applications. So it's too bad that Patrick feels the need to justify his good advice by surrounding it with an introduction and conclusion that suffer from all the worst features of Web 2.0 coverage. A few half-truths are buried in an amalgam of excessive hype, false claims, meaningless analysis, and an optimism that underestimates the real technical challenges.
Your new Web application is almost ready to go live, but you need to be sure it will handle the projected traffic -- before that traffic hits the site. You probably already know that you can't just collect up your working test scripts and loop through them at high speed.
So what should you do?
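One reason looping test scripts at full speed misleads is that real users pause between requests. A hedged sketch of per-user pacing with randomized think time (the function name, parameters, and timing values are illustrative, not taken from any particular load-testing tool):

```python
import random
import time

def paced_session(actions, think_time_range=(2.0, 8.0)):
    """Run one simulated user session: execute each scripted action,
    then pause for a randomized 'think time', mimicking the gaps a
    real user leaves between requests instead of hammering the
    server in a tight loop."""
    for action in actions:
        action()  # e.g. fetch a page, submit a form
        time.sleep(random.uniform(*think_time_range))
```

Running many such paced sessions concurrently yields an arrival rate and request mix far closer to projected production traffic than an unpaced loop ever can.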
Three Key Performance Engineering Questions
What have you got?
What do you want?
How do you get there?
Performance testing is the discipline concerned with determining and reporting the current performance of a software application under various parameters. But there comes a time after the tests are run when someone who's reviewing the results asks the deceptively simple question: So what, exactly, does all this mean? This point beyond performance testing is where the capabilities of the human brain come in handy.
Anything you need to quantify can be measured in some way that is superior to not measuring it at all
Posts on The Importance of Measurements and Controlling Software Projects have reviewed the origin of the saying that "you can't manage what you can't (or don't) measure". Today I look more closely at its meaning and validity -- how true is it?
One apparent contradiction is that this much-quoted fact of management is also widely viewed as a fallacy -- or at least as an overstated claim -- especially within the software engineering profession, which (in the person of Tom DeMarco) seems to have coined the saying in the first place. That contradiction was highlighted in Robert L. Glass's 2003 book, Facts and Fallacies of Software Engineering [Amazon].