Collected thoughts about software and site performance ...

Web performance matters. Responsive sites can make the online experience effective, even enjoyable. A slow site can be unusable. This site is about online performance: how to achieve and maintain it, its impact on user experience, and ultimately on site effectiveness.

If I Had A Hammer ...

Illustration: Ultimate Geeks Multi-Tool Hammer

If I had a hammer
I'd hammer in the morning
I'd hammer in the evening
All over this land
I'd hammer out danger
I'd hammer out a warning
I'd hammer out love between my brothers and my sisters
All over this land

--Pete Seeger and Lee Hays, 1949 [Wikipedia]

In May 2007, after I wrote about Controlling What You Can't Measure, I had a conversation with Ben Simo (see the comments) about metrics and tools ...

Click to read more ...

Scalability is Not Optional

Illustration: Kent Langley

My recent post, Asynchronous Architectures [4], summarized a presentation by Werner Vogels at the 2007 QCon conference in London.

A subsequent post by Kent Langley in his new ProductionScale blog -- entitled Getting Rid of the Relational Database -- supports the arguments advanced by Vogels.

Click to read more ...

Managing for Business Effectiveness

Drucker on Effectiveness vs. Efficiency

Management Wisdom: 3

There is surely nothing quite so useless as doing with great efficiency what should not be done at all

-- Peter Drucker, 1963

Peter Drucker is often called "the father of modern management". Many books and Web sites are devoted to his insights, some of which I have written about previously.

This post highlights his incisive observation about the difference between effectiveness and efficiency. I have always found it to be especially memorable, and quoted it (twice) when discussing priorities and choices in my book about software performance. Unfortunately I got the source wrong, but thanks to Google I can now correct my mistake.

It appeared in Managing for Business Effectiveness, an article in the May/June 1963 edition of Harvard Business Review ("HBR"). You can also find it in a February 2006 HBR article -- What Executives Should Remember -- a collection of excerpts drawn from HBR articles by Drucker published between 1963 and 2004.

Click to read more ...

Asynchronous Architectures [4]

Illustration: Werner Vogels

This is the fourth in a series of posts presenting arguments for asynchronous architectures as the optimal way to build high-performance, scalable systems for a distributed environment.

In a QCon conference presentation on Availability and Consistency or how the CAP theorem ruins it all, Werner Vogels, Amazon CTO, examines the tension between availability & consistency in large-scale distributed systems, and presents a model for reasoning about the trade-offs between different solutions. I recommend you find time to watch the entire 52-minute video.
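
To make the trade-off concrete, here is a minimal Java sketch (my illustration, not code from Werner's talk) of a store that favours availability over consistency: the primary acknowledges a write immediately and replicates it asynchronously, so a read served from the replica during the replication window sees stale data.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Toy key-value store: writes are acknowledged before replication completes,
// so reads from the replica are eventually -- not immediately -- consistent.
public class EventualConsistencyDemo {
    private final Map<String, String> primary = new ConcurrentHashMap<>();
    private final Map<String, String> replica = new ConcurrentHashMap<>();
    private final ScheduledExecutorService replicator =
            Executors.newSingleThreadScheduledExecutor();

    // Acknowledge the write as soon as the primary has it; replicate later.
    public void put(String key, String value) {
        primary.put(key, value);
        replicator.schedule(() -> { replica.put(key, value); },   // simulated
                100, TimeUnit.MILLISECONDS);                      // replication lag
    }

    // Reads are served from the replica for availability; they may be stale.
    public String get(String key) {
        return replica.getOrDefault(key, "(not yet replicated)");
    }

    public static void main(String[] args) throws InterruptedException {
        EventualConsistencyDemo store = new EventualConsistencyDemo();
        store.put("order-42", "shipped");
        System.out.println("right after the write: " + store.get("order-42")); // stale
        Thread.sleep(200);                              // wait out the replication lag
        System.out.println("after the lag:         " + store.get("order-42")); // consistent
        store.replicator.shutdown();
    }
}
```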

Click to read more ...

Asynchronous Architectures [3]

Dan Pritchett's Design Rule

Performance Wisdom: 13

Always assume high latency, not low latency

This post is the third in a series presenting arguments for asynchronous architectures as the optimal way to build high-performance, scalable systems for a distributed environment.

The first reviewed the case for asynchronous communication among interdependent components or services, and Bell's Law of Waiting. The second highlighted The Fallacies of Distributed Computing, and discussed the importance of reflecting the business process in distributed systems design.

This post reviews The Challenges of Latency, an article about how asynchronous architectures can improve the quality of Web applications, published on the InfoQ site by eBay architect Dan Pritchett in May 2007. Dan's article is especially relevant today, given the high level of interest in adopting Web services and SOA approaches.
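
Here is a minimal Java sketch of what Dan's rule looks like in practice (my illustration, not code from his article; the recommendations service and the two-second budget are hypothetical): the caller never blocks indefinitely on a remote dependency, and degrades gracefully when its latency budget runs out.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Design for high latency: call the remote service asynchronously, give it an
// explicit time budget, and fall back to a default when the budget is exceeded.
public class HighLatencyAssumedClient {

    // Stand-in for a remote call whose latency we cannot control.
    static String fetchRecommendations(String userId) {
        try {
            Thread.sleep(3_000);                 // simulate a slow cross-datacenter hop
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "personalized recommendations for " + userId;
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newCachedThreadPool();
        CompletableFuture<String> call =
                CompletableFuture.supplyAsync(() -> fetchRecommendations("user-17"), pool);

        String pageFragment;
        try {
            pageFragment = call.get(2, TimeUnit.SECONDS);     // latency budget: 2 s
        } catch (TimeoutException e) {
            pageFragment = "default recommendations";         // degrade, don't hang the page
            call.cancel(true);
        } catch (InterruptedException | ExecutionException e) {
            pageFragment = "default recommendations";
        }

        System.out.println(pageFragment);
        pool.shutdownNow();
    }
}
```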

Click to read more ...

Asynchronous Architectures [2]

The Fallacies of Distributed Computing

Performance Wisdom: 12

1. The network is reliable
2. Latency is zero
3. Bandwidth is infinite
...

This post is the second in a series presenting arguments for asynchronous architectures as the optimal way to build high-performance, scalable systems for a distributed environment. The first post reviewed the general case for asynchronous communication among interdependent components or services, and highlighted Bell's Law of Waiting.

The Fallacies of Distributed Computing highlight crucial differences between centralized and distributed computing. Network components introduce potential problems that a centralized solution does not have to consider.

In this post I discuss how the design of distributed systems should draw on that of manual business systems. Of course, distributed computing can shorten the timescales of some business operations enormously. But drawing analogies with the way manual systems work can still help us design efficient and scalable distributed systems.
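
As a small illustration of taking the first fallacy seriously (my own sketch; the failure rate and retry policy are invented for the example), the caller below expects the network to fail, retries with backoff, and finally sets the work aside for later delivery -- much as a manual business process would put a failed order into a pending tray.

```java
import java.util.Random;

// The network is NOT reliable: expect intermittent failures, retry with
// exponential backoff, and queue the work for later if the retries run out.
public class UnreliableNetworkClient {
    private static final Random NETWORK = new Random();

    // Stand-in for a remote call that fails intermittently.
    static String submitOrder(String orderId) {
        if (NETWORK.nextInt(3) == 0) {                       // roughly 1 in 3 calls fails
            throw new RuntimeException("connection reset");
        }
        return "accepted " + orderId;
    }

    public static void main(String[] args) throws InterruptedException {
        long backoffMillis = 100;
        for (int attempt = 1; attempt <= 5; attempt++) {
            try {
                System.out.println(submitOrder("order-7") + " on attempt " + attempt);
                return;
            } catch (RuntimeException e) {
                System.out.println("attempt " + attempt + " failed: " + e.getMessage());
                Thread.sleep(backoffMillis);                 // back off before retrying
                backoffMillis *= 2;
            }
        }
        System.out.println("giving up for now; queue the order for later delivery");
    }
}
```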

Click to read more ...

Asynchronous Architectures [1]

Bell's Law of Waiting

Performance Wisdom: 11

All computers wait at the same speed

In Five Scalability Principles, I reviewed an article published by MySQL about the five performance principles that apply to all application scaling efforts. When discussing the first principle -- Don't think synchronously -- I stated that Decoupled processes and multi-transaction workflows are the optimal starting point for the design of high-performance (distributed) systems.

That's a quote from High-Performance Client/Server, from a section on Abandoning the Single Synchronous Transaction Paradigm, in Chapter 15, Architecture for High Performance. My 1998 book is out of print now, and contains some outdated examples and references. But most of the discussions of performance principles are timeless, and you can pick up a used copy for about $3.00 at Amazon.

So I am planning some more posts built around excerpts from the manuscript. I'll be updating and generalizing the terminology as necessary for today's environments, and adding some guidelines in my Performance Wisdom series.
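
To give a flavour of what decoupled processes and multi-transaction workflows mean in code, here is a minimal sketch (my illustration, not an excerpt from the book): the request-handling thread hands the order to a queue and replies at once, while a separate worker completes the slow downstream step as its own transaction.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Decoupled processes: the request path never waits on the slow downstream
// step; it enqueues the work and returns, and a worker drains the queue.
public class DecoupledWorkflow {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> workQueue = new LinkedBlockingQueue<>();

        // Worker: the second "transaction" in the multi-transaction workflow.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String order = workQueue.take();         // blocks only the worker
                    Thread.sleep(500);                       // simulate slow fulfilment
                    System.out.println("fulfilled " + order);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();

        // Request path: accept the order and respond immediately.
        workQueue.put("order-101");
        System.out.println("order-101 accepted; response sent without waiting");

        Thread.sleep(1_000);                                 // let the worker finish (demo only)
    }
}
```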

Click to read more ...

Five Scalability Principles

Performance Wisdom: 10

Don’t think synchronously, ...

The 12 Days of Scale-Out is a section of the MySQL site. It consists of a series of twelve articles, eleven of which are case studies describing large-scale MySQL implementations. But Day Six is a bit different -- it spells out five fundamental performance principles that apply to all application scaling efforts.

This subject is vitally important to MySQL, whose server replication and high availability features ... allow high-traffic sites to horizontally 'Scale-Out' their applications, using multiple commodity machines to form one logical database -- as opposed to 'Scaling Up', starting over with more expensive and complex hardware and database technology.

I know from first-hand experience that these claims are valid. At Keynote, my team used MySQL as the foundation for the Performance Scoreboard. In this data mart application, MySQL supports the continuous insertion of new measurements at a rate of several million per day, plus hourly aggregation into summary tables, plus the queries needed to drive continually updated dashboard displays for every customer, plus any ad hoc queries generated by customers doing diagnostic investigations.
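
At the data-access layer, the essence of Scale-Out is a read/write split. The Java sketch below is illustrative only -- the connection URLs, credentials, table, and columns are hypothetical, not the Performance Scoreboard schema: measurement inserts go to the primary, which replicates them to the slaves, while dashboard queries are served from a replica.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Read/write split over a replicated MySQL setup: writes go to the primary,
// reads go to a replica, so query load scales out across commodity machines.
public class ScaleOutDataAccess {
    private static final String PRIMARY_URL = "jdbc:mysql://primary.example.com/scoreboard";
    private static final String REPLICA_URL = "jdbc:mysql://replica1.example.com/scoreboard";

    // Writes always go to the primary, which replicates them to the read slaves.
    static void recordMeasurement(String site, long responseMillis) throws SQLException {
        try (Connection primary = DriverManager.getConnection(PRIMARY_URL, "app", "secret");
             PreparedStatement insert = primary.prepareStatement(
                     "INSERT INTO measurements (site, response_ms, measured_at) VALUES (?, ?, NOW())")) {
            insert.setString(1, site);
            insert.setLong(2, responseMillis);
            insert.executeUpdate();
        }
    }

    // Dashboard reads are spread across replicas, keeping load off the primary.
    static double averageResponse(String site) throws SQLException {
        try (Connection replica = DriverManager.getConnection(REPLICA_URL, "app", "secret");
             PreparedStatement query = replica.prepareStatement(
                     "SELECT AVG(response_ms) FROM measurements WHERE site = ?")) {
            query.setString(1, site);
            try (ResultSet rs = query.executeQuery()) {
                return rs.next() ? rs.getDouble(1) : 0.0;
            }
        }
    }
}
```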

Click to read more ...

Latency, Bandwidth, and Response Times

Illustration: Web Page Response Time 101

Latency, Bandwidth, and Station Wagons focused primarily on the limitations of network bandwidth, and the time required to transmit massive data volumes. While that is an interesting topic, and one that produces some surprising results (like the fact that FedEx is still faster than the Internet), it is not particularly relevant to the subject of Web performance, which depends on the time required to transmit many small files.

My post highlighted It's Still The Latency, Stupid, by William (Bill) Dougherty in edgeblog. Bill's title pays homage to a famous 1996 article by Stuart Cheshire about bandwidth and latency in ISP links, It's the Latency, Stupid.

Over a decade later, Bill points out, Cheshire's writings are still relevant: One concept that continues to elude many IT managers is the impact of latency on network design ... Latency, not bandwidth, is often the key to network speed, or lack thereof. This is especially true when it comes to the download speeds (or response times) of Web pages and Web-based applications. In this post I explain why, providing references and examples to support my argument.
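
A back-of-the-envelope model makes the point. The numbers below are mine, not Bill's or Cheshire's, and the model ignores persistent and parallel connections, but it shows why cutting round-trip time helps a page made of many small objects far more than adding bandwidth does.

```java
// Crude model of page response time: one round trip per object, plus the time
// to push the bytes through the pipe. For many small objects, the round trips
// dominate, so extra bandwidth buys very little.
public class PageResponseTimeEstimate {

    static double estimateSeconds(int objects, double avgKilobytes,
                                  double rttMillis, double bandwidthMbps) {
        double roundTripSeconds = objects * (rttMillis / 1000.0);
        double transmitSeconds  = objects * avgKilobytes * 8 / (bandwidthMbps * 1000.0);
        return roundTripSeconds + transmitSeconds;
    }

    public static void main(String[] args) {
        // A page of 40 objects averaging 20 KB each -- lots of small files.
        System.out.printf("80 ms RTT,  5 Mbps: %.2f s%n", estimateSeconds(40, 20, 80, 5));   // ~4.48 s
        System.out.printf("80 ms RTT, 50 Mbps: %.2f s%n", estimateSeconds(40, 20, 80, 50));  // ~3.33 s
        System.out.printf("20 ms RTT,  5 Mbps: %.2f s%n", estimateSeconds(40, 20, 20, 5));   // ~2.08 s
    }
}
```

In this toy example, tenfold more bandwidth saves about a second, while a quarter of the round-trip time saves more than two.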

Click to read more ...

Java Performance Optimization

Illustration: Pro Java EE5 Performance Management and Optimization (cover)

Do you subscribe to email newsletters? If you're like me, you get lots of them. New ones appear in my inbox every morning. They pile up, demanding to be read. In fact, they seem to breed like rabbits, producing new offspring -- when did I express an interest in Enterprise VOIP Security Architecture issues? Sometimes in a housekeeping splurge I delete a few dozen at once, suffering a momentary twinge of anxiety at having perhaps missed something important. So usually I skim them before hitting the delete button.

TechTarget's Search Software Quality service seems to be especially prolific, but is also a regular source of interesting references -- like TheServerSide.com, the subject of a recent note. According to the site's home page:

Java Performance Management for Large-Scale Systems

There are many classes of enterprise applications that have stringent performance and scalability requirements. TheServerSide.com has assembled a collection of resources to help you better design, develop, test and manage high performance, large-scale systems - learn new and innovative approaches for performance tuning, memory management, concurrent programming, JVM clustering and more.

Click to read more ...
