Collected thoughts about software and site performance ...
Web performance matters. A responsive site can make the online experience effective, even enjoyable; a slow site can be unusable. This site is about online performance: how to achieve and maintain it, and its impact on user experience and, ultimately, on site effectiveness.
Entries about Architecture (5), in reverse date order:
This is the fourth in a series of posts presenting arguments for asynchronous architectures as the optimal way to build high-performance, scalable systems for a distributed environment.
In a QCon conference presentation, Availability & Consistency, or how the CAP theorem ruins it all, Werner Vogels, Amazon's CTO, examines the tension between availability and consistency in large-scale distributed systems, and presents a model for reasoning about the trade-offs among different solutions. I recommend finding time to watch the entire 52-minute video.
Dan Pritchett's Design Rule
Always assume high latency, not low latency
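Pritchett's rule has a direct coding consequence: give every remote call an explicit time budget and a degraded fallback, so a slow dependency cannot stall the caller indefinitely. Here is a minimal Python sketch of that idea; `fetch_recommendations` is a hypothetical remote service, and the 200 ms sleep stands in for network latency:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def fetch_recommendations(user_id):
    """Hypothetical remote call; the sleep simulates a slow network."""
    time.sleep(0.2)  # pretend the round-trip takes 200 ms
    return ["item-1", "item-2"]

def recommendations_with_budget(user_id, budget_seconds):
    """Assume high latency: bound the remote call with a timeout and
    fall back to a cheap default instead of blocking the whole page."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fetch_recommendations, user_id)
        try:
            return future.result(timeout=budget_seconds)
        except TimeoutError:
            # Degraded but responsive: render the page without recommendations.
            return []

print(recommendations_with_budget(42, budget_seconds=0.05))  # times out -> []
print(recommendations_with_budget(42, budget_seconds=1.0))   # fast enough -> items
```

In a real system the fallback would typically be cached or default content rather than an empty list, but the shape is the same: the caller's response time is bounded by its budget, not by the slowest dependency.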
This post is the third in a series presenting arguments for asynchronous architectures as the optimal way to build high-performance, scalable systems for a distributed environment.
The first reviewed the case for asynchronous communication among interdependent components or services, and Bell's Law of Waiting. The second highlighted The Fallacies of Distributed Computing, and discussed the importance of reflecting the business process in distributed systems design.
This post reviews The Challenges of Latency, an article about how asynchronous architectures can improve the quality of Web applications, published on the InfoQ site by eBay architect Dan Pritchett in May 2007. Dan's article is especially relevant today, given the high level of interest in adopting Web services and SOA approaches.
The Fallacies of Distributed Computing
1. The network is reliable
2. Latency is zero
3. Bandwidth is infinite
4. The network is secure
5. Topology doesn't change
6. There is one administrator
7. Transport cost is zero
8. The network is homogeneous
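The first two fallacies suggest a defensive coding pattern: treat every remote call as one that can fail or stall, and retry transient failures with backoff rather than assuming a single attempt will succeed. A minimal sketch; the flaky `send_request` callable is a stand-in for a real network call:

```python
import time

def call_with_retries(send_request, attempts=3, base_delay=0.01):
    """The network is NOT reliable and latency is NOT zero:
    retry transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return send_request()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying

# Simulated flaky endpoint: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

print(call_with_retries(flaky))  # "ok", after two retries
```

A centralized program never needs this scaffolding; a distributed one cannot safely do without it.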
This post is the second in a series presenting arguments for asynchronous architectures as the optimal way to build high-performance, scalable systems for a distributed environment. The first post reviewed the general case for asynchronous communication among interdependent components or services, and highlighted Bell's Law of Waiting.
The Fallacies of Distributed Computing highlight crucial differences between centralized and distributed computing. Network components introduce potential problems that a centralized solution does not have to consider.
In this post I discuss how the design of distributed systems should draw on that of manual business systems. Of course, distributed computing can shorten the timescales of some business operations enormously. But drawing analogies with the way manual systems work can help us design efficient and scalable distributed systems.
Bell's Law of Waiting
All computers wait at the same speed
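One way to read Bell's Law: time a program spends blocked on I/O is pure waste, and the cure is to overlap waits rather than serialize them. The following Python sketch makes the point with threads; the sleep stands in for a remote call's latency, and the timings are illustrative:

```python
import threading
import time

def remote_call(latency):
    time.sleep(latency)  # stand-in for waiting on a network round-trip

# Synchronous: three waits add up -- roughly 0.1 + 0.1 + 0.1 seconds.
start = time.perf_counter()
for _ in range(3):
    remote_call(0.1)
sequential = time.perf_counter() - start

# Asynchronous: the three waits overlap -- roughly max(0.1, 0.1, 0.1) seconds.
start = time.perf_counter()
threads = [threading.Thread(target=remote_call, args=(0.1,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
overlapped = time.perf_counter() - start

print(f"sequential={sequential:.2f}s overlapped={overlapped:.2f}s")
```

The computer waits at the same speed either way; the only variable under our control is how many waits we allow to happen at once.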
In Five Scalability Principles, I reviewed an article published by MySQL about the five performance principles that apply to all application scaling efforts. When discussing the first principle -- Don't think synchronously -- I stated that "Decoupled processes and multi-transaction workflows are the optimal starting point for the design of high-performance (distributed) systems."
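"Decoupled processes" can be sketched with a queue between producer and consumer: the front-end transaction records the request and returns immediately, while a separate worker completes the workflow later. A minimal Python illustration, where the order-processing step is hypothetical:

```python
import queue
import threading

work_queue = queue.Queue()
completed = []

def worker():
    """Back-end process: drains the queue on its own schedule,
    decoupled from the transaction that accepted the request."""
    while True:
        order = work_queue.get()
        if order is None:  # sentinel: no more work
            break
        completed.append(f"processed {order}")
        work_queue.task_done()

def accept_order(order):
    """Front-end transaction: enqueue and return at once --
    no synchronous wait on downstream processing."""
    work_queue.put(order)
    return "accepted"

t = threading.Thread(target=worker)
t.start()
for order in ["A-1", "A-2", "A-3"]:
    print(accept_order(order))  # returns immediately; processing happens later
work_queue.put(None)  # tell the worker to finish up
t.join()
print(completed)
```

The same shape scales up: replace `queue.Queue` with a durable message broker and the two functions become two independently deployable, independently scalable services.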
That's a quote from High-Performance Client/Server, from a section on Abandoning the Single Synchronous Transaction Paradigm, in Chapter 15, Architecture for High Performance. My 1998 book is out of print now, and contains some outdated examples and references. But most of the discussions of performance principles are timeless, and you can pick up a used copy for about $3.00 at Amazon.
So I am planning some more posts built around excerpts from the manuscript. I'll be updating and generalizing the terminology as necessary for today's environments, and adding some guidelines in my Performance Wisdom series.