Moore's Law extends to cover human progress
Moore's Law, famous for predicting the exponential growth of computing power over 40 years, comes from a simple try, fail, and occasionally succeed model of incremental improvement. The predictive success of Moore's Law seems uncanny, so let's take a closer look to get an idea of where it comes from.
Moore conceived his law for computational power, but Moore's-like growth laws permeate human endeavor, a fact that had never occurred to me until I went to a presentation by Lawrence Berkeley National Lab energy researcher Robert van Buskirk. He showed several technologies that improve according to Moore's Law, but with different timescales than the original. You can read his paper here, notably co-authored by Nobel Laureate and former Secretary of Energy Steven Chu.
Let's take a quick look at Moore's Law for computing power (Figure 1) and then see where it and other exponential improvements come from. For the past 40 years, transistor density has doubled about every 18 months, and there's no sign of the trend slowing down.
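To get a feel for what that doubling rate implies, here's a quick back-of-the-envelope calculation (the 18-month period and 40-year span are from the text; the rest is just arithmetic):

```python
# Growth implied by doubling every 18 months over 40 years.
months_per_doubling = 18
years = 40

doublings = years * 12 / months_per_doubling  # about 26.7 doublings
growth_factor = 2 ** doublings

print(f"{doublings:.1f} doublings -> growth factor of about {growth_factor:.1e}")
```

That's a growth factor of roughly a hundred million, which is why exponential trends defy everyday intuition.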
Figure 1. Moore's Law shows the rate that transistor density has increased (Source: http://home.fnal.gov/~carrigan/pillars/web_Moores_law.htm).
Gordon Moore proposed that, as long as there is incentive, techniques will improve, components will shrink, prices will scale, the cycle will repeat and we'll continue to see exponential growth. Van Buskirk has evidence that the driving incentive is not limited to economic supply and demand arguments but can also include far weaker incentives such as simply being aware that a quality is desirable for moral or aesthetic reasons.
Moore's-like laws emerge from the simple fact that you and all of your colleagues keep on making improvements as long as you're motivated and improvements remain possible. As far as the latter requirement is concerned, I don't have to tell you how clever engineers can be. While no one is going to violate the laws of thermodynamics (no perpetual motion machines or perfect engines), engineers are pretty good at pushing right up against those constraints and sometimes finding loopholes.
Start with something—a system or a doodad—that works with some proficiency. Along with thousands of other engineers, you set out to improve your doodad. You try stuff. Most of the stuff you try doesn't work, but now and then, you improve the performance of the doodad by some percentage: maybe 0.1%, maybe 10%, or maybe you have a disruptive idea that improves it by 1000%. Mostly, though, we experience many small, 1%-ish improvements. Meanwhile, all of your colleagues around the world are pushing the same envelope but in different directions.
I wrote a simulation because, as I argued last time, writing simulations is a great way to understand how things work. My simulation is pretty simple, but it demonstrates the character of exponential growth laws and what affects their time constants.
Here's how my simulation works. It throws dice. Occasionally the roll comes up positive, which represents someone improving the state of the art. The dice that I throw follow a Gaussian bell curve (Figure 2) so that the probability for a small improvement is much larger than for a large improvement. With many thousands of people focused on the same subject, the central limit theorem argues that a Gaussian should be close to the true distribution.
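The dice-throwing model described above can be sketched in a few lines of Python. This isn't the author's actual code (which isn't shown); the function name, parameters, and the 0.1%-wide bell curve are my own illustrative choices:

```python
import random

def simulate(trials=10_000, sigma=0.001, seed=1):
    """Sketch of the dice-throwing model: each trial draws an
    'improvement' from a zero-mean Gaussian; only positive rolls
    count, and each one multiplies the state of the art."""
    random.seed(seed)
    performance = 1.0
    history = [performance]
    for _ in range(trials):
        roll = random.gauss(0.0, sigma)   # most rolls are small
        if roll > 0:                      # occasionally a roll comes up positive
            performance *= 1.0 + roll     # compound the improvement
        history.append(performance)
    return history

history = simulate()
print(f"final performance: {history[-1]:.1f}x the starting point")
```

Because each positive roll multiplies the current performance rather than adding to it, the small gains compound, and the logarithm of performance grows roughly linearly with the number of trials, which is exactly an exponential growth law.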
Figure 2. A Gaussian distribution, a.k.a., bell curve.
The wider the bell curve, the larger the potential improvement. Let's call this the "low-hanging fruit" phenomenon. Figure 3 shows three Gaussian bell curves; the wider the curve, the lower the fruit hangs.
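We can check how the bell curve's width sets the growth law's time constant by estimating the per-trial exponential rate for a few widths. Again, this is my own sketch under the same assumptions as before, not the author's code, and the three widths are arbitrary:

```python
import math
import random

def growth_rate(sigma, trials=20_000, seed=7):
    """Estimate the per-trial exponential rate constant for a
    bell curve of width sigma (sketch; names are my own)."""
    random.seed(seed)
    log_perf = 0.0
    for _ in range(trials):
        roll = random.gauss(0.0, sigma)
        if roll > 0:
            log_perf += math.log1p(roll)  # log of the multiplicative gain
    return log_perf / trials

for sigma in (0.0005, 0.001, 0.002):  # three widths, as in Figure 3
    print(f"sigma = {sigma}: rate per trial = {growth_rate(sigma):.6f}")
```

For small improvements, the rate scales almost exactly linearly with the curve's width: double the width of the bell curve and you halve the doubling time. Lower-hanging fruit means a faster exponential, not just a bigger one-time win.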
Figure 3. Gaussian distributions (bell curves) for three different widths.