Two Views of the Post PC World - Automata Processor and TOMI Celeste, Part 1

March 18, 2014

David Patterson, Sophie Wilson, and Steve Jobs have torched the PC business.


Patterson, along with David Ditzel, co-authored the famous RISC paper documenting how simpler instruction sets could outperform complex ones.[1]

Sophie Wilson architected ARM, one of the first RISC processors based on these concepts.[2]

Steve Jobs proved that a RISC architecture powering toys, games, and cell phones could run a tablet at lower power and cost than a PC.[3]

It gets worse for the industry. Computer architecture in general seems to have bumped up against some hard limits. In particular, Robert Dennard's Scaling[4] and Gordon Moore's Law[5] have collided with Patterson's Walls[6].

Forty years ago, Dennard demonstrated that as MOSFET transistors were reduced in size, their speed improved. Furthermore, as the supply voltage decreased, power consumption fell with the square of the voltage.
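As a back-of-the-envelope illustration of that square law, dynamic switching power is roughly P = C x V^2 x f, so halving the supply voltage at a fixed clock cuts switching power by a factor of four. The small C sketch below makes the arithmetic concrete; the capacitance and frequency values are made up for illustration, not measurements of any real device.

    /* Dennard's square law in miniature: dynamic switching power is
       roughly P = C * V^2 * f, so halving the supply voltage at the same
       clock cuts switching power by 4x.  All values are illustrative. */
    #include <stdio.h>

    static double dynamic_power(double c_farads, double v_volts, double f_hz)
    {
        return c_farads * v_volts * v_volts * f_hz;
    }

    int main(void)
    {
        double c = 1e-9;                      /* 1 nF of switched capacitance */
        double f = 100e6;                     /* 100 MHz clock                */
        double p_5v0 = dynamic_power(c, 5.0, f);
        double p_2v5 = dynamic_power(c, 2.5, f);
        printf("5.0 V: %.2f W   2.5 V: %.3f W   (%.1fx lower)\n",
               p_5v0, p_2v5, p_5v0 / p_2v5);
        return 0;
    }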

A decade earlier, Moore observed that the number of transistors on an integrated circuit doubled approximately every two years.

Combining these two effects delivered three decades of progressively cheaper, faster, but hotter microprocessors until it all came to an end, just as David Patterson predicted.

Patterson (yes, the RISC guy) codified what most architects knew instinctively. All computers are limited by three elements:
  1. How fast they access memory[7]
  2. How hot they run[8]
  3. How much instruction-level parallelism their pipelines can extract[9]
The Walls explain why the Pentium 4 Extreme Edition introduced in 2004 ran at 3.73 GHz[10], while the top-of-the-line Haswell i7 introduced in September of last year is spec'd for 3.5 GHz with a 3.9 GHz "turbo"[11].
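The memory wall is the easiest of the three to feel from ordinary code. The rough C sketch below (buffer size and constants are illustrative, and the exact numbers depend entirely on the machine) chases pointers through a buffer far larger than any cache, so every iteration stalls on a DRAM access, then runs an arithmetic loop of the same length that never leaves the registers.

    /* Rough illustration of the memory wall.  The pointer chase is bound
       by DRAM latency because each load depends on the previous one; the
       arithmetic loop does the same number of steps from registers.
       Sizes are illustrative; timings vary by machine. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N ((size_t)1 << 23)              /* 8M entries * 8 B = 64 MB */

    int main(void)
    {
        size_t *next = malloc(N * sizeof *next);
        if (next == NULL) return 1;

        /* Scatter the links with a full-period affine map so the chase
           visits every entry once and the prefetcher can't predict it. */
        for (size_t i = 0; i < N; i++)
            next[i] = (i * 2654435761u + 1) % N;

        clock_t t0 = clock();
        size_t p = 0;
        for (size_t i = 0; i < N; i++) p = next[p];              /* latency bound */
        clock_t t1 = clock();

        double x = 1.0;
        for (size_t i = 0; i < N; i++) x = x * 1.000000001 + 1.0; /* ALU bound    */
        clock_t t2 = clock();

        printf("pointer chase: %.2f s   arithmetic: %.2f s   (p=%zu x=%.0f)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, p, x);
        free(next);
        return 0;
    }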

In short, legacy computer architecture is at a dead end.

SO NOW WHAT?
Intel is not going out of business, but as the PC dies it will leave the microprocessor business it pioneered, just as it left the DRAM business it also pioneered[12]. Lenovo might be a good acquirer.

We will discuss the dominant trends in future computing in more detail in Part III, but in short, future computer functionality will be broadly split into:
  • Very large massively parallel systems
  • Connected by high-speed networks to
  • Very small, person- or thing-related systems.

Both systems are currently constrained by:
  • Performance
  • Power consumption
  • Cost
The infrastructures of Google, Amazon, and Facebook are good examples of massively parallel systems networked to small systems.

Smartphones, tablets, and of course Google Glass are current examples of very small systems.

In light of the PC debacle and Patterson's Walls, the future of computing until the end of silicon probably consists of some combination of the following:
  1. ARM variations in everything from greeting cards to supercomputers,
  2. Non-von Neumann architectures such as Micron’s Automata Processor, and
  3. Multi-core CPUs in DRAM such as Venray’s TOMI Celeste.

In a future series we'll investigate #1.

This series discusses #2 and #3.

Micron's Automata Processor [13] [14]
The Automata Processor (AP) is Micron's pattern-matching engine, which it has been developing for seven years. The AP is a data accelerator with applications ranging from deep packet inspection to graph analytics.
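At its core the AP is a non-deterministic finite automaton (NFA) cast into silicon: every input byte is broadcast to thousands of state-transition elements, and each active element decides in parallel whether it matches. The sketch below is a conventional software NFA simulation of a toy pattern ('a', any number of 'b's, then 'c'); it is only an analogy for what the AP computes, not Micron's ANML tooling, and its point is that software must visit each active state one at a time per byte, which is exactly the serial work the AP eliminates.

    /* Toy NFA for the pattern 'a' 'b'* 'c' over a byte stream.  A real
       engine would have thousands of states; software must step through
       each active state for every input byte. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    enum { START, SAW_A, MATCHED, NSTATES };

    static void feed(bool active[NSTATES], unsigned char byte)
    {
        bool next[NSTATES] = { false };
        next[START] = true;                                     /* always armed */
        if (active[START] && byte == 'a') next[SAW_A]   = true;
        if (active[SAW_A] && byte == 'b') next[SAW_A]   = true; /* 'b'* loop    */
        if (active[SAW_A] && byte == 'c') next[MATCHED] = true;
        memcpy(active, next, sizeof next);
    }

    int main(void)
    {
        const char *input = "xxabbbcx";                         /* illustrative */
        bool active[NSTATES] = { [START] = true };

        for (size_t i = 0; input[i]; i++) {
            feed(active, (unsigned char)input[i]);
            if (active[MATCHED])
                printf("match ending at offset %zu\n", i);
        }
        return 0;
    }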

Micron builds the AP in its DRAM fab, which gives it several advantages compared to logic fabs such as TSMC's:
  1. DRAM transistors have several thousand times less current leakage than logic transistors.
  2. DRAM transistors have much more consistent temperature performance than logic transistors.
  3. DRAM transistors are really cheap to make compared to logic transistors.
The greatest disadvantage is the limited metal-layer interconnect: most DRAM processes have three metal layers, while logic processes require up to 16.

The AP is a good match for a DRAM process because of the simplicity of its logic and the row-and-column structure needed for its parallel compares.
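One way to see why the fit is so natural: in the classic bit-parallel ("Shift-And") formulation of pattern matching, the input byte selects one precomputed row of match bits, and a single wide AND then compares it against every pattern position at once, which is conceptually the same broadcast-a-row, compare-every-column operation a DRAM-style array performs. The sketch below uses a made-up four-byte pattern and text purely for illustration.

    /* Bit-parallel ("Shift-And") search.  mask[c] is the "row" selected
       by input byte c: bit j is set if pattern[j] == c.  The state word R
       keeps one bit per pattern position (one "column"), and one shift
       plus one AND advances every position in parallel. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *pattern = "tcag";               /* illustrative, <= 64 bytes */
        const char *text    = "ggtcagattcagg";
        size_t m = strlen(pattern);

        uint64_t mask[256] = { 0 };
        for (size_t j = 0; j < m; j++)
            mask[(unsigned char)pattern[j]] |= 1ull << j;

        uint64_t R = 0;                             /* active pattern positions  */
        for (size_t i = 0; text[i]; i++) {
            R = ((R << 1) | 1) & mask[(unsigned char)text[i]];
            if (R & (1ull << (m - 1)))
                printf("match ending at offset %zu\n", i);
        }
        return 0;
    }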
