3D interconnect session brings new topic to IITC
The International Interconnect Technology Conference this June is planning a session to look at an area outside its normal scope: three-dimensional interconnect structures. OK, all IC interconnect structures are 3D. But the IITC session will be about interconnect structures that extend beyond the confines of a single die, creating a dense, high-bandwidth mesh of interconnect between the dice in a stack.
This is properly a subject for an IC interconnect conference rather than a packaging conference, according to Michael Shapiro, IITC 2008 publicity chair and chief technologist for 3D development at IBM. “We are interested in pitches significantly smaller than what can be achieved with, for instance, wire-bond interconnect, and significantly higher numbers of interconnect lines. Additionally, the 3D techniques of interest generally use silicon fabrication technology, such as solder balls or copper pillars, rather than traditional packaging technology,” he explained.
Exploration of true 3D technology is only beginning in the industry, and much of the current work is at the research level at institutions such as IMEC. Areas of exploration, according to Shapiro, include the basic structures for interconnecting dice, the process steps to fabricate those structures, new materials for interstitial layers or carriers, and the electrical characteristics the resulting interconnect might have.
Beyond these basics lies a whole new way of thinking about chip design. New analysis tools will obviously be necessary to deal with thermal hot spots and signal-integrity issues in the new kind of interconnect. New placement and routing algorithms will be needed, and, to use them well, new design flows. Farther out, the ability to have enormous—essentially on-chip—bandwidth between dice will change chip architectures.
Shapiro points, for example, to today’s stacked systems that include DRAM. Today, you just put DRAM dice in the stack along with the processors and peripherals. Then you wire-bond the DRAM I/Os into the rest of the chips in the stack. But if you had nearly unlimited connectivity between dice, Shapiro observes, that wouldn’t be the way to do it. The conventional DRAM interface is designed to minimize the number of high-speed I/Os—exactly the opposite of what you’d want. Instead, you might still use volatile memory cell arrays, but you would probably organize them into many small banks so that the memory’s clients could get huge, wide gulps of data in a single cycle—in effect, exposing the DRAM’s internal interconnect to the chips that need the data.
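To make the contrast concrete, here is a minimal toy model (not from the article; the bank count and word sizes are assumptions for illustration) of the organization Shapiro describes: a conventional narrow interface streams words over a shared bus one cycle at a time, while a die-to-die stack with many small banks can deliver one word from every bank in a single cycle.

```python
NUM_BANKS = 16    # assumed bank count, for illustration only
BANK_WORDS = 64   # assumed words per bank

# Each bank is a small independent array; together they hold one memory image.
banks = [[b * BANK_WORDS + w for w in range(BANK_WORDS)]
         for b in range(NUM_BANKS)]

def narrow_read(row):
    """Conventional interface: one word per cycle over a shared, narrow bus."""
    out, cycles = [], 0
    for b in range(NUM_BANKS):
        out.append(banks[b][row])
        cycles += 1            # each word costs a bus cycle
    return out, cycles

def wide_read(row):
    """Stacked-die interconnect: all banks answer in parallel in one cycle."""
    out = [banks[b][row] for b in range(NUM_BANKS)]
    return out, 1              # the whole row arrives at once

data_n, cycles_n = narrow_read(0)
data_w, cycles_w = wide_read(0)
assert data_n == data_w
print(cycles_n, cycles_w)      # prints: 16 1
```

The data returned is identical either way; what changes is that exposing the banks directly to the stacked clients collapses NUM_BANKS serial bus cycles into one wide transfer, which is the sense in which the DRAM's internal interconnect becomes visible to the chips that need the data.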
Thus memory architecture would change, and with it, processor architecture as well. Such systems might look more like the huge memory arrays with embedded processing proposed by UC Berkeley's now-dormant IRAM project than like the bus-oriented, bandwidth-starved systems of today. But for now, that is only conjecture.