In its research labs in Hillsboro, Oregon, Intel engineers
have designed and manufactured a handful of transistors that are only 20
nanometers, or 0.02 microns, in size. By comparison, the transistors found in
the latest chips in use today measure 0.18 microns from one side of the
transistor gate to the other. The implications of such small, fast transistors
are significant: silicon can continue to be used to make chips until at least
2007, and by that year it should make possible microprocessors containing close
to 1 billion transistors running at 20 gigahertz. Today's Pentium 4 processors
have about 42 million transistors and run at 1.7 gigahertz. "There's been a
lot of talk and concern about the end of Moore's Law,'' Gerald Marcyk, the
director of components research for Intel's technology and manufacturing group,
told Reuters this week. "So far, we haven't hit any fundamental limits with
respect to our transistor technology.'' Even so, it appears that Moore's Law is
close to running out of steam. Some of the components in the transistors Intel
announced are only three atoms thick. One of them, the silicon dioxide gate
dielectric, is the layer that keeps the metal on top from short-circuiting the
silicon underneath when current flows to switch the transistor on and off.
"You can't really scale much lower than three atoms thick,'' Marcyk said,
referring to the two oxygen atoms and one silicon atom bound together that
constitute that insulating layer. By the time Intel and others roll out
semiconductors with transistor gates 0.02 microns wide, those chips should carry
the industry through one more processor generation, which in Intel's case
typically lasts about three years. That means Moore's Law will hold into the
next decade. The so-called
Moore's Law, formulated by Intel co-founder Gordon Moore in the 1960s,
stipulates that the number of transistors on a computer microprocessor doubles
approximately every 18 months. Moore originally predicted such leaps would
become impossible after 1975 because of the limits of chip design, a barrier
scientists keep shattering. Beyond that final silicon generation, however, the
dimensions get so small that a new material will be required, and researchers
across the globe are trying to figure out what it will be. That is where
something called high-k gate dielectrics comes into play.
"We're going to have to invent a new kind of material to replace the
silicon dioxide,'' Marcyk said. "And right now, that process is what I like
to call the random walk through the periodic table (of the elements).''
Of course, a microprocessor is ultimately only as powerful
and useful as the software programs written to run on it. But
processors with 1 billion transistors, Marcyk said, leave the field wide open.
For example, computers and hand-held devices will be able to understand commands
in natural language, as well as handwriting. An investor could check his stock
portfolio in the morning and find that the computer has analyzed the portfolio,
market trends, economic data and such to present a number of options. "You
log on in the morning and (the computer) gives you two or three options: 'Have
you thought about doing one of these things? I've done the calculations for
you,''' Marcyk said. Transistors, as they get smaller, require less power, so
microprocessors in 2007 will consume less power overall than those on the
market today, Marcyk said. Not surprisingly, Andy Grove, Intel's hard-charging and
hypercompetitive chairman, has taken an interest in the research on just how
much longer transistors based on silicon can continue to work. "One of the
things Andy Grove keeps asking me is, when do they stop working?'' Marcyk said.
"And I say I don't know yet. I keep shrinking them, and they keep
working.'' Transistors are at the heart of all modern computers. The opening and
closing of their switches are the basis of all computations inside a
microprocessor. Moore's famous prediction has proved amazingly accurate over
the decades, largely thanks to new technologies that make transistors smaller
and faster.
Performance has followed a similar path. The first "fully transistorized''
computer, built by IBM in 1954, had 2,000 transistors. Intel's Pentium 4, by
comparison, has about 42 million. About 1 billion of the new transistors can be
squeezed into a microprocessor with the new technology, Intel said.
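The arithmetic behind those projections is easy to check. Here is a minimal sketch in Python, assuming the article's 18-month doubling period and the Pentium 4's 42 million transistors as a 2001 starting point (the function name and parameters are illustrative, not Intel's):

```python
import math

def moore_projection(transistors_now, years, doubling_period_years=1.5):
    """Project a transistor count forward under Moore's Law doubling."""
    doublings = years / doubling_period_years
    return transistors_now * 2 ** doublings

# Pentium 4 baseline (2001): ~42 million transistors
p4 = 42e6

# Years until the count first reaches 1 billion under 18-month doubling
years_to_billion = 1.5 * math.log2(1e9 / p4)
print(round(years_to_billion, 1))  # ~6.9 years, i.e. around 2007-2008

# Count six years out (four doublings): 42e6 * 16
print(f"{moore_projection(p4, 6):,.0f}")  # 672,000,000
```

Four doublings in six years lands just short of 1 billion; the fifth, around 2008, crosses it, which is consistent with the article's rough "close to 1 billion by 2007" figure.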
In other "chip" news, IBM is scheduled to unveil
its latest materials breakthrough at a chip-technology conference in Kyoto,
Japan. The process is dubbed strained silicon — but don't let the jargon
mislead you. "Strained" doesn't bear any negative connotation.
Instead, it means electrons can jump through transistors up to 70% faster than
they do in state-of-the-art chips. Combined with the improved speed of electrons
in copper wires, which IBM also pioneered, tomorrow's chips could post a total
speed gain of 35%, Big Blue estimates, with the same-size circuitry. That will
translate into faster computers, more powerful mobile phones, and cheaper
telecom equipment. What gets "strained," or stretched, is the atomic
structure of the silicon crystal. Building on the technology it introduced in
1998 for high-speed telecom chips, IBM engineers grow a layer of silicon on top
of germanium, another semiconducting material. Germanium's crystalline lattice
has slightly larger spaces between its atoms than does silicon. "It gives
you a fundamentally new plateau to start from," says Bijan Davari, a fellow
and vice-president of semiconductor development at IBM Microelectronics. Until
now, compound silicon-germanium technology has been used just for bipolar
transistors. These superspeedy devices can operate at frequencies of up to 200
gigahertz, roughly 200 times what the best ordinary silicon chips can