Intel Claims Smallest, Fastest Transistor

The Angry Clam writes: "Supposedly, Intel has really micronized transistors." Seems that "Intel engineers have designed and manufactured a handful of transistors that are only 20 nanometers, or 0.02 microns, in size." There's some of the usual discussion of how long Moore's Law can hold, but also a bit of discussion about what will replace silicon dioxide in a few years. Reader omnirealm points to a similar story at the New York Times as well.
  • by Anonymous Coward
    What's cool is that Rob has finally started logging ip addresses from which crap like this is coming, to do something with it.

    It won't do much good with a dial-in IP, at first thought. But raising 'matters' like these with ISPs has surely worked before, and I don't see why it wouldn't work now.

    It would be a relief to finally see some action against this and lots of other abuse of slashdot. A joke is a joke, but you can actually go too far.

  • by Anonymous Coward
    Silicon and carbon atoms are roughly the same size, and for that matter, so are the molecules SiO2 and CO2. For this reason, even if you could attach CO2 to a semiconductor substrate, it would not help to make the transistor smaller.

    What is apparently necessary is a different design for a transistor (or a gate). It may turn out to be as revolutionary as the shift from vacuum tube to semiconductor transistor. It may be an application of modern techniques to an old and forgotten idea. Or perhaps Moore's Law will quietly ebb out...

    What I'm personally rooting for is a change in the idea of computing. If people can build reasonably large quantum computers, maybe they can figure out something besides factoring large numbers that they are actually good at computing. FPGAs also sound both neat and promising.

    Realistically, even if this is the last generation of transistor shrinkage, it'll still take years for this to hit the desktop. That is quite a long time for people to come out with ingenious new schemes. Well... cross your fingers, anyway.
  • by Anonymous Coward
    Obligatory AMD plug -- have you tried it on a 1.4/266 Tbird?

  • by Anonymous Coward
    What then?

    Maybe in 100 years, computers will be smart enough to realize that 1.1+1.1+1.1+...+1.1 can be computed as 1.1*ULONG_MAX.

  • > Maybe in 100 years, computers will be smart
    > enough to realize that 1.1+1.1+1.1+...+1.1
    > can be computed as 1.1*ULONG_MAX.

    \begin{pedant}
    Unlikely, given that the value obtained by successive additions and the value obtained by multiplication differ substantially in the 11th decimal place. IEEE floating point numbers are not the same as the real line.
    \end{pedant}
  • "Intel engineers have designed and manufactured a handful of transistors that are only 20 nanometers, or 0.02 microns, in size."

    Which handful is that? Five (as in the fingers on a hand), or enough to fill a palm? For something this small you are talking several million.
  • And remember, two CPUs running in parallel enjoy a greater performance boost (on some tasks) than a single processor with twice the speed of either of the dual processors.
    That is highly unlikely to happen in practice. For almost all workloads you're happy to scale linearly with the number of processors, and often logarithmic scaling is considered ok. Only jobs with peculiar communication requirements (e.g. where not having to context switch between each communication gives an advantage) or cache requirements (the extra cache on the extra processors keeps a job in cache that otherwise would have hit main memory) exhibit better than linear scalability.
  • After what I've been reading here lately, power-consuming seems to be just what California needs ;)

  • That's like saying if we made a law that all couches must be X size MAXIMUM, then somehow a 450lb man will seem less, um, bloated?

    Face it. We're bloated right now. If processors never got any faster ever, we'd still be bloated.
  • Um. That's what they do now. There's a handful of layers in a modern CPU. It's not to the point where you're talking "3d Cube" .. but cross-layer communication IIRC is slow and troublesome to architect.
  • I think he was implying that it would be squared each time instead of doubled. Ie:

    2^2 = 4
    4^2 = 16
    16^2 = 256

    .. and so on.
  • "that doesn't have some silly quote about what kind of AI feature it will enable."

    Right on the mark. We'll hate it anyway (anybody want a dancing paperclip? "The new Pentium V chip will be fast enough for a line-dancing and juggling paperclip."). It's still a lousy annoying paperclip.

    AI requires more than just fast transistors and 3D graphics.

    And all stock people should remember why the big crash happened back in the '80s: computer trading and all those automatic trading programs suddenly shouting 'sell sell' in chorus. Let's all not learn from the past and do that again; that was fun (irony).

  • by tsa ( 15680 )
    Wow! A handful of these transistors really is an awful lot of transistors!
  • immaterial to my point
    [Saint Stephen]
  • they still have to communicate with one another
    [Saint Stephen]
    I knew I was spouting utter nonsense, but you seem to be amplifying what I seem to have noticed / worried about: that we may hit these sci-fi "limits of the universe" *way* sooner than way-way-way-way in the future at the rate we're going. We're in the early part of the hockey stick WRT exponential growth of computing power. The way things are going, by the time your children are old people we'd have to be GOD-LIKE if computers double every couple years. That's the "wow" thing -- how different things must be just a bit further out on the hockey stick.
    [Saint Stephen]
  • Both you, and the fellow who mentioned that I could just multiply 1.1*ULONG_MAX, are in different ways pointing out that this particular problem is really O(1). I'm not current on the quantum computing literature, but it sounds like quantum computing makes the further bold claim that *any and all* problems that have ever been or ever will be are theoretically O(1).

    Let's get from here to there.

    First, why do we have to be stuck with stupid binary after all these years? Surely we can make the "wires" sensitive enough to recognize more than two electrical states. Lots more computing power in the same "physical space."

    Back at the turn of the century Gödel showed that non-trivial systems are not automatic, which ultimately is why we futz around with non-perfectly optimizing compilers that can't recognize that this problem is a single multiplication. A colleague was telling me about NP-completeness, and how with the lambda calculus (don't know much about it) we can verify completeness of a system (but what about consistency?). In other words, you can generate every possible truth, but you can't prove it doesn't generate falsehoods. Sounds like the problem you'd have with quantum computing: you'd still have to be able to recognize the "correct" result from all possible correct and incorrect results in the answer set.

    Flame on!
    [Saint Stephen]
  • by Saint Stephen ( 19450 ) on Saturday June 09, 2001 @06:14PM (#163357) Homepage Journal
    Here's some pure bogusness, but what do you think:
    I wrote a C++ program which initializes a double to 1.1; then adds 1.1 to it 4 billion times (ULONG_MAX).

    On my PIII 500 MHz laptop (circa 1998-99), this program runs in 30 seconds.

    On my new P4 1.7 GHz, it runs in 12 seconds.

    I didn't check, but I think Planck time is about 10^-43 seconds. Take the time it takes to execute one of these 4 billion steps: if it continues to cut in half every three years, we'll hit Planck time in about 100 years.

    In other words, there is a fundamental limit on how quickly we can know one single fact (Planck time), and our children will hit that by the end of their lifetimes.

    What then?
    [Saint Stephen]
  • (Please excuse my having read *way* too much S-F.)

    I would think that given enough horsepower, we should be able to brute force compute all the possible solutions for a problem. Add to that a little statistical math and you might possibly be able to build a minimal AI that could help with some decisions.

    So I guess what we need is a massive online peer-to-peer statistical repository. That way, one system could "learn" from others.

    /me heads to bed.

    --
    Adam Sherman
  • Nice of you to take my rather mean-spirited criticism in such a good-natured way. :-)
    --
  • Most people can't make out any detail smaller than a centimeter.

    By "people" do you mean "blind people feeling things with their feet"?

    A centimeter is 0.4 inches. I don't know about most people, but I can sure see things smaller than that.
    --
  • This will be important only so long as uniprocessor speeds are relevant. If we move to more parallelism (or, of course, quantum computing), this will be no more relevant than a limit based on how quickly people can shift the beads of an abacus.
    --
  • Didn't that happen last Thursday?
  • Actually, you just contradicted yourself. The Itanium is a lot less complex in instruction fetch, issue, etc., because it relies on the compiler to do that work. From a cursory look at the Itanium ASM doc, it seems that the assembler (or the programmer) organizes all instructions into bundles and instruction groups that can be executed in parallel. This keeps the Itanium from having to do that on its own. Plus, if you look at the Intel C compiler, you'll notice that it supports something called software pipelining, where independent instructions are set up so they can be executed in parallel. As for SPEC numbers, a $1200 Itanium beats an Alpha in floating point. Who gives a bleep about integer performance; the Itanium's is bearable. But look at that FP!
  • C++ is just as efficient as C, which is just as efficient as ASM, for small values of efficient. Seriously, though, some modern compilers can produce code that would put many hand-coders to shame. Besides, no matter what the language, the adds would take more than one clock. I believe the FP unit has a couple of clocks of latency for FP adds.
  • True, but the point of EPIC is to make the best use of available resources. If it is easy to make the compiler parallelize everything, then why not do that and save transistors on the chip for bigger caches/more function units? The 800MHz Itanium, which outperforms the Alpha for FP (which matters a *whole* lot more than INT on a workstation), costs less than $1200. You can bet that will go down significantly when volume increases and the arch becomes more mature (remember, EPIC is supposed to replace x86). So eventually, it will cost the same to put 10 Itaniums into a machine as it does to put 10 Athlon-6's or Pentium 5's. At that point, why not just build a better compiler and get more overall performance from the same number of transistors?
  • Bloatware providers (those that keep Intel & AMD in business)
    www.microsoft.com
    www.kde.org
    www.gnome.org
    www.xfree86.org
    www.trolltech.com
    www.gtk.org
    www.openoffice.org

    You see, it's not just MS that spews bloatware. It's simply that in the UNIX market, different organizations spew bloatware, while in Windows-land, all bloatware spewing is efficiently consolidated into one company.
  • Anyone want to comment on the validity of his verbosity? The first paragraph seems okay from a vocabulary point of view, but there's no way in hell I can figure out if the rest of it is even true, much less whether it is correct.
  • Aye. Again with the bus-speed Nazis. As long as you have a fat cache and a good amount of bus-speed, there are lots of apps that are still CPU bound. Consider, for example, floating point apps that perform better on an Athlon than on a P4, even though the P4 has 3.2GB/sec of bandwidth. True, bus-speeds are important, but so too are processor speeds.
  • by be-fan ( 61476 ) on Sunday June 10, 2001 @06:13AM (#163369)
    True, the bus does become an issue here. However, since cache fills tend to be large (32-64 bytes), it should be possible to have extremely wide busses (like the dual 256-bit busses on Alpha workstations) to compensate for a lower clock speed. Also, 20GHz busses won't come around until processors reach 100GHz or so (which is still a bit away), since 1/5 the processor speed seems to be a fairly regular bus speed. Of course, as many tricks as you put in there, the inherent problem remains; it just gets postponed somewhat.
  • /* some test code */
    if (i == 0)
  • First off, please neglect my previous post... hit the wrong button by accident.

    > C++ is not a very efficient language

    Well, this little program:

    #include <stdio.h>
    #include <stdlib.h>
    #include <limits.h>

    const double a = 1.1;

    int main()
    {
    double d = a;
    unsigned long i;
    for (i = 0; i < ULONG_MAX; i++)
    {
    d += a;
    }
    printf("%lf\n", d);
    return 0;
    }

    Compiled into this:

    .file "repadd.c"
    .version "01.01"
    gcc2_compiled.:
    .globl a
    .section .rodata
    .align 8
    .type a,@object
    .size a,8
    a:
    .long 0x9999999a,0x3ff19999
    .LC1:
    .string "%lf\n"
    .align 8
    .LC0:
    .long 0x9999999a,0x3ff19999
    .align 8
    .LC16:
    .long 0x99999999,0x40319999
    .text
    .align 16
    .globl main
    .type main,@function
    main:
    pushl %ebp
    movl %esp, %ebp
    pushl %eax
    fldl .LC0
    fldl .LC16
    pushl %eax
    movl $15, %eax
    .p2align 4,,7
    .L36:
    fadd %st(1), %st
    addl $30, %eax
    cmpl $-2, %eax
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    fadd %st(1), %st
    jbe .L36
    fstp %st(1)
    subl $12, %esp
    fstpl (%esp)
    pushl $.LC1
    call printf
    xorl %eax, %eax
    movl %ebp, %esp
    popl %ebp
    ret
    .Lfe1:
    .size main,.Lfe1-main
    .ident "GCC: (GNU) 2.96 20000731 (Linux-Mandrake 8.0 2.96-0.48mdk)"

    Do you think you can make it much faster using hand-crafted assembly code? Admittedly, I used C instead of C++, but that doesn't make any difference for anything as small as this.
  • Hint for futuristic article editors: the human brain has a hardware and software architecture that has absolutely nothing in common with that of an electronic computer.

    In my opinion there is very little physical that cannot be emulated with computers, given enough processing speed and memory.
  • by Louis Savain ( 65843 ) on Saturday June 09, 2001 @06:41PM (#163373) Homepage
    From Yahoo Dailynews: An investor could check his stock portfolio in the morning and find that the computer has analyzed the portfolio, market trends, economic data and such to present a number of options.

    ``You log on in the morning and (the computer) gives you two or three options: 'Have you thought about doing one of these things? I've done the calculations for you,''' Marcyk said.


    If the computer is so smart, why not just tell it to initiate whatever stock transactions it thinks are best? Come to think of it, if computers are that smart, you'll be out of a job and you won't have any money to invest in stocks unless you inherited an estate or had some money stashed away from the time when you were working.

    When that happens, we'll need a new law to replace Moore's law: the number of unemployed people will double every seven days. Andy Grove will be heard saying "Where is the limit? Show me the limit, goddamnit!" while an angry and hungry mob tries to force its way into the lobby of Intel's headquarters, brandishing pitchforks and God knows what else. :-D
  • Moore's Law: CPU capacity shall double every 18-24 months.

    Amendment I: Bus speed shall pretty much stay dormant, until some asshole decides to get off his ass and do something about it.

    Amendment II: Tape as a hard storage solution will stick around like herpes. Sure, some jerks at Livermore will screw around with rubies and diamonds, but the reality is one upgrade of the 8-track after another.

    Amendment III: A hard drive's fragility will double every 18-24 months. Shit, the instructions on the last hard drive I got said, "Do not breathe in room with hard drive before installation."

    Amendment IV: The average number of patches required between releases of software shall double every 18-24 months.

    Amendment V: The number of hours it takes you to turn off the stupid marketeering features of the new Windows Office release, like the auto-capitalizer, will double every 18-24 months.

    Amendment VI: (added by Microsoft recently) The amount of money you pay us for software you have to buy will now double every 18-24 months.

    Amendment VII: The number of months before Mozilla 1.0 is released doubles every 18-24 months.

    Amendment VIII: The number of people who use emacs and the number of people who use vi haven't changed since 1992 and may become constants of physics (like the speed of light).

    Amendment IX: The editorial skills of the /. editors will diminish by 50% every 18-24 months.

    Amendment X: The number of stupid patents issued shall double every 18-24 months.

    Amendment XI: The number of RAID variations shall double every 18-24 months, and the number of different labels for the same variation shall also double every 18-24 months.

    Amendment XII: The chances of a /. front page posting entitled "Goatsex and You" shall diminish by 50% every 18-24 months.

    "The Intern-what?" - Vint Cerf
  • by selectspec ( 74651 ) on Saturday June 09, 2001 @07:36PM (#163375)
    The Planck time is the time it would take a photon travelling at the speed of light to cross a distance equal to the Planck length. This is the 'quantum of time', the smallest measurement of time that has any meaning, and is equal to 10^-43 seconds, under the current speculation as to the Planck length. However, certain revelations regarding the size of certain extra dimensions have put the Planck length into a spin (it could be considerably larger, which could explain why gravity is such a seemingly weak force).

    But all of this is irrelevant, because there is no limit on how quickly we can know a single fact: with quantum computing we can determine a theoretically infinite number of facts from a single query.

  • Yeah, good point. A centimeter is pretty big. I was basing this on a chart which showed that you needed a microscope to make out detail somewhere between a centimeter and 1/10th of a centimeter.

    So, really, that should be most people can't make out much detail smaller than 1/10th of a centimeter and the transistors we're talking about are 50,000 times smaller...

  • Yes, I know... I'm quite embarrassed about this post now. I wish I could recall it.

    I both showed that I didn't have a grasp of the metric system AND that I didn't understand that the poster was talking about not being able to see the whole CPU, not just the transistor (which you've not been able to see for years).

    I guess I'm being moderated up for my unsupported suppositions later in the post, but that's not really any different than the post I was responding to. He just had different unsupported suppositions...

    I wish I had mod points so I could set my own post to "Overrated". Oh, but you can't mod your own posts, can you? That should be changed. Everyone should be allowed to apply Overrated to their own posts...

  • by JordanH ( 75307 ) on Saturday June 09, 2001 @06:39PM (#163378) Homepage Journal
    • It seems clear to me that Moore's Law does hold no matter what - in a way. When the size continues to decrease exponentially... smaller and smaller, to the point where we can't even see what we're making by the naked eye, it's not that further improvement becomes impossible, but simply that the process changes, or the technology.

    We're talking about 0.02 microns here. Most people can't make out any detail smaller than a centimeter. 0.02 microns would be 500,000 times smaller than what can be seen with the unaided eye!

    I really don't understand your reasoning. Are you saying that we are motivated to improve our technology all the time? What does this have to do with Moore's Law and specific predictions about how fast our technology improves?

    If anything, I think that Moore's Law might be a self-fulfilling prophecy.

    We just don't have that great a motivation to improve processor technology these days. We have processor technology that is beyond the dreams of engineers 30 years ago. For the most part, we have reached a point where most of the needs of applications of massively powerful computing are currently realized in today's machines.

    Sure, faster is better, but does faster translate to big development dollars to outdo Moore's Law when researchers and developers are constantly trying to develop software and systems to keep up with the huge gains that we're seeing with Moore's Law? In this scenario, Moore's Law is how fast machines improve because Moore said as much, and that's what drives the designers to improve, keeping up with and staying ahead of Moore's Law. The designers don't want to be in the group that finally failed to live up to the expectations of the industry, but there's also no particular motivation to get ahead of Moore's Law's predictions either.

    Take the above with a grain of salt. It's just conjecture, of course.

  • Ugh... a couple of the postings in here are scientifically dubious at best, as the moderators happily mod up anything vaguely resembling their high school physics class.

    Disclaimer: While I have a Ph.D. in plasma physics and did a large amount of scientific computing in my thesis, this is not an area in which I am an expert. However, I do know that a number of high quality physicists have given this a fair amount of thought (like Feynman and Wheeler, for instance), and I have read some of their work.

    The big limit is thermodynamic. The minimum energy it takes to flip a bit is of order k_b T_a where k_b is Boltzmann's constant and T_a is the ambient temperature (I think Wheeler was the first to show this limit through clever gedanken experiments but I could be wrong). The ambient temperature of the universe as measured to high precision by the cosmic microwave background black body radiation spectrum is T_a ~ 2.8 K (that is ~ -270 C or ~ -460 F for the unit challenged but remember Celsius and Fahrenheit are not referenced from absolute zero for the following formula).

    So, suppose your calculation needs to flip N bits and you want to do it in time tau. Then the thermodynamic minimum theoretical power requirements for your computer are of order:

    P ~ N k_b T_a / tau

    So you want to do a complex calculation on a Planck time scale? I hope you have the power output of a supernova available. Of course, this is the minimum. You have to account for all the inefficiencies in generation, cooling... In the end, you might need a couple of simultaneous supernovae.

    Also, for reference, the Planck length and Planck time are the measurement scales made by constructing quantities of the appropriate unit out of Planck's constant h, the speed of light c and the gravitational coupling constant G. Crudely speaking, it is the length scale at which conjectured quantum gravity effects dominate. Planck length considerations aren't really factored into theoretical limits of computation, as other more obvious limits are reached first (like the above limit).

    A more practical issue is whether or not computer miniaturization can continue below the rapidly approaching atomic length scale (~1 A). For example, could you make logic gates based on complex inter-nuclear interactions, or out of non-linear vacuum dielectric polarization of hard gamma rays (i.e. Compton backscattering off virtual electron-positron pairs), or other such known exotica of modern physics?

    Kevin
  • Carbon dioxide would actually be the closest relative to silicon dioxide, but solid carbon dioxide (dry ice) would have to be kept cold, since it sublimes at about -78 C at atmospheric pressure. Of course, if you kept it under high pressure, you could keep the temperature low and overclock the hell out of it =)
    ___________________________________________

    I'm somewhat ignorant of chemistry, but HO2 is neither water, nor possible with proton/electron bonding, since hydrogen has a +1 charge, and oxygen -2.

    --

  • An atom's diameter is about a third of a nanometer. You should check your numbers.
  • Silicon is just too cheap and abundant to give up on right now

    Exactly. And by the time we're finished with this obsession of ours with faster computing (since physics will stop us at some point), we'll start seeing better computing [mit.edu]. I think we'll start to see more special purpose CPUs and hardware for pervasive computing, and the focus will become less about innovation and the next greatest thing (since we all tire of it some time) and more about integration. Computing will be truly pervasive and really will make things easier this time (read: paperless office).

    -----
    "Goose... Geese... Moose... MOOSE!?!?!"
  • I don't see any mention of how they managed to do this. I doubt that it would be with a laser, since a light wavelength is usually measured in the hundreds of nanometers. An electron gun, perhaps? That's about all I can think of . . .
  • Gee, when I try doing that (adding 1.1 ULONG_MAX times) on my computer, the program doesn't want to finish. Is a 533 MHz AlphaPC really that much slower than a 500 MHz Pentium?????

    All Your Base Are Belong To Us!!!
  • Well, I suppose I should be happy that at least one person noticed. I think I should forgo my attempts at humor and just wait for the pros to jump in. :-)

    All Your Base Are Belong To Us!!!
  • lol - a "Handfull of transistors" 20nm in size? Shouldn't they be putting those into chips? ;)

  • Hell.. A handful of transistors that small would be enough to produce several thousand or so processors... Get busy!!! Chop Chop!!
  • Most people can't make out any detail smaller than a centimeter.

    That makes reading this comment very difficult indeed.

  • Fortunately your reasoning doesn't take into account parallelism. For instance, in 100 years' time processors might have hit some limit (say of 10^-40 seconds) for executing a single operation, but what about 10,000 FPUs on a single chip? What about 1,000,000 of these processors running in parallel? This is the approach the EFF took to building Deep Crack.
    I think the problems facing engineers in the future will be finding ways of increasing parallelism within hardware, and of course developing software to take advantage of those features.
  • Yes, but then so do all the clients involved in distributed.net [distributed.net], but the system is designed so that the overhead of communication and synchronisation is kept to a minimum.
  • Most people can't make out any detail smaller than a centimeter.

    How small a detail you can make out depends pretty critically on how close you are to the detail in question. A human hair is only a hundred microns (i.e. a few hundredths of a centimeter) wide, but people have no trouble making out individual hairs at close range. I routinely work with tubing that's 140 microns in outer diameter, and I personally have no trouble seeing it, though it gives some of my co-workers fits. 60 micron diameter optical fiber is a bit tougher to see, but still doesn't require a microscope.

    There are some limits, though. The shortest wavelength that the eye can see is about 0.35 microns, and the laws of optics say that you can't make out details much smaller than one wavelength. Light will just diffract around anything much smaller, so it's physically impossible to see something 0.02 microns across, even with a theoretically perfect visible light microscope. That's the exact reason that these kinds of features have been so difficult to make; the same rule that limits the resolving power of a perfect visible light microscope also limits the size of feature you can make with visible light lithography. To make something 0.02 microns across they have to use very short wavelength EM radiation.

  • Dude, hydrogen dioxide isn't water. Water is H2O (dihydrogen monoxide). Hate to rain on your parade, just thought I should point it out.

  • 20 nanometers, or 0.02 microns, in size

    Every few months we hear about how things are smaller, faster, better, more. Too bad it'll be 10 or 20 years before this stuff filters down to the consumer level.

    • Most people can't make out any detail smaller than a centimeter.

    Ha ha ha ! And this is fucking "insightful"??

    Please.

    1cm is 1/2.54 of a fucking inch.

    Most people can easily see a fraction of 1mm which is 0.1 cm.

    With a naked eye, yes.

  • >We just don't have that great a motivation to improve processor
    >technology these days. We have processor technology that is beyond the
    >dreams of engineers 30 years ago. For the most part, we have reached a
    >point where most of the needs of applications of massively powerful
    >computing are currently realized in today's machines.
    Ha! Just wait till Quake 4 hits the shelves, we'll see what you'll be saying then!
    Seriously, current computing power is FAR below what is needed for realistic simulation of reality. When you look at CGI in the movies nowadays, and you've got a good eye, you'll see it still 'feels' artificial, though they used multi-computer render farms and the computations took months. And that's only a flat 2D projection of a 3D scene, in a resolution (about 8000x8000 pixels) that's much less than what a single human eye can achieve; sound is still digitized from natural sources; and they don't do all the simulation of physics - much of that is pre-directed, 'hand'-animated, and all the logic of the scene is a human's work (computers didn't process the 'what if a ship hits an iceberg' rules when they were making Titanic!)
    No, today's machines are far from realizing the need for computing power. Not only in VR uses. What about scientific processing of data? Would SETI exist if we didn't need much more processing power than we have now? What about intuitive user interfaces? I saw Nautilus from my new Mandrake 8.0 _crawl_ on my PIII 550, 256 Mb RAM just yesterday.
  • The use of this program is a little naive. C++ is not a very efficient language, so all these steps probably take more than one clock cycle per addition. Secondly, there has to be some kind of sine wave for a clock signal, and the resolution would be too low when a clock cycle takes one Planck time. Thirdly, this theory doesn't take stuff like quantum computing into account.

    So the limit you mentioned will be hit sooner if the current trend continues, but it's questionable if it'll really matter.
  • by piecewise ( 169377 ) on Saturday June 09, 2001 @05:53PM (#163397) Journal
  • It seems clear to me that Moore's Law does hold no matter what - in a way. As sizes continue to decrease exponentially, smaller and smaller, to the point where we can't even see what we're making with the naked eye, it's not that further improvement becomes impossible, but simply that the process, or the technology, changes.

    Example: a floppy disk's size can be pushed to the limit, and finally we have 1.4MB floppies.. but sooner or later, you need a CD. And then a DVD. Et cetera.

    It'll still be quite a while, but eventually silicon will simply be the wrong technology, the wrong process. Of course, a processor technology lasts MUCH longer than a subcomponent, such as a floppy drive technology.

    Moore's Law. Too bad it's "only" x2 and not ^2. :-)
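    Even "only" x2 every 18 months compounds fast; a quick sketch (the 42 million starting count is an assumed 2001-era figure, not from the article):

```python
def moores_law(start_count, years, doubling_period=1.5):
    """Project transistor count forward, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# 2001 to 2007: 6 years = 4 doublings = 16x growth
print(moores_law(42e6, 6))  # in the neighborhood of the "close to 1 billion" the article cites
```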
  • you mean a millimeter? I'm guessing you're not down with that metric thing yet. soon, soon, the Canadians will take you over.

    The slashdot 2 minute between postings limit:
    Pissing off hyper-caffeinated /.'ers since Spring 2001.

  • thanks - you're a beacon of hope for those who read /. :)


  • because losing money on the stock market is an addiction. We can't let computers have all the fun...


  • I'm not absolutely sure, and I never liked chem, but doesn't the diameter of an atom vary greatly depending on the element, because of the number of electrons and their shells? That being true, it seems you may want to check your facts before jumping on someone else's.
  • heh...(being stupid here of course), can you imagine being the guy who drops a container of these things?

    -- "nobody move! I just dropped 5 pounds of .02 micron transistors!"

    That would be way worse than losing a contact in the snow...
    I wonder, though, how many of these little guys it would take to amount to 5 pounds?

    NO SPORK
  • The implications of developing such small and fast transistors are significant: Silicon will be able to be used to make chips until 2007, and it will make possible microprocessors containing close to 1 billion transistors running at 20 gigahertz by that year. [...] Some of the components in the transistors Intel announced -- [...] are only three atoms thick.

    I keep thinking about the problems with military gear where they have to worry about cosmic rays knocking out circuits. I don't know how usable these things will be in high radiation areas unless there is substantial redundancy built in.

    And to speculate on what we'll run on these puppies. Or the cooling systems.

    Oh My!

    Check out the Vinny the Vampire [eplugz.com] comic strip

  • by cthugha ( 185672 ) on Sunday June 10, 2001 @02:11AM (#163404)

    I'm sorry, but won't creating processors with such high clock frequencies just be negated by the inherent slowness of the bus? One of the things you have to remember when designing hardware with such short clock cycles is the inherent speed limit on signals propagating through it. Light can only travel 1.5 cm in the time afforded by a single cycle of a clock running at 20 GHz. Signals in conductors are slower still. The implication of this is that, given current motherboards, the CPU will stall for a hell of a lot more cycles waiting for a memory read/write.

    Caching can only go so far. It seems to me that increases in overall computing power (however you wish to measure it) will not come just through cranking up the clock speed, but will require fundamental architectural changes to the PC as we know it (main storage on the CPU, overall miniaturization, etc).
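    The parent's 1.5 cm figure falls straight out of c/f; a minimal check (using the vacuum speed of light, so this is an upper bound - real traces are slower):

```python
C_LIGHT = 2.998e8   # m/s, speed of light in vacuum

def max_travel_cm(clock_hz):
    """Upper bound on how far a signal can travel in one clock cycle."""
    return C_LIGHT / clock_hz * 100

print(max_travel_cm(20e9))  # ~1.5 cm per cycle at 20 GHz
```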

  • by Ami_Chan ( 188543 ) <MercMoonie&yahoo,com> on Saturday June 09, 2001 @05:56PM (#163405)
    Personally, I don't think we're going to see silicon dioxide go away for quite some time. Yes, it does have some physical limitations, but few inexpensive alternatives seem possible within a 5 year time span.

    Of course, new designs and materials will come (Toshiba is starting to use diagonal circuitry, helping efficiency). Silicon is just too cheap and abundant to give up on right now - we'll probably see it for a few decades into the future in things like appliances, calculators, and handheld computers, because they're cheap to manufacture in mass quantities and the material itself is one of the most abundant substances on the surface of the planet (it's a large component of common sand).

    Therefore, I think the prediction of silicon dioxide fading away in just a "few years" is a bit premature. If we've learned anything from the tech industry, old standards tend to stick around for a VERY long time (witness floppy drives, ISA slots, and serial ports).

  • Also, as process size decreases, so does power consumption. So unless you drastically increase the size of the die, you will not have as much heat or need for cooling systems.
  • by ZeLonewolf ( 197271 ) on Saturday June 09, 2001 @06:17PM (#163407) Homepage
    Wow. just think...

    What would happen if computing hardware technology reached hard atomic limits?

    A new era would begin...programmers would actually have to write efficient code! The end of bloatware as we know it!

    Moore's Law II: On average, every 15 months, code would suck 50% less...


  • No, the 450lb man will have to eat less, and therefore BE less, um, bloated. Dixi.
  • The whole idea is to have the *smallest* transistor. The smaller the channel the less power it consumes.

    -Jeff
  • Ummmmm.... NO!

    Gallium Arsenide is used to make high efficiency solar cells.

    -Jeff
  • I think you're right. My example was just because I work with solar arrays. Notice my homepage on my profile.

    -Jeff
  • Here's the equation

    P=(1/2)*f*C*V^2

    Lower capacitance == lower power
    Lower frequency == lower power (we don't want that though)
    Lower voltage on the transistor == much lower power.

    Why did we go from 5V CMOS technology to 3.3V? Why did we go from 3.3V to 1.6V technology?

    I think we should not disregard the voltage at which we can run these circuits.

    -Jeff
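    Plugging numbers into that equation shows why those voltage drops mattered (the 1 GHz clock and 1 nF switched capacitance below are made-up illustrative values, not real chip figures):

```python
def dynamic_power(f_hz, c_farads, v_volts):
    """CMOS dynamic power: P = 1/2 * f * C * V^2."""
    return 0.5 * f_hz * c_farads * v_volts ** 2

f, c = 1e9, 1e-9  # assumed: 1 GHz clock, 1 nF total switched capacitance
print(dynamic_power(f, c, 5.0))   # 5 V CMOS
print(dynamic_power(f, c, 3.3))   # 3.3 V
print(dynamic_power(f, c, 1.6))   # 1.6 V: roughly 10x less power than at 5 V
```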
  • by tswinzig ( 210999 ) on Sunday June 10, 2001 @05:35AM (#163413) Journal
    The number of people talking about how long Moore's Law will last doubles every 18-24 months.
  • Transistor packing on such a processor would be so tight, you could have 256 MB right on the CPU, *EASY*.

    C//

  • only 20 nanometers, or 0.02 microns, in size

    Note: When IBM gets done stretching them, they go up to 40nm.
  • I think we're starting to reach the limit of "smallness" here. It shouldn't be too long before the best we can hope for is for "breaking even" with regards the laws of thermodynamics.
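    The thermodynamic floor being alluded to is the Landauer limit: erasing one bit of information costs at least kT·ln 2 of energy. A quick calculation (room temperature assumed):

```python
import math

BOLTZMANN = 1.380649e-23  # J/K

def landauer_limit_joules(temp_kelvin=300):
    """Minimum energy to erase one bit of information at temperature T."""
    return BOLTZMANN * temp_kelvin * math.log(2)

print(landauer_limit_joules())  # on the order of 3e-21 J per bit erased
```

    Today's gates dissipate many orders of magnitude more than this per switching event, so there's headroom before "breaking even" is the best we can do.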
  • by n xnezn juber ( 243178 ) on Saturday June 09, 2001 @07:45PM (#163418)
    For all you non-EEs out there, Silicon Dioxide (SiO_2) is used in chips as an insulator. It is not that we're removing all the silicon from the chip, just replacing some of the insulating material. These articles are not talking at all about the silicon wafer substrate.

    Some of the silicon dioxide has already been replaced for a couple years with materials called "low-k dielectrics" which basically means it results in lower capacitance (lower capacitance == faster chip) than silicon dioxide. This is only on the metal layers which are relatively far from the transistors. The silicon dioxide mentioned in the article is the insulator used in the actual transistor itself. It is the one that is going to be "atoms thick" and it is one of the fundamental parts of the transistor.
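    To see why the oxide thickness matters, you can treat the gate as a parallel-plate capacitor, C = k·ε0·A/d. The dimensions below (a 20 nm x 20 nm gate, an oxide around 0.7 nm for "three atoms thick") are illustrative guesses, not Intel's actual numbers:

```python
EPS0 = 8.854e-12   # F/m, vacuum permittivity
K_SIO2 = 3.9       # relative permittivity of silicon dioxide

def gate_capacitance(area_m2, thickness_m, k=K_SIO2):
    """Parallel-plate approximation of MOS gate capacitance."""
    return k * EPS0 * area_m2 / thickness_m

area = (20e-9) ** 2   # hypothetical 20 nm x 20 nm gate
d = 0.7e-9            # roughly "three atoms thick" of oxide
print(gate_capacitance(area, d))  # tens of attofarads
```

    Thinner oxide means more capacitance per area (stronger gate control) but also more tunneling leakage, which is exactly why a replacement insulator comes up.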
  • No matter what little tricks they try, this whole transistor thing is just a passing fad.

    If it doesn't make my 100 watt tube head [marshallamps.com] go to 11, what good is it?




    Whatcha doooo with those rollin' papers?
    Make doooooobieees?

  • Computers have taken advantage of parallelism for a long time. Most scientific applications (which I guess you're representing by a repeated floating-point addition) have tremendous amounts of parallelism...
  • For the kind of program this guy described, it's all about clock frequency since there is no parallelism...
  • Itanium has a lot less complexity in instruction fetch/issue/execute, etc. So it seems to me like this contradicts what you said. It is wider (6 instructions) as opposed to the 3 instructions of the Pentium 3, etc. Just look at the SPEC numbers, though (hint: the P3 1 GHz beats it and costs 1/50 as much (or some similarly small fraction))...
  • Actually, in recent times there have been papers published on how to create CPUs which are tolerant in the face of cosmic radiation. Two examples would be Todd Austin's DIVA research http://www.eecs.umich.edu/~taustin/papers/micro32_diva.pdf and Reinhardt's Transient Fault Detection via Simultaneous Multithreading http://www.eecs.umich.edu/~stever/pubs/isca00-srt.pdf.

    They take two separate approaches. DIVA puts a second, small cpu on the core which checks all work performed by the primary cpu. The multithreading paper executes two redundant copies of a program, checking that the results generated between the two agree (on the same processor, using simultaneous multithreading).
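    The simplest version of the redundancy idea is triple modular redundancy: run the computation three times and majority-vote the results. A toy sketch of the principle (this is not how DIVA or the SMT scheme actually work, just the basic idea):

```python
def majority_vote(a, b, c):
    """Return the value at least two of the three redundant results agree on."""
    return a if a == b or a == c else b

def tmr(func, *args):
    """Execute func three times and vote; masks any single transient fault."""
    return majority_vote(func(*args), func(*args), func(*args))

print(tmr(lambda x, y: x + y, 2, 3))  # 5, even if one of the runs had been corrupted
```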

  • Well, I left out some details... If you rely on the compiler to do things, you can make your processor wider (actually, the primary limitation on making it infinitely wide is the number of execution units and cache ports (and, I suppose, the number of branch predictions you can provide)). Making a 3-issue out-of-order processor, however, is more complex overall than a 6-wide EPIC processor (although the FU bypass network probably isn't pretty on the EPIC).

    As far as FP vs INT... well... I don't know... I mean, if all you care about is FP, then your work is 99% likely to be easily parallelizable. Thus, just buy 10 1-gig Athlons and be happy... but whatever :)

  • They tell us the inside will be built the same logical way: nothing smart, just a bunch of logic arrays (1 billion of them), no fuzzy logic at all. Look around - even a bird has immense AI+3D power; it does real-time processing of the world. What do we have? A machine that can't deal with 2D images (movies) and MP3s in real time - I mean encoding them. So we got Clippy, the smartest thing in the world, more annoying than anything. In the new WinNT (Xperimental Programming from Micro$$oft) Clippy "helps" you clean the desktop of icons - what crap!!! Even with a 1 THz computer, the Microsoft programmers will code with a lot of NOOPs so your hardware looks old and you throw it in the Recycle Bin.
  • Most people can't make out any detail smaller than a centimeter

    Haha! Heehee. Sorry. Do you know what a centimeter is? My index fingernail is about a centimeter wide. On my monitor, the word DUCK is about a centimeter wide. I am 186 centimeters tall. The civilized world (read: not afraid to make changes to improve efficiency) uses the metric system now, so I suggest you learn it :)

    Then again, I still tell everyone that I'm 6'1" and a bit..

  • by Liquid-Gecka ( 319494 ) on Saturday June 09, 2001 @05:36PM (#163427)
    A recent Slashdot story [slashdot.org] covers the possibilities of .01nm transistors and how there is currently a theoretical limit, with our current process, of .002nm
  • Not to mention Intel's "Jackson" tech. It seems to me that SMT will greatly reduce the inefficiency of our current uProcessors. If it is backed up by a properly powerful core, that is (one where there is almost always a free int/FP pipe doing nothing, as MUST be the case with the Athlon, since it has 3 int and 3 FP pipes and manages an IPC of less than 2)
  • by Waffle Iron ( 339739 ) on Saturday June 09, 2001 @06:19PM (#163429)
    "You log on in the morning and (the computer) gives you two or three options: 'Have you thought about doing one of these things? I've done the calculations for you," Marcyk said.

    Just once, I'd like to read an article about a new microprocessor technology that doesn't have some silly quote about what kind of AI feature it will enable.

    For decades, hardware has been improving exponentially. For decades, they've been predicting that the new features will magically enable intelligent software.

    All we've got to show for it so far is Clippy the paper clip. A mere 10X speedup won't make Clippy any less annoying.

    Hint for futuristic article editors: the human brain has a hardware and software architecture that has absolutely nothing in common with that of an electronic computer.

  • A mere 10X speedup won't make Clippy any less annoying.

    I don't know, anything that helps me dismiss the damn thing a couple milliseconds faster is forward progress as far as I'm concerned...

  • So now we've got megacorps who are only interested in pleasing their shareholders. And what happens when the shareholders are intelligent AIs supporting whichever causes make the most profit? Making decisions and transactions in seconds, calculating the market's reactions using chaos theory and improving on them, the fastest computers playing ball over slower ones... Who's going to win?

    Surely not us consumers and workers. Even now traditional 8-hour workdays are routinely exceeded, using caffeine, pills, and stimulating experiences and working conditions to keep workers healthy. Good health is defined by new standards every year so that the most productive units look the most healthy. "Healthy people smile a lot, their days are filled with varying tasks and refreshing experiences," and so on...

  • Most people can't make out any detail smaller than a centimeter.
    So on his 15" monitor, he's browsing at 23x41?
  • And remember, two CPUs running in parallel enjoy a greater performance boost (on some tasks) than a single processor with twice the speed of either of the dual processors.

    I think you have it backwards. Two CPUs almost never run at twice the speed of one. Usually, it's good for an extra 50-70 percent speed.
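    That 50-70 percent figure is roughly what Amdahl's law predicts once part of the work is serial. A sketch (the 80% parallel fraction is an assumed example, not measured):

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    """Amdahl's law: overall speedup given the fraction of work that parallelizes."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / n_cpus)

print(amdahl_speedup(0.8, 2))  # ~1.67x: a second CPU buys roughly 67% more speed
```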

  • Name one.

    System performance is going to be lower on the dual proc version just from multiproc overhead.

  • Size Does Matter
    And as the growing bloat^H^H^H^H^Hsoftware industry has proven,
    it won't do anyone any good if you don't know what to do with it.
  • ....there's always MS windows to slow them down to reasonable speeds!
  • DNA is trivially custom-synthesized on solid supports. You rig chromophores or fluorophores to form Watson-Crick dimers instead of ATCG and roll rigid chromophore configuration and custom ordering any way you want. Bridge the base pairs with hydrogen bonding, dipole alignment, hydrophobic effects... be a chemist. We already have evidence of anomalous electronic conductivity in ordinary DNA (depends on base composition, which is a veeeery good sign). Want longer molecules? DNA-ligase and whatever. Let the enzymes do the fine work. After you get your Nobel Prize you want to manufacture, and you DON'T do it solid phase. You do it PCR with custom (patentable) templating. I bet you bust the conventional and closely held PCR application patents, too. GROW THE SUPERCONDUCTORS in bugs! Spinoff of fluorescent DNA and RNA probes for genome sequencing and clinical diagnostics and therapeutics - re photodynamic therapy targeted to oncogenes (especially gene hyper-repeat sequences).

    Why use crappy phosphate-deoxyribose alt-copolyester? Peptide nucleic acids are vastly more robust and give you optional chiral centers for more goodies, like non-linear optical devices.

    Hell, make a PNA 17-25 mer cocktail complementary to a few critical HIV gene sequences and cure that, too, by knockout strategy (the Flavr-Savr HIV therapy). PNAs are uncharged and readily permeate cell membranes, they are totally untouched by nucleases and other catabolisms, and they are cheap to make. Turn off HIV RNA, turn off disease process progression. Boom. None of this downstream small molecule enzyme inhibitor bullshit that makes so much money for the pharm workers.

    Original proposal is an interesting problem, and rather a small proportion of the population is up to it. When I started out in the business some 30 years ago, the process of discovery and original proposal awed me. It still does, and my track record has been exemplary. Perhaps the best answer is that you must read everything and be prepared for things to bump around in your head.

    Example: My first original research proposal was to synthesize an obscure polycyclic alkaloid (in 32 steps! Silly synthesis is the refuge of a scoundrel). An ocean of blood flowed, and all of it was mine save for one redeeming skeletal inversion which was deemed "adequate." The next year, for my second original proposal, I proposed synthesizing C2 in cryogenic matrix and gas phase. C2 is hot stuff (literally) in flames and comet tails (Swan lines), and its electronic structure was uncomputable at the time. When you warm the matrix, the fragments recombine to give acetylene diethers - which had not been synthesized at that time. The diethers dimerize to a squaric acid precursor, which was hot stuff re squarylium dyes for photoconductors. The tar from the reaction was worth at least ten times the cost of starting materials.

    Know everything, and see where stuff rubs.

    Almost any ten-carbon lump turns into adamantane in aluminum chloride/bromide slush. We can do better (though not cheaper) in ionic solvents like N-methyl-N-(n-butyl)imidazolium tetrachloroaluminate with up to another added mole of AlCl3. The media support multiple carbocationic rearrangements as a benign environment. What happens if you put micronized graphite into the slush and bubble in isobutylene? Will you edge alkylate and solubilize, or make 1-D tert-butylated diamond plates, or will something else happen? Look at all the applications of graphite fluoride and graphite intercalates, as in high energy density battery systems and high number density low bulk mass hydrogen storage modalities.

    Sargeson trapped Co(en)3(3+) as the inspired sepulchrate (formaldehyde plus ammonia), and then the brilliant sarcophogate (formaldehyde plus nitromethane; look down the triangular face of the coordination octahedron). Stop being an inorganiker and start being an organiker. That last gives you "para" nitro groups, which give you amines, which give you redox nylon (and azo linkages; polyisocyanates, polyurethanes, epoxies, acrylamides, and...) Nitrogen chemistry is incredibly rich - conjugated azo linkages, fluorescent heterocycles, stable free radicals, extrusion and caged radical recombination... As Co(en)3(3+) is trivially optically resolved, you also have potential non-linear optical films switchable through redox change. (Information storage, chemical transistors, sensors, clinical diagnostics, electrochromic windows...) It goes on and on... a whole lifetime of research. Nobody has diddled with it.

    Look up the synthesis and reactions of hydroxylamine-O-sulfonic acid in Volume 1 (!) of Fieser and Fieser. Look at the mysteries of ammonia - inversion, nucleophilicity. Look at the Alpha Effect re hydrazine, hydroxylamine, and hydrogen peroxide. Look at Bredt's rule and all the interesting things it does at bridgeheads. Now, make it all rub against itself: Start with 1,4,7-triazacyclononane, which is easy enough though sloppy to make in bulk. Gently nitrosate it. The nitroso group goes on the first amine, then the adjacent amine (pre-organized to attack re Cram) attacks at the nitroso nitrogen to give you the hydroxylamine. Do the usual hydroxylamine-O-sulfonic acid synthesis and you tether the original nitroso nitrogen to the third amine with the original nitroso's oxygen as the leaving group. What have you got? You have four bridgehead nitrogens rigidly held, none of which can invert. The apical nitrogen is tethered only to other aliphatic nitrogens - which has never been done. It cannot invert and... for all that, it may have no nucleophilicity whatsoever, because the Alpha Effect is euchred out by geometry and inductive electron withdrawal is mammoth. You could do it in undergrad lab.

    I once watched a bunch of engineers with a very big budget try to excimer laser drill parallel or serial hundreds of 5 micron holes in PMMA intrastromal corneal implants (without the holes to move oxygen from outside and nutrients from inside the cornea dies and sloughs, which is tough on the rabbits). Buncha maroons. 5 microns is a magic number to an organiker, and I won't insult your intelligence with the trivial solution. The next Tuesday I delivered a foot-long bar of oriented two-phase PMMA which was cut and polished to spec, had its holes revealed, and got me into incredible hot water since my employer did not give shit one about the product but was really interested in the long term money budgeted by its parent company.

    Take two cyclopentane rings (Framework Molecular Models do this nicely). Put 5 all-cis (vs the ring, not olefin, configuration, which need only be consistent) alkenes on one cyclopentane. Cap with the other. Now, twist slightly and watch the pi-orbitals. Is that a clever way to make dodecahedrane, or what? The alkenes came from alkynes. The alkynes were assembled with Schrock alkyne metathesis catalyst from the nitriles. Strain being what it is, you might want to have diacetylene linkages (copper-mediated oxidative coupling) and go for a bigger hydrocarbon bubble. Start with all-cis 1,3,5-cyclohexane and trace the diacetylene evolution (no strain here!) Consider 1,3,5-trans-2,4,6 all cis-substituted cyclohexane). Voila! You grow 1-D diamond (note the ring conformation and the special name given to that diamond structural variant).

    I could go on for megabytes. All you need do is read the library, hold it all in your head, and wonder "what if..." where stuff rubs together. This is the first (easy) kind of genius. The second (hard) kind of genius is to see it all ab initio. I don't have a handle on that one.

  • While we may not be able to truly follow Moore's law (cpu speeds double every 18 months) with silicon dioxide cpus, the fact that silicon-based processors are continually getting smaller, cheaper, and less power-hungry suggests that when we finally "hit the wall" in terms of silicon cpu performance, it will be practical in terms of space, cost, and power to increase processing power simply by using multiple CPUs. There is no basic, fundamental reason a motherboard couldn't be released with support for, say, 5 Pentium X processors running at 15 gigahertz, or whatever Intel will have on the market in 2007. And remember, two CPUs running in parallel enjoy a greater performance boost (on some tasks) than a single processor with twice the speed of either of the dual processors. While the increase in processor speed may slow, I think we'll find ourselves simply using more processors in each computer, so your PC will still be obsolete within minutes after purchase :-).

  • I believe IBM is working on nanotechnology; they are developing tubes which are only a few atoms wide and should be out in 10 years. Massive reductions in power needs and huge benefits in speed are possible with this. The physical limits of silicon will be reached by that time, and I'm sure there will be many attempts like this to stretch out the technology as long as possible. Quite frankly, processors may be reaching maximum speeds, but our computer systems aren't, and the processor war is just hype. They need to be redesigned to allow for higher FSB speeds, which is currently impossible with the physical size of motherboards, and to allow for higher on-board bandwidth. Adding 4 100 MHz channels isn't a step in the right direction as with the Pentium 4; it's just a workaround, not really getting much faster. I say work on fixing and changing the physical layout of computer parts so speeds can be improved, and wait for nanotechnology to get here; it won't take too long. Intel's yields at current process sizes, which are much larger than 0.02 microns, are poor at best; it's just inefficient to reduce the size too much.

