• #### Re:Another Limit: Planck Time (Score:1)

C++ is just as efficient as C is just as efficient as ASM, for small values of efficient. Seriously, though, some modern compilers can produce code that would put many hand-coders to shame. Besides, no matter what the language, the adds would take more than one clock. I believe the FP unit has a couple of clocks of latency for FP adds.

True, but the point of EPIC is to make the best use of available resources. If it is easy to make the compiler parallelize everything, then why not do that and save transistors on the chip for bigger caches/more function units? The 800MHz Itanium, which outperforms the Alpha for FP (which matters a *whole* lot more than INT on a workstation), costs less than 1200. You can bet that will go down significantly when volume increases and the arch becomes more mature (remember, EPIC is supposed to replace x86). So eventually, it will cost the same to put 10 Itaniums into a machine as it does to put 10 Athlon-6's or Pentium 5's. At that point, why not just build a better compiler and get more overall performance from the same number of transistors?
• #### Re:Microsoft + Intel conspiracy (Score:2)

Bloatware providers (those that keep Intel & AMD in business):
www.microsoft.com
www.kde.org
www.gnome.org
www.xfree86.org
www.trolltech.com
www.gtk.org
www.openoffice.org

You see, it's not just MS that spews bloatware. It's simply that in the UNIX market, different organizations spew bloatware, while in Windows-land, all bloatware spewing is efficiently consolidated into one company.
• #### Re:Uses in DNA super computers? (Score:2)

Anyone want to comment on the validity of his verbosity? The first paragraph seems okay from a vocabulary point of view, but there's no way in hell I can figure out if the rest of it is even true, much less whether it is correct.
• #### Re:this is just a middle step. (Score:2)

Aye. Again with the bus-speed Nazis. As long as you have a fat cache and a good amount of bus speed, there are lots of apps that are still CPU bound. Consider, for example, floating point apps that perform better on an Athlon than on a P4, even though the P4 has 3.2GB/sec of bandwidth. True, bus speeds are important, but so too are processor speeds.
• #### Re:Problems with 20 GHz processors (Score:3)

on Sunday June 10, 2001 @07:13AM (#163369)
True, the bus does become an issue here.
However, since cache fills tend to be large (32-64 bytes), it should be possible to have extremely wide busses (like the dual 256-bit busses on Alpha workstations) to compensate for a lower clock speed. Also, 20GHz busses won't come around until processors reach 100GHz or so (which is still a bit away), since 1/5 of the processor speed seems to be a fairly typical bus speed. Of course, as many tricks as you put in there, the inherent problem remains; it just gets postponed somewhat.
• #### Re:Another Limit: Planck Time (Score:1)

/* some test code */
if (i == 0)
• #### Re:Another Limit: Planck Time (Score:1)

First off, please neglect my previous post... hit the wrong button by accident.

> C++ is not a very efficient language

Well, this little program:

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

const double a = 1.1;

int main()
{
    double d = a;
    unsigned long i;

    for (i = 0; i < ULONG_MAX; i++) {
        d += a;
    }
    printf("%lf\n", d);
    return 0;
}

Compiled into this:

.file "repadd.c"
.version "01.01"
gcc2_compiled.:
.globl a
.section .rodata
.align 8
.type a,@object
.size a,8
a:
.long 0x9999999a,0x3ff19999
.LC1:
.string "%lf\n"
.align 8
.LC0:
.long 0x9999999a,0x3ff19999
.align 8
.LC16:
.long 0x99999999,0x40319999
.text
.align 16
.globl main
.type main,@function
main:
pushl %ebp
movl %esp, %ebp
pushl %eax
fldl .LC0
fldl .LC16
pushl %eax
movl $15, %eax
.p2align 4,,7
.L36:
addl $30, %eax
cmpl $-2, %eax
jbe .L36
fstp %st(1)
subl $12, %esp
fstpl (%esp)
pushl $.LC1
call printf
xorl %eax, %eax
movl %ebp, %esp
popl %ebp
ret
.Lfe1:
.size main,.Lfe1-main
.ident "GCC: (GNU) 2.96 20000731 (Linux-Mandrake 8.0 2.96-0.48mdk)"

Do you think you can make it much faster using hand-crafted assembly code? Admittedly, I used C instead of C++, but that doesn't make any difference for anything as small as this.
• #### Re:Obligatory AI quote (Score:1)

Hint for futuristic article editors: the human brain has a hardware and software architecture that has absolutely nothing in common with that of an electronic computer.

In my opinion there is very little in the physical world that cannot be emulated with computers, given enough processing speed and memory.
• #### The End of Work? (Score:5)

on Saturday June 09, 2001 @07:41PM (#163373) Homepage
From Yahoo Dailynews: An investor could check his stock portfolio in the morning and find that the computer has analyzed the portfolio, market trends, economic data and such to present a number of options.

"You log on in the morning and (the computer) gives you two or three options: 'Have you thought about doing one of these things? I've done the calculations for you,'" Marcyk said.

If the computer is so smart, why not just tell it to initiate whatever stock transactions it thinks are best? Come to think of it, if computers are that smart, you'll be out of a job, and you won't have any money to invest in stocks unless you inherited an estate or had some money stashed away from the time when you were working.

When that happens, we'll need a new law to replace Moore's law: the number of unemployed people will double every seven days. Andy Grove will be heard saying "Where is the limit? Show me the limit, goddamnit!" while an angry and hungry mob tries to force its way into the lobby of Intel's headquarters, brandishing pitchforks and God knows what else. :-D
• #### Moore's Law, Amendments (Score:1)

Moore's Law: CPU capacity shall double every 18-24 months.

Amendment I: Bus speed shall pretty much stay dormant, until some asshole decides to get off his ass and do something about it.

Amendment II: Tape as a hard storage solution will stick around like herpes. Sure, some geeks at Livermore will screw around with rubies and diamonds, but the reality is one upgrade of the 8-track after another.

Amendment III: A hard drive's fragility will double every 18-24 months. Shit, the instructions on the last hard drive I got said, "Do not breathe in room with hard drive before installation."

Amendment IV: The average number of patches required between releases of software shall double every 18-24 months.

Amendment V: The number of hours it takes you to turn off the stupid marketeering features of the new Windows Office release, like the auto-capitalizer, will double every 18-24 months.

Amendment VI: (added by Microsoft recently) The amount of money you pay us for software you have to buy will now double every 18-24 months.

Amendment VII: The number of months before Mozilla 1.0 is released doubles every 18-24 months.

Amendment VIII: The number of people who use emacs and the number of people who use vi haven't changed since 1992 and may become one of the constants of physics (like the speed of light).

Amendment IX: The editorial skills of the /. editors will diminish by 50% every 18-24 months.

Amendment X: The number of stupid patents issued shall double every 18-24 months.

Amendment XI: The number of RAID variations shall double every 18-24 months, and the number of different labels for the same variation shall also double every 18-24 months.

Amendment XII: The chances of a /. front page posting entitled "Goatsex and You" shall diminish by 50% every 18-24 months.

"The Intern-what?" - Vince Cerf
• #### Re:Another Limit: Planck Time (Score:3)

on Saturday June 09, 2001 @08:36PM (#163375)
The Planck time is the time it would take a photon travelling at the speed of light to cross a distance equal to the Planck length. This is the 'quantum of time', the smallest measurement of time that has any meaning; it is equal to about 10^-43 seconds, under the current speculation as to the Planck length. However, certain revelations regarding the size of certain extra dimensions have put the Planck length into a spin (it could be considerably larger, which could explain why gravity is such a seemingly weak force).
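For reference, here is the standard construction those numbers come from (my summary of the textbook definitions, not part of the original comment), using the reduced Planck constant ħ, Newton's constant G, and the speed of light c:

```latex
t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4 \times 10^{-44}\,\mathrm{s},
\qquad
\ell_P = c\, t_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,\mathrm{m}
```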

But all of this is irrelevant, because there is no limit on how quickly we can know a single fact: with quantum computing we can determine a theoretically infinite number of facts from a single query.

• #### Re:The Change (Score:1)

Yeah, good point. A centimeter is pretty big. I was basing this on a chart which showed that you needed a microscope to make out detail somewhere between a centimeter and 1/10th of a centimeter.

So, really, that should be: most people can't make out much detail smaller than 1/10th of a centimeter, and the transistors we're talking about are 50,000 times smaller...

• #### Re:The Change (Score:1)

I showed both that I didn't have a grasp of the metric system AND that I didn't understand that the poster was talking about not being able to see the whole CPU, not just the transistor (which you've not been able to see for years).

I guess I'm being moderated up for my unsupported suppositions later in the post, but that's not really any different than the post I was responding to. He just had different unsupported suppositions...

I wish I had mod points so I could set my own post to "Overrated". Oh, but you can't mod your own posts, can you? That should be changed. Everyone should be allowed to apply Overrated to their own posts...

• #### Re:The Change (Score:5)

on Saturday June 09, 2001 @07:39PM (#163378) Homepage Journal
• It seems clear to me that Moore's Law does hold no matter what - in a way. When the size continues to decrease exponentially... smaller and smaller, to the point where we can't even see what we're making by the naked eye, it's not that further improvement becomes impossible, but simply that the process changes, or the technology.

We're talking about 0.02 microns here. Most people can't make out any detail smaller than a centimeter. 0.02 microns would be 500,000 times smaller than what can be seen with the unaided eye!

I really don't understand your reasoning. Are you saying that we are motivated to improve our technology all the time? What does this have to do with Moore's Law and specific predictions about how fast our technology improves?

If anything, I think that Moore's Law might be a self-fulfilling prophecy.

We just don't have that great a motivation to improve processor technology these days. We have processor technology that is beyond the dreams of engineers 30 years ago. For the most part, we have reached a point where most of the needs of applications of massively powerful computing are currently realized in today's machines.

Sure, faster is better, but does faster translate to big development dollars to outdo Moore's Law, when researchers and developers are constantly trying to develop software and systems just to keep up with the huge gains that we're seeing with Moore's Law? In this scenario, Moore's Law is how fast machines improve because Moore said as much, and that's what drives the designers to improve, keeping up with and staying ahead of Moore's Law. The designers don't want to be in the group that finally failed to live up to the expectations of the industry, but there's also no particular motivation to get ahead of Moore's Law's predictions either.

Take the above with a grain of salt. It's just conjecture, of course.

• #### Other limits will stop you before Plank time (Score:2)

Ugh... a couple of the postings in here are scientifically dubious at best, as the moderators happily mod up anything vaguely resembling their high school physics class.

Disclaimer: While I have a Ph.D. in plasma physics and did a large amount of scientific computing in my thesis, this is not an area in which I am an expert. However, I do know that a number of high-quality physicists have given this a fair amount of thought (like Feynman and Wheeler, for instance), and I have read some of their work.

The big limit is thermodynamic. The minimum energy it takes to flip a bit is of order k_b T_a, where k_b is Boltzmann's constant and T_a is the ambient temperature (I think Wheeler was the first to show this limit through clever gedanken experiments, but I could be wrong). The ambient temperature of the universe, as measured to high precision by the cosmic microwave background black body radiation spectrum, is T_a ~ 2.8 K (that is ~ -270 C or ~ -455 F for the unit-challenged, but remember that Celsius and Fahrenheit are not referenced from absolute zero for the following formula).

So, suppose your calculation needs to flip N bits and you want to do it in time tau. Then the thermodynamic minimum theoretical power requirements for your computer are of order:

P ~ N k_b T_a / tau

So you want to do a complex calculation on a Planck time scale? I hope you have the power output of a supernova available. Of course, this is the minimum: you have to account for all the inefficiencies in generation, cooling... In the end, you might need a couple of simultaneous supernovae.
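The supernova claim above is easy to sanity-check with the poster's own formula. A rough sketch (the bit count N = 10^23 and the exact Planck time value are my illustrative assumptions, not from the comment):

```python
# Minimum power P ~ N * k_b * T_a / tau, per the formula above.
K_B = 1.380649e-23      # Boltzmann's constant, J/K
T_A = 2.8               # ambient (CMB) temperature, K
PLANCK_TIME = 5.39e-44  # Planck time, s

def min_power(n_bits, tau):
    """Thermodynamic minimum power to flip n_bits bits in time tau."""
    return n_bits * K_B * T_A / tau

# Flip a mole-ish number of bits (10^23) in one Planck time:
p = min_power(1e23, PLANCK_TIME)
print(f"{p:.1e} W")  # ~7e43 W -- supernova territory
```

Even at the theoretical floor, the power requirement is astronomical; real hardware inefficiencies only push it higher.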

Also, for reference, the Planck length and Planck time are the measurement scales made by constructing quantities of the appropriate units out of Planck's constant h, the speed of light c, and the gravitational constant G. Crudely speaking, it is the length scale at which conjectured quantum gravity effects dominate. Planck length considerations aren't really factored into theoretical limits of computation, since other more obvious limits are reached first (like the above limit).

A more practical issue is whether or not computer miniaturization can continue below the rapidly approaching atomic length scale (~1 Å). For example, could you make logic gates based on complex inter-nuclear interactions, or out of the non-linear vacuum dielectric polarization of hard gamma rays (i.e. Compton backscattering off virtual electron-positron pairs), or other such known exotica of modern physics?

Kevin
• #### Re:If only the Earth's temp were lower.... (Score:2)

Carbon dioxide would actually be the closest relative to silicon dioxide, but solid carbon dioxide (dry ice) would have to be kept cold, since it sublimates at about -78C at atmospheric pressure. Of course, if you kept it under high pressure, you could keep the temperature low and overclock the hell out of it =)
___________________________________________

I'm somewhat ignorant of chemistry, but HO2 is neither water nor possible with simple proton/electron bonding, since hydrogen has a +1 charge and oxygen -2.

--

• #### Re:Recent slashdot story.. (Score:2)

An atom's diameter is about a third of a nanometer. You should check your numbers.
• #### Re:Silicon dioxide replacements (Score:2)

Silicon is just too cheap and abundant to give up on right now

Exactly. And by the time we're finished with this obsession of ours with faster computing (since physics will stop us at some point), we'll start seeing better computing [mit.edu]. I think we'll start to see more special-purpose CPUs and hardware for pervasive computing, and the focus will become less about innovation and the next greatest thing (since we all tire of it some time) and more about integration. Computing will be truly pervasive and really will make things easier this time (read: the paperless office).

-----
"Goose... Geese... Moose... MOOSE!?!?!"
• #### How? (Score:1)

I don't see any mention of how they managed to do this. I doubt that it would be with a laser, since light wavelengths are usually measured in the hundreds of nanometers. An electron gun, perhaps? That's about all I can think of...
• #### Re:Another Limit: Planck Time (Score:1)

Gee, when I try doing that (adding 1.1 ULONG_MAX times) on my computer, the program doesn't want to finish. Is a 533 MHz AlphaPC really that much slower than a 500 MHz Pentium?????

All Your Base Are Belong To Us!!!
• #### Re:Another Limit: Planck Time (Score:1)

Well, I suppose I should be happy that at least one person noticed. I think I should forgo my attempts at humor and just wait for the pros to jump in. :-)

All Your Base Are Belong To Us!!!
• #### Handful of transistors? (Score:1)

lol - a "handful of transistors" 20nm in size? Shouldn't they be putting those into chips? ;)
• #### A HANDFUL of .02 micron transistors?? (Score:1)

Hell.. A handful of transistors that small would be enough to produce several thousand or so processors... Get busy!!! Chop Chop!!
• #### Re:The Change (Score:1)

Most people can't make out any detail smaller than a centimeter.

That makes reading this comment very difficult indeed.

• #### Re:Another Limit: Planck Time (Score:1)

Fortunately your reasoning doesn't take into account parallelism. For instance, in 100 years' time processors might have hit some limit (say 10^-40 seconds) for executing a single operation, but what about 10,000 FPUs on a single chip? What about 1,000,000 of these processors running in parallel? This is the approach the EFF took to building Deep Crack.
I think the problems facing engineers in the future will be finding ways of increasing parallelism within hardware, and of course developing software to take advantage of those features.
• #### Re:Another Limit: Planck Time (Score:1)

Yes, but then so do all the clients involved in distributed.net [distributed.net], but the system is designed so that the overhead of communication and synchronisation is kept to a minimum.
• #### Re:The Change (Score:2)

Most people can't make out any detail smaller than a centimeter.

How small a detail you can make out depends pretty critically on how close you are to the detail in question. A human hair is only a hundred microns (i.e. a few hundredths of a centimeter) wide, but people have no trouble making out individual hairs at close range. I routinely work with tubing that's 140 microns in outer diameter, and I personally have no trouble seeing it - though it gives some of my co-workers fits. 60 micron diameter optical fiber is a bit tougher to see, but still doesn't require a microscope.

There are some limits, though. The shortest wavelength that the eye can see is about 0.35 microns, and the laws of optics say that you can't make out details much smaller than one wavelength. Light will just diffract around anything much smaller, so it's physically impossible to see something 0.02 microns across, even with a theoretically perfect visible light microscope. That's the exact reason that these kinds of features have been so difficult to make; the same rule that limits the resolving power of a perfect visible light microscope also limits the size of feature you can make with visible light lithography. To make something 0.02 microns across they have to use very short wavelength EM radiation.
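A quick numeric restatement of the diffraction argument above; the only inputs are the two figures the comment already gives:

```python
# Compare the shortest visible wavelength to the feature size in question.
SHORTEST_VISIBLE_UM = 0.35  # shortest visible wavelength, microns
FEATURE_UM = 0.02           # transistor feature size, microns

ratio = SHORTEST_VISIBLE_UM / FEATURE_UM
print(f"feature is {ratio:.1f}x smaller than the shortest visible wavelength")
# -> 17.5x: light just diffracts around it, so no visible-light microscope
# (or visible-light lithography step) can resolve or pattern it.
```

That factor of ~17.5 is why the industry has had to move to much shorter wavelengths for this feature size.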

• #### Re:If only the Earth's temp were lower.... (Score:2)

Dude, hydrogen dioxide isn't water. Water is H2O (dihydrogen monoxide). Hate to rain on your parade, just thought I should point it out.

20 nanometers, or 0.02 microns, in size

Every few months we hear about how things are smaller, faster, better, more. Too bad it'll be 10 or 20 years before this stuff filters down to the consumer level.

• #### Re:The Change (Score:1)

• Most people can't make out any detail smaller than a centimeter.

Ha ha ha ! And this is fucking "insightful"??

1cm is 1/2.54 of a fucking inch.

Most people can easily see a fraction of 1mm which is 0.1 cm.

With a naked eye, yes.

• #### Re:The Change (Score:1)

>We just don't have that great a motivation to improve processor
>technology these days. We have processor technology that is beyond the
>dreams of engineers 30 years ago. For the most part, we have reached a
>point where most of the needs of applications of massively powerful
>computing are currently realized in today's machines.
Ha! Just wait till Quake 4 hits the shelves; we'll see what you'll be saying then!
Seriously, current computing power is FAR below what is needed for realistic simulation of reality. When you look at CGI in the movies nowadays, and you've got a good eye, you'll see it still 'feels' artificial, even though they used multi-computer render farms and the computations took months. And that's only a flat 2D projection of a 3D scene, in a resolution (about 8000x8000 pixels) that's much less than what a single human eye can achieve; sound is still digitized from natural sources; and they don't do all the simulation of physics - much of that is pre-directed, 'hand'-animated, and all the logic of the scene is a human's work (computers didn't process the 'what if a ship hits an iceberg' rules when they were making Titanic!)
No, today's machines are far from meeting the need for computing power. And not only in VR uses. What about scientific processing of data? Would SETI exist if we didn't need much more processing power than we have now? What about intuitive user interfaces? I saw Nautilus from my new Mandrake 8.0 _crawl_ on my PIII 550 with 256 MB RAM just yesterday.
• #### Re:Another Limit: Planck Time (Score:1)

The use of this program is a little naive. C++ is not a very efficient language, so all these steps probably take more than one clock cycle per addition. Secondly, there has to be some kind of sine wave for a clock signal, and the resolution would be too low if a clock cycle took one Planck time. Thirdly, this theory doesn't take stuff like quantum computing into account.

So the limit you mentioned will be hit sooner if the current trend continues, but it's questionable if it'll really matter.
• #### The Change (Score:5)

on Saturday June 09, 2001 @06:53PM (#163397) Journal
It seems clear to me that Moore's Law does hold no matter what - in a way. When the size continues to decrease exponentially... smaller and smaller, to the point where we can't even see what we're making by the naked eye, it's not that further improvement becomes impossible, but simply that the process changes, or the technology.

Example: a floppy disk's size can be pushed to the limit, and finally we have 1.4MB floppies.. but sooner or later, you need a CD. And then a DVD. Et cetera.

It'll still be quite a while, but eventually silicon will simply be the wrong technology, the wrong process. Of course, a processor technology lasts MUCH longer than a subcomponent, such as a floppy drive technology.

Moore's Law. Too bad it's "only" x2 and not ^2. :-)
• #### Re:The Change (Score:1)

you mean a millimeter? I'm guessing you're not down with that metric thing yet. soon, soon, the canadians will take you over.

The slashdot 2 minute between postings limit:
Pissing off hyper-caffeinated /.'ers since Spring 2001.

• #### Re:The Change (Score:1)

thanks - you're a beacon of hope for those who read /. :)

The slashdot 2 minute between postings limit:
Pissing off hyper-caffeinated /.'ers since Spring 2001.

• #### Re:The End of Work? (Score:2)

because losing money on the stock market is an addiction. We can't let computers have all the fun...

The slashdot 2 minute between postings limit:
Pissing off hyper-caffeinated /.'ers since Spring 2001.

• #### Re:Recent slashdot story.. (Score:1)

I'm not absolutely sure, and I never liked chem, but doesn't the diameter of an atom vary greatly depending on the element, because of the number of electrons and their shells? That being true, it seems you may want to check your facts before jumping on someone else's.
• #### heh...dont drop the 'bit' bucket in that lab... (Score:1)

heh... (being stupid here, of course) can you imagine being the guy who drops a container of these things?

-- "Nobody move! I just dropped 5 pounds of .02 micron transistors!"

That would be way worse than losing a contact in the snow...
I wonder, though, how many of these little guys it would take to amount to 5 pounds?

NO SPORK
• #### future speeds (Score:2)

The implications of developing such small and fast transistors are significant: Silicon will be able to be used to make chips until 2007, and it will make possible microprocessors containing close to 1 billion transistors running at 20 gigahertz by that year. [...] Some of the components in the transistors Intel announced -- [...] are only three atoms thick.

I keep thinking about the problems with military gear where they have to worry about cosmic rays knocking out circuits. I don't know how usable these things will be in high radiation areas unless there is substantial redundancy built in.

And to speculate on what we'll run on these puppies. Or the cooling systems.

Oh My!

Check out the Vinny the Vampire [eplugz.com] comic strip

• #### Problems with 20 GHz processors (Score:3)

on Sunday June 10, 2001 @03:11AM (#163404)

I'm sorry, but won't creating processors with such high clock frequencies just be negated by the inherent slowness of the bus? One of the things you have to remember when designing hardware with such short clock cycles is the inherent speed limit on signals propagating through it. Light can only travel 1.5 cm in the time afforded by a single cycle from a clock running at 20 GHz. Electrons are much slower. The implication of this is that, given current motherboards, the CPU will stall for a hell of a lot more cycles waiting for a memory read/write.

Caching can only go so far. It seems to me that increases in overall computing power (however you wish to measure it) will not come just through cranking up the clock speed, but will require fundamental architectural changes to the PC as we know it (main storage on the CPU, overall miniaturization, etc).
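The 1.5 cm figure above falls straight out of c divided by the clock frequency. A small sketch (the comparison frequencies are mine, chosen for illustration):

```python
C_M_PER_S = 2.998e8  # speed of light in vacuum, m/s

def cm_per_cycle(freq_hz):
    """Distance light travels, in cm, during one clock period."""
    return C_M_PER_S / freq_hz * 100

for ghz in (1, 3, 20):
    print(f"{ghz:>3} GHz: {cm_per_cycle(ghz * 1e9):.2f} cm per cycle")
# At 20 GHz that's ~1.50 cm -- and signals on real wires move slower
# than light in vacuum, so any off-chip round trip costs many cycles.
```

This is why the post argues for main storage on the CPU and overall miniaturization rather than clock speed alone.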

• #### Silicon dioxide replacements (Score:3)

<MercMoonie.yahoo@com> on Saturday June 09, 2001 @06:56PM (#163405)
Personally, I don't think we're going to see silicon dioxide go away for quite some time. Yes, it does have some physical limitations, but few inexpensive alternatives seem possible within a 5 year time span.

Of course, new designs and materials will come (Toshiba is starting to use diagonal circuitry, helping efficiency). Silicon is just too cheap and abundant to give up on right now - we'll probably see it for a few decades into the future in things like appliances, calculators, and handheld computers, because they're cheap to manufacture in mass quantities and the material itself is one of the most abundant substances on the surface of the planet (it's a large component of common sand).

Therefore, I think the prediction of silicon dioxide fading away in just a "few years" is a bit premature. If we've learned anything from the tech industry, old standards tend to stick around for a VERY long time (witness floppy drives, ISA slots, and serial ports).

• #### Re:future speeds (Score:1)

Also, as process size decreases, so does power consumption. So unless you drastically increase the size of the die, you will not have as much heat or need for cooling systems.
• #### Moore's Law II (Score:5)

on Saturday June 09, 2001 @07:17PM (#163407) Homepage
Wow. just think...

What would happen if computing hardware technology reached hard atomic limits?

A new era would begin...programmers would actually have to write efficient code! The end of bloatware as we know it!

Moore's Law II: On average, every 15 months, code would suck 50% less...

• #### Re:Moore's Law II (Score:1)

No, the 450lb man will have to eat less, and therefore BE less, um, bloated. Dixi.
• #### Re:Caveat (Score:1)

The whole idea is to have the *smallest* transistor. The smaller the channel the less power it consumes.

-Jeff
• #### Re:No kidding (Score:1)

Ummmmm.... NO!

Gallium Arsenide is used to make high efficiency solar cells.

-Jeff
• #### Re:No kidding (Score:1)

I think you're right. My example was just because I work with solar arrays. Notice my homepage on my profile.

-Jeff
• #### Re:Silicon Dioxide is just an INSULATOR (Score:1)

Here's the equation

P=(1/2)*f*C*V^2

Lower capacitance == lower power
Lower frequency == lower power (we don't want that though)
Lower voltage on the transistor == much lower power.

Why did we go from 5V CMOS technology to 3.3V? Why did we go from 3.3V to 1.6V technology?

I think we should not disregard the voltage that we can run these circuits.
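To put rough numbers on the P=(1/2)*f*C*V^2 point above, here is a sketch of how the squared voltage term dominates (the capacitance and frequency values are made-up placeholders, not real process numbers):

```python
def dynamic_power(f_hz, c_farads, v_volts):
    """Dynamic switching power: P = (1/2) * f * C * V^2."""
    return 0.5 * f_hz * c_farads * v_volts ** 2

F = 1e9      # 1 GHz clock (hypothetical)
C_SW = 1e-9  # 1 nF total switched capacitance (hypothetical)

for v in (5.0, 3.3, 1.6):
    print(f"{v} V -> {dynamic_power(F, C_SW, v):.2f} W")
# e.g. 5.0 V -> 12.50 W vs 1.6 V -> 1.28 W: the voltage drop alone
# buys nearly a 10x reduction, which is why supply voltages keep falling.
```

Frequency and capacitance scale power linearly; only voltage scales it quadratically, which matches the 5V -> 3.3V -> 1.6V progression Jeff describes.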

-Jeff
• #### Swinzig's Law (Score:3)

on Sunday June 10, 2001 @06:35AM (#163413) Journal
The number of people talking about how long Moore's Law will last doubles every 18-24 months.

nm != micron
• #### Re:Problems with 20 GHz processors (Score:1)

Transistor packing on such a processor would be so tight, you could have 256 MB right on the CPU, *EASY*.

C//
• #### Caveat (Score:1)

only 20 nanometers, or 0.02 microns, in size

Note: When IBM gets done stretching them, they go up to 40nm.
• #### How small is SMALL? (Score:1)

I think we're starting to reach the limit of "smallness" here. It shouldn't be too long before the best we can hope for is for "breaking even" with regards the laws of thermodynamics.
• #### Silicon Dioxide is just an INSULATOR (Score:3)

on Saturday June 09, 2001 @08:45PM (#163418)
For all you non-EEs out there, Silicon Dioxide (SiO_2) is used in chips as an insulator. It is not that we're removing all the silicon from the chip, just replacing some of the insulating material. These articles are not talking at all about the silicon wafer substrate.

Some of the silicon dioxide has already been replaced, for a couple of years now, with materials called "low-k dielectrics", which basically means they result in lower capacitance (lower capacitance == faster chip) than silicon dioxide. This is only on the metal layers, which are relatively far from the transistors. The silicon dioxide mentioned in the article is the insulator used in the actual transistor itself. It is the one that is going to be "atoms thick", and it is one of the fundamental parts of the transistor.

No matter what little tricks they try, this whole transistor thing is just a passing fad.

If it doesn't make my 100 watt tube head [marshallamps.com] go to 11, what good is it?

Whatcha doooo with those rollin' papers?
Make doooooobieees?

• #### Re:Another Limit: Planck Time (Score:1)

Computers have taken advantage of parallelism for a long time. Most scientific applications (which I guess you're representing by a repeated floating-point addition) have tremendous amounts of parallelism...
• #### Re:Another Limit: Planck Time (Score:1)

For the kind of program this guy described, it's all about clock frequency since there is no parallelism...
• #### Re:Moore's law-type performace increases can conti (Score:1)

Itanium has a lot less complexity in instruction fetch/issue/execute, etc. So it seems to me like this is a contradiction of what you said. It is wider (6 instructions), as opposed to the 3 instructions of the Pentium 3, etc. Just look at the spec numbers, though (hint: the P3 1GHz beats it and costs 1/50 as much (or some similarly small fraction))...
• #### Re:future speeds (Score:1)

Actually, in recent times there have been papers published on how to create CPUs which are tolerant in the face of cosmic radiation. Two examples are Todd Austin's DIVA research, http://www.eecs.umich.edu/~taustin/papers/micro32_diva.pdf, and Reinhardt's Transient Fault Detection Via Simultaneous Multithreading, http://www.eecs.umich.edu/~stever/pubs/isca00-srt.pdf.

They take two separate approaches. DIVA puts a second, small cpu on the core which checks all work performed by the primary cpu. The multithreading paper executes two redundant copies of a program, checking that the results generated between the two agree (on the same processor, using simultaneous multithreading).

• #### Re:Moore's law-type performace increases can conti (Score:1)

Well, I left out some details... If you rely on the compiler to do things, you can make your processor wider (actually, the primary limitations on making it infinitely wide are the number of execution units and cache ports (and, I suppose, the number of branch predictions you can provide)). Making a 3-issue out-of-order processor, however, is more complex overall than a 6-wide EPIC processor (although the FU bypass network probably isn't pretty on the EPIC).

As far as FP vs. INT... well... I don't know... I mean, if all you care about is FP, then your work is 99% likely to be easily parallelizable. Thus, just buy ten 1GHz Athlons and be happy... but whatever :)

• #### Re:Moore's Law II (Score:1)

They tell us that inside will be the same logical way of building chips. Nothing smart, just a bunch of logic arrays (a billion of them), no fuzzy logic at all. Look around: even a bird has immense AI + 3D power, so it does real-time processing of the world. What do we have? A machine that can't deal with 2D images (movies) or MP3s in real time, I mean encoding them. So we got Clippy, the smartest thing in the world, more annoying than anything. In the new WinNT (Xperimental Programming from Microsoft) the Clippy "helps" you clean the desktop of icons. What crap!!! Even with a 1THz computer, the Microsoft programmers will code with a lot of NOOPs so your hardware looks old and you throw it in the Recycle Bin.
• #### Re:The Change (Score:1)

Most people can't make out any detail smaller than a centimeter

Haha! Heehee. Sorry. Do you know what a centimeter is? My index fingernail is about a centimeter wide. On my monitor, the word DUCK is about a centimeter wide. I am 186 centimeters tall. The civilized world (read: not afraid to make changes to improve efficiency) uses the metric system now, so I suggest you learn it :)

Then again, I still tell everyone that I'm 6'1" and a bit..

• #### Recent slashdot story.. (Score:3)

on Saturday June 09, 2001 @06:36PM (#163427)
A recent Slashdot story [slashdot.org] covers the possibilities of .01nm transistors and how there currently is a theoretical limit with our current process of .002nm.
• #### Re:Moore's law-type performace increases can conti (Score:1)

Not to mention Intel's "Jackson" tech. It seems to me that SMT will greatly reduce the inefficiency of our current microprocessors. If it is backed up by a properly powerful core, that is (one where there is almost always a free int/fp pipe doing nothing, as MUST be the case with the Athlon, since it has 3 int and 3 fp pipes and manages an IPC of less than 2).
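The comment's utilization argument is simple arithmetic; here is the back-of-the-envelope version, using the comment's own figures (6 pipes, IPC under 2), not measured numbers:

```python
# Back-of-the-envelope pipe utilization for a 3-int + 3-fp superscalar core.
# Figures come from the comment above (Athlon-style core), not from measurement.
issue_width = 6       # 3 integer + 3 floating-point pipes
observed_ipc = 2.0    # "an IPC of less than 2" -- take 2 as the upper bound
utilization = observed_ipc / issue_width
print(f"peak pipe utilization: {utilization:.0%}")
```

At most about a third of the pipes do useful work each cycle, which is exactly the idle capacity SMT is meant to fill with a second thread.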
• #### Obligatory AI quote (Score:5)

on Saturday June 09, 2001 @07:19PM (#163429)
"You log on in the morning and (the computer) gives you two or three options: 'Have you thought about doing one of these things? I've done the calculations for you," Marcyk said.

Just once, I'd like to read an article about a new microprocessor technology that doesn't have some silly quote about what kind of AI feature it will enable.

For decades, hardware has been improving exponentially. For decades, they've been predicting that the new features will magically enable intelligent software.

All we've got to show for it so far is Clippy the paper clip. A mere 10X speedup won't make Clippy any less annoying.

Hint for futuristic article editors: the human brain has a hardware and software architecture that has absolutely nothing in common with that of an electronic computer.

• #### Re:Obligatory AI quote (Score:2)

A mere 10X speedup won't make Clippy any less annoying.

I don't know, anything that helps me dismiss the damn thing a couple milliseconds faster is forward progress as far as I'm concerned...

• #### just computers buying and selling (Score:1)

So now we've got megacorps who are only interested in pleasing their shareholders. And what happens when the shareholders are intelligent AIs backing whichever causes make the most profit? Making decisions and transactions in seconds, calculating market reactions using chaos theory and improving on it, the fastest computers playing ball over slower ones... Who's going to win?

Surely not us consumers and workers. Even now, traditional 8-hour workdays are routinely exceeded, using caffeine, pills, and stimulating experiences and working conditions to keep workers healthy. Good health is redefined by new standards every year so that the most productive units look the most healthy. "Healthy people smile a lot, their days are filled with varying tasks and refreshing experiences," and so on...

• #### Squinting to post this (Score:1)

Most people can't make out any detail smaller than a centimeter.
So on his 15" monitor, he's browsing at 23x41?
• #### Re:Moore's law-type performace increases can conti (Score:1)

And remember, two CPUs running in parallel enjoy a greater performance boost (on some tasks) than a single processor with twice the speed of either of the dual processors.

I think you have it backwards. Two CPUs almost never run at twice the speed of one. Usually, it's good for an extra 50-70 percent speed.

• #### Re:Moore's law-type performace increases can conti (Score:1)

Name one.

System performance is going to be lower on the dual proc version just from multiproc overhead.

• #### Like I've Always Said (Score:1)

Size Does Matter
And as the growing bloat^H^H^H^H^Hsoftware industry has proven,
It won't do anyone any good if you don't know what to do with it.
• #### No matter how fast they make chips... (Score:2)

...there's always MS Windows to slow them down to reasonable speeds!
• #### Uses in DNA super computers? (Score:2)

DNA is trivially custom-synthesized on solid supports. You rig chromophores or fluorophores to form Watson-Crick dimers instead of ATCG and roll rigid chromophore configuration and custom ordering any way you want. Bridge the base pairs with hydrogen bonding, dipole alignment, hydrophobic effects... be a chemist. We already have evidence of anomalous electronic conductivity in ordinary DNA (depends on base composition, which is a veeeery good sign). Want longer molecules? DNA-ligase and whatever. Let the enzymes do the fine work. After you get your Nobel Prize you want to manufacture, and you DON'T do it solid phase. You do it PCR with custom (patentable) templating. I bet you bust the conventional and closely held PCR application patents, too. GROW THE SUPERCONDUCTORS in bugs! Spinoff of fluorescent DNA and RNA probes for genome sequencing and clinical diagnostics and therapeutics - re photodynamic therapy targeted to oncogenes (especially gene hyper-repeat sequences).

Why use crappy phosphate-deoxyribose alt-copolyester? Peptide nucleic acids are vastly more robust and give you optional chiral centers for more goodies, like non-linear optical devices.

Hell, make a PNA 17-25 mer cocktail complementary to a few critical HIV gene sequences and cure that, too, by knockout strategy (the Flavr-Savr HIV therapy). PNAs are uncharged and readily permeate cell membranes, they are totally untouched by nucleases and other catabolisms, and they are cheap to make. Turn off HIV RNA, turn off disease process progression. Boom. None of this downstream small molecule enzyme inhibitor bullshit that makes so much money for the pharm workers.

Original proposal is an interesting problem, and rather a small proportion of the population is up to it. When I started out in the business some 30 years ago, the process of discovery and original proposal awed me. It still does, and my track record has been exemplary. Perhaps the best answer is that you must read everything and be prepared for things to bump around in your head.

Example: My first original research proposal was to synthesize an obscure polycyclic alkaloid (in 32 steps! Silly synthesis is the refuge of a scoundrel). An ocean of blood flowed, and all of it was mine save for one redeeming skeletal inversion which was deemed "adequate." The next year, for my second original proposal, I proposed synthesizing C2 in cryogenic matrix and gas phase. C2 is hot stuff (literally) in flames and comet tails (Swan lines), and its electronic structure was uncomputable at the time. When you warm the matrix, the fragments recombine to give acetylene diethers - which had not been synthesized at that time. The diethers dimerize to a squaric acid precursor, which was hot stuff re squarylium dyes for photoconductors. The tar from the reaction was worth at least ten times the cost of starting materials.

Know everything, and see where stuff rubs.

Almost any ten-carbon lump turns into adamantane in aluminum chloride/bromide slush. We can do better (though not cheaper) in ionic solvents like N-methyl-N-(n-butyl)imidazolium tetrachloroaluminate with up to another added mole of AlCl3. The media support multiple carbocationic rearrangements as a benign environment. What happens if you put micronized graphite into the slush and bubble in isobutylene? Will you edge-alkylate and solubilize, or make 1-D tert-butylated diamond plates, or will something else happen? Look at all the applications of graphite fluoride and graphite intercalates, as in high energy density battery systems and high number density low bulk mass hydrogen storage modalities.

Sargeson trapped Co(en)3(3+) as the inspired sepulchrate (formaldehyde plus ammonia), and then the brilliant sarcophagate (formaldehyde plus nitromethane; look down the triangular face of the coordination octahedron). Stop being an inorganiker and start being an organiker. That last gives you "para" nitro groups, which give you amines, which give you redox nylon (and azo linkages; polyisocyanates, polyurethanes, epoxies, acrylamides, and...). Nitrogen chemistry is incredibly rich - conjugated azo linkages, fluorescent heterocycles, stable free radicals, extrusion and caged radical recombination... As Co(en)3(3+) is trivially optically resolved, you also have potential non-linear optical films switchable through redox change. (Information storage, chemical transistors, sensors, clinical diagnostics, electrochromic windows...) It goes on and on... a whole lifetime of research. Nobody has diddled with it.

Look up the synthesis and reactions of hydroxylamine-O-sulfonic acid in Volume 1 (!) of Fieser and Fieser. Look at the mysteries of ammonia - inversion, nucleophilicity. Look at the Alpha Effect re hydrazine, hydroxylamine, and hydrogen peroxide. Look at Bredt's rule and all the interesting things it does at bridgeheads. Now, make it all rub against itself: Start with 1,4,7-triazacyclononane, which is easy enough though sloppy to make in bulk. Gently nitrosate it. The nitroso group goes on the first amine, then the adjacent amine (pre-organized to attack re Cram) attacks at the nitroso nitrogen to give you the hydroxylamine. Do the usual hydroxylamine-O-sulfonic acid synthesis and you tether the original nitroso nitrogen to the third amine with the original nitroso's oxygen as the leaving group. What have you got? You have four bridgehead nitrogens rigidly held, none of which can invert. The apical nitrogen is tethered only to other aliphatic nitrogens - which has never been done. It cannot invert and... for all that, it may have no nucleophilicity whatsoever because the Alpha Effect is euchred out by geometry and inductive electron withdrawal is mammoth. You could do it in undergrad lab.

I once watched a bunch of engineers with a very big budget try to excimer laser drill parallel or serial hundreds of 5 micron holes in PMMA intrastromal corneal implants (without the holes to move oxygen from outside and nutrients from inside the cornea dies and sloughs, which is tough on the rabbits). Buncha maroons. 5 microns is a magic number to an organiker, and I won't insult your intelligence with the trivial solution. The next Tuesday I delivered a foot-long bar of oriented two-phase PMMA which was cut and polished to spec, had its holes revealed, and got me into incredible hot water since my employer did not give shit one about the product but was really interested in the long term money budgeted by its parent company.

Take two cyclopentane rings (Framework Molecular Models do this nicely). Put 5 all-cis (vs. the ring, not the olefin configuration, which need only be consistent) alkenes on one cyclopentane. Cap with the other. Now, twist slightly and watch the pi-orbitals. Is that a clever way to make dodecahedrane, or what? The alkenes came from alkynes. The alkynes were assembled with Schrock alkyne metathesis catalyst from the nitriles. Strain being what it is, you might want to have diacetylene linkages (copper-mediated oxidative coupling) and go for a bigger hydrocarbon bubble. Start with all-cis 1,3,5-cyclohexane and trace the diacetylene evolution (no strain here!). Consider 1,3,5-trans-2,4,6-all-cis-substituted cyclohexane. Voila! You grow 1-D diamond (note the ring conformation and the special name given to that diamond structural variant).

I could go on for megabytes. All you need do is read the library, hold it all in your head, and wonder "what if..." where stuff rubs together. This is the first (easy) kind of genius. The second (hard) kind of genius is to see it all ab initio. I don't have a handle on that one.

• #### Moore's law-type performace increases can continue (Score:2)

While we may not be able to truly follow Moore's law (CPU speeds double every 18 months) with silicon dioxide CPUs, the fact that silicon-based processors are continually getting smaller, cheaper, and drawing less power suggests that when we finally "hit the wall" in terms of silicon CPU performance, it will be practical in terms of space, cost, and power to increase processing power simply by using multiple CPUs. There is no basic, fundamental reason that a motherboard couldn't be released with support for, say, 5 Pentium X processors running at 15 gigahertz, or whatever Intel will have on the market in 2007. And remember, two CPUs running in parallel enjoy a greater performance boost (on some tasks) than a single processor with twice the speed of either of the dual processors. While the speed of processors may slow its increase, I think we'll find ourselves simply using more processors in each computer, so your PC will still be obsolete within minutes after purchase :-).
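The "just use more CPUs" strategy only pays off for work that splits into independent pieces, which is what the comment's "on some tasks" caveat means. A minimal sketch of farming a CPU-bound task across cores; `busy` is a stand-in for real work, and the chunk sizes and process count are arbitrary:

```python
from multiprocessing import Pool

def busy(n):
    # Hypothetical CPU-bound task standing in for real work.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [200_000] * 8            # eight independent pieces of work
    with Pool(processes=4) as pool:   # spread them across 4 worker processes
        results = pool.map(busy, chunks)
    print(len(results))
```

Tasks with shared state or serial dependencies see far less benefit, matching the reply below that two CPUs usually buy well under 2x.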

• #### this is just a middle step. (Score:2)

I believe IBM is working on nanotechnology; they are developing tubes which are only a few atoms wide and should be out in 10 years. Massive reductions in power needs and huge benefits in speed are possible with this. The physical limits of silicon will be reached by that time, and I'm sure there will be many attempts like this to stretch out the technology as long as possible. Quite frankly, the processors may be reaching max speeds, but our computer systems aren't, and the processor war is just hype. They need to be redeveloped to allow for higher FSB speeds, which is currently impossible with the physical size of motherboards, and to allow for higher on-board bandwidth. Adding 4 100MHz channels, as with the Pentium 4, isn't a step in the right direction; it's just a workaround, not really getting much faster. I say work on fixing and changing the physical layout of computer parts so speeds can be improved, and wait for nanotechnology to get here; it won't take too long. Intel's yields at current processes, which are much larger than 0.02nm, are poor at best; it's just inefficient to reduce the size too much.
