Intel Claims Smallest, Fastest Transistor
The Angry Clam writes: "Supposedly, Intel has really micronized transistors." Seems that "Intel engineers have designed and manufactured a handful of transistors that are only 20 nanometers, or 0.02 microns, in size." There's some of the usual discussion of how long Moore's Law can hold, but also a bit of discussion about what will replace silicon dioxide in a few years. Reader omnirealm points to a similar story at the New York Times as well.
Re:Californians look at too much porn (Score:1)
It won't do much good on a dial-in IP, at first thought. But addressing 'matters' like these to ISPs surely worked before, and I don't see why it wouldn't work now.
It would be a relief to finally see some action against this and lots of other abuse of Slashdot. A joke is a joke, but you can actually go too far.
Re:If only the Earth's temp were lower.... (Score:1)
What is apparently necessary is a different design for a transistor (or a gate). It may turn out to be as revolutionary as the shift from the vacuum tube to the semiconductor transistor. It may be an application of modern techniques to an old and forgotten idea. Or perhaps Moore's Law will quietly ebb away...
What I'm personally rooting for is a change in the idea of computing. If people can build reasonably large quantum computers, maybe they can figure out something besides factoring large numbers that those machines are actually good at computing. FPGAs also sound both neat and promising.
Realistically, even if this is the last generation of transistor shrinkage, it'll still take years for this to hit the desktop. That is quite a long time for people to come out with ingenious new schemes. Well... cross your fingers, anyway.
Re:Another Limit: Planck Time (Score:1)
Re:Another Limit: Planck Time (Score:2)
Maybe in 100 years, computers will be smart enough to realize that 1.1+1.1+1.1+...+1.1 can be computed as 1.1*ULONG_MAX.
Re:Another Limit: Planck Time (Score:2)
> enough to realize that 1.1+1.1+1.1+...+1.1
> can be computed as 1.1*ULONG_MAX.
\begin{pedant}
Unlikely, given that the value obtained by successive additions and the value obtained by multiplication differ substantially in the 11th decimal place. IEEE floating point numbers are not the same as the real line.
\end{pedant}
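For anyone who wants to see the gap the pedant is talking about, here's a minimal sketch (with a much smaller iteration count of my own choosing, so it finishes instantly):
#include <stdio.h>

int main(void)
{
    const double a = 1.1;                 /* 1.1 has no exact binary representation */
    const unsigned long n = 10000000UL;   /* far fewer than ULONG_MAX, purely for speed */
    double sum = 0.0;
    unsigned long i;

    for (i = 0; i < n; i++)
        sum += a;                         /* rounding error accumulates with every add */

    printf("repeated addition: %.10f\n", sum);
    printf("multiplication:    %.10f\n", a * (double)n);
    return 0;
}
The two printed values do not match, and the discrepancy only grows as the count approaches ULONG_MAX.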
Re:No kidding (Score:2)
Which handful is that? 5 (as in the fingers on a hand), or enough to fill a palm? For something this small, you are talking several million.
Re:Moore's law-type performace increases can conti (Score:1)
Less power (Score:1)
After what I've been reading here lately, power-consuming seems to be just what California needs ;)
Re:Moore's Law II (Score:2)
Face it. We're bloated right now. If processors never got any faster ever, we'd still be bloated.
Re:The Change (Score:2)
Re:The Change (Score:2)
2^2 = 4
4^2 = 16
16^2 = 256
.. and so on.
Re:Obligatory AI quote (Score:2)
Right on the mark. We'll hate it anyway (anybody want a dancing paperclip?). "The new Pentium V chip will be fast enough for a line-dancing and juggling paperclip." It's still a lousy, annoying paperclip.
AI requires more than just fast transistors and 3D graphics.
And all the stock-market people should remember why the big crash happened back in the '80s: yes, computer trading, and all those automatic trading programs suddenly shouting 'sell, sell' in chorus. Let's all not learn from the past and do that again; that was fun (irony).
Handful (Score:1)
Re:Another Limit: Planck Time (Score:1)
[Saint Stephen]
Re:Another Limit: Planck Time (Score:1)
[Saint Stephen]
Re:Other limits will stop you before Plank time (Score:1)
[Saint Stephen]
Re:Another Limit: Planck Time (Score:2)
Let's get from here to there.
First, why do we have to be stuck with stupid binary after all these years? Surely we can make the "wires" sensitive enough to recognize more than two electrical states. Lots more computing power in the same "physical space."
Back at the turn of the century, Gödel showed that non-trivial systems are not automatic, which ultimately is why we futz around with non-perfectly-optimizing compilers that can't recognize that this problem is a single multiplication. A colleague was telling me about NP-completeness, and how with the lambda calculus (I don't know much about it) we can verify completeness of a system (but what about consistency?). In other words, you can generate every possible truth, but you can't prove it doesn't generate falsehoods. Sounds like the problem you'd have with quantum computing: you'd still have to be able to recognize the "correct" result from all the possible correct and incorrect results in the answer set.
Flame on!
[Saint Stephen]
Another Limit: Planck Time (Score:4)
I wrote a C++ program which initializes a double to 1.1, then adds 1.1 to it 4 billion times (ULONG_MAX).
On my PIII 500 MHz laptop (circa 1998-99), this program runs in 30 seconds.
On my new P4 1.7 GHz, it runs in 12 seconds.
I didn't check, but Planck time is about 5 x 10^-44 seconds. Take the time it takes to execute one of these 4 billion steps; if it keeps getting cut in half every three years, we'll hit Planck time in about 100 years.
In other words, there is a fundamental limit on how quickly we can know one single fact (Planck time), and our children will hit that by the end of their lifetimes.
What then?
[Saint Stephen]
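As a back-of-the-envelope check on that extrapolation (using my own figure for Planck time, roughly 5.4e-44 s, and the 12-second timing quoted above):
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* ~4.29 billion iterations, i.e. ULONG_MAX on the 32-bit box in question */
    const double iterations      = 4294967295.0;
    const double seconds_per_add = 12.0 / iterations;
    const double planck_time     = 5.4e-44;   /* my value, not the parent's */

    double halvings = log2(seconds_per_add / planck_time);

    printf("time per addition today: %.3g s\n", seconds_per_add);
    printf("halvings needed to reach Planck time: %.0f\n", halvings);
    printf("at one halving every 3 years: about %.0f years\n", 3.0 * halvings);
    return 0;
}
The exact number of years depends on how fast you assume the halving happens, but the shape of the argument is the same either way.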
CPU Power can allow a semi-AI (Score:1)
I would think that given enough horsepower, we should be able to brute force compute all the possible solutions for a problem. Add to that a little statistical math and you might possibly be able to build a minimal AI that could help with some decisions.
So I guess what we need is a massive online peer-to-peer statistical repository. That way, one system could "learn" from others.
/me heads to bed.
--
Adam Sherman
Re:The Change (Score:1)
--
Re:The Change (Score:2)
By "people" do you mean "blind people feeling things with their feet"?
A centimeter is 0.4 inches. I don't know about most people, but I can sure see things smaller than that.
--
Re:Another Limit: Planck Time (Score:2)
--
hey! (Score:1)
Re:Moore's law-type performace increases can conti (Score:1)
Re:Another Limit: Planck Time (Score:1)
Re:Moore's law-type performace increases can conti (Score:1)
Re:Microsoft + Intel conspiracy (Score:2)
www.microsoft.com
www.kde.org
www.gnome.org
www.xfree86.org
www.trolltech.com
www.gtk.org
www.openoffice.org
You see, it's not just MS that spews bloatware. It's simply that in the UNIX market, different organizations spew bloatware, while in Windows-land, all bloatware-spewing is efficiently consolidated into one company.
Re:Uses in DNA super computers? (Score:2)
Re:this is just a middle step. (Score:2)
Re:Problems with 20 GHz processors (Score:3)
Re:Another Limit: Planck Time (Score:1)
if (i == 0)
Re:Another Limit: Planck Time (Score:1)
> C++ is not a very efficient language
Well, this little program:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
const double a = 1.1;
int main()
{
double d = a;
unsigned long i;
for (i = 0; i < ULONG_MAX; i++) /* about 4.29 billion additions */
{
d += a;
}
printf("%lf\n", d);
return 0;
}
Compiled into this:
.file "repadd.c"
.version "01.01"
gcc2_compiled.:
.globl a
.section
.align 8
.type a,@object
.size a,8
a:
.long 0x9999999a,0x3ff19999
.LC1:
.string "%lf\n"
.align 8
.LC0:
.long 0x9999999a,0x3ff19999
.align 8
.LC16:
.long 0x99999999,0x40319999
.text
.align 16
.globl main
.type main,@function
main:
pushl %ebp
movl %esp, %ebp
pushl %eax
fldl
fldl
pushl %eax
movl $15, %eax
.p2align 4,,7
.L36:
fadd %st(1), %st
addl $30, %eax
cmpl $-2, %eax
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
fadd %st(1), %st
jbe .L36
fstp %st(1)
subl $12, %esp
fstpl (%esp)
pushl $.LC1
call printf
xorl %eax, %eax
movl %ebp, %esp
popl %ebp
ret
.Lfe1:
.size main,.Lfe1-main
.ident "GCC: (GNU) 2.96 20000731 (Linux-Mandrake 8.0 2.96-0.48mdk)"
Do you think you can make it much faster using hand-crafted assembly code? Admittedly, I used C instead of C++, but that doesn't make any difference for anything as small as this.
Re:Obligatory AI quote (Score:1)
In my opinion, there is very little in the physical world that cannot be emulated with computers, given enough processing speed and memory.
The End of Work? (Score:5)
``You log on in the morning and (the computer) gives you two or three options: 'Have you thought about doing one of these things? I've done the calculations for you,''' Marcyk said.
If the computer is so smart, why not just tell it to initiate whatever stock transactions it thinks are best? Come to think of it, if computers are that smart, you'll be out of a job and you won't have any money to invest in stocks, unless you inherited an estate or had some money stashed away from the time when you were working.
When that happens, we'll need a new law to replace Moore's law: the number of unemployed people will double every seven days. Andy Grove will be heard saying "Where is the limit? Show me the limit, goddamnit!" while an angry and hungry mob tries to force its way into the lobby of Intel's headquarters, brandishing pitchforks and God knows what else.
Moore's Law, Amendments (Score:1)
Amendment I: Bus speed shall pretty much stay stagnant, until some asshole decides to get off his ass and do something about it.
Amendment II: Tape as a hard storage solution will stick around like herpes. Sure, some geeks at Livermore will screw around with rubies and diamonds, but the reality is one upgrade of the 8-track after another.
Amendment III: A hard drive's fragility will double every 18-24 months. Shit, the instructions on the last hard drive I got said, "Do not breathe in room with hard drive before installation."
Amendment IV: The average number of patches required between releases of software shall double every 18-24 months.
Amendment V: The number of hours it takes you to turn off the stupid marketeering features of the new Windows Office release, like the auto-capitalizer, will double every 18-24 months.
Amendment VI: (added by Microsoft recently) The amount of money you pay us for software you have to buy will now double every 18-24 months.
Amendment VII: The number of months before Mozilla 1.0 is released doubles every 18-24 months.
Amendment VIII: The number of people who use emacs and the number of people who use vi hasn't changed since 1992 and may become one of the constants of physics (like the speed of light).
Amendment IX: The editorial skills of the
Amendment X: The number of stupid patents issued shall double every 18-24 months.
Amendment XI: The number of RAID variations shall double every 18-24 months, and the number of different labels for the same variation shall also double every 18-24 months.
Amendment XII: The chances of a
"The Intern-what?" - Vince Cerf
Re:Another Limit: Planck Time (Score:3)
But all of this is irrelevant, because there is no limit on how quickly we can know a single fact: in theory, we can determine an infinite number of facts from a single query with quantum computing.
Re:The Change (Score:1)
So, really, that should be most people can't make out much detail smaller than 1/10th of a centimeter and the transistors we're talking about are 50,000 times smaller...
Re:The Change (Score:1)
I both showed that I didn't have a grasp of the metric system AND that I didn't understand that the poster was talking about not being able to see the whole CPU, not just the transistor (which you've not been able to see for years).
I guess I'm being moderated up for my unsupported suppositions later in the post, but that's not really any different than the post I was responding to. He just had different unsupported suppositions...
I wish I had mod points so I could set my own post to "Overrated". Oh, but you can't mod your own posts, can you? That should be changed. Everyone should be allowed to apply Overrated to their own posts...
Re:The Change (Score:5)
We're talking about 0.02 microns here. Most people can't make out any detail smaller than a centimeter. 0.02 microns would be 500,000 times smaller than what can be seen with the unaided eye!
I really don't understand your reasoning. Are you saying that we are motivated to improve our technology all the time? What does this have to do with Moore's Law and specific predictions about how fast our technology improves?
If anything, I think that Moore's Law might be a self-fulfilling prophecy.
We just don't have that great a motivation to improve processor technology these days. We have processor technology that is beyond the dreams of engineers 30 years ago. For the most part, we have reached a point where most of the needs of applications of massively powerful computing are currently realized in today's machines.
Sure, faster is better, but does faster translate to big development dollars to outdo Moore's Law, when researchers and developers are constantly trying to develop software and systems to keep up with the huge gains that we're seeing with Moore's Law? In this scenario, Moore's Law is how fast machines improve because Moore said as much, and that's what drives the designers to improve, keeping up with and staying ahead of Moore's Law. The designers don't want to be in the group that finally failed to live up to the expectations of the industry, but there's also no particular motivation to get ahead of Moore's Law's predictions either.
Take the above with a grain of salt. It's just conjecture, of course.
Other limits will stop you before Plank time (Score:2)
Disclaimer: While I have a Ph.D. in plasma physics and did a large amount of scientific computing in my thesis, this is not an area in which I am an expert. However, I do know that a number of high-quality physicists have given this a fair amount of thought (like Feynman and Wheeler, for instance), and I have read some of their work.
The big limit is thermodynamic. The minimum energy it takes to flip a bit is of order k_b T_a, where k_b is Boltzmann's constant and T_a is the ambient temperature (I think Wheeler was the first to show this limit through clever gedanken experiments, but I could be wrong). The ambient temperature of the universe, as measured to high precision from the cosmic microwave background black-body radiation spectrum, is T_a ~ 2.8 K (that is ~ -270 C or ~ -455 F for the unit-challenged, but remember Celsius and Fahrenheit are not referenced from absolute zero for the following formula).
So, suppose your calculation needs to flip N bits and you want to do it in time tau. Then the thermodynamic minimum theoretical power requirements for your computer are of order:
P ~ N k_b T_a / tau
So you want to do a complex calculation on a Planck time scale? I hope you have the power output of a supernova available. Of course, this is the minimum; you have to account for all the inefficiencies in generation, cooling, and so on.
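To get an order of magnitude out of that formula, here's a rough plug-in with example values of my own choosing (a billion bit flips inside one Planck time):
#include <stdio.h>

int main(void)
{
    const double k_b   = 1.38e-23;   /* Boltzmann's constant, J/K */
    const double t_amb = 2.8;        /* ambient (CMB) temperature, K */
    const double tau   = 5.4e-44;    /* Planck time, s */
    const double n     = 1.0e9;      /* number of bit flips -- my example value */

    double p = n * k_b * t_amb / tau;   /* P ~ N k_b T_a / tau */

    printf("minimum power: %.1e W (the Sun radiates about 3.8e26 W)\n", p);
    return 0;
}
That works out to roughly 1e30 W, i.e. the output of a few thousand Suns, for a billion bits per Planck time, before counting any real-world inefficiency.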
Also, for reference, the Planck length and Planck time are the measurement scales made by constructing quantities of the appropriate unit out of Planck's constant h, the speed of light c, and the gravitational constant G. Crudely speaking, it is the length scale at which conjectured quantum-gravity effects dominate. Planck-length considerations aren't really factored into theoretical limits of computation, as other, more obvious limits are reached first (like the one above).
A more practical issue is whether or not computer miniaturization can continue below the rapidly approaching atomic length scale (~1 Å). For example, could you make logic gates based on complex inter-nuclear interactions, or out of non-linear vacuum dielectric polarization of hard gamma rays (i.e. Compton backscattering off virtual electron-positron pairs), or other such known exotica of modern physics?
Kevin
Re:If only the Earth's temp were lower.... (Score:2)
___________________________________________
I'm somewhat ignorant of chemistry, but HO2 is neither water, nor possible with proton/electron bonding, since hydrogen has a +1 charge, and oxygen -2.
--
Re:Recent slashdot story.. (Score:2)
Re:Silicon dioxide replacements (Score:2)
Exactly. And by the time we're finished with this obsession of ours with faster computing (since physics will stop us at some point), we'll start seeing better computing [mit.edu]. I think we'll start to see more special-purpose CPUs and hardware for pervasive computing, and the focus will become less about innovation and the next greatest thing (since we all tire of it some time) and more about integration. Computing will be truly pervasive and really will make things easier this time (read: the paperless office).
-----
"Goose... Geese... Moose... MOOSE!?!?!"
How? (Score:1)
Re:Another Limit: Planck Time (Score:1)
All Your Base Are Belong To Us!!!
Re:Another Limit: Planck Time (Score:1)
All Your Base Are Belong To Us!!!
Handful of transistors? (Score:1)
A HANDFUL of .02 micron transistors?? (Score:1)
Hell.. A handful of transistors that small would be enough to produce several thousand or so processors... Get busy!!! Chop Chop!!
Re:The Change (Score:1)
That makes reading this comment very difficult indeed.
Re:Another Limit: Planck Time (Score:1)
I think the problems facing engineers in the future will be finding ways of increasing parallelism within hardware, and of course developing software to take advantage of those features.
Re:Another Limit: Planck Time (Score:1)
Re:The Change (Score:2)
How small a detail you can make out depends pretty critically on how close you are to the detail in question. A human hair is only about a hundred microns (i.e. a hundredth of a centimeter) wide, but people have no trouble making out individual hairs at close range. I routinely work with tubing that's 140 microns in outer diameter, and I personally have no trouble seeing it, though it gives some of my co-workers fits. 60-micron-diameter optical fiber is a bit tougher to see, but still doesn't require a microscope.
There are some limits, though. The shortest wavelength that the eye can see is about 0.35 microns, and the laws of optics say that you can't make out details much smaller than one wavelength. Light will just diffract around anything much smaller, so it's physically impossible to see something 0.02 microns across, even with a theoretically perfect visible light microscope. That's the exact reason that these kinds of features have been so difficult to make; the same rule that limits the resolving power of a perfect visible light microscope also limits the size of feature you can make with visible light lithography. To make something 0.02 microns across they have to use very short wavelength EM radiation.
Re:If only the Earth's temp were lower.... (Score:2)
Dude, hydrogen dioxide isn't water. Water is H2O (dihydrogen monoxide). Hate to rain on your parade, just thought I should point it out.
enough already! (Score:1)
Every few months we hear about how things are smaller, faster, better, more. Too bad it'll be 10 or 20 years before this stuff filters down to the consumer level.
Re:The Change (Score:1)
Ha ha ha ! And this is fucking "insightful"??
Please.
1cm is 1/2.54 of a fucking inch.
Most people can easily see a fraction of 1mm which is 0.1 cm.
With a naked eye, yes.
Re:The Change (Score:1)
>technology these days. We have processor technology that is beyond the
>dreams of engineers 30 years ago. For the most part, we have reached a
>point where most of the needs of applications of massively powerful
>computing are currently realized in today's machines.
Ha! Just wait till Quake 4 hits the shelves, we'll see what you'll be saying then!
Seriously, current computing power is FAR below what is needed for realistic simulation of reality. When you look at CGI in the movies nowadays, and you've got a good eye, you'll see it still 'feels' artificial, though they used multi-computer render-farms for them, and computations took months. And that's only flat 2D projection of a 3D scene, in a resolution (about 8000X8000 pixels) that's much less than what single human eye can achieve, and sound is still digitized from natural sources, and they don't do all the simultaion of physics - much of that is pre-directed, 'hand'-animated, and all the logic of the scene is a human's work (computers didn't process the 'what if a ship hits an iceberg' rules when they were making Titanic!)
No, today's machines are far from realizing the need for computing power. Not only in VR uses. What about scientific processing of data? Would SETI exist if we didn't need much more processing power than we have now? What about intuitive user interfaces? I saw Nautilus from my new Mandrake 8.0 _crawl_ on my PIII 550, 256 Mb RAM just yesterday.
Re:Another Limit: Planck Time (Score:1)
So the limit you mentioned will be hit sooner if the current trend continues, but it's questionable if it'll really matter.
The Change (Score:5)
Example: a floppy disk's capacity can be pushed to the limit, and finally we have 1.4MB floppies... but sooner or later, you need a CD. And then a DVD. Et cetera.
It'll still be quite a while, but eventually silicon will simply be the wrong technology, the wrong process. Of course, a processor technology lasts MUCH longer than a subcomponent, such as a floppy drive technology.
Moore's Law. Too bad it's "only" x2 and not ^2.
Re:The Change (Score:1)
The slashdot 2 minute between postings limit: Pissing off hyper-caffeinated /.'ers since Spring 2001.
Re:The Change (Score:1)
The slashdot 2 minute between postings limit: Pissing off hyper-caffeinated /.'ers since Spring 2001.
Re:The End of Work? (Score:2)
The slashdot 2 minute between postings limit: Pissing off hyper-caffeinated /.'ers since Spring 2001.
Re:Recent slashdot story.. (Score:1)
heh...dont drop the 'bit' bucket in that lab... (Score:1)
-- "nobody move! I just droped 5 pounds of
That would be way worse than loosing a contact in the snow...
I wonder thou, how many of these little guys would it take to amount to 5 pounds?
NO SPORK
future speeds (Score:2)
I keep thinking about the problems with military gear, where they have to worry about cosmic rays knocking out circuits. I don't know how usable these things will be in high-radiation areas unless there is substantial redundancy built in.
And then there's speculating on what we'll run on these puppies... or the cooling systems.
Oh My!
Check out the Vinny the Vampire [eplugz.com] comic strip
Problems with 20 GHz processors (Score:3)
I'm sorry, but won't creating processors with such high clock frequencies just be negated by the inherent slowness of the bus? One of the things you have to remember when designing hardware with such short clock cycles is the inherent speed limit on signals propagating through it. Light can only travel 1.5 cm in the time afforded by a single cycle from a clock running at 20 GHz; electrical signals are slower still. The implication of this is that, given current motherboards, the CPU will stall for a hell of a lot more cycles waiting for a memory read/write.
Caching can only go so far. It seems to me that increases in overall computing power (however you wish to measure it) will not come just through cranking up the clock speed, but will require fundamental architectural changes to the PC as we know it (main storage on the CPU, overall miniaturization, etc).
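The arithmetic behind that 1.5 cm figure, as a quick sketch:
#include <stdio.h>

int main(void)
{
    const double c = 3.0e8;                  /* speed of light in vacuum, m/s */
    const double clocks[] = { 1.7e9, 20e9 }; /* a current P4 and the hypothetical 20 GHz part */
    int i;

    for (i = 0; i < 2; i++) {
        double cycle = 1.0 / clocks[i];      /* seconds per clock tick */
        printf("%5.1f GHz: %.1e s per cycle, light covers %.1f cm\n",
               clocks[i] / 1e9, cycle, c * cycle * 100.0);
    }
    return 0;
}
At 1.7 GHz light covers about 18 cm per cycle; at 20 GHz it's down to 1.5 cm, which is smaller than the distance from the CPU socket to the DIMM slots.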
Silicon dioxide replacements (Score:3)
Of course, new designs and materials will come (Toshiba is starting to use diagonal circuitry, helping efficiency). Silicon is just too cheap and abundant to give up on right now - we'll probably see it for a few decades into the future in things like appliances, calculators, and handheld computers, because such chips are cheap to manufacture in mass quantities and the material itself is one of the most abundant substances on the surface of the planet (it's a large component of common sand).
Therefore, I think the prediction of silicon dioxide fading away in just a "few years" is a bit premature. If we've learned anything from the tech industry, old standards tend to stick around for a VERY long time (witness floppy drives, ISA slots, and serial ports).
Re:future speeds (Score:1)
Moore's Law II (Score:5)
What would happen if computing hardware technology reached hard atomic limits?
A new era would begin...programmers would actually have to write efficient code! The end of bloatware as we know it!
Moore's Law II: On average, every 15 months, code would suck 50% less...
Re:Moore's Law II (Score:1)
Re:Caveat (Score:1)
-Jeff
Re:No kidding (Score:1)
Gallium Arsenide is used to make high efficiency solar cells.
-Jeff
Re:No kidding (Score:1)
-Jeff
Re:Silicon Dioxide is just an INSULATOR (Score:1)
P=(1/2)*f*C*V^2
Lower capacitance == lower power
Lower frequency == lower power (we don't want that though)
Lower voltage on the transistor == much lower power.
Why did we go from 5V CMOS technology to 3.3V? Why did we go from 3.3V to 1.6V technology?
I think we should not disregard the voltage at which we can run these circuits.
-Jeff
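To put rough numbers on that formula, assuming f and C stay fixed (which real process shrinks don't quite do), only the V^2 term changes:
#include <stdio.h>

int main(void)
{
    /* Dynamic CMOS power P = (1/2) * f * C * V^2; compare V^2 against the 5 V case. */
    const double volts[] = { 5.0, 3.3, 1.6 };
    int i;

    for (i = 0; i < 3; i++)
        printf("%.1f V rail: %.0f%% of the 5 V dynamic power\n",
               volts[i], 100.0 * (volts[i] * volts[i]) / (5.0 * 5.0));
    return 0;
}
Dropping from 5 V to 3.3 V cuts dynamic power to roughly 44%, and 1.6 V brings it down to about 10%, which is why the voltage keeps coming down with each generation.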
Swinzig's Law (Score:3)
Re:Recent slashdot story.. (Score:1)
Re:Problems with 20 GHz processors (Score:1)
C//
Caveat (Score:1)
only 20 nanometers, or 0.02 microns, in size
Note: When IBM gets done stretching them, they go up to 40nm.
How small is SMALL? (Score:1)
Silicon Dioxide is just an INSULATOR (Score:3)
Some of the silicon dioxide has already been replaced, for a couple of years now, with materials called "low-k dielectrics," which basically means lower capacitance (lower capacitance == faster chip) than silicon dioxide. This is only on the metal layers, which are relatively far from the transistors. The silicon dioxide mentioned in the article is the insulator used in the actual transistor itself. It is the one that is going to be "atoms thick," and it is one of the fundamental parts of the transistor.
passing fad (Score:1)
If it doesn't make my 100 watt tube head [marshallamps.com] go to 11, what good is it?
Whatcha doooo with those rollin' papers?
Make doooooobieees?
Re:Another Limit: Planck Time (Score:1)
Re:Another Limit: Planck Time (Score:1)
Re:Moore's law-type performace increases can conti (Score:1)
Re:future speeds (Score:1)
They take two separate approaches. DIVA puts a second, small cpu on the core which checks all work performed by the primary cpu. The multithreading paper executes two redundant copies of a program, checking that the results generated between the two agree (on the same processor, using simultaneous multithreading).
Re:Moore's law-type performace increases can conti (Score:1)
As far as FP vs. INT... well... I don't know... I mean, if all you care about is FP, then your work is 99% likely to be easily parallelizable. Thus, just buy ten 1 GHz Athlons and be happy... but whatever :)
Re:Moore's Law II (Score:1)
Re:The Change (Score:1)
Most people can't make out any detail smaller than a centimeter
Haha! Heehee. Sorry. Do you know what a centimeter is? My index fingernail is about a centimeter wide. On my monitor, the word DUCK is about a centimeter wide. I am 186 centimeters tall. The civilized world (read: not afraid to make changes to improve efficiency) uses the metric system now, so I suggest you learn it :)
Then again, I still tell everyone that I'm 6'1" and a bit..
Recent slashdot story.. (Score:3)
Re:Moore's law-type performace increases can conti (Score:1)
Obligatory AI quote (Score:5)
Just once, I'd like to read an article about a new microprocessor technology that doesn't have some silly quote about what kind of AI feature it will enable.
For decades, hardware has been improving exponentially. For decades, they've been predicting that the new features will magically enable intelligent software.
All we've got to show for it so far is Clippy the paperclip. A mere 10x speedup won't make Clippy any less annoying.
Hint for futuristic article editors: the human brain has a hardware and software architecture that has absolutely nothing in common with that of an electronic computer.
Re:Obligatory AI quote (Score:2)
I don't know, anything that helps me dismiss the damn thing a couple milliseconds faster is forward progress as far as I'm concerned...
just computers buying and selling (Score:1)
Surely not us consumers and workers. Even now, traditional 8-hour workdays are routinely exceeded, using caffeine, pills, and stimulating experiences and working conditions to keep workers healthy. Good health is defined by new standards every year, so that the most productive units look the healthiest. "Healthy people smile a lot, their days are filled with varying tasks and refreshing experiences," and so on...
Squinting to post this (Score:1)
Re:Moore's law-type performace increases can conti (Score:1)
And remember, two CPUs running in parallel enjoy a greater performance boost (on some tasks) than a single processor with twice the speed of either of the dual processors.
I think you have it backwards. Two CPUs almost never run at twice the speed of one. Usually, it's good for an extra 50-70 percent speed.
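One way to see where numbers like that come from is Amdahl's law (my framing, not the parent's); a minimal sketch:
#include <stdio.h>

int main(void)
{
    /* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
       of the work that can run in parallel and n is the number of CPUs. */
    const double fractions[] = { 0.7, 0.8, 0.9 };
    const int n = 2;
    int i;

    for (i = 0; i < 3; i++) {
        double p = fractions[i];
        double speedup = 1.0 / ((1.0 - p) + p / n);
        printf("p = %.1f: %.2fx on two CPUs (%.0f%% extra)\n",
               p, speedup, (speedup - 1.0) * 100.0);
    }
    return 0;
}
With 70-90% of the work parallelizable, two CPUs buy you roughly 55-80% extra throughput, which lines up with the figure above.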
Re:Moore's law-type performace increases can conti (Score:1)
Name one.
System performance is going to be lower on the dual proc version just from multiproc overhead.
Like I've Always Said (Score:1)
And as the growing bloat^H^H^H^H^Hsoftware industry has proven, it won't do anyone any good if you don't know what to do with it.
No matter how fast they make chips... (Score:2)
Uses in DNA super computers? (Score:2)
Why use crappy phosphate-deoxyribose alt-copolyester? Peptide nucleic acids are vastly more robust and give you optional chiral centers for more goodies, like non-linear optical devices.
Hell, make a PNA 17-25 mer cocktail complementary to a few critical HIV gene sequences and cure that, too, by knockout strategy (the Flavr-Savr HIV therapy). PNAs are uncharged and readily permeate cell membranes, they are totally untouched by nucleases and other catabolism, and they are cheap to make. Turn off HIV RNA, turn off disease-process progression. Boom. None of this downstream small-molecule enzyme-inhibitor bullshit that makes so much money for the pharm workers.
Original proposal is an interesting problem, and rather a small proportion of the population is up to it. When I started out in the business some 30 years ago, the process of discovery and original proposal awed me. It still does, and my track record has been exemplary. Perhaps the best answer is that you must read everything and be prepared for things to bump around in your head.
Example: My first original research proposal was to synthesize an obscure polycyclic alkaloid (in 32 steps! Silly synthesis is the refuge of a scoundrel). An ocean of blood flowed, and all of it was mine, save for one redeeming skeletal inversion which was deemed "adequate." The next year, for my second original proposal, I proposed synthesizing C2 in cryogenic matrix and gas phase. C2 is hot stuff (literally) in flames and comet tails (Swan lines), and its electronic structure was uncomputable at the time. When you warm the matrix, the fragments recombine to give acetylene diethers - which had not been synthesized at that time. The diethers dimerize to a squaric acid precursor, which was hot stuff re squarylium dyes for photoconductors. The tar from the reaction was worth at least ten times the cost of the starting materials.
Know everything, and see where stuff rubs.
Almost any ten-carbon lump turns into adamantane in aluminum chloride/bromide slush. We can do better (though not cheaper) in ionic solvents like N-methyl-N-(n-butyl)imidazolium tetrachloroaluminate with up to another added mole of AlCl3. The media support multiple carbocationic rearrangements as a benign environment. What happens if you put micronized graphite into the slush and bubble in isobutylene? Will you edge-alkylate and solubilize, or make 1-D tert-butylated diamond plates, or will something else happen? Look at all the applications of graphite fluoride and graphite intercalates, as in high-energy-density battery systems and high-number-density, low-bulk-mass hydrogen storage modalities.
Sargeson trapped Co(en)3(3+) as the inspired sepulchrate (formaldehyde plus ammonia), and then the brilliant sarcophogate (formaldehyde plus nitromethane; look down the triangular face of the coordination octahedron). Stop being an inorganiker and start being an organiker. That last gives you "para" nitro groups, which give you amines, which give you redox nylon (and azo linkages; polyisocyanates, polyurethanes, epoxies, acrylamides, and...) Nitrogen chemistry is incredibly rich - conjugated azo linkages, fluorescent heterocycles, stable free radicals, extrusion and caged radical recombination... As Co(en)3(3+) is trivially optically resolved, you also have potential non-linear optical films switchable through redox change. (Information storage, chemical transistors, sensors, clinical diagnostics, electrochromic windows...) It goes on and on... a whole lifetime of research. Nobody has diddled with it.
Look up the synthesis and reactions of hydroxylamine-O-sulfonic acid in Volume 1 (!) of Fieser and Fieser. Look at the mysteries of ammonia - inversion, nucleophilicity. Look at the Alpha Effect re hydrazine, hydroxylamine, and hydrogen peroxide. Look at Bredt's rule and all the interesting things it does at bridgeheads. Now, make it all rub against itself: Start with 1,4,7-triazacyclononane, which is easy enough, though sloppy, to make in bulk. Gently nitrosate it. The nitroso group goes on the first amine, then the adjacent amine (pre-organized to attack re Cram) attacks at the nitroso nitrogen to give you the hydroxylamine. Do the usual hydroxylamine-O-sulfonic acid synthesis and you tether the original nitroso nitrogen to the third amine, with the original nitroso's oxygen as the leaving group. What have you got? You have four bridgehead nitrogens rigidly held, none of which can invert. The apical nitrogen is tethered only to other aliphatic nitrogens - which has never been done. It cannot invert and... for all that, it may have no nucleophilicity whatsoever, because the Alpha Effect is euchred out by geometry and the inductive electron withdrawal is mammoth. You could do it in undergrad lab.
I once watched a bunch of engineers with a very big budget try to excimer laser drill parallel or serial hundreds of 5 micron holes in PMMA intrastromal corneal implants (without the holes to move oxygen from outside and nutrients from inside the cornea dies and sloughs, which is tough on the rabbits). Buncha maroons. 5 microns is a magic number to an organiker, and I won't insult your intelligence with the trivial solution. The next Tuesday I delivered a foot-long bar of oriented two-phase PMMA which was cut and polished to spec, had its holes revealed, and got me into incredible hot water since my employer did not give shit one about the product but was really interested in the long term money budgeted by its parent company.
Take two cyclopentane rings (Framework Molecular Models do this nicely). Put 5 all-cis (vs. the ring, not the olefin configuration, which need only be consistent) alkenes on one cyclopentane. Cap with the other. Now, twist slightly and watch the pi-orbitals. Is that a clever way to make dodecahedrane, or what? The alkenes came from alkynes. The alkynes were assembled with Schrock's alkyne metathesis catalyst from the nitriles. Strain being what it is, you might want to have diacetylene linkages (copper-mediated oxidative coupling) and go for a bigger hydrocarbon bubble. Start with all-cis 1,3,5-cyclohexane and trace the diacetylene evolution (no strain here!). Consider 1,3,5-trans-2,4,6-all-cis-substituted cyclohexane. Voila! You grow 1-D diamond (note the ring conformation and the special name given to that diamond structural variant).
I could go on for megabytes. All you need do is read the library, hold it all in your head, and wonder "what if..." where stuff rubs together. This is the first (easy) kind of genius. The second (hard) kind of genius is to see it all ab initio. I don't have a handle on that one.
Moore's law-type performace increases can continue (Score:2)
this is just a middle step. (Score:2)