The End of Moore's Law?
Lucius Lucanius writes "A recurrent theme of late: the NY Times describes an Intel researcher's paper on the possible end of Moore's law. Soon, 'transistors will be composed of fewer than 100 atoms, and statistical variations in this Lilliputian world are beyond the ability of semiconductor engineers to control.' Is it for real this time?"
Re:I doubt if I will see the end of Moore's Law (Score:1)
Moore's law doesn't apply to software (Score:4)
Yet it crashes often enough to be noticeable.
It runs so slowly (a "mere" Pentium 400) that I can actually see my windows redraw.
Booting takes 5 minutes (NT 4.0)
Shutting down takes several minutes, too.
Maybe hitting a limit to processor power will encourage programmers to reintroduce the concept of "knowing how to write good code." Lord knows processor speed and cheap memory have made it possible for even the best programmers to stop thinking about code quality.
- Stever
Did you know? (Score:4)
Re:Some Things... (Score:2)
Equal the human brain.
> It will stop there
No. WE will stop there. They won't.
Re:Multiple cores on one processor? (Score:1)
Re:Get more out of what we have (Score:1)
Lithography or market forces. (Score:1)
Moore's Law (Score:1)
This was supposed to happen in 1983 (Score:3)
Then in the early 90's they said the cost of developing faster chips was becoming a vertical line. Computers would never get far beyond 200Mhz because of the brick wall of development costs.
Well electron tunneling became our friend. Design tools outpaced their costs. Maybe we'll find a way to turn the physical limits to our advantage.
Re:Should be CMOS (Score:1)
Other areas to focus on (Score:1)
reached, there are numerous other areas to work
on to improve performance. Among them:
Silly! (Score:4)
Even if the Intel folk were right, and we couldn't make our gates any smaller (I bet we can, with bucky-tubes and those neato single-molecule gates), it wouldn't be the end of Moore's law.
First of all, there is the whole bandwidth problem: we programmers have to worry about cache coherency, cache misses, time to load from disk, time to load from RAM... etc.
These things are the major bottleneck for many applications.
Furthermore, this "limit" would only limit single-processor designs.
There is still a large world of parallel processing to consider. What if the CPU could execute EVERY non-dependent, non-aliased branch concurrently?
(We'd obviously need better compilers, and probably better languages.)
In any case, to rehash: even if the Intel engineers are right about the "gate limit", there are plenty of other advances left to discover.
Re:The point was not that computers won't get fast (Score:2)
-Chris
So when we hit 20,000Mhz and (Score:1)
we can just start adding processors
Re:Silly! (Score:1)
Yes, but IIRC (I didn't bother reading the article now, I've seen it so many times before) Moore's law does not deal with the speed of the processor, but with the density of the transistors.
Even Moore himself admits that his law can't hold forever, only he is wise enough not to put a timetable on it.
-
Re:I didn't see anything at 1100MHz (Score:1)
Re:No End (Score:3)
Sure they did. Most of them are still alive.
Biological Computing (Score:2)
After that, quantum computing of course
http://www.qubit.org/ [qubit.org]
Re:It has happened (Score:2)
Right. However, the two keywords would have significantly different effects, and would be used in different situations by the programmer.
Why would we have to have a new keyword when we can simply detect a parallel operation?
Because we *can't* simply detect a parallel operation, 90% of the time. If your code calls a function that isn't in the same object file (or is in a shared library), then there's no way to know whether the for loop is parallelizable. Unless, of course, the programmer tells you that the loop is parallelizable. Hence, the new keyword. If you tried to parallelize every for() loop in existing code, you would break most of it horribly.
There's also the fact that automatically parallelizing foreach() type code will have dramatically different performance effects on different systems; the threshold size above which one would want to incur the overhead of dispatching to separate threads will vary depending on the CPU interconnections; a cluster would only be helpful for much larger loops than an SMP system. You could write every parallelizable loop with foreach() and hope the compiler will sort it out, but how is a compiler supposed to figure out that
foreach(i=0;i<3;i++) {
tinyfunction(i);
}
shouldn't be split among different threads, while
foreach(i=0;i<3;i++) {
hugefunction(i);
}
should be?
Why generate values of i if they aren't used? If the compiler can prove array[] is never used in your function, it'll just drop the loop entirely. Can't get much more efficient than that.
Um, that wasn't at all what I was talking about. I was discussing cases where every value of array[] will be used, but where they could be calculated independently.
Functional languages, which can evaluate arguments to functions in any order (thus parallel), will often not bother to even run the loop until array[i] is needed in a fashion that can't be delayed. The best way to optimize code is to find ways to not run it at all.
Agreed.
I suggest thinking in languages other than C.
Disagreed. The point of SMP isn't to demonstrate some academic feature of Lisp, it is to make programs run faster. If you have a program that requires more than a single fast CPU to run, then you *definitely* don't want to run it in a non-compiled language. And if you have a Scheme compiler whose output will run as fast as C, C++, or Fortran code, I'd like to know about it.
That sounds antagonistic, but I'm absolutely serious; I'll probably be working next summer with fortran code on a 500-node system, and if you've got some means to let me write anything but Fortran, please take pity and let me know about it.
That all means that unless you get a sadistic pleasure out of watching engineers shackled by Fortran, it would be nice to see more parallel programming features available in C/C++.
The Wall (Score:2)
Moore himself talked [cnet.com] about this on Cnet a couple of years ago.
Re:Parallel languages already exist (Score:2)
Thank you; I'll take a look at it. I'd prefer an OO language, but anything that automatically parallelizes code, that links to C code, and that isn't Fortran, is good to hear about.
The idea of embedding parallelism into the language is nothing new.
When did I say it was? I mentioned HPF. Well, not by name, but I at least said Fortran had parallizing keywords already, didn't I?
Holographic/Optical computing (Score:1)
Moore's law doesn't apply to software: Yes It Does (Score:1)
the speed of software halves every eighteen months.
It goes to a new gen (Score:1)
Should be CMOS (Score:1)
Re:So when we hit 20,000Mhz and (Score:1)
Let's forget the fact that there IS a large set of tasks which are parallelizable, and that in multitasking environments you get great gains with MP3s playing in one thread, a compiler in a thread, SETI@home in a thread, Quake3 rendering in a thread, AI in a thread.
but thanks for being a dick to me so you could show off painfully obvious knowledge!
... (Score:1)
show me the money!
--
Other technologies pop up just in time (Score:2)
Re:Holographic/Optical computing (Score:1)
What are you talking about????????? (Score:1)
Secondly, there are other functional languages besides Lisp: ones that can be compiled very efficiently indeed.
I don't know what you're coding that you need Fortran for, but look at the functional languages, look at the Scheme compilers, look at compilers like Sather, which is VERY efficient and pretty nice too.
The point was not that computers won't get faster (Score:2)
Use this login... (Score:3)
Username: slashdoted
Password: slashdot
Enjoy!
-----
The real meaning of the GNU GPL:
Re:Haven't we heard this before? (Score:3)
I don't think the need for faster and more capable hardware will cease until computers advance to our "dream" computers. For each person what this means is different.
What I see as most likely is the current manufacturers following their current practice of concentrating their R&D on faster and faster general-purpose CPUs until they reach some sort of "wall." When this happens, they will probably branch out in two separate directions: one focused on R&D into totally new methods of producing general-purpose CPUs that break through this wall, and the other on application-specific designs. They will most likely need to get the bulk of their revenue from application-specific designs, taking a larger and larger percentage from the general-purpose CPUs as they get cheaper and cheaper (because other companies will reach the same barrier and the competition will be reflected in lower prices).
This is not necessarily a "bad" thing. I think it makes much more sense to design a chip specifically for, say, speech recognition. Sure, there is a very important software part of this, and there has been some recent work on neural net chips or systems that is supposedly in the right direction, but someone like Intel spending vast amounts of resources on a speech recognition chip (based on neural computing or not) using 5 micron casts would likely have great success in a short amount of time (2-3 years). Think of all the other application-specific areas where Intel and the other manufacturers could branch out if they ever do get to a 5 micron technology. Perhaps "visual" recognition, handwriting recognition... oh, here's a big one: language translation. The possibilities are endless, with a matching revenue stream. I could see someone spending $1000 or more for a generic language translation unit to take with them on vacation (I certainly would, and there's a heck of a lot of people in this world).
even if so, then so what? (Score:1)
Even if Moore's Law were to slam into a wall (highly unlikely) then so what? The next paradigm shift is the *macro*cosm, as our microprocessors *connect*. This "network effect" squares the sum value of our internetworked transistors. Exploding bandwidth frees abundant info to flow between 'em. These further enforce the law of increasing returns (and route around material laws based on diminishing returns (which attempt to enforce artificial scarcity (ie: patent, copyright, closed-source (which makes me smile;)))) So transistors shrink in half every 18 months. Big deal! Optical bandwidth doubles every 12. Wireless doubles every 9. Twice as fast as Moore's Law!
Moore's Law: The power of computer processors doubles every eighteen months
Metcalfe's Law: The power of the Internet is equal to the square of its nodes.
Gilder's Law: Internet bandwidth will triple every year for the next twenty-five years
Anyone care to do the math?
Re:Haven't we heard this before? (Score:1)
Thanks for the tip. I'm going to start using that one as well!
-----
The real meaning of the GNU GPL:
Very silly (Score:1)
(It was something I noticed about the first 486 I ever played on - a DX25, them were the days, where it was so fast that in the space of a click, reversi thought I was double-clicking the square and told me to go elsewhere. I had to play it with the keyboard instead!)
Don't spoil my fun. I know it was a buggy mouse driver, but it was fun.
Re:Holographic/Optical computing (Score:1)
At the time it seemed pretty amazing that a 1-layer slab of these a foot square could hold the entire Encyclopedia Britannica.
There was some disadvantage (I can't remember what), but it seems that research into that technology was abandoned as better technologies became available. For example, DVD achieves a higher information density.
Consciousness is not what it thinks it is
Thought exists only as an abstraction
Re:Silly! (Score:2)
The Moore's Law that I was taught was that compute-power doubles along an exponential curve (in this case doubling every 18 months).
You're saying that Moore's law is that transistor-gate speeds double in that time period??
I think some research is necessary...
whatis.com thinks Moore's law is:
Moore's Law is that the pace of microchip technology change is such that the amount of data storage that a microchip can hold doubles every year or at least every 18 months. In 1965 when preparing a talk, Gordon Moore noticed that up to that time microchip capacity seemed to double each year. The pace of change having slowed down a bit over the past few years, the definition has changed (with Gordon Moore's approval) to reflect that the doubling occurs only every 18 months.
which is essentially what you said.
Maybe there should be a Moore's variation which states that compute-speed doubles every X... It's a more important indicator than gate-density, though not as easily measured.
Moore's Law still correct (Score:2)
Doesn't Moore's Law state that Microsoft apps double their bloat every 18 months? This is still holding true.
Re:The point was not that computers won't get fast (Score:2)
Look at the "WIN printers" for instance. It was thought that it would be cheaper to take the (relatively cheap) intelligence out of printers and put it into software drivers that are run on a generic CPU. Whether it is true or not (that it is more cost effective) is beside the point. The point is that we are asking the generic CPU to do things that it normally would not, requiring large amounts of processing power due to the timing and other issues.
If Transmeta is working on what is rumored, using current chip manufacturing techniques and not some newfangled method, then both the generic CPU instructions and the application-specific instructions could be executed on the same chip. Since the application-specific instruction set would be tailored to the applications, presumably fewer cycles would be needed to solve the same problem. This would reduce the need for faster and faster CPUs, would it not? Would this not be a new approach to the "problem?" Maybe not finding a new way to make a faster same old generic CPU product, but finding a new paradigm that used a single unit to do both generic and app-specific "stuff."
Think about it!
Re:Moore's Law still correct (Score:1)
The Ultimate Physical Limits of Computing (Score:1)
Re:Silly! (Score:1)
One variant I've heard is that the Performance/Price ratio doubles every 18 months. A few years back, I saw some data that indicated that Intel's marketing department took this seriously, anyway: either trying to double speeds or cutting prices when speeds weren't increasing that fast. A corollary is that "Moore's Marketing Law" could be met with SMP systems, if Intel can't scale the CPU speed fast enough over an extended period of time. (See the dual Celery people.)
(I know there are some problems with this, namely that performance/price doesn't scale linearly at any given point in time. Xeons and 600MHz CPUs have a pretty huge premium.)
Software IS the next revolution. (Score:1)
Your fundamental argument here is incorrect. Although you make a good point which in itself is not incorrect, neither is it the correct response. What the original poster is referring to is the aggregate quality of code, and the focus on that quality.
This pertains to large companies, where if there were a set limit to speed, the effective implementation of new-exciting-revolutionary features would take a backseat as what becomes possible is limited by speed. This does not entail the end of computing evolution, however, as the concentration of coders, ideas-people, marketers, consumers, managers and boards of directors would focus on the next step in the evolution of computing into the next century.
Case in point: a lot of coding books 4+ years old (and up to this day for some) emphasise sitting down with pen and paper to work out an algorithm. This is a remnant of the "hardware era" of the past, where computer resource time was much more expensive than human resource time. Now, at the start of the third millennium, the opposite is true.
The future is "information applications". At the risk of sounding very much like a marketing hypeist... the future will move to offering the average consumer what they need, want and understand. What they do not understand is the assembler code for the x86 architecture. What they do understand is the $ sign on e-commerce websites. This is the future as computers become even more pervasive and ubiquitous in day-to-day tasks. Hence, naturally, if software becomes the most important cash cow for companies and firms in the future, where do you think the research dollars are going to go? On new software methodologies, or on creating a new thermal compound to reduce the heat of chips? Sure, this does not discount the fact that hardware is the fundamental base upon which technology is built and will be in the future, but the quality of code is horrendous in its current form. Money goes to where money is needed.
Re:Multiple cores on one processor? (Score:1)
The G3 does not. The G4 will soon.
The G3 is really pretty awful at multiprocessing; one of the modes that you kind of need for good MP isn't functional, or something. I think it works if the software organizes the MP-ing, but the hardware can't handle it alone. I dunno; either way there's something that makes MPing a G3 not worth the bother.
The G4, on the other hand, not only SMPs but does multiple-coring beautifully, has been doing so in the lab for months, and will be doing so in shipping computers relatively soon. When the multiple-core G4s do start showing up, it should be truly impressive.
Re:Multiple cores on one processor? (Score:1)
Re:Get more out of what we have (Score:1)
(and I would love for someone to invent a technology so that I could save up that 20% for during compiles, etc)
Perhaps Intel has reached an impassable wall... (Score:1)
Or maybe that's just their excuse to explain their failure when compared to the Athlon and SledgeHammer.
poor bastards...
Actually, I would welcome a pseudo-wall (Score:1)
That's just my opinion, don't take it as dogma.
Re:No End (Score:1)
I think the industry is very conscious of the wall. Chips are being done in copper, and there's a lot of talk of silicon-on-insulator.
Totally Offtopic (Score:1)
/. mirroring nytimes articles (Score:1)
It can be cited though.
Re:So go SMP. (Score:1)
What this "Law", which Mr. Moore stated after looking at relatively little data on the matter, just for the sake of saying something, said was that transistor density would double every 18 months.
Re:Moore's law doesn't apply to software (Score:2)
It's not so much a matter of "quality" as it is a matter of time. In a /. article a week or so ago, there was a big story about "Why Software Sucks" that basically said, "programmers don't have enough time to write everything well." When you can code an inefficient, memory-hogging algorithm in 2 hours while a streamlined crashproof sucker takes 2 days, guess which wins out. :-[ And does anyone really want to return to the days of bumming single instructions out of assembly code? The only case where it'd be worth the effort is in kernel design or CPU-hog tasks like distributed.net IMHO.
Maybe Moore's Law should contain an addendum or two:
Re:don't start counting chickens .... (Score:1)
>Modern chips aren't designed in manual tools anymore.
Actually, I was talking about both. These days we still design high-performance datapaths by hand, RAM too, and in particular the standard cells that our automatic synthesis and P&R use as building blocks.
Design of these is done as a stack of 2D layers (metal on top, with silicon at the bottom). It used to be that capacitance extraction was a pretty simple thing to do, and not that important; a reasonable approximation was all that was needed. These days extraction is much more difficult: it has to be done in 3D, and includes all the interactions between a wire and all the other objects that come close to it along its whole length. Time was, you just did capacitance to the substrate and got a good estimate; nowadays you have to worry about edge effects and wire delays (distributed RC delays along the wire, because the wires have got so thin they have appreciable resistance; this is the real reason copper is important: RC delays are smaller, not just that the wires can carry more current).
Certainly we don't have tools that take simple 3D effects into account while working (even many routers can't take the time to do full extractions while they run; they have to do approximations in order to work at all).
But what I was getting at was that we don't have layout tools or P&R tools that could be used to automatically place gates on top of each other (building stacks of silicon stuff, rather than one set of layers of silicon stuff and lots of wires). There are so many sorts of analogish leakage issues that would kick in, it would be a nightmare.
I'm not an expert on quantum computing, but it seems that those sorts of structures, where electrons are contained by quantum effects rather than by wires, poly and diffusion, may be more amenable to stacking, since the quantum barriers could provide a sort of 'insulation' between devices in multiple dimensions that might not be possible in bulk silicon.
Re:Other technologies pop up just in time (Score:1)
Humm... if it became more expensive to produce faster chips, wouldn't you want to use them only where you REALLY needed them? (read: big servers)
The desktop is dying... and MS knows it! But, we sit and try and compete with what MS has already done...
...ok, enough ranting.
Interesting point, but... (Score:1)
I don't want a sewing machine. I buy things "pre-sewn"... I don't think there is much similarity between the idea of Moore's law and sewing machines. Mainly, this is because a sewing machine is a device that requires a lot of skill to use. Almost any idiot can use a computer (not necessarily to its fullest). And, as computers get faster, the number of idiots who can use them increases (I'd say that it roughly doubles every 18 months). However, in order to be able to actually get one's money's worth out of a sewing machine, you'd have to spend a lot of time making your own clothes, or making clothes for others.
A better analogy could be had by comparing microprocessors to steel, since they are both crucial "ingredients" in many designs. Actually, a phenomenon similar to Moore's law occurred when steel first became popular. As better and better techniques for producing steel became available, more and more industry was able to use steel as an input to production. Eventually, somebody invented the current way that steel is processed... since then, people have lived with steel the way it is, arguably without creating a shock to the economy.
If technology gets to the point where I can do what I currently use my computer for without using a computer, then I'll be quite surprised. The only thing that could replace computers is IMO better computers. Of course, if the speed of hardware per dollar innovation were to slow, then we'd probably see higher prices for top-end hardware, but other than that, I'm sure the economy would shift so that more resources (people) were dedicated to optimizing code and compilers and architectures, rather than trying to squeeze more speed out of silicon.
Moore's law is more of an economic phenomenon than we realize. As long as there is demand for something, somebody finds a way to sell something relatively similar to it.
Of course, I want a computer that Moore's law predicts will exist in a decade, but I'm not going to sit around holding my breath until it exists... I'm going to make the best of what I have. To think that Moore's law has to do with the number of transistors crammed into a piece of silicon is kind of shortsighted. Furthermore, if there were enough demand for high-quality low-priced sewing machines, they would surely exist.
It's funny that the end of Moore's law is seen as such a negative. I mean, even if it were to happen for microelectronics, somebody would figure out how to push the frontier on another level (such as software, parallel procs, and generally better designs throughout).
The idea that software development has to proceed in such a linear (non-Moore'sLawish) fashion is silly. Who knows, maybe the parallel and distributed (to use the buzzwords) nature of OSS development will ignite exponential growth (if it already hasn't) in the software industry.
I guess this means less is Moore.
It gets to a point... (Score:1)
Maybe this is what you are looking for (Score:1)
Markoff got it right, NYT fscked it up (Score:1)
I emailed John Markoff. Apparently, the copy desk at NYTimes fubared this one -- it's not his fault. (I wrote him a nice friendly email, and he responded fairly quickly, actually.)
--Joe--
what about memory? (Score:1)
If this is really true, it's a Good Thing... (Score:1)
...for science and engineering in general. We ought to reserve the designation of Law for fundamentally sound and mathematically provable phenomena, like Gravity and Motion. Even Relativity, which has more experimental basis and practical application than Moore's "Law," only rates a designation of Theory. I got sick of this in my Computer Architecture class, too. Why don't we get back to real science instead of playing prophet with Moore's Rule of Thumb?
</RANT>

End of Moore's law means global depression (Score:3)
IC manufacture is capital intensive. Someone told me that an Intel fab plant runs $5-8B today and doubles every generation. Wall Street is pumping money into silicon because of growth. When growth slows or stops, it will have an enormous impact on investment flow. Then the lack of investment will slow progress, slowing the need for development. It's all interrelated, coupled and highly amplified by the exponent of Moore's Law.
When it slows, the whole attitude about products will shift. Today, if you want a really good sewing machine or small lathe, you get a good used one built 1930-1950. When Moore's law times out, the investment in plants will slow to a trickle. Fab equipment will wear out. Students will avoid the dying industry. (What's the silicon version of rust belt?) People won't buy a new computer because it's not as good as the old one. Software development will begin to focus on quality and then crash as the market saturates (no need to buy new SW once it works and you have to run it on the same old machine).
When today's Moore's-law-based economy crashes, it will create massive dislocations. Imagine Silicon Valley with a New England mill town look, or Pittsburgh/Buffalo/Cleveland circa 1970.
Re:Haven't we heard this before? (Score:1)
Saying "who needs more power for their word processor?" is the same as "who needs more than 640k?". It assumes too much. You can still get a "wait" cursor just applying Photoshop filters, and the average consumer is already editing photographs on their computers.
Re:So go SMP. (Score:1)
More people would have to read (& become) Knuth (Score:1)
Maybe, anyway.
Re:I didn't see anything at 1100MHz (Score:1)
Better gas mileage, too.
What the hell am I talking about?
Background radiation (Score:1)
However, IBM, Conexant, and others are developing transistors that, in the simplest of terms, are two "back to back" transistors. This will help minimise "leakage", and they will be able to use fewer electrons to flip the gates: about 10 electrons or so. Yes, that's right, 10.
Re:The point was not that computers won't get fast (Score:1)
Re:This was supposed to happen in 1983 (Score:1)
People talk now about how fast you can make an airliner go, and others talk about just shooting up into orbit and waiting for the earth to move Tokyo to where London was beneath you. One way in one hour.
Then there are the transporters
Re:Multiple cores on one processor? (Score:2)
Sun's new processor, MAJC, is doing this, and the Alpha 21364 will, too. The Alpha 21264 already employs a similar technique.
You are completely right in thinking that this is the next step. The trend in the last 3-4 years has been in this direction. Decentralization is an active research topic in many institutions and processor companies in various forms: Multiscalar processing, superthreading, etc. You might want to take a look at the Multiscalar pages at Univ. of Wisconsin [wisc.edu], where some of the pioneering work has been done.
SMP on a chip is the next big thing (Score:1)
The enabling technology for the shift to SMP is none other than Linux: its cross-platform nature allows us to switch easily to alternative, more transistor-efficient architectures such as ARM (n.b.: also offered by Intel, make of that what you will) and at the same time provides the multi-CPU support we need, without costing an arm and a leg.
Re:Multiple cores on one processor? (Score:2)
Possibly dumb but maybe right question. (Score:1)
It has happened (Score:2)
But there's only so much of that you can do in hardware. What I'd like to see is multithreaded software produced *automatically* by C/C++ compilers when possible, the way high end Fortran compilers do for multinode supercomputers today. So instead of writing
for(i=0;i<1000;i++) {
array[i] = function(i);
}
which generates i, executes the function of i, generates the next i, executes the function with that next i, etc; you might write
foreach(i=0;i<1000;i++) {
array[i] = function(i);
}
which will generate as many values of i as you have CPUs, execute the function for each value of i on a different CPU, then generate the next set of i's, sending them to threads as necessary to keep every CPU busy.
We'd have to have a new keyword (I like foreach) for this, since the overhead involved would make it counterproductive in many circumstances and would break code (anything where function() isn't reentrant, or where the i's are assumed to be evaluated in order) in others.
Re:Multiple cores on one processor? (Score:1)
"This wouldn't be SMP, even the motherboard wouldn't really know there were multiple cores on one processor."
This is essentially how most of the current processors already work. A Pentium II processor is capable of executing _multiple_ instructions on the same clock. The motherboard/OS don't even know there's effectively more than one processing unit present.
What you really want is a nice threaded OS and applications, coupled with known multiple CPUs, so that applications can be executed in parallel on the thread level, rather than on the instruction level (or, looking at it from the programming perspective, it is much more fruitful to parallelize your app deliberately, rather than let the hardware try to do the best it can for you).
Re:Haven't we heard this before? (Score:1)
Re:Should be CMOS (Score:1)
==
Plurals of letters, signs, symbols, figures, and abbreviations used as nouns are formed by adding s or an apostrophe and an s. The omission of the apostrophe is gaining ground, but in some cases it must be retained for clarity, as with letters.
Margaret Shertzer: The Elements of Grammar
==
John Markoff replied to my e-mail: "sigh. of course. at least it's not my nit...the copy desk made the change and I didn't see it until it was in the paper.... thanks..."
Re:Moore's law doesn't apply to software (Score:2)
Guess which version would have been more maintainable? Guess which version is more robust? Guess which version wouldn't have to be rewritten next year because some software that depends on this algorithm has changed and now the performance is finally unacceptable?
The reason programmers don't have enough time is that they are spending so much of their time on rework and bug fixing caused by rushing through solutions. The real problem is short-sighted management who encourage programmers to get to the finish line at whatever cost.
... (Score:2)
The problem that many of my detractors (who Should be Obvious to you by now). Is that They have more problems with, ( of course ) the subnet of my presentation ( table 1 ). Needles to say, Nevertheless. That they more than Likely do not comprehend ( of course ) the Fundamentals of the I'm a Fucking Retard Rule ( Needless to say, similar to my Octet rule ).
Never the less, it should be Obvious why I didn't ( or should i say, Couldn't ). Needless to say, pass the fucking Cisco exam because my head ( or never the less, what is on top of my head ) is so far.
Just imagine! Shoved up my ass, that this paper should be my addmitance paperwork out of computer ( or network ). Consutlting/IT Professional, and into scooping M&M's for Dary Queen.
if you read this hampsters paper all the way thru.. take off two points. Take off 3 if you printed it out to read it later.
--
Re:It has happened (Score:2)
--
Some Things... (Score:2)
That means that the 2d card will only have to be so powerful, the same with the 3d card.
Sound cards will eventually be able to generate realistic sound that includes the full range of our hearing. Then they don't need to get more advanced.
I'm not saying consumers will only need this or that, I'm saying that humans will only be able to come up with this or that. After a while, they won't be able to figure out anything more to do with computers. (This will probably end at something like re-creation of worlds, aka massive holodecks)
Think to yourself, what is the biggest, most power-consuming thing a computer could ever do. Ever. It will stop there.
Perhaps it will be miniaturized, but applications will stop eventually when there are certain limits, like optical resolution or the range of the human ear. Eventually there will be limits like that for every application.
Moore's law will not stop. We will just keep finding newer processes to do things.. (Intel saying it will stop, well hell yeah, you can fry pancakes on PIIIs and probably roast a cow on a Merced) Motorola isn't having many problems on the other hand... Microsoft could be a major part of this, you shouldn't f'ing need a PIII 400 or whatever for the operating system.
Apple's only restriction on OS compatibility is chip architecture (you have a PowerPC, it works; 68k, it doesn't. This is a natural limit, it's practically like trying to install the MacOS on a PC, wrong chipset.)
Hey, we haven't even tried optronic computers or anything yet, maybe those will meet our needs. Still a lot that can be done, just ignore Intel, they're just overclocking their chips until they melt.
Multiple cores on one processor? (Score:2)
This wouldn't be SMP, even the motherboard wouldn't really know there were multiple cores on one processor.
Someone with some experience in this field want to tell me why this hasn't already happened?
Code quality (Score:2)
I don't believe that if computers suddenly hit a ceiling in terms of max performance, people who code sloppily would stop. It's just like any other profession - some people do it to the best of their ability, and some people make it 'good enough'. And on a related topic - guess which methodology most Linux programmers embrace. :^)
--
Haven't we heard this before? (Score:2)
"Who will ever need more than 640K?"
For decades people have made predictions contrary to Moore's law, and each time they were wrong. I can't say for certain that doubling will continue ad infinitum, but the end is definitely not near. In 1960, 1970, 1980 & 1990, someone said that their decade would be the end of Moore's law.
Excuse me, but... BullS**t
The nature of computing is that bigger apps require faster machines, and faster machines can run bigger apps. Most long term predictions in the computer industry far underestimate the power of human ingenuity when faced with an ever more demanding consumer. In the 1950's someone (who was it?) said there was a world market for maybe *5* computers...
yeah... right
(BTW, can
Always Slashdot, Always CokeBear, Always Coca-Cola
Re:The point was not that computers won't get fast (Score:2)
Speed != quality (Score:2)
In addition, more speed has allowed us to make modular and OO designs that were traditionally "too slow". It's not that OO is necessarily slower than non-OO (there are too many benchmarks showing either camp is faster to know what is really going on), but good design often has overhead. I sacrifice speed for maintainability all the time with the justification that computers are fast enough. And, IMLE, they are.
Because of this, I can see the exact opposite happening: when computers reach a limit to their speed increases, quality in code will go down. As more features are added to a program, more and more sacrifices to good design will be made for the necessary speed.
Re:Silly! (Score:2)
Moore wasn't some digerati pundit making grandiose prognostications, unlike the current crop of wannabe net.prophets. He only predicted that a trend toward doubling the number of transistors on a chip every 18 months would continue for at least the next 10 years -- that was over 20 years ago.
Actually, didn't the move to RISC result in FEWER transistors?
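Moore's 18-month doubling is easy to put numbers on. A quick sketch (the 4004's 2,300-transistor count is real; note the classic 18-month rule slightly overshoots history, which tracked closer to a two-year doubling):

```python
def transistors(start_count, years, doubling_period_years=1.5):
    # Moore's observation: transistor counts double every ~18 months.
    return start_count * 2 ** (years / doubling_period_years)

# From the ~2,300 transistors of the Intel 4004 (1971), 18-month doubling
# predicts tens of millions of transistors twenty years later -- the right
# ballpark for early-90s high-end chips, if a bit high.
print(round(transistors(2300, 20)))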
Re:The point was not that computers won't get fast (Score:2)
If you analyze the performance of a modern PC (Intel PentiumIII, AMD K6, etc) in 'normal' use you'll see that the processor spends most of its 'busy' time (90%+) waiting for cache misses. A stupid interpretation of this is that a 50MHz processor would give you the same performance as a 500MHz chip. This isn't entirely true, but it isn't a bad guess. I run a processor-bound Oracle database on x86 architecture. At peak loads the (single) processor reports that it is very busy. What it is really doing is waiting for memory to respond. The system is designed not to need to swap to disk. The users think the system is very fast.
If Intel (or someone else) could improve memory performance to the level required by modern processors, we would see phenomenal improvements in overall system performance. The x86 machine I am using at the moment is seriously disk-bound. If the disk performance were 20% better (which I could achieve by buying better RAID) I would see something very close to a 20% performance improvement (it's *very* disk-bound). If I fed it another gigabyte of memory, I'd see a miraculous performance improvement, because it would stop being disk-bound and become memory-bound.
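The "mostly waiting" argument above can be put in a rough formula: only the stalled fraction of time benefits from speeding up the component you're stalled on. A minimal sketch (the fractions are illustrative, not measurements):

```python
def overall_speedup(stall_fraction, component_speedup):
    # Amdahl-style model: total time = non-stalled part, unchanged,
    # plus the stalled part shrunk by the component's speedup.
    return 1 / ((1 - stall_fraction) + stall_fraction / component_speedup)

# A 90%-stalled workload gets nearly the full benefit of a 20% faster
# disk; a 10%-stalled one barely notices the same upgrade.
print(overall_speedup(0.9, 1.2))  # close to the full 1.2x
print(overall_speedup(0.1, 1.2))  # barely above 1x
```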
If you take Moore's law as applying to silicon-based processors, then there is a limit that we are within about 5-10 years of. If you apply Moore's law to *whole* computer systems, then we've got a lot of room before we hit a major problem. Even then, there are a lot of things that can be done:
Chemical / molecular systems
Biological systems
Quantum logic
etc.
Every computer on this planet is built on the von Neumann architecture. It's a good architecture, but there are others. Many are inherently faster. I will happily bet that Moore's law (applied to system performance) will be exceeded over the next 50 years. Anyone want to bet against that?
Re:Well, nanotechnology is going to be needed... (Score:2)
Well, nanotechnology is going to be needed... (Score:3)
There's Plenty of Room at the Bottom by R. Feynman [xerox.com]
IMO, he basically started many people thinking about nanotech (and this was in the '50s). There are some remarkable things coming from nanotech (IIRC, including work out of U of Michigan).
There is plenty of room. We just need the technology and sophistication to harness it. Somebody will achieve this technology (who and when are the important questions, not if ). When it happens, Moore's Law will just chug along as usual (as it always has).
Justin
Re:It goes to a new gen (Score:2)
Re:Holographic/Optical computing (Score:2)
Also, about a year before that IIRC researchers at SUNY developed 3-D optical storage with density of 2.1 GB/cubic cm. Problems were access time and expense of materials.
Re:Haven't we heard this before? (Score:2)
"I think there's a world market for about five computers."
-- attr. Thomas J. Watson (Chairman of the Board, IBM), 1943
I definitely agree with your post, except for the fact that most people don't need more powerful computers.. When a word processor opens in less than a second, that's usually as fast as things need to be (for *most* people!).. So when the demand isn't that high for faster machinery, there's not as much motivation to research faster solutions.. But, even that idea is negated when you realize that there is still a demand for faster technology in non-consumer sectors. Only time will tell what happens.
Personally, I think systems are going to shoot for minimalism over the next few years -- the biggest and baddest CPUs (even for the last few years) are complete overkill for most people. The current market division (lower-end (i.e. Celerons/K6-2 and 3s) with high-end (PIIIs/Athlons/etc.)) will probably just get further and further apart. In other words, Moore's Law will remain important for the high-end market, but become not-so-important for the lower-end CPU market. Because the high-end is becoming more and more secluded from most of society (how many people do you know that have P3 Xeons or Athlons on their desk?), Moore's Law won't even matter for most people.
-- Does Rain Man use the Autistic License for his software?
While transistor speed & size are limited... (Score:3)
I don't think we'll be using molecular computing on our desktops any time soon, nor quantum computing any time in the next few decades (you try lugging around an MRI machine, I dare ya), but all this means is that we'll have to shift paradigms to something else that's massively parallel.
Current technology relies on only a handful of processing paths through a chip being active at any one time. Compare this to our brains which are massively parallel at the cost of having lots of neurons sitting around and doing nothing most of the time. ('Nope, still don't smell anything new; nope, still not smelling anything...') The payoff comes when you want to do lots of things simultaneously, which is what happens in our visual centers, for example, when doing pattern recognition.
The harder problem (than transistor size) to deal with here is that our programming paradigm is going to have to shift to something that can take advantage of a massively parallel machine, which is really difficult. Not all problems can be made parallel, and only a few of them can be made parallel well.
On the bright side, it's mostly the hard ones like pattern recognition that work well parallel, so maybe the future is brighter than we think.
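The "not all problems can be made parallel well" point above has a classic quantitative form, Amdahl's law: the serial fraction caps the speedup no matter how many processing units you add. A quick sketch (the 95%-parallel figure is purely illustrative):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    # Amdahl's law: the serial fraction runs at the old speed, so it
    # bounds the overall speedup regardless of processor count.
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / n_processors)

# Even a 95%-parallel problem can never exceed a 20x speedup:
for n in (4, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
```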
("Computer? Commm-PUTE-errr?" "Scotty, try the keyboard.")
Re:The point was not that computers won't get fast (Score:2)
> Not only does God play dice, he sometimes throws them where they can't be seen. - Hawking
How about this one concerning parity violation:
Not only does God play dice, the dice are loaded.
(apologies for once again snipping quotes from Alpha Centauri)
Re:It has happened (Score:2)
I suggest thinking in languages other than C.
Unintended? (Score:3)
From the article:
"When you get to very, very small sizes, you are limited by relying on only a handful of electrons to describe the difference between on and off." A handful of electrons? Some analogies just don't work. :u)
A promising development (Score:3)
The silicon chip business has been a bit like the gasoline/petroleum industry, in that many interesting ideas with plenty of potential have been pushed aside or starved for funding, as long as the prevailing product continues to deliver what we're used to.
Businesses are happiest growing and changing incrementally, and it usually takes outside factors to force major change. But when that happens, almost everyone's better off in the end, because we end up with more choices.
I look forward to looking back on the latter part of the 20th Century as the primitive Age of Silicon, and wondering how we ever survived without nano/optic/bio/quantum tech...
Re:Haven't we heard this before? (Score:2)
The current situation is completely different, however. There are very real physical limits to how small you can make a silicon transistor. I think Lucent made one sixty atoms across a while back. Any smaller than that, and quantum effects prevent it from working altogether.
OTOH, molecular transistors have been made which are considerably smaller and operate by a different mechanism altogether, but so far no one has found a way to link these together into a useful circuit. This would probably boost CPUs from microwave frequencies (which we're just now reaching) to visible or even UV frequencies.
Also consider diffusion of dopant atoms within the semiconductor. Smaller transistors are more readily destroyed by this, which is one reason your P3 has a fan and heat sink where a typical 486 did not. Smaller transistors are more susceptible to heat damage (contrary to what I have heard some people say) and will probably have to be supercooled.
So, I expect speed to hit the wall in a few more years, then after a delay perhaps it will suddenly increase by several orders of magnitude almost overnight.
Also, it almost goes without saying that if the clock frequency times Planck's constant even gets close to the bandgap energy of the semiconductor, the device will be useless, as electrons are raised to the conduction band by the clock signal. For that matter, clocks themselves probably won't run at that speed for the same reason.
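That limit is easy to estimate. A back-of-envelope sketch using silicon's ~1.12 eV bandgap (the real physics of clock-signal coupling is messier than this single photon-energy comparison, so treat it as an order-of-magnitude figure):

```python
# At what clock rate does one quantum of the clock signal (E = h * f)
# carry the silicon bandgap energy?
PLANCK = 6.626e-34   # Planck's constant, J*s
EV = 1.602e-19       # joules per electron-volt
BANDGAP_SI = 1.12    # silicon bandgap, eV

f_limit = BANDGAP_SI * EV / PLANCK
print(f"{f_limit / 1e12:.0f} THz")  # hundreds of THz: near-infrared,
                                    # just below visible light
```

Interestingly, this lands near the visible band, which is consistent with the earlier comment that molecular devices might push clocks toward visible-light frequencies before this effect bites.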