Intel

The End of Moore's Law?

Lucius Lucanius writes "A recurrent theme of late: the NY Times describes an Intel researcher's paper on the possible end of Moore's Law. Soon, 'transistors will be composed of fewer than 100 atoms, and statistical variations in this Lilliputian world are beyond the ability of semiconductor engineers to control.' Is it for real this time?"
  • I'm partially skeptical about quantum computing. Granted, I haven't read a whole lot about the subject, but what I have read indicates that hardware and software are one and the same. The machine is designed for one specific problem, and cannot be reprogrammed short of being disassembled and built into a new machine. If anyone knows of evidence to the contrary, I'd appreciate a link.
  • by Stever777 ( 100575 ) on Saturday October 09, 1999 @09:28AM (#1626935) Homepage
    I'm not particularly concerned about hitting a theoretical limit to hardware power. At the moment, I'm typing on a system that would have been unimaginable 20 years ago.

    Yet it crashes often enough to be noticeable.
    It runs so slowly (a "mere" Pentium 400) that I can actually see my windows redraw.
    Booting takes 5 minutes (NT 4.0)
    Shutting down takes several minutes, too.

    Maybe hitting a limit to processor power will encourage programmers to reintroduce the concept of "knowing how to write good code." Lord knows processor speed and cheap memory have made it possible for even the best programmers to stop thinking about code quality.

    - Stever
  • by AnarchySoftware ( 2926 ) on Saturday October 09, 1999 @09:30AM (#1626936) Homepage Journal
    Has anyone else noticed that the number of times the End of Moore's Law is predicted doubles every sixteen months?
  • > Think to yourself, what is the biggest, most power-consuming thing a computer could ever do. Ever.

    Equal the human brain.

    > It will stop there

    No. WE will stop there. They won't.
  • Okay, I'm not an expert, but my understanding is that this HAS already happened, and has been going on for some time. Since the original Pentium, Intel processors have contained multiple cores of sorts, which are then used for HEAVY pipelining. The Pentium is in some sense two 486 cores welded together and madly pipelined. Likewise, the PPro has four cores for pipelining. This is why, at equal MHz, a Pentium is about twice the speed of a 486 and a PPro/PII is twice again as fast as a Pentium. So my Celeron-433 is roughly 4 times faster than my 200MHz Pentium (twice for clock rate, twice again for chip architecture). That's my understanding of it, anyway.
  • I have a little net worm program I wrote that is essentially a virtual machine that tracks other virtual machines and passes little programs back and forth. Since distributed.net came out I've been meaning to turn it into something better, but I never got around to it. I'm afraid it's probably illegal to send something like that into the wild anyway, even if all it does is fill spare cycles. *sighs*
  • The real question is: what is the driving force behind Moore's Law? If the reason for the heretofore-seen doubling time is something intrinsic in the process by which the chips are made, then the time of Moore's Law may indeed be drawing to a close. There is no real controversy about the fact that there is a hard bottom to lithography -- it can't continue to quantum dimensions. However, it seems somewhat unlikely that there is a doubling time implicit in a process of manufacture. More likely, the reason that there is a reliable pattern to processor development is that there is such a large industry behind it. Once you have a sufficient number of people working independently, statistical forces ensure that each new breakthrough will inevitably be made at an approximately constant rate.
  • There is no end. It doesn't only apply to processors but to all technology. The time can vary some and there are sometimes dead periods, but knowledge tends to grow exponentially, and with that knowledge comes faster, smaller, better gadgets.
  • by heroine ( 1220 ) on Saturday October 09, 1999 @09:33AM (#1626947) Homepage
    Remember back in the '80s when they said 20 MHz computers were pushing the outer limits of semiconductors? They thought the increasing prevalence of electron tunneling at subatomic levels would doom computers to stay below 25 MHz forever.

    Then in the early '90s they said the cost of developing faster chips was becoming a vertical line. Computers would never get far beyond 200 MHz because of the brick wall of development costs.

    Well electron tunneling became our friend. Design tools outpaced their costs. Maybe we'll find a way to turn the physical limits to our advantage.
  • Shouldn't that be CMOS' then?
  • When the limits to chip resolution are finally reached, there are numerous other areas to work on to improve performance. Among them:

    • Computer-verified optimality of design
    • Layered boards
    • Compiler technologies
    • Multiprocessing
  • by grmoc ( 57943 ) on Saturday October 09, 1999 @10:02AM (#1626952)

    Even if the Intel folk were right, and we couldn't make our gates any smaller (I bet we can, with bucky-tubes and those neato single-molecule gates), it wouldn't be the end of Moore's law.

    First of all, there is the whole bandwidth problem. We programmers have to worry about cache coherency, cache misses, time to load from disk, time to load from RAM... etc.

    These things are the major bottleneck for many applications.

    Furthermore, this "limit" would only limit single-processor designs.
    There is still a large world of parallel processing to consider. What if the CPU could execute EVERY non-dependent, non-aliased branch concurrently?

    (We'd obviously need better compilers, and probably better languages..)

    In any case, to rehash: even if the Intel engineers are right about the "gate limit", there are plenty of other advances to discover.
  • But (as far as the patents have shown), Transmeta doesn't have some new technology that's cooler than printing something on silicon. The real step is finding something besides semiconductors to make transistors out of. Or, basically, repeating what happened in the 40's when they came up with transistors to replace "bulky, expensive, heat-generating" vacuum tubes. What we need now is something smaller, cheaper, less power-consuming, and cooler than transistors that we can use in the same respects. Therein lies the future.

    -Chris
  • If the chips can't go any faster, we can just start adding processors :)

  • Yes, but IIRC (I didn't bother reading the article now, I've seen it so many times before), Moore's law does not deal with the speed of the processor, but the density of the transistors.

    Even Moore himself admits that his law can't hold forever, only he is wise enough not to put a timetable on it.

    -
    /. is like a steer's horns, a point here, a point there and a lot of bull in between.
  • Actually, it's less efficient, because you're dragging along the extra weight of the second car that you weren't before. More processors is like having a lot of smaller cars: you have to pay for the standard requirements of each, while a faster chip can be seen as a bus, since it takes fewer trips to move the same amount of people.
  • by scrytch ( 9198 ) <chuck@myrealbox.com> on Saturday October 09, 1999 @04:46PM (#1626958)
    > Do you think those people ever imagined such things as our graphing calculators, which are tons smaller and do tons more with less energy? No. They had no clue about what was to come.


    Sure they did. Most of them are still alive.

    The future of integrated electronics is the future of electronics itself ... Integrated circuits will lead to such wonders as home computers--or at least terminals connected to a central computer--automatic controls for automobiles, and personal portable communications equipment. The electronic wristwatch needs only a display to be feasible today.


    -- Gordon E. Moore, Electronics magazine, Apr 19, 1965


  • Bio and molecular level computing is next.. it will propagate and extend the life of Moore's law for another 10 years.

    After that, quantum computing of course :)

    http://www.qubit.org/ [qubit.org]
  • I don't get it. The ONLY difference between the two code fragments above was a keyword change.

    Right. However, the two keywords would have significantly different effects, and would be used in different situations by the programmer.

    Why would we have to have a new keyword when we can simply detect a parallel operation?

    Because we *can't* simply detect a parallel operation, 90% of the time. If your code calls a function that isn't in the same object file (or is in a shared library), then there's no way to know whether the for loop is parallelizable. Unless, of course, the programmer tells you that the loop is parallelizable. Hence, the new keyword. If you tried to parallelize every for() loop in existing code, you would break most of it horribly.

    There's also the fact that automatically parallelizing foreach() type code will have dramatically different performance effects on different systems; the threshold size above which one would want to incur the overhead of contacting separate threads to run code will vary depending on the CPU interconnections; a cluster would only be helpful for much larger loops than an SMP system. You could write every parallelizable loop with foreach() and hope the compiler will sort it out, but how is a compiler supposed to figure out that

    foreach(i=0;i<3;i++) {
    tinyfunction(i);
    }

    shouldn't be split among different threads, while

    foreach(i=0;i<3;i++) {
    hugefunction(i);
    }

    should be?

    Why generate values of i if they aren't used? If the compiler can prove array[] is never used in your function, it'll just drop the loop entirely. Can't get much more efficient than that.

    Um, that wasn't at all what I was talking about. I was discussing cases where every value of array[] will be used, but where they could be calculated independently.

    Functional languages, which can evaluate arguments to functions in any order (thus parallel), will often not bother to even run the loop until array[i] is needed in a fashion that can't be delayed. The best way to optimize code is to find ways to not run it at all.

    Agreed.

    I suggest thinking in languages other than C.

    Disagreed. The point of SMP isn't to demonstrate some academic feature of Lisp, it is to make programs run faster. If you have a program that requires more than a single fast CPU to run, then you *definitely* don't want to run it in a non-compiled language. And if you have a Scheme compiler whose output will run as fast as C, C++, or Fortran code, I'd like to know about it.

    That sounds antagonistic, but I'm absolutely serious; I'll probably be working next summer with fortran code on a 500-node system, and if you've got some means to let me write anything but Fortran, please take pity and let me know about it.

    All that means that unless you get a sadistic pleasure out of watching engineers shackled to Fortran, it would be nice to see more parallel programming features available in C/C++.
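
    For what it's worth, OpenMP (which already exists for C and Fortran) takes pretty much this annotation approach: the programmer, not the compiler, asserts that the iterations are independent, and a clause can keep small loops serial so thread overhead doesn't swamp tinyfunction-sized work. A minimal sketch -- the work function, array size and threshold below are made up for illustration, and it needs an OpenMP-aware compiler:

    #include <stdio.h>
    #include <stdlib.h>

    /* stand-in for the per-element work discussed above */
    static double hugefunction(int i) {
        double x = 0.0;
        int j;
        for (j = 0; j < 100000; j++)
            x += (double)i * (double)j;
        return x;
    }

    int main(void) {
        enum { N = 2000 };
        double *array = malloc(N * sizeof *array);
        int i;

        /* the pragma is the programmer's promise of independence;
           the if() clause gates parallelism on problem size
           (64 is an arbitrary threshold, not a measured one) */
        #pragma omp parallel for if(N > 64)
        for (i = 0; i < N; i++)
            array[i] = hugefunction(i);

        printf("array[%d] = %g\n", N - 1, array[N - 1]);
        free(array);
        return 0;
    }

    Without some annotation like that, the compiler runs straight into the aliasing problem described above, which is why the promise has to come from the programmer.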
  • by rde ( 17364 )
    C'mon. This idea is so familiar it's even got a name: 'the wall'. There are also a plethora of possible solutions on the horizon. Of course, it's possible that they're all crap and Quake 9 will have little over Quake 8.
    Moore himself talked [cnet.com] about this on Cnet a couple of years ago.
  • See Larry Snyder's work on ZPL at the University of Washington.

    Thank you; I'll take a look at it. I'd prefer an OO language, but anything that automatically parallelizes code, that links to C code, and that isn't Fortran, is good to hear about.

    The idea of embedding parallelism into the language is nothing new.

    When did I say it was? I mentioned HPF. Well, not by name, but I at least said Fortran had parallelizing keywords already, didn't I?
  • I was just wondering what everyone's view is on using optical computing. The ups, the downs, and the in-betweens. I also wanted opinions on the holographic storage devices that were once a big talker, but what happened? Did they ever make it worthwhile, or did solid-state hardware pretty much make it not worthwhile? I still remember hearing about the cube-drive thing that had a hologram inside that was the data being stored. I never had a really good chance to read into these technologies, so I lost track of them. Anyone have any info, or opinions to spare?


  • the speed of software halves every eighteen months.
  • After the engineers push silicon to its limits we will start to see the first gen of bio-processors or something yet to be discovered. Trust me, the CPU industry has WAY too much to lose if their market suddenly hits a wall. "Gee, the new Mac G43234 10e4 MHz is the fastest that there ever will be.... no sense in upgrading!" The only way around this is simple.... we will have HUGE CHIPS... things the size of HDs may be to come if bio processing doesn't come along quickly.
  • O.K., I know it's a nitpick, but it should be CMOS, not C.M.O.'s. Maybe they should pass their stories through /. for editing too, or get a technology editor who can spot this obvious mistake.
  • thanks for being anally retentive and pointing out the already incredibly obvious! thanks!

    let's forget the fact that there IS a large set of tasks which are parallelizable, and that in multitasking environments you get great gains with MP3s running in a thread, a compiler in a thread, SETI@home in a thread, Quake3 rendering in a thread, AI in a thread.

    but thanks for being a dick to me so you could show off painfully obvious knowledge!
  • by Signal 11 ( 7608 )
    I believe that I am speaking for the majority of the /. readership when I say:

    show me the money!



    --

  • Ray Kurzweil addressed this issue extensively in his book "The Age of Spiritual Machines." He analyzed the trend in computing power from Babbage's first Analytical Engine all the way to the present (mid 1998), and found that paradigm shifts always occurred at just the right moment in the evolution of technology. Just as we reach the point of diminishing returns on improving a specific technology, the increasing demand for the resource inevitably spurs research in other technologies to continue the exponential growth.
  • In his 1999 book, The Age of Spiritual Machines, Ray Kurzweil offers this assessment of optical computing: "The advantage of an optical computer is that it is massively parallel with potentially trillions of simultaneous calculations. Its disadvantage is that it is not programmable and performs a fixed set of calculations for a given configuration of optical computing elements. But for important classes of problems such as recognizing patterns, it combines massive parallelism (a quality shared by the human brain) with extremely high speed (which the human brain lacks)."
  • First of all, where do you get the idea that Lisp is a "non-compiled language"?

    Secondly, there are other functional languages besides lisp - ones that can be compiled very efficiently indeed.

    I don't know what you're coding that you need Fortran for, but look at the functional languages, look at the Scheme compilers, look at languages like Sather, which is VERY efficient, and pretty nice too.
  • It's that the major chipmakers stand to lose a lot of money. Currently they are set up to produce traditional chips. They have a tremendous advantage over anyone entering the market, since they already have equipment, expertise, etc. If the way chips are made changes dramatically, what's to stop a little upstart, one that realizes the coming change early on, from becoming serious competition for them?
  • by Robin Hood ( 1507 ) on Saturday October 09, 1999 @10:22AM (#1626976) Homepage
    If cypherpunks/cypherpunks isn't working, use:

    Username: slashdoted
    Password: slashdot

    Enjoy!
    -----
    The real meaning of the GNU GPL:

  • by fwr ( 69372 ) on Saturday October 09, 1999 @10:24AM (#1626977)
    You're forgetting Microsoft's propensity to throw everything including the kitchen sink into their products. When Windows 2010 requires a 1.5GHz CPU w/ 2GB RAM just to boot (come now, this is not too unrealistic), your point about the general consumer vs. state of the art does not hold up. Sure, there will be some small "appliances" that would do fine with today's high-end CPUs, but if MS and Intel have their way then ppl will be in an ever-continuing cycle of upgrades -- needing to upgrade their hardware (which contains some "new" features) in order to handle the latest monstrosity from MS, which upgrades their software to handle the few new features in the new hardware (along with a lot of useless bloat), which demands a new hardware upgrade in order to run acceptably, etc....

    I don't think the need for faster and more capable hardware will cease until computers advance to our "dream" computers. For each person what this means is different.

    What I see most likely is the current manufacturers following their current practices of concentrating their R&D on faster and faster generic purpose CPUs until they reach some sort of "wall." When this happens, they will probably branch out in two separate directions. One focused on R&D into totally new methods of producing generic purpose CPUs that break through this wall and the other on application specific designs. They will most likely need to get the bulk of their revenue from application specific designs, taking a larger and larger percentage from the generic purpose CPUs as they get cheaper and cheaper (because other companies will reach the same barrier and the competition will reflect lower prices).

    This is not necessarily a "bad" thing. I think it makes much more sense to design a chip specifically for, say, speech recognition. Sure, there is a very important software part of this, and there has been some recent work on neural net chips or systems that supposedly is in the right direction, but someone like Intel spending vast amounts of resources on a speech recognition chip (based on neural computing or not) using 5 micron casts would likely have great success in a short amount of time (2-3 years). Think of all the other application-specific areas where Intel and the other manufacturers could branch out if they ever do get to a 5 micron technology. Perhaps "visual" recognition, handwriting recognition... oh, here's a big one: language translation. The possibilities are endless, with a matching revenue stream. I could see someone spending $1000 or more for a generic language translation unit to take with them on vacation (I certainly would, and there's a heck of a lot of people in this world).



  • Even if Moore's Law were to slam into a wall (highly unlikely) then so what? The next paradigm shift is the *macro*cosm, as our microprocessors *connect*. This "network effect" squares the sum value of our internetworked transistors. Exploding bandwidth frees abundant info to flow between 'em. These further enforce the law of increasing returns (and route around material laws based on diminishing returns (which attempt to enforce artificial scarcity (ie: patent, copyright, closed-source (which makes me smile;)))) So transistors shrink in half every 18 months. Big deal! Optical bandwidth doubles every 12. Wireless doubles every 9. Twice as fast as Moore's Law!

    Moore's Law: The power of computer processors doubles every eighteen months
    Metcalfe's Law: The power of the Internet is equal to the square of the number of its nodes.
    Gilder's Law: Internet bandwidth will triple every year for the next twenty-five years

    Anyone care to do the math?
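
    Taking those doubling periods at face value (18 months, 12 months, 9 months; I'm just plugging in the figures above, not vouching for them), the math over a decade looks roughly like this:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double years = 10.0;
        /* doubling periods in months, as claimed above */
        const double moore    = pow(2.0, years * 12.0 / 18.0); /* transistors */
        const double optical  = pow(2.0, years * 12.0 / 12.0); /* optical BW  */
        const double wireless = pow(2.0, years * 12.0 /  9.0); /* wireless BW */

        printf("over %.0f years: transistors ~%.0fx, optical ~%.0fx, wireless ~%.0fx\n",
               years, moore, optical, wireless);
        return 0;
    }

    Call it 100x, 1000x and 10,000x: the network terms swamp the transistor term, which I take to be the point.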
  • Hey, that's actually a useful E-mail address to use! I just checked the WHOIS database, and domain.com is listed as "example domain", so you can be assured that mail sent to (anything)@domain.com will never (well, not within the foreseeable future) wind up filling up some poor guy's mailbox. Indeed, it's probably redirected to /dev/null.

    Thanks for the tip. I'm going to start using that one as well!
    -----
    The real meaning of the GNU GPL:

  • It just occurred to me - all this, just so the government can spy on my Reversi scores? ;)

    (It was something I noticed about the first 486 I ever played on - a DX25, them were the days, where it was so fast that in the space of a click, reversi thought I was double-clicking the square and told me to go elsewhere. I had to play it with the keyboard instead!)
    Don't spoil my fun. I know it was a buggy mouse driver, but it was fun.
  • The "cube" things you are talking about were so -called "bubble memory". they were about an inch on a side and IIRC (though I'm not too sure) they use magnetic domains to store data.

    At the time it seemed pretty amazing that a 1-layer slab of these a foot square could hold the entire Encyclopedia Brittanica.

    There was some disadvantage (I can't remember what), but it seems that research into that technology was abandoned as better technologies became available. For example, DVD achieves a higher information density.

    Consciousness is not what it thinks it is
    Thought exists only as an abstraction
  • Really?
    The Moore's Law that I was taught was that compute-power doubles along an exponential curve

    (in this case every 18 months)

    You're saying that Moore's law is that transistor-gate speeds double in that timeperiod??

    I think some research is necessary...

    whatis.com thinks Moore's law is:
    Moore's Law is that the pace of microchip technology change is such that the amount of data storage that a microchip can hold doubles every year or at least every 18 months. In 1965 when preparing a talk, Gordon Moore noticed that up to that time microchip capacity seemed to double each year. The pace of change having slowed down a bit over the past few years, the definition has changed (with Gordon Moore's approval) to reflect that the doubling occurs only every 18 months.


    which is essentially what you said.

    Maybe there should be a Moore's variation which states that compute-speed doubles every X... It's a more important indicator than gate density, though not as easily measured.
  • Doesn't Moore's Law state that Microsoft apps double their bloat every 18 months? This is still holding true.

  • Or a way to dynamically change the instruction set of a processor so that one "generic" CPU is both a "generic" CPU and application specific CPU at the same time. A lot of the work that generic CPU's do, and will be doing more and more of because they are so fast and have the raw power to do it, is finding solutions to application specific problems. Because generic CPUs are so fast and cheap they are as cost effective as application specific CPUs in certain circumstances.

    Look at the "WIN printers" for instance. It was thought that it would be cheaper to take the (relatively cheap) intelligence out of printers and put it into software drivers that are run on a generic CPU. Whether it is true or not (that it is more cost effective) is beside the point. The point is that we are asking the generic CPU to do things that it normally would not, requiring large amounts of processing power due to the timing and other issues.

    If Transmeta is working on what is rumored, using current chip manufacturing techniques and not some newfangled method, then both the generic CPU instructions and the application-specific instructions could be executed on the same chip. Since the application-specific instruction set would be tailored to the applications, presumably fewer cycles would be needed to solve the same problem. This would reduce the need for faster and faster CPUs, would it not? Would this not be a new approach to the "problem"? Maybe not finding a new way to make a faster same-old generic CPU product, but finding a new paradigm that used a single unit to do both generic and app-specific "stuff."

    Think about it!
  • Actually that is Bill's law, and it is double every 12 months.
  • For some real physics on this subject read the paper on The Ultimate Physical Limits of Computing at http://xxx.lanl.gov/abs/quant-ph/9908043
  • Maybe there should be a Moore's variation which states that compute-speed doubles every X

    One variant I've heard is that the performance/price ratio doubles every 18 months. A few years back, I saw some data that indicated that Intel's marketing department took this seriously anyway - either trying to double speeds or cutting prices when speeds weren't increasing that fast. A corollary is that "Moore's Marketing Law" could be met with SMP systems, if Intel can't scale the CPU speed fast enough over an extended period of time. (See the dual Celery people.)

    (I know there are some problems with this, namely that performance/price doesn't scale linearly at any given point in time. Xeons and 600MHz CPUs carry a pretty huge premium.)
  • Your fundamental argument here is incorrect. Although you make a good point which in itself is not incorrect, neither is it the correct response. What the original poster is referring to is the aggregate quality of code, and the focus on that quality.

    This pertains to large companies: if there were a set limit to speed, the effective implementation of new-exciting-revolutionary features would take a back seat, as what becomes possible is limited by speed. This does not entail the end of computing evolution, however, as the concentration of coders, ideas-people, marketers, consumers, managers and boards of directors would focus on the next step in the evolution of computing into the next century.

    Case in point: a lot of coding books 4+ years old (and up to this day for some) emphasise sitting down with pen and paper to work out an algorithm. This is a remnant of the "hardware era" of the past, where computer resource time was much more expensive than human resource time. Now, at the start of the third millennium, the opposite is true.

    The future is "information applications". At the risk of sounding very much like a marketing hypester... the future will move to offering the average consumer what they need, want and understand. What they do not understand is the assembler code for the x86 architecture. What they do understand is the $ sign on e-commerce websites. This is the future as computers become even more pervasive and ubiquitous in day-to-day tasks.

    Hence, naturally, if software becomes the most important cash cow for companies and firms in the future, where do you think the research dollars are going to go? On new software methodologies, or on creating a new thermal compound to reduce the heat of chips? Sure, this does not discount the fact that hardware is the fundamental base upon which technology is built and will be in the future, but the quality of code is horrendous in its current form. Money goes to where money is needed.

  • doesn't the G3 do this already to some extent?

    The G3 does not. The G4 will soon.
    The G3 is really pretty awful at multiprocessing; one of the modes that you kind of need for good MP isn't functional, or something. I think it works if the software organizes the MP-ing, but the hardware can't handle it alone. I dunno; either way there's something that makes MPing a G3 not worth the bother.

    The G4, on the other hand, not only SMPs but does multiple-coring beautifully, has been doing so in the lab for months, and will be doing so in shipping computers relatively soon. When the multiple-core G4s do start showing up, it should be truly impressive.

  • by Anonymous Coward
    People are investigating this and other possibilities... the architecture community (by and large) has accepted that we can only look at ILP for so long, and will soon have to look at thread-level parallelism... What you are describing will probably occur someday... in the meantime, SMT (simultaneous multithreading) is looking like a pretty good idea... read the papers by Dean Tullsen at www-cse.ucsd.edu if you care (and the new one on caches, since I had a hand in that one :P)
  • Mine spends about 20%, and I don't run distributed.net :)
    (and I would love for someone to invent a technology so that I could save up that 20% for during compiles, etc)
  • Or maybe that's just their excuse to explain their failure when compared to the Athlon and SledgeHammer.

    poor bastards...

  • Everyone is so worried about hitting the proverbial wall. As much as it would be a horrible thing, it would force us to re-examine how and what we use the already abundant power of our computers for. Understandably, we can't have all electronics running at 99.999999% usage; it would just be too difficult to get everyone hooked on one cause. But you never know. In addition to all this, if we were to hit a barrier, engineers would be forced to perfect the technology. Programmers would be less concerned with rushing out a product; the computers wouldn't be going anywhere fast and only market pressure would remain. In essence, hitting the wall would, I hope, force us to do what we do better.
    That's just my opinion, don't take it as dogma.
  • During the past year I bought a brand-new blue G3 and an exactly ten-year old SE/30. The SE/30 was $5 at a yard sale. It's really something to compare the ten-year stretch. I had that same thought of "what would sit next to the blue G3 10 years from now?"

    I think the industry is very conscious of the wall. Chips are being done in copper, and there's a lot of talk of silicon-on-insulator ... new special-purpose designs like Altivec, basically building a DSP onto the CPU. There was talk of putting the whole chipset on the CPU so they can communicate faster. A few years ago, they were just making bigger, faster, more transistors. I guess what I'm saying is that we get faster at the same pace, but it's taking more and different methods to do it.
  • Gnome and KDE aren't really in the same class as WindowMaker. Gnome doesn't have its own window manager (officially) and KDE doesn't require its window manager to be used to run KDE apps. Needless to say, I run the GNOME panel WITHOUT the main menu foot. I use it for the pager only. I run KDE apps on occasion, though, the KDE Advanced Editor being the primary one. Just a sidebar ;)
  • I don't think it's possible, that is copyrighted material.
    It can be cited though.
  • by Anonymous Coward
    The common misconception is that the speed or some such hogwash doubles every 18 months.

    What this "Law", which Mr. Moore said after looking at reletively little data on the matter, just for the sake of saying something, was that transistor density will double every 18 months.
  • Maybe hitting a limit to processor power will encourage programmers to reintroduce the concept of "knowing how to write good code." Lord knows processor speed and cheap memory have made it possible for even the best programmers to stop thinking about code quality.

    It's not so much a matter of "quality" as it is a matter of time. In a /. article a week or so ago, there was a big story about "Why Software Sucks" that basically said, "programmers don't have enough time to write everything well." When you can code an inefficient, memory-hogging algorithm in 2 hours while a streamlined crashproof sucker takes 2 days, guess which wins out. :-[ And does anyone really want to return to the days of bumming single instructions out of assembly code? The only case where it'd be worth the effort is in kernel design or CPU-hog tasks like distributed.net IMHO.

    Maybe Moore's Law should contain an addendum or two:

    1. The time spent by programmers on writing good code halves every 18 months.
    2. The number of idiots using computers doubles every 18 months.
    3. The stupidity of managers and marketroids is proportional to e^(x), where x is the number of 18-month periods under consideration.
  • > I think he was referring to automatic layout tools.

    >Modern chips aren't designed in manual tools anymore.

    Actually, I was talking about both. These days we still design high-performance datapaths by hand, RAM too, and in particular the standard cells that our automatic synthesis and P&R tools use as building blocks.

    Design of these is done as a stack of 2D layers (metal on top with silicon at the bottom). It used to be that capacitance extraction was a pretty simple thing to do, and not that important; a reasonable approximation was all that was needed. These days extraction is much more difficult: it has to be done in 3D and includes all the interactions between a wire and all the other objects that come close to it along its whole length. Time was, you just did capacitance to the substrate and got a good estimate; nowadays you have to worry about edge effects and wire delays (distributed RC delays along the wire, because the wires have got so thin they have appreciable resistance; this is the real reason copper is important: the RC delays are smaller, not just that the wires can carry more current).
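
    (For reference, the usual first-order Elmore-style estimate for that distributed wire delay, with r and c the resistance and capacitance per unit length and L the wire length, is roughly

        t_{wire} \approx \tfrac{1}{2}\, r\, c\, L^{2}

    so delay grows with the square of wire length and directly with r, which is why skinny resistive wires hurt so much and why copper's lower resistivity buys real delay improvement, not just current-carrying capacity.)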

    Certainly we don't have tools that take simple 3D effects into account while working (even many routers can't take the time to do full extractions while they run - they have to do approximations in order to work at all).

    But what I was getting at was that we don't have layout tools or P&R tools that could be used to automatically place gates on top of each other (building stacks of silicon stuff rather than one set of layers of silicon stuff and lots of wires); there are so many sorts of analogish leakage issues that would kick in that it would be a nightmare.

    I'm not an expert on quantum computing, but it seems that those sorts of structures, where electrons are contained by quantum effects rather than by wires, poly and diffusion, may be more amenable to stacking, since the quantum barriers could provide a sort of 'insulation' between devices in multiple dimensions that might not be possible in bulk silicon.

  • ...and Linux advocacy (here) is about Linux-on-the-Desktop.

    Humm... if it became more expensive to produce faster chips, wouldn't you want to use them only where you REALLY needed them? (read: big servers)

    The desktop is dying... and MS knows it! But, we sit and try and compete with what MS has already done...

    ...ok, enough ranting.
  • by Anonymous Coward
    Interesting point, but...

    I don't want a sewing machine. I buy things "pre-sewn"... I don't think there is much similarity between the idea of moore's law and sewing machines. Mainly, this is because a sewing machine is a device that requires a lot of skills to use. Almost any idiot can use a computer (not necessarily to its fullest). And, as computers get faster, the number of idiots who can use them increases (I'd say that it roughly doubles every 18 months). However in order to be able to actually get one's money's worth out of a sewing machine, you'd have to spend a lot of time making your own clothes, or making clothes for others. A better analogy could be had by comparing microprocessors to steel, since they are both crucial "ingredients" in many designs. Actually, a similar phenomenon to Moore's law occurred when steel first became popular. As better and better techniques for producing steel became available, more and more industry was able to use steel as an input to production. Eventually, somebody invented the current way that steel is processed... since then, people have lived with steel the way it is, arguably without creating a shock to the economy.

    If technology gets to the point where I can do what I currently use my computer for without using a computer, then I'll be quite surprised. The only thing that could replace computers is IMO better computers. Of course, if the speed of hardware per dollar innovation were to slow, then we'd probably see higher prices for top-end hardware, but other than that, I'm sure the economy would shift so that more resources (people) were dedicated to optimizing code and compilers and architectures, rather than trying to squeeze more speed out of silicon.

    Moore's law is more of an economic phenomenon than we realize. As long as there is demand for something, somebody finds a way to sell something relatively similar to it.

    Of course, I want a computer that Moore's law predicts will exist in a decade, but I'm not going to sit around holding my breath until it exists... I'm going to make the best of what I have. To think that Moore's law has to do with the number of transistors crammed into a piece of silicon is kind of shortsighted. Furthermore, if there were enough demand for high-quality low-priced sewing machines, they would surely exist.

    It's funny that the end of Moore's law is seen as such a negative. I mean, even if it were to happen for microelectronics, somebody would figure out how to push the frontier on another level (such as software, parallel procs, and generally better designs throughout).

    The idea that software development has to proceed in such a linear (non-Moore'sLawish) fashion is silly. Who knows, maybe the parallel and distributed (to use the buzzwords) nature of OSS development will ignite exponential growth (if it already hasn't) in the software industry.

    I guess this means less is Moore.

  • It gets to a point where shrinking can no longer be the best option. Sure, you can (eventually) make chips with pathways of 1 atom in dimension. But think of this: if you were to take EVERY circuit (chip/PCB/whatever) in an office (for example: the Pentagon), and combined it into 1 device, that device would still be SOOO small that you would still need a microscope to see it. So, let me ask: what happens if you open up your case (that has this tiny CPU in it), and you sneeze? Oops, there goes the CPU... But, in contrast, what if they did make chips so small that pathways were single atoms? What if they could efficiently make a chip that is big enough to see without the use of a microscope? The thing would have like 10^20 or more transistors!!! But here is my question: does it get to a point where having lots of transistors really doesn't matter?
  • I think that the IBM POWER3 might be what you are looking for. You can take a look here [ibm.com].
  • I emailed John Markoff. Apparently, the copy desk at NYTimes fubared this one -- it's not his fault. (I wrote him a nice friendly email, and he responded fairly quickly, actually.)

    --Joe
    --
  • It's not about AMD or Intel or the G4 or multiple CPUs or better compilers. Memory comes in chips too, and if there is a limit to the number of transistors you can put on a chip, there will also be a limit to how much memory you can have. Only better process technology can lead to more memory on a chip, not fancy architectures or compilers.
  • <RANT>

    ...for science and engineering in general. We ought to reserve the designation of Law for fundamentally sound and mathematically provable phenomena, like Gravity and Motion. Even Relativity, which has more experimental basis and practical application than Moore's "Law," only rates a designation of Theory. I got sick of this in my Computer Architecture class, too. Why don't we get back to real science instead of playing prophet with Moore's Rule of Thumb?

    </RANT>
  • by wa1hco ( 37574 ) on Saturday October 09, 1999 @11:24AM (#1627016)
    More and more of the economy now assumes a sustained exponential growth of IC-based products. When the growth rate begins to slow, it WILL create significant disruption. A number of points: first, the rate of growth WILL slow sometime, maybe caused by fundamental process limits, maybe by increasing costs associated with the capital equipment to manufacture the stuff, or maybe because it becomes too large a percentage of the GNP (saturation).

    IC manufacture is capital intensive. Someone told me that an Intel fab plant runs $5-8B today and doubles every generation. Wall Street is pumping money into silicon because of growth. When growth slows or stops it will have an enormous impact on investment flow. Then the lack of investment will slow progress, slowing the need for development. It's all interrelated, coupled and highly amplified by the exponent of Moore's Law.

    When it slows, the whole attitude about products will shift. Today, if you want a really good sewing machine or small lathe you get a good used one built 1930-1950. When Moore's law times out, the investment in plants will slow to a trickle. Fab equipment will wear out. Students will avoid the dying industry. (What's the silicon version of the rust belt?) People won't buy a new computer because it's not as good as the old one. Software development will begin to focus on quality and then crash as the market saturates (no need to buy new SW once it works and you have to run it on the same old machine).

    When today's Moore's-law-based economy crashes, it will create massive dislocations. Imagine Silicon Valley with a New England mill town look, or Pittsburgh/Buffalo/Cleveland circa 1970.

  • Camcorders are going digital. VCR's are going digital. TV's are going digital. These are very much consumer devices and they're going to create/store/move a lot of bits around. The consumer is going to want to control/store/edit lots of bits in the future. Word processing is already starting to give way to home video editing. The iMac DV is marketed as a digital camcorder accessory.

    Saying "who needs more power for their word processor?" is the same as "who needs more than 640k?". It assumes too much. You can still get a "wait" cursor just applying Photoshop filters, and the average consumer is already editing photographs on their computers.
  • A lot of people who really defend Be point out that SMP is very much the coming thing, and BeOS itself is completely SMP-capable. You write your app for BeOS and add more processors and the OS takes care of it quite efficiently. If BeOS gets its killer app (real-time video processing or some such) around the time that people are looking at SMP to get an edge, they will look pretty good.
  • So far, most posts have focused on ways around the hardware walls. They're probably right; eventually, someone would find a way around the limitations. However... maybe a temporary stall-out in processor speed increases would force an increase in efficiency on the software side. Fewer people could get by w/o reading Knuth. More people would take an interest in efficient algorithms & code re-use. Software would be less bloated and slow.

    Maybe, anyway.
  • But if you had to move 10,000 people (instructions) from one location to another (process them), wouldn't two five-seater cars at 60mph be pretty much as good as one five-seater car at 120mph?

    Better gas mileage, too.

    What the hell am I talking about?
  • Current transistors need a minimum number of electrons to operate; otherwise the gates can get flipped by background "radiation".
    However, IBM, Conexant, and others are developing transistors that, in the simplest of terms, are two "back to back" transistors. This will help minimise "leakage", and they will be able to use fewer electrons to flip the gates: about 10 electrons or so. Yes, that's right, 10.
  • While manufacturers have plants that are currently set up for making current chips, they often build new plants from scratch to make newer technology. According to a guy from Intel who gave a talk at my work, it is cheaper to build a new billion-dollar plant than to refit an old one. Since Intel will build a new $1 billion manufacturing site every year at least (according to the press releases on their website), they have quite an advantage over a garage chip maker. That's not to say that a patent on the new technology wouldn't make things hard for them, but my estimation gives Intel the advantage.
  • When cars were first invented, it was widely accepted that the human body would be crushed if it were accelerated to 60mph.

    People talk now about how fast you can make an airliner go, and others talk about just shooting up into orbit and waiting for the earth to move Tokyo to where London was beneath you. One way in one hour.

    Then there are the transporters ...
  • I do research on decentralized processors, the technology you mention. My thesis advisor likes to use this term, but please keep in mind that there is no common term for this type of architecture.

    Sun's new processor, MAJC, is doing this; and Alpha 21364 will, too. Alpha 21264 already employs a similar technique.

    You are completely right in thinking that this is the next step. The trend in the last 3-4 years has been in this direction. Decentralization is an active research topic in many institutions and processor companies in various forms: Multiscalar processing, superthreading, etc. You might want to take a look at the Multiscalar pages at Univ. of Wisconsin [wisc.edu], where some of the pioneering work has been done.

  • Notwithstanding the fact that there are still 7 binary orders of magnitude to go to get from 100-atom transistors to single-atom switches, there is a new phenomenon that will become dominant, starting right now: multiple independent CPUs on a single chip. Right now, the focus is on multi-mega-transistor behemoths like the K7, 22 million transistors, but wouldn't you rather have 22 one-million-transistor CPUs in your computer? I know I would.

    The enabling technology for the shift to SMP is none other than Linux: its cross-platform nature allows us to switch easily to alternative, more transistor-efficient architectures such as ARM (n.b.: also offered by Intel, make of that what you will) and at the same time provides the multi-CPU support we need, without costing an arm and a leg :)
  • This has been tried before AFAIK, but I couldn't point to anything definitive on it. The problem is figuring out how to divide up the different instructions, but I would think you could have two cores running at 100MHz on a 200MHz total system bus, which would probably solve the bandwidth problem. Dividing up the code between the processors could most likely be done in the compiler, which would multithread the program. With the two cores you wouldn't have double the clock speed, but you would have double the MTOPS (millions of theoretical operations per second).
  • Well, wouldn't it HAVE to stop somewhere? I mean, you can't make a semiconductor out of a single atom... I mean, despite all the technological advances, you still have physical limits to work with. Like, you can't see a group of 5 atoms with the naked eye. And no amount of technological advance will change that... so won't they hit a physical barrier somewhere, where the Laws of the Universe will trample all over them?
  • It's happened to a great extent already, anyway. Go hunt down the Tom's Hardware review of the Athlon for a good, understandable discussion of their architecture - the main reason they're getting up to 50% better FPU performance than Pentium IIIs is the fact that they've got 3 concurrent execution units there instead of 2.

    But there's only so much of that you can do in hardware. What I'd like to see is multithreaded software produced *automatically* by C/C++ compilers when possible, the way high end Fortran compilers do for multinode supercomputers today. So instead of writing

    for(i=0;i<1000;i++) {
    array[i] = function(i);
    }

    which generates i, executes the function of i, generates the next i, executes the function with that next i, etc; you might write

    foreach(i=0;i<1000;i++) {
    array[i] = function(i);
    }

    which will generate as many values of i as you have CPUs, execute the function for each value of i on a different CPU, then generate the next set of i's, sending them to threads as necessary to keep every CPU busy.

    We'd have to have a new keyword (I like foreach) for this, since the overhead involved would make it counterproductive in many circumstances and would break code (anything where function() isn't reentrant, or where the i's are assumed to be evaluated in order) in others.
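
    To make the idea concrete, here's roughly what that foreach would have to expand to, hand-written with POSIX threads (the static chunking, thread count and dummy function() are arbitrary choices for illustration; it only pays off when function() does enough work per element):

    #include <pthread.h>
    #include <stdio.h>

    #define N        1000
    #define NTHREADS 4

    static double array[N];

    /* stand-in for the real per-element work */
    static double function(int i) { return i * 0.5; }

    /* each worker fills one contiguous slice of the index range */
    static void *worker(void *arg) {
        int t  = (int)(long)arg;
        int lo = t * N / NTHREADS;
        int hi = (t + 1) * N / NTHREADS;
        int i;
        for (i = lo; i < hi; i++)
            array[i] = function(i);
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        int t;
        for (t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, worker, (void *)(long)t);
        for (t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);
        printf("array[%d] = %g\n", N - 1, array[N - 1]);
        return 0;
    }

    Which is exactly why you'd want the compiler (or a keyword) to do it: none of that boilerplate is hard, but it's just as easy to get the non-reentrant cases wrong by hand.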
  • "
    This wouldn't be SMP, even the motherboard wouldn't really know there were multiple cores on one processor.
    "

    This is essentially how most of the current
    processors already work. A Pentium II processor
    is capable of executing _multiple_ instructions
    on the same clock. The motherboard/OS don't
    even know there's effectively more than one
    processing unit present.

    What you really want is a nice threaded OS and
    applications, coupled with known multiple CPUs
    so that applications can be executed in parallel
    on the thread level, rather than on the
    instruction level (or looking at it from the
    programming perspective, it is much more fruitful
    to parallelize your app deliberately, rather
    than let the hardware try to do the best it can
    for you.)
  • Yes, we have, but those predictions were based more on user needs/wants. This wall is an actual physical barrier. I mentioned in a previous post: wouldn't you need a certain number of atoms just to do basic functions?
  • No. It is a plural, not a possessive.

    ==
    Plurals of letters, signs, symbols, figures, and abbreviations used as nouns are formed by adding s or an apostrophe and an s. The omission of the apostrophe is gaining ground, but in some cases it must be retained for clarity, as with letters.

    Margaret Shertzer: The Elements of Grammar
    ==

    John Markoff replied to my e-mail: "sigh. of course. at least it's not my nit...the copy desk made the change and I didn't see it until it was in the paper.... thanks..."

  • It's not so much a matter of "quality" as it is a matter of time. In a /. article a week or so ago, there was a big story about "Why Software Sucks" that basically said, "programmers don't have enough time to write everything well." When you can code an inefficient, memory-hogging algorithm in 2 hours while a streamlined crashproof sucker takes 2 days, guess which wins out.

    Guess which version would have been more maintainable? Guess which version is more robust? Guess which version wouldn't have to be rewritten next year because some software that depends on this algorithm has changed and now the performance is finally unacceptable?

    The reason programmers don't have enough time is that they are spending so much of their time on rework and bug fixing caused by rushing through solutions. The problem is short-sighted management that encourages programmers to get to the finish line at whatever cost.

  • by Signal 11 ( 7608 )

    The problem that many of my detractors (who Should be Obvious to you by now). Is that They have more problems with, ( of course ) the subnet of my presentation ( table 1 ). Needles to say, Nevertheless. That they more than Likely do not comprehend ( of course ) the Fundamentals of the I'm a Fucking Retard Rule ( Needless to say, similar to my Octet rule ).

    Never the less, it should be Obvious why I didn't ( or should i say, Couldn't ). Needless to say, pass the fucking Cisco exam because my head ( or never the less, what is on top of my head ) is so far.

    Just imagine! Shoved up my ass, that this paper should be my addmitance paperwork out of computer ( or network ). Consutlting/IT Professional, and into scooping M&M's for Dary Queen.

    if you read this hampsters paper all the way thru.. take off two points. Take off 3 if you printed it out to read it later.

    --

  • Check out Cilk [mit.edu] for what seems like quite an effective way to parallelize C.
    --
  • Screen resolution will only go so high. Monitors, currently displaying around 72dpi (higher on LCDs), will get up to optical resolution and then stop.

    That means that the 2d card will only have to be so powerful, the same with the 3d card.

    Sound cards will eventually be able to generate realistic sound that includes the full range of our hearing. Then they don't need to get more advanced.

    I'm not saying consumers will only need this or that, I'm saying that humans will only be able to come up with this or that. After a while, they won't be able to figure out anything more to do with computers. (This will probably end at something like re-creation of worlds, aka massive holodecks.)

    Think to yourself, what is the biggest, most power-consuming thing a computer could ever do. Ever. It will stop there.

    Perhaps it will be miniaturized, but applications will stop eventually when there are certain limits, like optical resolution or the range of the human ear. Eventually there will be limits like that for every application.

    Moore's law will not stop. We will just keep finding newer processes to do things. (Intel saying it will stop: well, hell yeah, you can fry pancakes on PIIIs and probably roast a cow on a Merced.) Motorola isn't having many problems, on the other hand... Microsoft could be a major part of this; you shouldn't f'ing need a PIII 400 or whatever for the operating system.

    Apple's only restriction on OS compatibility is chip architecture (you have a PowerPC, it works; 68k, it doesn't. This is a natural limit; it's practically like trying to install the MacOS on a PC: wrong chipset.)

    Hey, we haven't even tried optronic computers or anything yet; maybe those will reach our needs. There's still a lot that can be done. Just ignore Intel; they're just overclocking their chips until they melt.
  • What I think will happen in the near future is somehow making multiple cores on one processor work seamlessly, so you effectively have a 1400MHz Athlon by putting 2 700MHz cores on one chip. I have no idea how to solve the problems with bus bandwidth and dividing instructions between processors, but doesn't the G3 do this already to some extent?

    This wouldn't be SMP, even the motherboard wouldn't really know there were multiple cores on one processor.

    Someone with some experience in this field want to tell me why this hasn't already happened?
  • I have to disagree. I have a K6-350 at home here, and I've been programming now for several months. I try to avoid sloppy coding - partly for performance, but mostly because 'good enough' isn't in my vocabulary.

    I don't believe that if computers suddenly hit a ceiling in terms of max performance that people who code sloppy would stop. It's just like any other profession - some people do it to the best of their ability, and some people make it 'good enough'. And on a related topic - guess which methodology most linux programmers embrace. :^)

    --


  • "Who will ever need more than 640K?"

    For decades people have made predictions contrary to Moore's law, and each time they were wrong. I can't say for certain that the doubling will continue ad infinitum, but the end is definitely not near. In 1960, 1970, 1980 & 1990, someone said that their decade would be the end of Moore's law.
    Excuse me, but... BullS**t

    The nature of computing is that bigger apps require faster machines, and faster machines can run bigger apps. Most long-term predictions in the computer industry far underestimate the power of human ingenuity when faced with an ever more demanding consumer. In the 1950's someone (who was it?) said there was a world market for maybe *5* computers...
    yeah... right

    (BTW, can /. mirror NY Times stories? That registration is a bitch)

    Always Slashdot, Always CokeBear, Always Coca-Cola
  • Higher processor speed, IMHO, has done wonders for code quality. With higher speed, garbage collectors become feasible. Garbage collectors go on to make it a lot easier to design robust systems, because you end up having no leakage (given a good algorithm) and no accidental freeing of memory that is in use.

    In addition, more speed has allowed us to make modular and OO designs that were traditionally "too slow". It's not that OO is necessarily slower than non-OO (there are too many benchmarks showing either camp is faster to know what is really going on), but good design often has overhead. I make speed sacrifices for maintainability all the time, with the justification that computers are fast enough. And, IMLE, they are.

    Because of this, I can see the exact opposite happening: when computers reach a limit to their speed increases, quality in code will go down. As more features are added to a program, more and more sacrifices to good design will be made for the necessary speed.
  • > Even Moore himself admits that his law can't hold forever, only he is wise enough not to put a timetable on it.

    Moore wasn't some digerati pundit making grandiose prognostications, unlike the current crop of wannabe net.prophets. He only predicted that a trend toward doubling the number of transistors on a chip every 18 months would continue for at least the next 10 years -- that was over 20 years ago.

    Actually, didn't the move to RISC actually result in FEWER transistors?
  • I agree. Intel have a serious interest in producing faster and faster chips; if they ever reach a real limit, they'll have to do something. There's quite a lot they can do, so Intel shareholders needn't worry yet.

    If you analyze the performance of a modern PC (Intel Pentium III, AMD K6, etc.) in 'normal' use, you'll see that the processor spends most of its 'busy' time (90%+) waiting for cache misses. A stupid interpretation of this is that a 50MHz processor would give you the same performance as a 500MHz chip. That isn't entirely true, but it isn't a bad guess. (There's a rough cache-miss sketch at the end of this comment.) I run a processor-bound Oracle database on x86 architecture. At peak loads the (single) processor reports that it is very busy. What it is really doing is waiting for memory to respond. The system is designed not to need to swap to disk. The users think the system is very fast.

    If Intel (or someone else) could improve memory performance to the level required by modern processors, we would see phenomenal improvements in overall system performance. The x86 machine I am using at the moment is seriously disk-bound. If the disk performance were 20% better (which I could achieve by buying better RAID), I would see something very close to a 20% performance improvement (it's *very* disk-bound). If I fed it another gigabyte of memory, I'd see a miraculous performance improvement, because it would stop being disk-bound and become memory-bound.

    If you take Moore's law as applying to silicon-based processors, then there is a limit that we are within about 5-10 years of. If you apply Moore's law to *whole* computer systems, then we've got a lot of room before we hit a major problem. Even then, there are a lot of things that can be done:

    Chemical / molecular systems
    Biological systems
    Quantum logic
    etc.

    Nearly every computer on this planet is built on the von Neumann architecture. It's a good architecture, but there are others, and some are inherently faster for particular problems. I will happily bet that Moore's law (applied to system performance) will be exceeded over the next 50 years. Anyone want to bet against that?
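
    Here's the rough cache-miss sketch mentioned above: a tiny, hypothetical C microbenchmark that reads the same number of array elements twice, once sequentially and once with a large stride. On most machines the strided walk is several times slower even though the instruction count is identical -- the processor is "busy" but really waiting for memory. The array size and stride are arbitrary; shrink N if memory is tight.

    /* Rough sketch of a cache-miss microbenchmark (illustrative only;
     * real measurements need more care with timers and optimizers). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (8 * 1024 * 1024)    /* 8M ints, well beyond any cache */
    #define STRIDE 4096            /* jump far enough to miss the cache */

    static volatile long sink;     /* keep the optimizer from deleting the loops */

    static double walk(int *a, int step)
    {
        clock_t t0 = clock();
        long sum = 0;
        int start, i;
        /* Both calls touch exactly N elements; only the access pattern differs. */
        for (start = 0; start < step; start++)
            for (i = start; i < N; i += step)
                sum += a[i];
        sink = sum;
        return (double)(clock() - t0) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        int i;
        int *a = malloc(N * sizeof *a);
        if (!a) return 1;
        for (i = 0; i < N; i++) a[i] = i;

        printf("sequential: %.2fs\n", walk(a, 1));       /* cache-friendly   */
        printf("strided:    %.2fs\n", walk(a, STRIDE));  /* mostly misses    */
        free(a);
        return 0;
    }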
  • Nanotech is not about making big things small. People watch Star Trek or read The Diamond Age and assume that nano is about little robots with little CPUs and little mechanical parts. And that just ain't so. We already have tiny molecular cutting devices: they're called enzymes. Yep, it's the squishy science of chemistry, where we deal with weird funny-smelling liquids rather than neat shiny MicroMachines. The kind of nanolithography Feynman talks about is rather similar to how we make chips now, and the chips are infinitely more flexible than a micro-encyclopedia. There's not nearly as much utility in putting all the manufacturing effort toward creating single-purpose micro-devices when you can create a general-purpose one like a CPU.
  • by Jerenk ( 10262 ) on Saturday October 09, 1999 @08:42AM (#1627105) Homepage
    I think Richard Feynman said it best many years ago:
    There's Plenty of Room at the Bottom by R. Feynman [xerox.com]

    IMO, he basically started many people thinking about nanotech (and this was in the '50s). There are some remarkable things coming from nanotech these days (IIRC, some of them out of the University of Michigan).

    There is plenty of room. We just need the technology and sophistication to harness it. Somebody will achieve this technology (who and when are the important questions, not if). When it happens, Moore's Law will just chug along as usual (as it always has).

    Justin
  • Wouldn't you want a processor that looks like a Borg Cube? :)
  • Last I heard, Lucent could make CD-sized discs holding 25 or 50 Gigabytes of data. They wanted to push it up to 100GB before they even tried to work on a marketable product, though. That was a year ago.
    Also, about a year before that IIRC researchers at SUNY developed 3-D optical storage with density of 2.1 GB/cubic cm. Problems were access time and expense of materials.
  • The quote you mention is this:

    "I think there's a world market for about five computers."
    -- attr. Thomas J. Watson (Chairman of the Board, IBM), 1943

    I definitely agree with your post, except for the fact that most people don't need more powerful computers. When a word processor opens in less than a second, that's usually as fast as things need to be (for *most* people!). So when the demand isn't that high for faster machinery, there's not as much motivation to research faster solutions. But even that idea is negated when you realize that there is still a demand for faster technology in non-consumer sectors. Only time will tell what happens.

    Personally, I think systems are going to shoot for minimalism over the next few years -- the biggest and baddest CPUs (even from the last few years) are complete overkill for most people. The current market division between the low end (e.g. Celerons and K6-2s/3s) and the high end (PIIIs, Athlons, etc.) will probably just get wider and wider. In other words, Moore's Law will remain important for the high-end market, but become not-so-important for the lower-end CPU market. Because the high end is becoming more and more secluded from most of society (how many people do you know who have P3 Xeons or Athlons on their desk?), Moore's Law won't even matter for most people.

    -- Does Rain Man use the Autistic License for his software?
  • by __aaswyr5774 ( 66534 ) on Saturday October 09, 1999 @08:59AM (#1627112)
    ...by quantum-level laws, this really doesn't impact the alternate version of Moore's Law that affects us day in and day out, which is that computing power will double every 18 months.

    I don't think we'll be using molecular computing on our desktops any time soon, nor quantum computing any time in the next few decades (you try lugging around an MRI machine, I dare ya), but all this means is that we'll have to shift paradigms to something else that's massively parallel.

    Current technology relies on only a handful of processing paths through a chip being active at any one time. Compare this to our brains, which are massively parallel at the cost of having lots of neurons sitting around doing nothing most of the time. ('Nope, still don't smell anything new; nope, still not smelling anything...') The payoff comes when you want to do lots of things simultaneously, which is what happens in our visual centers, for example, when doing pattern recognition.

    The harder problem (than transistor size) to deal with here is that our programming paradigm is going to have to shift to something that can take advantage of a massively parallel machine, which is really difficult. Not all problems can be made parallel, and only a few of them parallelize well (see the Amdahl's-law sketch at the end of this comment).

    On the bright side, it's mostly the hard ones like pattern recognition that work well parallel, so maybe the future is brighter than we think.

    ("Computer? Commm-PUTE-errr?" "Scotty, try the keyboard.")

  • > God does not play dice - Einstein
    > Not only does God play dice, he sometimes throws them where they can't be seen. - Hawking

    How about this one concerning parity violation:

    Not only does God play dice, the dice are loaded.

    (apologies for once again snipping quotes from Alpha Centauri)
  • I don't get it. The ONLY difference between the two code fragments above was a keyword change. Why would we need a new keyword when the compiler can simply detect a parallel operation? Why generate values of i if they aren't used? If the compiler can prove array[] is never used in your function, it'll just drop the loop entirely; you can't get much more efficient than that (see the sketch below). Functional languages, which can evaluate arguments to functions in any order (and thus in parallel), will often not bother to even run the loop until array[i] is needed in a fashion that can't be delayed. The best way to optimize code is to find ways to not run it at all.

    I suggest thinking in languages other than C.
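
    For what it's worth, here's a minimal C sketch of the dead-code point above (both functions are hypothetical; compile with optimization and compare the generated assembly). If the loop's results can never be observed, the compiler is free to drop the loop entirely; make the stores observable and it has to emit them.

    /* Sketch of dead-code elimination.  Compile with optimization
     * (e.g. -O2) and inspect the assembly for each function. */

    /* The compiler can prove 'array' never escapes and is never read,
     * so this whole function typically compiles down to a bare return. */
    void fill_unused(void)
    {
        int array[1024];
        int i;
        for (i = 0; i < 1024; i++)
            array[i] = i * i;
    }

    /* Writing through a volatile-qualified pointer makes the stores
     * observable, so the loop must actually be emitted. */
    void fill_observable(volatile int *array)
    {
        int i;
        for (i = 0; i < 1024; i++)
            array[i] = i * i;
    }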

  • by Krokus ( 88121 ) on Saturday October 09, 1999 @09:06AM (#1627135) Homepage

    From the article:

    "When you get to very, very small sizes, you are limited by relying on only a handful of electrons to describe the difference between on and off."

    A handful of electrons? Some analogies just don't work. :u)

  • by alienmole ( 15522 ) on Saturday October 09, 1999 @09:08AM (#1627136)
    If the chip industry really does hit a "silicon wall", that could be very good for the tech industry as a whole, in the long run.

    The silicon chip business has been a bit like the gasoline/petroleum industry, in that many interesting ideas with plenty of potential have been pushed aside or starved for funding, as long as the prevailing product continues to deliver what we're used to.

    Businesses are happiest growing and changing incrementally, and it usually takes outside factors to force major change. But when that happens, almost everyone's better off in the end, because we end up with more choices.

    I look forward to looking back on the latter part of the 20th Century as the primitive Age of Silicon, and wondering how we ever survived without nano/optic/bio/quantum tech...

  • The problem with your argument is that the 640K limit was just an arbitrary limit imposed by designers of early PC's. Maybe they had reason for it, maybe not.
    The current situation is completely different, however. There are very real physical limits to how small you can make a silicon transistor. I think Lucent made one sixty atoms across a while back. Any smaller than that, and quantum effects prevent it from working altogether.

    OTOH, molecular transistors have been made which are considerably smaller, and operate by a different mechanism altogether, but so far no one has found a way to link these together into a useful circuit. This would probably boost CPU's from microwave frequencies (which we're just now reaching) to visible or even UV frequencies.
    Also consider diffusion of dopant atoms within the semiconductor. Smaller transistors are more readily destroyed by this, which is one reason your P3 has a fan and heat sink where a typical 486 did not. Smaller transistors are more susceptible to heat damage (contrary to what I have heard some people say) and will probably have to be supercooled.

    So, I expect speed to hit the wall in a few more years, then after a delay perhaps it will suddenly increase by several orders of magnitude almost overnight.

    Also, it goes without saying that if the clock frequency times Planck's constant even gets close to the bandgap energy of the semiconductor, the device will be useless, as electrons are raised to the conduction band by the clock signal. For that matter, clocks themselves probably won't run at that speed for the same reason. (A quick back-of-the-envelope calculation follows.)
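
    A minimal sketch of that last calculation, using standard constants and silicon's band gap of roughly 1.1 eV; treat the result as an order-of-magnitude figure only.

    /* Order-of-magnitude check: at what frequency does h*f reach the
     * band gap of silicon (~1.1 eV)?  Constants are standard values. */
    #include <stdio.h>

    int main(void)
    {
        const double h  = 6.626e-34;   /* Planck's constant, J*s */
        const double eV = 1.602e-19;   /* one electron-volt, J   */
        const double Eg = 1.1 * eV;    /* silicon band gap, J    */

        double f = Eg / h;             /* frequency where h*f = Eg */
        printf("f = %.2e Hz (about %.0f THz)\n", f, f / 1e12);
        return 0;
    }

    That comes out to roughly 2.7 x 10^14 Hz, i.e. a few hundred THz, so the thermal and lithographic limits discussed above will bite long before the clock signal itself starts pumping electrons across the band gap.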
