Intel Demos 4.7-GHz Pentium

richmlpdx writes: "Silicon Strategies has an article about Intel's latest demo: 'Providing a sneak preview of its future developments, Intel Corp. here today demonstrated its fastest microprocessors to date--a 4.7-GHz chip for high-end desktop PCs.'"
  • But, (Score:2, Funny)

    Is 4.7GHz 4x faster than 2.4GHz, because 400MHz was approx. 4x faster (if not more) than 100MHz?....

    Tony.
    • I should have asked: is 4.7GHz 4x faster than 1.2GHz?... Put it down to too little sleep and too much coffee!

      Tony.
      • Re:Oops!.... (Score:3, Informative)

        by kryonD ( 163018 )
        The answer is yes and no. For any application doing massive amounts of number manipulation on a small, co-located set of data (i.e. cacheable), you will see performance of approx. 4.7x10^9 operations per second. This is for the most part completely unrealistic, since today's applications usually operate on large quantities of data spread out through memory. In the average case the computer will operate at somewhere near the speed of the Front Side Bus (FSB), which is still running close to the same speed it has been for the past 4 years. You will indeed notice a speed increase from any computations that do not require the FSB, but it will probably be around 50% faster as opposed to 400% faster. The intuitive reader will note that the jump from a 100MHz to a 400MHz processor was also limited by the FSB and thus did not achieve a 400% increase in speed.
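
        A minimal C sketch of that effect (the program and its sizes are invented for illustration): the same number of additions runs first against a buffer that fits in cache, then against one that does not, so the second run is paced by the memory bus rather than the core clock.

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define SMALL ((size_t)32 * 1024 / sizeof(long))        /* fits in cache */
        #define LARGE ((size_t)64 * 1024 * 1024 / sizeof(long)) /* far too big  */

        /* Do "total" additions over a buffer of n longs, wrapping around. */
        static double sum_repeatedly(const long *a, size_t n, size_t total) {
            long s = 0;
            for (size_t i = 0; i < total; i++)
                s += a[i % n];      /* small n: cache hits; large n: bus-bound */
            return (double)s;
        }

        int main(void) {
            long *small = calloc(SMALL, sizeof(long));
            long *large = calloc(LARGE, sizeof(long));
            if (!small || !large) return 1;

            clock_t t0 = clock();
            double s1 = sum_repeatedly(small, SMALL, LARGE);
            clock_t t1 = clock();
            double s2 = sum_repeatedly(large, LARGE, LARGE);
            clock_t t2 = clock();

            printf("cache-resident: %.2fs  memory-bound: %.2fs  (%g)\n",
                   (double)(t1 - t0) / CLOCKS_PER_SEC,
                   (double)(t2 - t1) / CLOCKS_PER_SEC, s1 + s2);
            free(small); free(large);
            return 0;
        }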
      • I should have asked: is 4.7GHz 4x faster than 1.2GHz?... Put it down to too little sleep and too much coffee!

        Maybe, but only if your entire application and all its data can fit into the on-chip cache, and you make sure the cache is loaded before you start your measurements.

        In the real world, there are no such applications. As I said in another post yesterday, the bottleneck in the majority of computing tasks is not the CPU but the memory and I/O bandwidth. A fast CPU starved of useful work by a bus that can't keep up will spend most of its time idle.
      • Strictly, yes. It is ~4x faster. It runs at ~4x the clock speed.

        Now, if you ask if it can do the same job in 1/4 the time. . . that's another story. . . :-)
  • by MalleusEBHC ( 597600 ) on Thursday September 26, 2002 @04:18AM (#4334213)
    In other news, a small heat wave hit San Jose a few days ago. Amazingly, the source of this heat seemed to be centered at Intel's R&D headquarters.
  • Hammer & Intel (Score:5, Interesting)

    by Drunken Coward ( 574991 ) on Thursday September 26, 2002 @04:18AM (#4334216)
    From what I've read, even with the .13 micron die on the Athlon XP, they won't be able to clock it much above 2.5 GHz. And supposedly AMD is hoping for sales of 60% Hammer, 40% Athlon XP by Q3-03, so does that mean they're going to take a whupping in the high-end market, or do they have a .09 micron Athlon XP up their sleeves?
    • Re:Hammer & Intel (Score:5, Informative)

      by Jugalator ( 259273 ) on Thursday September 26, 2002 @05:05AM (#4334410) Journal
      Hmm... Well, AMD's Barton core that's supposed to be released in October or so still uses a .13 micron die (mostly "just" 512KB L2 cache and a 333MHz FSB). And I thought that was the core they were going to live on until the Hammer processors. :-/

      Sure, they *could* manage to start manufacturing the Truly Final Non-Hammer Core sometime in mid-2003, but by then the Hammers should be out (?) and I'd definitely go for an AMD Athlon (Clawhammer) 3400+ in Q1 2003. Mwhaha :)

      But they might plan on having .09 micron Athlon XPs and Clawhammer models overlapping each other throughout 2003, although it *seems* unlikely since the Clawhammer (at least the initial models) also uses a .13 micron die. It looks like the tech isn't quite there yet at affordable prices.
  • by blirp ( 147278 )
    Wow... And I still remember when the PC was 4.7 MEGAhertz... :*)
  • by Anonymous Coward on Thursday September 26, 2002 @04:22AM (#4334231)
    Wow! Now my Palladium/LaGrande machine will be able to notify the FBI 8 times faster!
  • by jd678 ( 577145 ) on Thursday September 26, 2002 @04:22AM (#4334232)
    A group of extreme hackers based in northern Finland have shown this processor able to run at 5907MHz using a never-before-tried method of liquid helium cooling. "We're a bit disappointed, really. I mean, this is a new record and all, but we still don't think our DVDs are going to rip fast enough till we get up to 6GHz."
    • And next in headlines...

      "a second group of teenagers in sweeden blew themselves up today in what appears to be a weird underground computer ritual called overclockers.. Is your child in danger??!"

      report at 10..
  • Awesome (Score:2, Funny)

    by sawilson ( 317999 )
    This means that the Palladium and DRM stuff can be VERY poorly written and still probably maybe run somewhat fast hopefully.
  • by Soulslayer ( 21435 ) on Thursday September 26, 2002 @04:27AM (#4334254) Homepage
    I've seen this reported on other sites, and if I recall [infopop.net], this is not a demo of production silicon at 4.7GHz; rather, this is Intel overclocking their own hardware until it crashed, to show that with some improvements the chip design is capable of these speeds, if not in consumer quantities at present.

    Anand Tech [anandtech.com] has more information from their IDF report.
    • Right you are. And any editor worth his salt might have noticed that this news is several weeks old: the article is dated 09/09/02, 06:04 p.m. EST.

      This was part of Paul Otellini's keynote at the Intel Developer Forum [intel.com]. Just the boys in the lab showing that they can overclock with the best of them [slashdot.org].

    • What? Intel are officially admitting that overclocking works, and that hardware sold with a rated speed of X GHz may actually be capable of much more?
      • No, the point is that Intel is showing off an overclocked CPU that is just barely stable at approximately 4.7GHz, and the news media is reporting it as if this were going to be a readily available packaged processor within a short period of time.

        In reality it is more like reporting that Kyle over at [H]ard|OCP [hardocp.com] managed to get a few samples of P4 CPUs to run at 4.68GHz for a few minutes without crashing.

        There is nothing evil about Intel overclocking their own hardware, but it is being totally misrepresented as an actual new product, which it is not.

    • Don't worry. One of the designers of the new Pentium IV told me that they will definitely release a Pentium IV of 5 GHz or more.
  • GHz Hunting (Score:5, Insightful)

    by e8johan ( 605347 ) on Thursday September 26, 2002 @04:29AM (#4334262) Homepage Journal
    How long will this hunt for more GHz continue? I'd say that if the major industry companies (Intel, AMD...) would make the long-needed move to a better architecture, we could achieve more performance with less effort.

    What do I have against high frequencies? For starters, high-speed, fully synchronized digital designs rely on switching millions of transistors at the same time (each clock cycle); this burns lots of power, which is a limiting factor today.
    Also, high frequency does not imply high performance; the CPU still needs to do something in each stage. For example, older Pentiums (P3, if I remember right) had a 20 (yes, twenty) stage pipeline. This yields huge penalties for mispredicted branches etc.
    This GHz hunting also leads to other problems, such as huge electromagnetic disturbances in the chip, in buses, etc. The solution is to add more wires and pull them in different directions to compensate, which only wastes more power and emits even more heat.

    What I suggest, now that we have lots of transistors to play with, is asynchronous designs! Yes, they are harder to design and verify, but that is largely because of the lack of supporting tools.
    This would reduce power needs, let designers use longer critical paths in their designs (just clock that part slower), and reduce the need for the registers used to balance pipelines, etc.

    Another move could be to introduce simpler but parallel CPUs, perhaps on the same piece of silicon. The software systems of today are multi-threaded already, so why not make the hardware capable of _true_ multitasking...
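
    A minimal POSIX-threads sketch of that idea (the program is invented for illustration; on a hypothetical dual-core part each thread could run on its own core, with no clock-rate increase at all):

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread sums half of the data; on a multi-core chip the two
       halves proceed truly in parallel. */
    static long data[1000000];

    struct range { size_t lo, hi; long sum; };

    static void *partial_sum(void *arg) {
        struct range *r = arg;
        r->sum = 0;
        for (size_t i = r->lo; i < r->hi; i++)
            r->sum += data[i];
        return NULL;
    }

    int main(void) {
        struct range halves[2] = { { 0, 500000, 0 }, { 500000, 1000000, 0 } };
        pthread_t t[2];
        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, partial_sum, &halves[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        printf("total = %ld\n", halves[0].sum + halves[1].sum);
        return 0;
    }

    (Build with something like gcc -pthread; the point is only that the work divides across cores instead of across a faster clock.)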
    • Re:GHz Hunting (Score:5, Informative)

      by Jugalator ( 259273 ) on Thursday September 26, 2002 @05:14AM (#4334450) Journal
      I think the P3 pipeline had 10 stages, while the P4 has 20. So branch mispredictions are the Pentium 4's problem. ;-)

      But what about the P4's Hyper Pipeline tech that allows it to do 3 pipeline stages per clock cycle? The P4's Branch Prediction Unit (BPU) is also said to be improved by around 30% compared to the one found in the P3. Perhaps these improvements even things out a bit while still making it easy to achieve high clock speeds?
      • Sorry about misremembering which Px had the 20-stage pipeline.

        As for Hyper Pipeline, it requires that the stage you intend to jump to is empty, i.e. a bubble in the pipeline, which probably makes its usefulness limited. Amdahl's law is a good thing to apply (Intel seems to miss that sometimes). I would say that these kinds of small improvements simply increase the complexity of the design.

        As for the P4's BPU, good or bad, it will still fail sometimes. It is not possible to predict all jumps properly, and with a large number of pipeline stages you pay big penalties when the prediction fails.
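
        A minimal C sketch of that penalty (invented for illustration; compile without heavy optimization, since a clever compiler may replace the branch with a branchless conditional move). The same loop runs over random bytes, where the branch is unpredictable, and then over sorted bytes, where the predictor locks on:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (1 << 20)
        #define REPS 100

        /* Count bytes >= 128; on random input the branch mispredicts about
           half the time, on sorted input almost never. */
        static long count_big(const unsigned char *a, size_t n) {
            long c = 0;
            for (size_t i = 0; i < n; i++)
                if (a[i] >= 128)
                    c++;
            return c;
        }

        static int cmp(const void *x, const void *y) {
            return (int)*(const unsigned char *)x - (int)*(const unsigned char *)y;
        }

        int main(void) {
            unsigned char *a = malloc(N);
            if (!a) return 1;
            for (size_t i = 0; i < N; i++)
                a[i] = (unsigned char)(rand() & 0xFF);

            long r = 0;
            clock_t t0 = clock();
            for (int rep = 0; rep < REPS; rep++)
                r += count_big(a, N);          /* unpredictable branches */
            clock_t t1 = clock();

            qsort(a, N, 1, cmp);               /* same bytes, now ordered */

            clock_t t2 = clock();
            for (int rep = 0; rep < REPS; rep++)
                r += count_big(a, N);          /* predictable branches */
            clock_t t3 = clock();

            printf("random: %.2fs  sorted: %.2fs  (checksum %ld)\n",
                   (double)(t1 - t0) / CLOCKS_PER_SEC,
                   (double)(t3 - t2) / CLOCKS_PER_SEC, r);
            free(a);
            return 0;
        }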
        • Oh, I thought a misprediction just caused a bubble in the pipeline, so... OK, then the problem is a bit larger than I thought. I still wonder whether the trade-off is worth it: high speed with high penalties, or lower speed with smaller penalties. If they would perform mostly the same in the end, the high-speed choice seems better from a marketing perspective.
          • As you say, it is most probably a marketing ploy from the start (remember the old 486DX4 100MHz, which had an external frequency of 25MHz). But since Intel has painted themselves into a corner where they can't stop increasing clock rates, they now run into the heat problem...
            • As you say, it is most probably a marketing ploy from the start (remember the old 486DX4 100MHz, which had an external frequency of 25MHz).

              Actually, wasn't the 486 DX4 the designation Intel used for a 486 that ran at 3x 33MHz? DX3 would have been a less deceptive appellation.

    • Hunting for more GHz will continue for as long as computers work the way they currently do. There will always be demand for faster and faster CPUs. Only radically different technology will stop the GHz hunting we see now. But then, that's also a kind of GHz hunting: the new technology will of course be more efficient and powerful than the current one, where a silicon chip pulses n times a second, and both achieve the same goal -- computers that operate faster than the previous ones.

      As for the Pentium 4, I remember reading an interview with an Intel engineer who said the P4 architecture is able to run up to around 6GHz, and that they could announce a 6GHz processor anytime, but it would be an economic disaster for Intel. People will buy newer, faster processors anyway, so why jump from 1.7GHz to 6GHz when you can milk 'em with 2.4GHz, 2.8GHz, 3.0GHz... releases? And after announcing 6GHz, you'd better have something even better to offer a few months later. I feel lame not being able to provide a reference to back this post; it's just the way it is :(
      • I'm trying to say that there are easier ways to achieve performance. The problem is that people want to be able to run software for the 8088 from 1983. If you run a modern OS, you shouldn't need that so it is time to make a move and lose the history.
    • What I suggest, now that we have lots of transistors to play with, is asynchronous designs! Yes, they are harder to design and verify, but that is largely because of the lack of supporting tools.
      But what if designing these complex asynchronous systems efficiently requires 5GHz processors? :-)

      -me
    • Power4 [ibm.com] by IBM uses two cores. There are rumors that the Cell processor that will drive the next-generation PlayStation will also be multi-core. To my understanding, multi-core means less heat.

      Interestingly, Transmeta Crusoe processors are being used to build clusters. They give the most bang per watt, as far as I understand. Since cooling systems in clusters cost (serious) money, the reduced heat signature of the Crusoes pays off.
      • Xilinx VirtexPro FPGAs can have up to four hard PPC cores + bus drivers etc. on one _configurable_ chip... I'd say we see a movement, but still, I want the mainstream processors to use this technology (multiple cores).
    • Re:GHz Hunting (Score:2, Insightful)

      by jreynold ( 56969 )
      "harder to design and veriry" ... Boy, .... now THAT was the understatement of the century ..... Two problems:
      • The tools we have now to design and verify synchronous designs suck. Big companies such as Cadence, Synopsys, Mentor, etc. can't figure out how to do this right, what makes you think they can figure out the much tougher problem of async designs?
      • Great, build the tools "in-house" ... yeah like my management is going to fund a team of EE's for the years it would take to make our own tools. Whatever ....
      We're stuck with synchronous design for a LONG time (at least in mainstream processor / ASIC).
    • Even running full-bore, the fastest x86 CPU available uses no more power than an incandescent light bulb. (Now, please don't tell me you're one of those freaks who have replaced all their bulbs with white LEDs...)

      - A.P.
    • What I suggest, now that we have lots of transistors to play with, is asynchronous designs!

      Sun Microsystems is already planning this for their UltraSPARC IIIi CPU [sun.com].

      One theory I have is that Sun recognizes that super-high frequencies result in less reliability than Sun will tolerate, driving them to new CPU architectures. Remember, Intel cares more about marketing and big business than they do about truly high-availability and zero-error CPUs, which leads to their high-frequency yet terribly inefficient Pentium 4. Sun's chip designers are just as talented as Intel's, and if Sun wanted to release a 5GHz CPU they would. It's interesting that Sun chose the asynchronous architecture instead of taking Intel's route of over-the-horizon pipelines and other tricks.
      • Re:GHz Hunting (Score:2, Interesting)

        by ergo98 ( 9391 )
        Intel cares more about marketing and big business than they do about truly high-availability and zero-error CPUs

        And that would be why a Pentium IV 2.8GHz is the fastest tested on SpecInt [spec.org]? (Faster than any other processor in the world.) That would also be why SpecFP [spec.org] is dominated by the Intel Itanium 2 (with, notably, the P4 not too far behind; the fact that the Itanium is at 1GHz versus the P4 at 2.8GHz is irrelevant, as both speeds are the fruits of their respective designs)?

        Note that I'm not an Intel "fanboy": I have an Athlon in my machine, and if I bought a machine today it'd have an Athlon in it. However, Intel's strategy for the P4 is just a different variation on the pursuit of speed, and obviously it works, because it's the fastest processor in the world at SpecInt. Saying that it's just marketing is clearly not true when you see the results of their efforts.

        It's interesting that Sun chose the asynchronous architecture instead of taking Intel's route of over-the-horizon pipelines and other tricks.

        Let the results do the talking. As it is, clearly Intel is winning the processor war.
  • by MalleusEBHC ( 597600 ) on Thursday September 26, 2002 @04:33AM (#4334280)
    This whole time we have been blaming our electricity problems here in California on deregulation, Davis' failure to secure contracts, etc.

    It's been those punks at Intel with this chip all along!!
  • by decarelbitter ( 559973 ) on Thursday September 26, 2002 @04:34AM (#4334285)
    My first PC was an 8088 at 4.77 MHz, somewhere in 1985. This new CPU does 4.7 GHz, which is 4700 MHz, which is nearly 1000 times as fast as what I started with. Impressive. If back then someone had told me that one day we would be using a 4700 MHz CPU, I would probably have burst out laughing :)
    • Oh, I remember back in 1991 when Intel said they would develop a 1 GHz CPU by 2000. People scoffed at the idea, but in reality Intel did reach 1 GHz by 2000 with the Pentium III.
    • Ha! My first PC was a semaphore flag. Beat that. (Sadly, it's true.) In the 70's, I learned (digital) semaphore (picture guys with flags) in the Navy a few years before I got into digital electronics (and big, big tubes -- cathodes, anodes, tetrodes, oh my!). Our ship had analog computers for target tracking. They were room-size behemoths with beaucoup dials and meters. Back in those days, maintenance was a full-time job. The techs for those things were always re-calibrating them on a pre-determined schedule.

      Which brings me to a topic I would like to see discussed or even polled -- What is the real percentage of technical types whose work (develop, integrate or maintain) is 100% Internet-related? (me, for one) I am betting, even during the heydays, that it is/was a lot lower than most people think it is.

      "The idea of Heaven and Hell was the first big power scam. If they can get you to believe that, they know they can get you to believe anything."
    • Impressive. If back then someone had told me that one day we would be using a 4700 MHz CPU, I would probably have burst out laughing

      Way back when, I would have believed that, since I knew Moore's law.

      I would have burst out laughing if you told me it would still take 10 minutes to boot my PC.

    • This new CPU does 4.7 GHz, which is 4700 MHz, which is nearly 1000 times as fast as what I started with. Impressive.

      Dude, it's WAY more than that. I would venture to say it's more like 100,000x. Take into account cache size and speed (did the 8088 even HAVE SRAM? If it did, it was on the motherboard), memory speed (5ns vs. 70ns), and in general the overall efficiency of the CPU (superscalar, speculative execution, etc.).

      I would post the link to CPUScoreCard.com comparing the 8088 and the P4 2.6GHz, but they went pay-for-access on older benchmarks.

      Jeezus, I just realized my CPU ranking is considered "historical". Damnit. What, 800MHz isn't good enough anymore? Pfft!

  • by Anonymous Coward
    Really. Steve Jobs said so. He says my 700 MHz Mac is a supercomputer. Really. He wouldn't lie.
  • by weave ( 48069 ) on Thursday September 26, 2002 @04:45AM (#4334325) Journal
    I hope whenever the chip gets there, IBM will sell something like an anniversary IBM PC with the same case design as the original, rated at 4.77 GHz, with 640 megs of RAM, and your choice of three different operating systems -- just like the original! (er, OK, two out of three ain't bad...)

    I'd hit it.

    • Also one or two 180MB floppy drives (Zip).

      For graphics, the original PC had 80x25 text with a 256-character set. That is actually just 2KB of graphics RAM, so a 2MB card seems a bit feeble. Maybe something capable of displaying recognisable text at 800x250?
    • by weave ( 48069 ) on Thursday September 26, 2002 @08:29AM (#4335237) Journal
      Well, for those who didn't get the three-operating-systems line: the original IBM PC came with your choice of three different OSes -- MS-DOS 1.0, CP/M-86, or something called the p-System, a Pascal-based OS.

      So, when I said two out of three ain't bad, I meant there is no way in hell an anniversary PC would give you a choice of OSes. Microsoft just wouldn't permit it.

      p.s. No, it's not that funny. I have no idea why it's easier to get slightly humorous posts modded up to a 5, while posts with serious thought and hopeful insight never get modded up, or often get modded down by someone who just doesn't agree with you.

      Whatever, not like it all matters anyway...

      • So, when I said two out of three ain't bad, I meant there is no way in hell an anniversary PC would give you a choice of OSes. Microsoft just wouldn't permit it.

        Even if the top-secret OEM contract with Microsoft rules out selling PCs without an operating system or with anything other than Windows pre-installed, what stops a PC vendor from including FreeDOS with the machine [slashdot.org], along with a voucher for a CD of FreeBSD or Red Hat Linux?

          • The OEM contract prohibits dual-boot arrangements too. That's one reason why Be was never able to get a foothold. Vendors refused their offer to give BeOS away for free if it was pre-installed as a dual-boot option.

            At least that was the case; maybe not now, now that the Justice Department has had its nose up Microsoft's bum for a while...

            • The OEM contract prohibits dual-boot arrangements too.

            The OEM contract prohibits dual-boot systems from being pre-installed. I don't think even Microsoft could prohibit OEMs from including a FreeBSD CD with every computer.

            Free sig: "Anti competition's gone too far, here's your Antitrust Superstar."

    • Would it have a 'turbo' button?

      If so, what would it clock the PC down to when deselected?
        • actually...
          in at least the Dell PowerEdge servers, there is a BIOS setting called something like "x86 compatibility" or somesuch which takes a nice dual Pentium II (that's what we had in these a few years ago) and makes it run at a nice slow 60 MHz or so... We had a bitch of a time remotely trying to figure out why simple tasks were pegging the machine. Luckily we had a competent service tech who got the call, and he was able to walk through the BIOS settings with us over the phone.

    • Wouldn't you need a choice of 3,000 different OSes?

      Also, as someone else pointed out below, the CGA used 2KB of memory for video, and 2MB video RAM is pretty small by today's standards.

      This is turning into a summary of the other posts. Why not?

      No hard disk.

      One or two 180MB floppies.

      2.38Gbps to the expansion cards.

      Supports up to 640MB of RAM, but only comes with 64MB as standard (or 16MB if you get one of the first ones).
  • Vaporware (Score:2, Interesting)

    by Anonymous Coward
    Until Intel comes up with an actual example of a motherboard that supports asynchronous RAM flushing, the speed of the CPU means nothing.

    For any motherboard that still uses conventional RAM flushing, the CPU will top out at ~3GHz and stay there; I don't care what kind of data bus you're using.

    Mark my words: AMD's next generation of motherboards (now documented to support async RAM flushing) will blow Intel out of the water. Hold on to your asses, ass-holders.
  • by __aahlyu4518 ( 74832 ) on Thursday September 26, 2002 @05:04AM (#4334407)
    on x.x GHz processors that they actually still don't need... my server runs beautifully on a Pentium 166 with 64MB of RAM, AND I still have money to feed the family.
    C'mon people... I'm not saying nobody needs this (it does say high-end), or that 166MHz is enough for everybody (it certainly isn't for a desktop), but why aren't people smarting up? Why do they keep buying a completely new PC every 2 years when they don't need it to write their Word documents? (And I'm not even asking why they buy such crap that a PC with only half the specifications could perform equally well.)
    • Please. (Score:3, Interesting)

      by SlashChick ( 544252 )
      Seriously. Why do people buy luxury cars when a Honda could get them to work just as easily? Why do people buy large houses? Why do lots of people, for that matter, insist on leasing a new car every two years, even though they own nothing at the end of the lease?

      The answer is simple: people perceive it as being of some VALUE. People buy new PCs because they look better, or because Internet Explorer will take less time to load, or because right now it's just taking too damn long to print out that document, or the Internet is too slow. Yes, some of these reasons are misguided, and it's our job as those "in the know" to tell people when they have a misguided assumption ("A Pentium 4 will make my Internet connection faster..."). It's also our job to explain to them how best to spend their money if they ask us for advice -- perhaps their money would be better spent on a broadband connection or a memory upgrade or a better video card. Maybe they don't need a new computer.

      Whining about why people buy new computers is futile. People buy new things constantly. Don't forget that people buying and upgrading new computers is what keeps our industry afloat, as well. Not only does it make hardware prices go down, thus benefiting more of us, but we get the added benefit of easier tech support (for the most part, computers have dramatically improved in this area since Windows 95 first hit the shelves) and better software. (My personal favorite is finally dragging those last few holdouts off of Netscape 4.7 so I can make great-looking dynamic websites that actually work with their browser.)

      Next time, instead of wringing your hands and saying "Why?!", encourage those who are upgrading to spend their money in the wisest way possible. The more people who enjoy using their computers, the more successful the industry will be as a whole, and the more jobs we will all have as a result. ;)
      • You are right... but I hadn't had my daily dose of caffeine yet, so I got a little cranky :-)

        People do need to be educated about these things. Because a LOT of people don't think of a new computer as being of value, but as a necessity, which it often isn't.

        And although it happens more often than not, having people constantly buy things they don't need (yet) just to keep the economy afloat is one of the worst reasons I've heard to support capitalism. If that's what keeps us going, then something is terribly wrong with the Western system. And yes, I knew that already...

        And the argument of better tech support because of the quantity of new PCs is a little shaky as well... If so many people buy new computers, vendors should have funds enough to make quality products so we don't need that tech support. And isn't tech support better when they have products to support that don't change every 6 months?

    • From your website [marsdude.com]: I decided to get rid of the little content that was left here...

      I guess without any content, it doesn't take much to run your server ;-)
  • Question. (Score:5, Insightful)

    by Boss, Pointy Haired ( 537010 ) on Thursday September 26, 2002 @05:07AM (#4334416)
    Does rapid improvement in processor technology cancel out the need for developers to learn how to write better code on a particular platform in order to achieve the maximum possible benefit from Information Technology?

    Background:

    Remember the BBC Micro, the ZX Spectrum? When they first came out, games were slow and blocky. But then several years went by without any significant improvement in processor performance.

    Therefore, in order to produce better software and better games, developers had to learn how to write better code on their favourite platforms. They developed techniques and tricks to make every Hz count.

    Today, you can do impressive stuff with crap code, simply by virtue of the raw grunt of the processor.

    Hence the question. Do they cancel out? If Intel had not brought out a new processor in the last 5 years, where would software be in relation? Better, worse, or same?
    • Re:Question. (Score:2, Informative)

      by RupW ( 515653 )

      Does rapid improvement in processor technology cancel out the need for developers to learn how to write better code on a particular platform in order to achieve the maximum possible benefit from Information Technology?

      No; what cancels it out is

      1. the advances in compiler technology, and
      2. the divergence of architectures sharing a common instruction set.

      It's no longer practical to hand-code assembler for speed: chances are your C compiler will do it much better than you can, and in a fraction of the time, too. Nowadays, if you get the basic algorithms right, your compiler should do all the rest. (And if it doesn't, go contribute to gcc [gnu.org] until it does.)
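
      A hedged illustration of the point (the function is invented for this example): write the clear, obvious C and let the optimizer do the instruction-level work.

      #include <stddef.h>

      /* Plain C. Built with something like "gcc -O2", the compiler will
         unroll, schedule and register-allocate this loop better than most
         hand-written assembly, and the source stays portable across CPUs. */
      double dot(const double *a, const double *b, size_t n) {
          double s = 0.0;
          for (size_t i = 0; i < n; i++)
              s += a[i] * b[i];
          return s;
      }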

      • Re:Question. (Score:2, Insightful)

        by Anonymous Coward
        Better code does not stop at the machine code barrier. Better code should also mean better design, which is the most important aspect of optimizing.

        You can spend weeks optimizing a hand-rolled assembly loop, to no avail if a better design allows a faster approach to the problem being solved.

        Thanks to compiler technology, programmers can now spend more time on the design. However, do they do that? Optimization is still an issue, because it seems solutions today only get more and more bloated.
        • Optimization is still an issue, because it seems solutions today only get more and more bloated.

          But bloat doesn't necessarily mean that something is not well optimized. Some kinds of optimization, like unrolling loops (sketched after this post), can wind up improving performance at the cost of increased binary size. That's not always a win, since bloating the binary beyond a certain point has diminishing returns as it prevents the whole thing from residing in the fastest cache, but it is an important example of how big doesn't necessarily mean bad.

          Honestly, is bloat really that big of a problem for a typical computer, anyway? RAM and disk space seem to be growing even faster than processor speed, so bloat really shouldn't be a serious issue. When was the last time you had to clear out your hard drive because it was getting too full? And when you did, was it because the binaries were too big, or because there were too many data files taking up space?
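
          A minimal sketch of that unrolling trade-off (illustrative code, not from any real program): a 4-way unrolled sum whose body is roughly four times the size of the plain loop, i.e. "bloat" spent on less loop overhead and more independent work per iteration.

          #include <stddef.h>

          /* Four accumulators let the CPU overlap independent additions;
             the cost is a larger function body. */
          long sum_unrolled(const long *a, size_t n) {
              long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
              size_t i = 0;
              for (; i + 4 <= n; i += 4) {
                  s0 += a[i];
                  s1 += a[i + 1];
                  s2 += a[i + 2];
                  s3 += a[i + 3];
              }
              for (; i < n; i++)    /* remainder when n is not a multiple of 4 */
                  s0 += a[i];
              return s0 + s1 + s2 + s3;
          }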

          • Honestly, is bloat really that big of a problem for a typical computer, anyway?

            A "typical computer" is not a PC. A typical computer is an embedded system in a microwave oven with a 0.5 MHz processor, 1 KB of ROM, and 256 bytes of RAM, if that.

            Next step up from an embedded system is a handheld device such as the Palm or the Game Boy Advance. You get a processor in double-digit MHz, only about 384 KB of work RAM, and storage measured in single or double digit MB.

            Then you have the typical six-year-old Pentium computers in public schools. 100 MHz, 24 MB of RAM, unaccelerated video, 800 MB hard drive, 4x CD-ROM (if that).

            Then you get to DVD-based game consoles, which have 32 to 64 MB of RAM. Bloat begins to disappear, but the less bloat you have, the more triangles you can push and the faster your game will load. That was one of Mr. Shigeru Miyamoto's biggest complaints about the Sega CD and the old Nintendo PlayStation project [1]: that disc technology wasn't fast enough to provide a seamless experience. Only recently have engineers developed the hardware to load data faster and the software tricks to cover up loading time.

            Only after all those do you get to a relatively modern PC.

            [1] The Nintendo PlayStation was originally a project between Sony and Nintendo to develop a 32-bit CD-ROM system that connected to the Super NES. When Nintendo dropped the project in favor of the Nintendo 64 console, Sony finished it up and released it as a stand-alone game console.

    • Re:Question. (Score:2, Informative)

      by Elledan ( 582730 )
      You must keep in mind that on these old (archaic =P) systems you referred to in your post, every program was tiny, so optimizing the whole program to waste no CPU cycles was still feasible.

      Nowadays it would be pure madness to even attempt to optimize a program the same way as 'back then'. Programs have simply become too large (in size and features) and too complex to optimize in the same manner.
      Not to mention the fact that the average system in use today is simply overkill for 99% of all applications.

      Sure, it would be possible, but would it be worth it? It would cost lots of money and take more time from larger development teams, driving up the cost of software.

      Optimization is a good thing, but only up to a certain point, beyond which it just doesn't make any sense.
    • It depends on what type of optimizing we talk about. Assembler optimizations aren't worth the hassle today, but some functions in applications can be well worth coding in a better way. My first C program, written for DOS, comes to mind. It took keyboard input and searched for similar commands or files in the current directory, doing inline completion automatically while you typed. On our NCR PC4i the first version took 10 minutes; when my father and I tried some other approaches, we got that down to a fraction of a second (my dad was one of the first Unix guys in Sweden).

      The speed gain was extreme, and we got it just by programming slightly differently. I think most apps can benefit if some care is taken with how things are done.

      I don't think they cancel each other out, and crappy code will always be crappy code. Slap a pile of code together over the weekend and you have a stinking pile of shit that works much worse than it could.

      My view is that this is like building bridges or houses. Cheat on planning and you've got a bridge or house that's worthless and dangerous. Time has taught us that there aren't any shortcuts to doing advanced stuff. The strides toward abstraction in some unnamed programming languages give us crappy programs. Look at some unnamed applications from a certain company that makes much of its software in an unnamed programming environment that seems to squirt out much worse code than other environments.
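
      A minimal sketch of the kind of algorithmic change that buys such a speedup (the details are guesses, not the original DOS program): keep the command list sorted once, then locate a prefix with a binary search instead of rescanning every entry on each keystroke.

      #include <stdio.h>
      #include <string.h>

      static const char *commands[] = { "cat", "cd", "chmod", "cp", "ls", "mv" };
      #define NCMD (sizeof commands / sizeof commands[0])

      /* Return the index of the first sorted entry starting with prefix,
         or -1; O(log n) per keystroke instead of a full O(n) rescan. */
      static int first_with_prefix(const char *prefix) {
          size_t len = strlen(prefix);
          int lo = 0, hi = (int)NCMD;
          while (lo < hi) {                /* invariant: answer is in [lo, hi) */
              int mid = (lo + hi) / 2;
              if (strncmp(commands[mid], prefix, len) < 0)
                  lo = mid + 1;
              else
                  hi = mid;
          }
          if (lo < (int)NCMD && strncmp(commands[lo], prefix, len) == 0)
              return lo;
          return -1;
      }

      int main(void) {
          int i = first_with_prefix("ch");
          printf("%s\n", i >= 0 ? commands[i] : "(no match)");  /* prints "chmod" */
          return 0;
      }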
    • Therefore, in order to produce better software and better games, developers had to learn how to write better code on their favourite platforms. They developed techniques and tricks to make every Hz count.

      That may be true, but programmers were likelier to jump ship when the next latest-and-greatest computer came out: the VIC-20 killed development on the PET, and the C-64 killed the VIC-20 (and the Atari 400/800).

      There would have been a lot more tangible incentives to jump in those days, too; the leaps were phenomenal: the C-64 had a real synthesizer and 20x more RAM than the VIC, and the Amiga whomped the C-64 with 20,000x more colours and stereo sound. Each advance was just as tantalizing to the developer as it was to the consumer.

      Nowadays we're so completely divorced from the actual computer, what with APIs and hardware abstraction layers, why would you bother trying to squeeze every MHz out of a machine when how well the machine operates is really up to the OS maker? (*cough* Microsoft *cough* *cough* planned obsolescence *cough* conspiracy with chipmakers *cough*)

    • You make a valid point, and I think the answer depends on the balance at the time.

      Right now code is out of control -- just look at how easy it is to simply throw lines of source at a problem until it is solved. It is the easiest method because CPUs continue getting faster -- the burden is on the CPU designers.

      However, once gains in CPU power stall, there will be no choice but for developers to take stock of their bloaty code and make changes. Having recently attended a panel with the original implementors from Atari, I can say it is very clear that necessity drives invention and creativity.

      Right now there is simply no need for creative coding w.r.t. efficiency. There may not be a need for a long time.

  • by Observer ( 91365 ) on Thursday September 26, 2002 @05:12AM (#4334442)
    ...(Whom God Preserve) is well and working hard on several hundred new inventions. They include:
    • A foghorn key,
    • An erasable wheelbarrow,
    • A stanchion for propping up other stanchions,
    • A collapsible canvas pumpkin for balancing on ironing boards (for acrobats, mainly),
    • A 4.7GHz Pentium that fries an egg while computing your bank balance, and
    • A chivet for screaming radishes.
    Note: one of these items does not belong.

    (With apologies to the irreplaceable J.B. Morton, who for many years wrote the bellowing-out-loud-it's-so-funny "Beachcomber" column in the London Daily Express. Wish I had the (longer) original of this list to hand to post it.)

  • by Jugalator ( 259273 ) on Thursday September 26, 2002 @05:18AM (#4334460) Journal
    Achieve super high speeds for super short durations to impress the spectators.
    • Achieve super high speeds for super short durations to impress the spectators.

      What do you mean? It runs at 4.7 GHz.

      a) that's not even close to super-high;
      b) how is it a super-short duration?

      If you've been paying attention for longer than 2 years, you will clearly notice that EVERY processor Intel demonstrates becomes mainstream within a few quarters. Period. They've never failed.

  • by DarkHelmet ( 120004 ) <mark&seventhcycle,net> on Thursday September 26, 2002 @05:59AM (#4334607) Homepage
    You think of using anything above 3 GHz to cook your Thanksgiving turkey :)

    If they don't make it by Thanksgiving, don't worry! Just use your Athlon.

  • Business as usual, I suppose. Once everyone has their 1.whatever GHz processors, they have to go and show off something faster. People need to realize that, despite all these newer, faster processors, we don't need them. The Space Shuttle still launches, performs missions, and lands without too many failures, and they're not running much more than a 486 equivalent. We don't need 4.7 GHz. 2 GHz is more than sufficient for everyday use.

    When you think about it, the average user (AKA Joe and Jane Sixpack) does three basic things with computers: Internet (including e-mail, browsing and the occasional multimedia site), music, and games. That's it. They're not ubergeeks like most of us /.ers. They won't be trying to scan, edit and compress 10 gigs of high-quality video/audio data. They won't be compiling an insanely huge Linux kernel. They won't be dabbling in Voice over IP. Hell, they probably mindlessly rely on MS apps to do the work for them, using Outlook, IE, and others.

    They'll get all wide-eyed and tickled pink at the thought of that kind of power, but all they'll really notice is windows opening faster. It's a huge waste of money, and they'd be too blinded by the thought of "this will make everything so much better" to notice.

    It won't make MP3s play any clearer, it won't filter out the spam that clogs 90% of their inbox, and it sure won't make "HotChicksPorn.com" load any faster. Unless the Sixpacks are running SETI@Home [berkeley.edu], they won't notice much of a difference and will feel ripped off. Those FFTs would crunch rather quickly on a 4.7 GHz machine, though, which I wouldn't mind.

    Production people like me would kill for a machine that fast. I do a lot of digital video and audio work, and that kind of processing power would be most welcome. But people like me (and you, the ubergeeks of the world) are a relatively rare breed. Maybe it's time for Intel and friends (or is it enemies?) to start splitting demographics a little better and targeting specific types of "Joe and Jane Sixpacks" with different processors, instead of just offering up the same two processors (Pentium and Celeron) to everyone as if we're all the same. The need to upgrade constantly isn't that big a deal, or at least it shouldn't be treated as such...
  • - First Post!

    New Thread
    - Someone complains that they should be changing the architecture, not the speed.
    - Reply about how he just described the G4
    - Further reply that G4 is now behind
    - Sulky Apple - Intel speculation

    New Thread
    - AMD Roolz
    - Intel Roolz
    - Motorola Roolz
    - Crusoe Roolz
    - ARM roolz
    - No AMD roolz (repeat to fade)

    New Thread
    - Complaint that no-one needs that power
    - You said that last time and we did
    - I don't, I like my 486
    - Ever Rendered, played a game, video edited
    - Reasons for needing that much power
    - Offtopic bitch about CmdrTaco and reference to 640k being enough for everyone

    New Thread
    - Comment digest
    - complaints about comment digest
  • Redundant? (Score:2, Funny)

    by m00nun1t ( 588082 )
    "Intel Corp. here today demonstrated its fastest microprocessors to date--a 4.7-GHz chip for high-end desktop PCs"

    As opposed to their 4.7-GHz chip for low-end desktop PCs?

  • coffee? (Score:5, Funny)

    by bogado ( 25959 ) <bogado.bogado@net> on Thursday September 26, 2002 @08:43AM (#4335329) Homepage Journal
    Why not make it water-cooled? Then you just put in a paper filter and some coffee, and ta-da... your computer makes coffee. If you want hotter coffee, just overclock it a little. :-)
  • Tenebrae [sourceforge.net] is an open-source game engine based on the Quake source code. However, this free engine has many of the same features as the upcoming Doom 3 engine: stencil shadows, bump maps, per-pixel lighting, reflective water, etc. This game engine has it! Thing is, it costs you: even on a 2GHz GeForce4 setup, it runs under 72 frames per second. Note that for several reasons 72 FPS is optimal for Quake play. A 5GHz CPU with a GeForce4 could probably crank out 72 FPS in Tenebrae at 1280x1024x32.
