Intel

Intel defocussed?

MPR's analysis of Katmai is that it offers no benefits for business users or for consumers not interested in 3D or video. In fact, Willamette should be shipping by now (four years since the P6 was designed), but it hasn't even taped out... suggesting that the proliferation of P6 cores (Klamath, Deschutes, Mendocino, Dixon, and Katmai) and the development of Merced are spreading Intel's engineers too thin.
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by Anonymous Coward
    Finally, someone has publicly stated that the emperor has no clothes. The business community has been screaming that there hasn't been a single advance in computing in the last five years. Any increase in hardware speed has been soaked up by the bloated software resulting from C++ compilers and object-oriented programming.

    Computer usability has been steadily regressing to the point where apps are no more productive or useful than they were in the late 80s.

    A pox on all of you.
  • More like AMDinux, or LinMD... but by then, PCs won't have nearly as much impact on the market as smart single-purpose devices running eCos or something.
  • But I won't know until I test them head-to-head, which I plan to do.

    I think KNI can be useful for games and 3D rendering (it should speed up matrix math something wonderful) but it really has nothing to do with normal business use, especially servers. How often does your file server need to raytrace?
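
    A minimal sketch of the 4-wide single-precision arithmetic KNI is aimed at, written with the C intrinsics interface rather than raw assembly; the function name and the assumption that n is a multiple of 4 are made up for illustration, and it assumes an SSE-capable compiler and CPU:

    /* out[i] = a[i] * b[i] + c[i], four floats per loop iteration. */
    #include <xmmintrin.h>

    void madd4(float *out, const float *a, const float *b,
               const float *c, int n)
    {
        int i;
        for (i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);   /* load 4 packed floats */
            __m128 vb = _mm_loadu_ps(b + i);
            __m128 vc = _mm_loadu_ps(c + i);
            /* one packed multiply and one packed add per 4 elements */
            _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vb), vc));
        }
    }

    The same pattern extends to 4x4 matrix-vector products, which is where the "matrix math" win comes from.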
  • Posted by gruv:

    You're "stuck" on Intel? I don't like to see that. Your're not "stuck" with Intel. NOt when you have AMD...
  • Avoiding cache churning would be a big advantage, but as far as I know, PIII doesn't do that.

    What MIGHT be a good idea is something like an extension of MTRR to allow little-used code (especially one-shot init code) or data to be marked no-cache, keeping it out of the cache entirely. That could be paired with a persistent attribute for frequently accessed code or data. Ideally, compilers would automatically optimize by creating no-cache and persistent sections to minimize the number of memory zones affected by the new attributes.

    The drawbacks to that include increased complexity, and the need for compiler support (or inline assembly) to see the benefits. Without compiler-supported zone allocation, you'd probably run out of attribute registers and end up leaving some sections unoptimized. It would be really easy to do a bad job of optimization. Of course, KNI has the same problem.
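
    To make the compiler-support point concrete, here is a rough sketch of the grouping a compiler or programmer could already do with GCC's section attribute. The section names are invented, and the step where the OS then marks those regions no-cache or persistent (the MTRR-style attribute described above) is the hypothetical part; current hardware doesn't expose it.

    /* One-shot init code: runs once at startup and is never touched
     * again, so keeping it out of the cache would cost nothing.
     * ".init.once" is an invented section name. */
    __attribute__((section(".init.once")))
    void parse_config_once(void)
    {
        /* ... read config, fill in global tables ... */
    }

    /* Hot data that the hypothetical "persistent" attribute would pin
     * in the cache so streaming accesses elsewhere can't evict it. */
    __attribute__((section(".data.hot")))
    int lookup_table[256];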

  • In the meantime, that prices Xeon systems up in the lower stratosphere alongside faster RISC systems such as Sun Enterprise servers.
  • Do you wonder where these analysts get their money from? They sell reports to the companies who buy these processors. Nobody wants to hear that what they just bought was junk; they want to hear it is the best. And, until recently, everyone bought Intel. Now that more companies are designing in non-Intel processors, the coverage will probably reflect this.

    You have to hand it to MDR, though. Almost everything they predict eventually happens (though not usually in the predicted timeframe). But that is probably just a result of them printing the press releases of Microsoft and Intel, plus post-hypnotic suggestion from the marketing guys who read their newsletters.
  • Er... my BBC Micro could do real-time spellcheck back in 1987. That's a 6502, for all you suckers who don't know. The fact that it's taken Microsoft so long to implement it is merely an indicator of their mind-boggling incompetence.

  • Can anyone think of an application which is accelerated substantially by MMX or KNI, but which could not practically be accelerated by an add-on board?

    The defects of add-on boards are that you can't put much RAM on them, and that the link to the CPU is slow. For graphics, the slow CPU link means you throw vertex data across, and get your textures by DMA, and use nifty things like DX6 texture compression to get around the lack of memory on the video card.

    For sound, you don't need much RAM and you don't need tens of megabytes per second of bandwidth, so add-on boards are not a problem.

    The obvious scientific applications really need more than 16 bits of precision - so you can't use MMX. Possibly KNI would be useful for data analysis - I think you could write an incredibly fast single-precision FFT using it.

    For working with large numbers (RSA crypto, lots of mathematics), the FFT approach to bignum arithmetic really needs double-precision FP, so KNI is useless there; the schoolboy approach wants add-with-carry of packed 32-bit or 64-bit numbers, which KNI just might have (a plain-C sketch of that carry chain follows at the end of this comment). And, unless your numbers are incredibly large, they'll fit in the RAM of an add-on card; unless you're doing fairly unpleasantly complicated tasks, you can do all the work in the add-on card's RAM and not worry about transfer bandwidth at all.

    OK. So what applications are MMX or KNI actually useful for? The only one I can think of, possibly, is clever video decompression where you work from tens of megabytes of stored history - but video decompression runs into the bandwidth-to-graphics-card bottleneck. So that's no good either.
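
    To give a concrete picture of the "schoolboy approach" mentioned above, here is a plain-C carry chain over 32-bit limbs. It uses no MMX or KNI at all; on x86 the inner loop maps naturally onto the integer add-with-carry (ADC) instruction, and a packed version of that is what the comment wonders whether KNI provides.

    #include <stdint.h>

    /* r = a + b, each 'limbs' 32-bit words long, least-significant
     * limb first.  Returns the final carry out. */
    uint32_t bignum_add(uint32_t *r, const uint32_t *a,
                        const uint32_t *b, int limbs)
    {
        uint64_t carry = 0;
        int i;
        for (i = 0; i < limbs; i++) {
            uint64_t sum = (uint64_t)a[i] + b[i] + carry;
            r[i]  = (uint32_t)sum;          /* low 32 bits of the sum */
            carry = sum >> 32;              /* carry into next limb   */
        }
        return (uint32_t)carry;
    }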
  • I'm not sure hardware speed has been soaked up by bloated software. I think it's more a matter of hardware speed having gone beyond the capacity of software to bloat.

    Sometimes you can get advantages out of hardware speed - but emacs can do real-time spellcheck and autocorrect on arbitrarily small Linux boxes, and even Word can do them, with no noticeable delay, on a P133.

    As far as we know, you can't use all of the performance of a P2/350 to word-process or to browse the Web - while I'm using this PC, with everything running fast enough to keep up with my typing and update the screen smoothly, 85% of my cycles are going to Prime95.

    If a P3/700 is three times faster than this box, all I'll notice is that 95% of my cycles are going to Prime95. It'll still keep up with my typing; it'll still update the screen smoothly.

    GIVE ME AN EXAMPLE OF AN APPLICATION (not a game, that's too easy - though even games aren't that bad now that we've got ubiquitous 3D acceleration) THAT REQUIRES A P2/350!
  • There WAS NT on PPC. It died of neglect because no native apps were available for it.

    PPC/MacOS may not be such a bad thing by the end of the year: a BSD-based OS, end-user apps, a newbie-friendly GUI...

    And you can always run Linux on PPC....
  • One thing that's kind of funny about Intel is the price pressure they're putting on themselves. In an effort to milk the top end, they're in a situation where a ~10:1 price difference buys a ~2:1 performance difference.

    I read that Intel's down to 75% of the x86 market, and dropping. I can only see that trend continuing. Merced's performance won't be compelling compared to the x86 chips that will be available by then. Maybe McKinley will turn things around for them?
  • I too was saddened by the demise of Mac clones. It was one way to keep PPC machines more competitive with x86 machines. Apple seems to have realized their previous pricing is not going to fly, and they are moving to be more competitive.

    One sad thing about Moto is that they have PReP boards, they make the chips, and they have (had) a subsidiary to make the boxes. They COULD build a PPC box with Linux on it. Evidently, they would rather focus on the embedded market :-(
  • I thought this name was obvious. Kinda reminds me of Lisp...
  • I think the article is looking in the wrong direction for Intel's "spreading too thin" problem.

    Their sudden and unpredicted need to churn out a series of Celeron parts must have screwed up all their other schedules. A year ago they were just realising the problem. They quickly introduced a crippled processor everyone laughed at. They had to do a better one real quick. Now they can't make any money from it, as the package costs a bomb (or is that too much of the BOM?), so they have to put more effort into Socket 7, er, Socket 370 (is that like an IBM 370?).

    AMD's sales are small compared to Intel, but it seems that to a substantial extent AMD is now driving Intel's development plans.

    Everyone called Intel the one-product company. Now they are fighting that claim with a host of fragmented product lines.

    When you compare Intel with a broad-line supplier like AMD, the "spread too thin" argument doesn't hold water. From much lower total revenues, AMD finances the development not only of their x86 series, but of a wide range of other products. Intel makes the CPUs, the supporting chipsets, and just a few other PC-related parts, and they keep pulling out of other activities, like their microcontrollers.
  • Example of using a fast PII fully under Windows 95, 98 or NT:

    1. Start the CPU monitor
    2. Put your mouse pointer over a large window's header bar
    3. Press the left button, and keep it pressed.
    4. Wiggle the mouse from side to side, and watch the CPU monitor.

    That can keep a PII/400 fully occupied. It even keeps a high-end Alpha running NT 4.0 about 70% occupied. Even with the latest video accelerator cards, Windows still requires huge compute power for crisp display updates. NT 3.51 required only a fraction of that power. Sure, a slower processor works, but the feel of the machine is much better with the faster processor. Everything is so jerky with a more mundane CPU.
  • the end is nigh for the two
    traditional computing superpowers...

    bye bye, wintel machines.

Life is a healthy respect for mother nature laced with greed.
