
Comment Re:Awesome (Score 1) 271

What I'm saying is that the only reason the P4 design is "broken" is the scaling problems it ran into, which were unexpected at the time. It was deliberately designed that way, with sound thinking behind it given the knowledge of the day, because existing designs had hit a brick wall at ~1.2-1.3 GHz. Unfortunately, the design didn't work as expected. Intel (and everyone else, but Intel was on the bleeding edge) did not foresee the drastic power leakage problems approaching 3 GHz.

Any CPU designer has to make core design choices, and they are all trade-offs. In the P4's case, the trade-off was IPC vs clock speed. Clock speed did not scale as expected, hence it was a dog. If the unforeseen problems above hadn't happened, the P4 would have scaled to 10 GHz, and no, it would NOT have needed a refrigerator to cool it. The only reason those chips ran hot was the issue above.

And yes, it's the exact same thing with AMD today. They tried to automate their CPU layout to punch out designs more cheaply and cost-effectively than doing layout by hand. A different design choice, but still a risk, and it didn't pay off as expected. Bulldozer is the result.

The thinking was sound; the execution didn't live up to it, and reality differed from the theory.

Unfortunately, CPU designs are a major investment, and like Intel in the early 00s, AMD now have to live with it until they can spend the time and money to change direction. Hopefully AMD have what it takes to weather the storm. I'm not so sure they do, though.

Comment Re:Awesome (Score 1) 271

To clarify: I'm not claiming a 10 GHz P4 would be competitive today. I'm claiming that a 10 GHz P4 VARIANT in 2006, with tweaks to the original 2001 design and built on a then-modern process, up against the other things around IN 2006, would have wiped the floor. The plan was to have the P4 running at 10 GHz on the process tech available in 2005-2006, NOT what we have today in 2013.

Of COURSE modern CPUs have better IPC. They have billions more transistors than the P4, bigger caches, much smaller process tech, etc. The playing field is NOT LEVEL. Put a 10 GHz P4 up against a 2006-spec AMD chip (NOT something from today) and the story is entirely different.

IF that had worked, we wouldn't simply have 10 GHz Pentium 4s today. We would have CPUs clocked at 10 GHz or higher with additional execution units, multiple cores, etc.

But clock speed stopped scaling with process shrinks the way it had since the early 80s. There was a wall at approximately 3-4 GHz, and current CPUs of COURSE have much better IPC because they were FORCED in that direction: ramping up the clock and shrinking the process simply no longer works.

And again, the heat the P4 put out was a symptom of frequency scaling hitting a wall and Intel having to ramp up power just to get the thing stable at even 3 GHz. Recall that prior to the P4, CPUs were only hitting 1-1.3 GHz. That the first P4 hit 1.6 GHz quite easily (pretty sure people were overclocking them to 2+ GHz on release) was perhaps a misleading indicator of things to come.

Comment Re:Awesome (Score 1) 271

6 GHz is still WELL SHORT of where Intel was planning on running this CPU, on AIR. And yes, the extra clock speed would have made up for the IPC as it was. The whole point was that they thought they could ramp the clock high enough to more than compensate for the IPC penalty of horrendously long pipelines.

But reality differed from projections. Comparing it to modern CPUs is pretty pointless, as they all learned from the mistakes Intel made with the P4 back in 2000 or 2001. The heat was a byproduct of having to run a lot more power through the CPU than originally planned, to compensate for other issues, just to reach the speeds it did achieve.

If Intel had successfully hit 10 GHz with the P4 (i.e., if the unforeseen transistor leakage problems hadn't intervened), the desktop CPU market would likely look very different today.

Comment Re:Awesome (Score 1) 271

In other words... Intel planned to have a P4 running at 10 GHz by 2006 (I think?). Even if its IPC was half that of a Core 2 running at a contemporary 2-3 GHz, the 10 GHz P4 would still be over 1.5x faster.
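As a back-of-the-envelope check, single-thread throughput is roughly IPC times clock. The IPC figures below are purely illustrative assumptions (not measured values), and the model ignores memory latency and everything else:

```python
# Crude single-thread throughput model: performance ~ IPC * clock (GHz).
# The IPC values are hypothetical, chosen only to mirror the "half the IPC,
# triple the clock" comparison above.
def relative_perf(ipc, clock_ghz):
    return ipc * clock_ghz

p4_10ghz = relative_perf(ipc=1.0, clock_ghz=10.0)  # hypothetical 10 GHz P4
core2 = relative_perf(ipc=2.0, clock_ghz=3.0)      # Core 2 with double the IPC

print(p4_10ghz / core2)  # ~1.67x in the P4's favour under these assumptions
```

Under those made-up numbers the hypothetical P4 comes out about 1.67x ahead, which is where the "over 1.5x" figure comes from.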

The Core 2 onwards only exists because, even though Intel stretched the pipeline for 10 GHz, other issues cropped up (I forget exactly what they were; leakage or something?) and the P4 was left a dead duck. Intel salvaged a little performance from it with Hyper-Threading and various instruction set upgrades, but it wasn't really enough.

What amazes me, though, is that the current architectures we are still using in Intel land are still (admittedly distant) descendants of the P6 from 1995.

Comment Re:2013 AMD has a message for 2005 AMD (Score 1) 271

To a point, sure. But the purchase price would have to be massively different to have a significant impact on TCO. If we saved, say, 50% on purchase costs but the machines then took 2x the power, cooling and floor space, it would be a net loss. Purchase cost is a one-off; power, cooling and space costs are ongoing, and trending upwards.
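A toy TCO sketch makes the point. All dollar figures here are invented for illustration; only the one-off vs ongoing structure of the argument matters:

```python
# Toy total-cost-of-ownership comparison over a server's service life.
# Purchase is paid once; power/cooling recurs every year.
def tco(purchase, power_cooling_per_year, years=4):
    return purchase + power_cooling_per_year * years

cheap_hot = tco(purchase=5_000, power_cooling_per_year=4_000)    # 50% cheaper box, 2x power
pricey_cool = tco(purchase=10_000, power_cooling_per_year=2_000)

print(cheap_hot, pricey_cool)  # 21000 vs 18000: the "cheaper" box costs more
```

With these assumed numbers the half-price server ends up costing more over four years, because the recurring power and cooling bill swamps the one-off saving.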

Comment Re:Awesome (Score 1) 271

The pipeline was so long because longer pipelines make it easier to ratchet up the clock speed. Intel ran into other issues which precluded ramping past 3 GHz. So the trade-offs they willingly made with pipeline length were still there, without the clock speed benefit.

IF Intel had got the raw clock speed to 10 GHz like they planned (by willingly trading some IPC to get there), Netburst would have been a winner.

As it is, it didn't work out, and Intel paid the IPC penalty without getting the payoff. Lesson learned. HENCE, as you say, the Core series, which kicks its arse.

Comment Re:Awesome (Score 1) 271

Netburst only failed because the planned clock rates proved to be impossible. It was intended to be running at 10 GHz by 2006, if I'm not mistaken. The major issues with getting beyond 3 GHz (I had a 3 GHz machine in 2004, and my subsequent machines have been 2.4 GHz and 2.2 GHz, i.e., clocking SLOWER) were not foreseen by Intel.

If it had scaled to the intended clock speeds, Netburst would have been fine.

But it didn't. Intel tried, and it didn't work out. Them's the breaks.

Comment irrelevant (Score 1) 271

220 watts? Haven't they heard that power consumption is the issue of the day, both in mobile AND in the datacenter? If you double the power consumption of your servers, you need to double your UPS capacity, your batteries, your generator capacity and your cooling. That is not cheap. This chip is a non-starter for the non-AMD-fanboi demographic.

Comment Re:You can pry XP from my cold, dead hands (Score 1) 438

Or, you know... you could just upgrade the OS to one that will actually run secure versions of your enterprise apps. Even if you disable ALL network services on a Windows XP box, the user will still need to get data on and off it. And if they can do that, they can inadvertently or maliciously get malware onto the machine that exploits bugs in, say, DirectX, the kernel, or the filesystem.

The "solution" you propose is far more work than simply following the path of least resistance and upgrading to a supported release.

As far as security issues go, plenty of them originate from INSIDE the perimeter, so your claim of "oh, that's local network only" is a bit of a cop-out. Unless all you want to do with your Windows box is play Minesweeper.
