AMD's 64-bit Plot 531
ceebABC writes "In a long interview with eWEEK, AMD's CEO Hector Ruiz talks about struggling to compete with Intel, but more importantly about their upcoming 64-bit processors. He says that AMD's 64-bit chips will be comparatively priced to the 32-bit ones, and backwards compatible. He also thinks there will be a market for desktop 64-bit systems. Skip to the last page for the most interesting stuff."
Hmm (Score:3, Interesting)
Re:Hmm (Score:4, Funny)
Re:Hmm (Score:3, Interesting)
I can buy PC133 @ US$60 per half gig. For US$500, I can fill the address space of a 32-bit processor, yet a non-trivial home movie could occupy more than 4 GB in uncompressed form.
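A quick back-of-envelope sketch of that "more than 4 GB uncompressed" claim; the frame size and frame rate below are illustrative NTSC-ish figures, not anything from the article:

```python
# Back-of-envelope: uncompressed standard-definition video quickly
# exceeds the 4 GB a 32-bit address space can map.
width, height = 720, 480        # NTSC DV-sized frame (illustrative)
bytes_per_pixel = 3             # 24-bit RGB
fps = 30

frame = width * height * bytes_per_pixel   # bytes per frame
per_minute = frame * fps * 60              # bytes per minute of footage
minutes_to_4gb = (4 * 2**30) / per_minute

print(f"{frame:,} bytes/frame, {per_minute / 2**30:.2f} GiB per minute")
print(f"4 GiB holds only about {minutes_to_4gb:.1f} minutes")
```

So a couple of minutes of raw footage already saturates the whole 32-bit address space, never mind the rest of the movie.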
Re:Hmm (Score:3, Insightful)
Here's where consumers need 64-bit CPUs (Score:3, Insightful)
The first place where this will be useful is video editing. With the proliferation of MiniDV camcorders that have IEEE-1394 connections to desktop computers, many camcorder users are downloading video onto their computers for editing and creating home-made VideoCD or DVD-R discs. With 64-bit CPU processing we can now see the development of much more sophisticated (yet easier to use) programs that make video editing and VideoCD or DVD-R disc creation almost a snap.
The second place this is useful is still image editing. With the proliferation of digital still cameras with USB ports people are doing more and more image processing of still images before printing out the pictures. With 64-bit CPU processing we can see image-editing tools that can do image processing that is far more sophisticated than what even Photoshop 7.0 can do today, yet would be easier to use than ever.
The final place is games. 64-bit processing makes it possible to do extremely sophisticated graphics effects in real time without over-reliance on an expensive high-end graphics card; a lot of games that need fast motion with complex backgrounds could benefit from going to 64-bit CPU processing.
Re:Hmm (Score:3, Insightful)
More bits not useful to games? (Score:5, Informative)
Re:Hmm (Score:5, Informative)
That's the biggest bunch of crap that I've ever heard. There are a bunch of games that do fixed-point math because floating point does not give you enough accuracy.
Collision detection would certainly benefit from improved precision. Physics sucks in games because it is difficult to do it fast and accurately at the same time.
Epic has promised a 64-bit version of its games. I'm guessing they are doing so for a very good reason. And they are doing this despite the fact that they use a comparatively very robust physics engine in Karma.
I'm guessing you've never implemented a physics engine or even taken a Numerical Analysis course or read any books. So how about pulling your head out of your ass before disseminating FUD.
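For what it's worth, the precision gap the parent is ranting about is easy to demonstrate; here's a small Python sketch (the 16.16 fixed-point format is just an illustrative choice, not anything a particular engine uses):

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip a value through IEEE 754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# A 32-bit float has a 24-bit mantissa: beyond ~16.7 million units,
# it cannot even represent consecutive integer positions...
big = 2**24
print(to_f32(big) == to_f32(big + 1))   # True -- adjacent positions collide

# ...whereas 16.16 fixed point in a 64-bit integer keeps an exact
# 1/65536-unit resolution across a vastly larger coordinate range.
FRAC = 16
def to_fixed(x: float) -> int:
    return round(x * (1 << FRAC))

a = to_fixed(2**24)
b = to_fixed(2**24 + 1/65536)
print(b - a)                            # 1 -- still distinguishable
```

Which is exactly why wide native integer registers are attractive for collision and physics code.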
Re:Hmm (Score:3, Informative)
While x86-compatible CPUs have generally not been used in dedicated networking devices until very recently due to the cost to performance ratio, they have become a fairly popular high-performance embedded solution lately. Hammer should be an extremely attractive solution in the high-performance embedded space because:
Another nice factor of using large integers instead of floating point is that when you absolutely positively have to get the result back in the same number of cycles each time, you can do this. Math coprocessors are just that, coprocessors. I haven't kept up so I don't know just how fast you can expect things to come back from them these days, and if they are actually scheduled or not, but at least in the olden days you had to shovel the data at the math co, then query it to find out if it was done. One problem was that if you queried it too fast it might not have set the flags properly yet and you would get bogus results. Ah, x86 is so classy!
The address space will become significant to us all very quickly if we start doing entirely memory-mapped I/O. Isn't this an issue for the Hurd at the moment? While there are other ways to solve it (but who wants to deal with segmented addressing? not me!), there are certainly many advantages to mem-mapped I/O.
And finally, sure games do fine, but more power means bigger, shinier games with more gibs! Also, the reason GPUs have become so popular is that CPU speed wasn't growing fast enough to satisfy the desires of the game industry. Expect to see some more graphics-related processing done in the CPU for a while, namely multires (the reduction of vertices in a model one at a time with re-meshing in between, with the greatest number of vertices assigned to the appropriate models, usually determined by a scoring system, using very high-vertex-count models which may never be rendered with all visible vertices plotted EVER). Multires and the simplest of occlusion techniques are enough to make a scalable game which will look very good on even low-end hardware and still look fantastically better on high-end equipment. It does cost you CPU though, and I'm sure you can see where I'm going with this. Of course multires will be an inherent feature of a future generation of 3D accelerators, which will do even more for the developer and likely have even crappier drivers.
Also the memory bandwidth of Hammer doesn't seem all that outstanding, except that the controller is integrated into the CPU, so you can expect to do less waiting. The real advantages in terms of memory bandwidth will be in SMP systems. Of course I don't know too many people planning to go to Clawhammer who aren't planning to go to dual Clawhammer, but if they are more expensive than promised I'll be one sucker with only one of 'em.
What desktop users want to know.. (Score:4, Insightful)
Re:What desktop users want to know.. (Score:4, Insightful)
For desktops, you are right. However, a huge part of the 64-bit market is in servers, and the possibility of >4GB memory is a Big Thing. My SQL Servers will eat that much for breakfast.
Re:What desktop users want to know.. (Score:5, Insightful)
Nope. These days it's price. You can barely, oh so barely, tell the difference between 866MHz and 2.4GHz, and only then when running certain high-end games or 3D modelling packages. Now go over to Dell's site and price a 2.4GHz system. You can easily get something with 256MB and no monitor for US$800. Now upgrade to a 3.06GHz P4. How much does that 27% increase in clockspeed cost you? Just over US$1000. And what does it get you? Remembering that clockspeed does not translate directly to more CPU performance, maybe you're getting a 20% across-the-board improvement, but _man_ are you paying for it, both in cost and power consumption. And was it worth it, for 27% faster than "more speed than I know what to do with?" Probably not (though I realize that all hardware site weenies will absolutely insist that they can feel the difference when browsing the web on such a machine).
Re:What desktop users want to know.. (Score:2)
Tubes versus solid-state...
Beta versus VHS...
Vinyl records versus CDs...
Air-cooled versus water-cooled...
Re:What desktop users want to know.. (Score:2)
Re:What desktop users want to know.. (Score:3, Funny)
ECS K7S5A mobo, newegg.com, $54
512MB of kingston PC133 memory, newegg.com, $50
Maxtor 80GB HD, newegg.com, $110
Liteon 48-24-48 CDRW, newegg.com, $50
Chieftec 450W Tower, newegg.com, $55
SB Audigy + FireWire, newegg.com, $60
floppy, NIC, etc, newegg.com, $30
Shipping, Fedex + newegg.com, $50 (approx)
Building a sweet, powerful, linux-ready system for ~$515?
Priceless.
ps- If my math is off by a bit, sorry. And I never checked shipping, but newegg's is cheap. And, does anybody know about Turtle Beach Santa Cruz support under linux? Experience?
Re:What desktop users want to know.. (Score:2, Insightful)
But you DO notice the difference between an 866MHz processor and a 2.4GHz one in many ways. One of them is the time it takes for the computer to boot. But there are several other tasks that become much faster by going up in frequency... also remember that a 2.4GHz processor has DDR whereas an 866MHz one probably won't (haven't heard of 866s with DDR, although I may be wrong). Hopefully another factor that will show you a nice speed increase in the future is the new Hyper-Threading tech in the 3.06GHz Intel CPU.
The computer's overall speed is increased, and yes, you will notice the big difference when it comes to playing games, using programs like Pro Tools or doing Graphics, but that doesn't mean the rest isn't changed at all.
I have a K7 850 and an Athlon 1400 DDR and hell, do I notice the difference? Of course I do...
Decameron
Re:What desktop users want to know.. (Score:5, Insightful)
Sorry. Wrong. I went from a 1GHz Athlon to an 1850MHz Athlon XP. I use Windows XP. Programs opened faster. And when you're talking about Mozilla, or Office, or Photoshop, or Dreamweaver, or anything more complicated than Notepad, really, you DO notice this. Especially when you're opening and closing programs all day long.
When I come across a webpage designed with complex tables and CSS elements, the speed improvement is noticeable (e.g. my banking website, which I frequent, is complex and now renders much faster).
You can never have enough speed. You will always notice a difference, eventually, because the more power that becomes available, the more complex things become that we use frequently.
And believe it or not, but many people like to play new games. Not just "gamers." Regular people, too. My dad can barely turn around in Quake, but he loves wandering around in god mode and shooting things. He wants to play Doom3 when it comes out. He will need new hardware.
I'm just sick of this lame argument that people aren't interested in new processors because they can't tell the difference between 800MHz and 2GHz. Bullshit. They might be able to LIVE with the difference in speed, especially if money is tight, but you can never have "too much" speed.
Re:What desktop users want to know.. (Score:5, Interesting)
Computers will be fast enough when, for every conceivable operation, the system delay between a user request and the proper system response is less than the human ability to resolve it, i.e., instantaneous.
Not instantaneous, as in
Re:What desktop users want to know.. (Score:3, Interesting)
With up to 10^81 (roughly 2^269) atoms in the universe, and then what level of subatomic detail? 512-bit seems about right to me. Pity Moore's Law suggests we might have to wait until around 2674.
Unfortunately anyone who reads this today will be lucky to see a nice 128-bit computer in 96 years' time at 1.5 MegaPetaHz
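A quick sanity check on those exponents, in Python (the ~10^81 atom count is the poster's rough figure, not a precise one):

```python
import math

# How many bits does it take to give each of ~10^81 atoms in the
# observable universe its own unique address?
bits_needed = math.ceil(math.log2(10**81))
print(bits_needed)   # 270 -- so a 512-bit address space would be ample
```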
Re:What desktop users want to know.. (Score:3, Interesting)
If CPU performance is the _only_ differentiator, maybe. Back in the PII days, I had a 166MHz Alpha (21066) as a personal machine and, at work, a 300MHz PII; both systems ran Windows NT 4.0.
In just about every way, the PII was supposed to be two or more times better, but the Alpha system actually felt "snappier", despite the computation performance and bandwidth of the PII testing as 2x faster.
Re:What desktop users want to know.. (Score:2, Interesting)
Yes it will, due to the larger register file of x86-64. Epic ported UT2K3 to x86-64 and said they saw a 15% performance increase vs. the IA-32 version running on the same CPU.
Wow (Score:3, Insightful)
Re:Wow (Score:4, Funny)
Grandma and grandpa could check their email on a 16-bit computer. Don't forget grandpa's geri-porn, you need some horsepower for that.
640KB should be enough for anyone (Score:5, Insightful)
Re:Wow (Score:2)
Now, when you are a rich and famous IT star, you will regret saying that like this guy does :)
Microsoft has not changed any of its plans for Windows. It is obvious that we will not include things like threads and preemptive multitasking in Windows. By the time we added that, you would have OS/2.
-- Bill Gates, from "OS/2 Notebook", Microsoft Press, (c) 1990
Re:Wow (Score:5, Insightful)
Do you remember the opportunity brought about by the 386? Who needed that when all the modern applications ran fine with the 286? The 386 even broke some of the old 286 code. But it was still very useful to programmers, who could focus on quality (and bloat?) rather than worrying about how to confine data to 64K blocks. Almost 20 years later we are still benefiting from the flat memory model that finally came to x86 (flat up to 4 GB, that is).
If you have to ask the question of who needs it, then it's not you... yet. Sure, the first adopters are the corporate people who know they need it, as well as the "look what I have" crowd. But I'm pretty sure that there will be consumer applications that make 64 bits necessary once enough consumers have them.
640 TB should be enough for anybody.
Re:Wow (Score:2)
Re:Wow (Score:4, Informative)
The industry stalled for a while because NOBODY had introduced anything for the PC compatible industry that wasn't a clone of IBM's systems or peripherals until then. Finally, Compaq risked the company with the DeskPro 386 and IBM was in serious trouble.
Re:Wow (Score:5, Insightful)
That's a bit of a narrow-minded view, don'tcha think? Consider this: we don't know what we'll be doing with computers 2-3 years from now. If it turns out that PVRs are a killer app, for example, then suddenly 64-bit processors are interesting.
The "who really needs it for the most basic stuff?" argument is extremely tired. Lots of people buy their machines based on their potential, not what they can do with them today. Don't believe me? Then look at all the people who bought an XBOX solely because of its chipset and hard drive. They were (and are today) expecting to eventually buy games that blow them away.
If computers were strictly used for their most basic features (internet browsing, email, etc...) then 'internet appliances' would have been some sort of hit as opposed to the flop that they are. So please, put this 'how do I get my grandma to buy one?' argument to rest. The answer is she won't. But there is still a large market of people who do want/need 64-bit processing. You don't need for grandma to want one in order for the product to be a success.
Re:Wow (Score:3, Insightful)
You're forgetting something: What if Grandpa and Grandma want to view that shiny video email of their grandkids? And what if they want to play movie director in their copious free time and compose a video email themselves?
After all, today's crop of digital cameras already record mpg clips (about six seconds' worth before the CF card fills up), but it won't be long before flash RAM gets even cheaper and we start seeing 4/8 GB cards.
Once the processors are available, applications will be written to take advantage of the larger word sizes. There's no way to tell what will happen.
Heat. (Score:3, Interesting)
As much as I love AMD, my box is far too loud, and I'm too damned cheap to shell out another $100 for decently quiet fans.
Re:Heat. (Score:2)
I don't think anyone has a definitive answer for that question. However, you have to remember that the Athlon is an older part which is nearing the end of its life... Intel faced the same situation with the Pentium III beyond 1 GHz.
Silicon-On-Insulator (SOI) technology, which will debut with Opteron/Clawhammer, is supposed to reduce heat by around 15%.
P4 no longer cooler operating than Athlon (Score:3, Informative)
I have a dual CPU Athlon 2400+ box, 2GHz each, using Thermalright SLK800 heatsinks and 80mm adjustable fans set to 2500RPM. My temps are 41C/43C/42C (case/CPU1/CPU2) at the moment with about 25% CPU utilization. Power consumption (as measured by my UPS load monitor) is the same as the dual Athlon 1800+ chips (1.53GHz) the new CPUs replaced.
Big Bets on Table (Score:5, Insightful)
Both Intel and AMD have been betting big on 64 bit computing and it will be interesting to see how this plays out.
Itanium 1 was a flop. Itanium 2 has respectable performance, but is not IA-32 backward compatible, where AMD x86-64 is backward compatible.
I will bet that backward compatibility will tilt the balance to Opteron and that Intel will scramble to introduce a new chip Yamhill(?) designed to provide the backward compatibility that IA64 lacks.
Re:Big Bets on Table (Score:2)
Once AMD and Intel have 64-bit processors that are affordable and faster than their 32-bit products, I imagine apps will be optimized for both x86 and IA64 architectures. This could be by using separate binaries compiled for each, or just by writing for Java or
I'm not sure how the emulation works though. Does the CPU have to switch modes using a lengthy switching time, or does the emulator just pick up x86 instructions and translate them to IA64 instructions?
Re:Big Bets on Table (Score:3, Informative)
As I understand it, AMD's 64-bit processors actually have hardware support for the previous 32-bit instructions. I could be misunderstanding, but if I'm not, this naturally means that on 32-bit instructions the AMD chip will outperform Intel's emulation.
Intel is banking heavily on people finally ditching x86 for good. There are good reasons for people to ditch x86, but there is one good reason to keep it: Legacy Support. How important that is will depend on the person and their needs.
Re:Big Bets on Table (Score:3, Interesting)
They don't *WANT* to make money?!?! (Score:2, Interesting)
I confused!
Re:They don't *WANT* to make money?!?! (Score:5, Informative)
(a quote from first paragraph of the Forbes article [forbes.com] "[a] strategy of developing processors for a wider range of products outside computers
Re:They don't *WANT* to make money?!?! (Score:3, Insightful)
All the article said was that AMD saw the ridiculous waste of time in simply jacking up the speed of processors continually... We're up to 3GHz now... and what actually requires that? Not much... so why not spend the time building COOLER chips that can be cooled in a QUIETER way... in fact, why not ship your chips with a QUIET fan, like really QUIET (why am I shouting the word QUIET? Oh yeah, so I can be heard over my AMD with its noisy FAN!)...
Cooler... damn that would be nice... my media server, sitting in my entertainment cabinet... pumps out a lot of heat... it's ridiculous really... I got a relatively lowly Duron 1GHz and it's pouring the heat out.
Surely, now that they're up at 3GHz... rather than screaming towards 4GHz like mad things, why don't they work on making the 2GHz and lower cooler?
Benchmarks (Score:4, Informative)
http://www.aceshardware.com/
64 bits=$8=8 bytes etc??? (Score:2, Informative)
Re:64 bits=$8=8 bytes etc??? (Score:3, Informative)
"64-bit" refers to the width of the processor's registers and native integer word, not "how much data the processor handles at once". That is a function of pipelining, ALUs, branch prediction, etc. This can be proved by recompiling a 32-bit application with 64-bit flags: the application won't be "magically" twice as fast.
There is something else... a 64-bit app may even be *slower*, as the cache can only hold half the number of words, given an equal cache size. Cache misses are a huge performance hit these days, as main RAM is much slower than cache RAM.
Of course, the big difference between AMD and IBM is that the new 64-bit PPC970 doesn't take a performance hit switching between 32- and 64-bit applications. This has more to do with the PPC ISA than anything in the processor.
The only thing that 64 bits will give "normal" users is the ability to address a *huge* amount of LOGICAL memory. In most cases it doesn't make sense to make 64-bit versions of applications, due to the above cache issue. Also note that users will need more RAM for 64-bit applications, since the larger words take more space to store.
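The cache-footprint argument is easy to put numbers on; a minimal Python sketch (the one-million-element count and 64-byte cache line are illustrative assumptions):

```python
import struct

# Same logical content -- a million integer values -- stored as
# 32-bit vs 64-bit words ('=' forces standard, fixed-size layouts).
n = 1_000_000
bytes32 = n * struct.calcsize('=i')   # 4-byte words
bytes64 = n * struct.calcsize('=q')   # 8-byte words
print(bytes32, bytes64, bytes64 / bytes32)   # 4000000 8000000 2.0

# With a fixed cache-line size, a 64-bit build of pointer-heavy code
# fits half as many words per line:
cache_line = 64
print(cache_line // 4, "vs", cache_line // 8, "words per cache line")
```

Same data, double the footprint, half the effective cache capacity: that's the whole trade-off in two prints.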
For the Apple in all of us (Score:3, Funny)
What a shame for both Apple and AMD.
Especially since Apple has AMD support built in [apple.com]
64 bits sounds nice, but misses the point (Score:2, Interesting)
I think having 64-bit Linux without buying a SPARC, RS6000 or PA-RISC box will be huge for the enterprise. The rest of us will wonder why our apps still suck.
Will This be Linux's first killer app? (Score:4, Interesting)
Just a question.
Thanks for the replies
Re:Will This be Linux's first killer app? (Score:5, Informative)
MS have been quietly getting ready for 64 bit for at least 2 years; they've been shipping a 64 bit SDK on my MSDN disks for over a year. There are 64 bit NVidia drivers for WinXP-64. What makes you think MS isn't already there?
is M$ quiet about anything? (Score:3, Interesting)
Spare me the smoke and vapor. Don't you remember the sad story of Mica, errr, NT on Alpha [winntmag.com]? Loudly proclaimed, quietly killed; that's why I think they are not there. If you consider the number of bugs and holes in 32-bit M$ work, you might conclude they never arrived anywhere.
In the meantime, you can get Linux and BSD on Alpha and other 64-bit platforms:
Oh, it hurts so much to remember and think!
Windows runs in 64 bit (Score:2, Informative)
Re:Will This be Linux's first killer app? (Score:2)
A marginal OS on a marginal processor? (Score:2)
New apps (as in killer apps)? No.
New OS features (by going 64bit)? None.
Speed? Somewhat.
Since when did a little more speed make Linux a killer app? Also consider that if there is a market, Windows will most certainly ship a 64-bit version.
Kjella
Wow, the MPAA is *SO* screwed. (Score:2)
Good thing it's backwards compatible or all the studios would have to upgrade their writers too.
The article (Score:2)
Re:The article (Score:5, Insightful)
Yah - AMD will offer it to the consumer combined with motherboards from tier-1 manufacturers like Asus, Abit, IWill, Tyan, and so forth, all at an attractive price (read: the same price as the Athlon XP CPUs).
Intel, on the other hand, will keep their 64-bit CPUs out of consumer hands by pricing them above what most consumers are willing to pay, thus reaping a premium by selling them in servers through Dell and IBM (making even more money on cases and motherboards). There will be limited support for the CPU outside Intel's own motherboard offerings, and if you run with a hard drive, video card, or CD-ROM that has not been explicitly approved by Intel, then forget support (we've had this problem with Intel on some of their server motherboards).
Intel is taking the Cathedral approach, and AMD a Bazaar approach [tuxedo.org].
Over 10 years after DEC introduced Alpha .... (Score:5, Interesting)
This is 10 years after DEC introduced the Alpha architecture (in spring 1992).
The Alpha was fun to work with, not only because of its 64-bit architecture, but because of its clean, orthogonal instruction set and its outstanding performance.
Rest in peace
Re:Over 10 years after DEC introduced Alpha .... (Score:4, Funny)
CART MASTER: What?
CUSTOMER: Nothing. Here's your ninepence.
DEAD PERSON: I'm not dead!
CART MASTER: 'Ere. He says he's not dead!
CUSTOMER: Yes, he is.
DEAD PERSON: I'm not!
CART MASTER: He isn't?
CUSTOMER: Well, he will be soon. He's very ill.
DEAD PERSON: I'm getting better!
CUSTOMER: No, you're not. You'll be stone dead in a moment.
CART MASTER: Oh, I can't take him like that. It's against regulations.
DEAD PERSON: I don't want to go on the cart!
CUSTOMER: Oh, don't be such a baby.
CART MASTER: I can't take him.
DEAD PERSON: I feel fine!
CUSTOMER: Well, do us a favour.
CART MASTER: I can't.
CUSTOMER: Well, can you hang around a couple of minutes? He won't be long.
CART MASTER: No, I've got to go to the Robinsons'. They've lost nine today.
CUSTOMER: Well, when's your next round?
CART MASTER: Thursday.
DEAD PERSON: I think I'll go for a walk.
CUSTOMER: You're not fooling anyone, you know. Look. Isn't there something you can do?
DEAD PERSON: [singing] I feel happy. I feel happy. [whop]
CUSTOMER: Ah, thanks very much.
CART MASTER: Not at all. See you on Thursday.
CUSTOMER: Right. All right.
Re:Over 10 years after DEC introduced Alpha .... (Score:5, Interesting)
32-bit compatible = a temporary half-solution (Score:4, Informative)
The problems to be hurdled are:
1) Reliance on the fact that size of pointer is equal to size of int.
2) Reliance on a particular byte order in the machine word.
3) Using type long and presuming that it always has the same size as int.
4) Alignment of stack variables.
5) Different alignment rules in structures and classes.
6) Pointer arithmetic.
A lot of engineering (and developer re-education) work also needs to be put into not only these issues, but also designing the application so that it is actually getting the most out of each clock cycle.
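Pitfalls (1) and (3) can even be observed from Python via ctypes, which reports the platform C compiler's actual type sizes; a sketch, with the caveat that the exact numbers depend on the ABI you run it on:

```python
import ctypes

# Pitfall (1): code assuming sizeof(int) == sizeof(void*) breaks on
# LP64 platforms, where int stays 32 bits but pointers grow to 64.
print(ctypes.sizeof(ctypes.c_int))     # 4 on both common 32- and 64-bit ABIs
print(ctypes.sizeof(ctypes.c_void_p))  # 4 on a 32-bit build, 8 on a 64-bit one

# Pitfall (3): C's `long` is 32 bits on ILP32 (and Win64) but 64 bits
# on LP64 Unix -- another size that silently changes under your code.
print(ctypes.sizeof(ctypes.c_long))
```

Any struct layout, file format, or pointer-arithmetic trick built on those equalities has to be audited when the sizes diverge.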
Re:32-bit compatible = a temporary half-solution (Score:2)
But the advantage of Hammer is that you don't need to migrate ALL of your apps to 64-bit to get a serious performance benefit. With the IA64, the performance of 32-bit applications is terrible, so it's a poor choice unless most of your software is 64-bit.
Re:32-bit compatible = a temporary half-solution (Score:3, Interesting)
1) Reliance on the fact that size of pointer is equal to size of int.
5) Different alignment rules in structures and classes.
AMD is puny (Score:5, Funny)
"comparatively priced"? (Score:3, Funny)
AMD's 64-bit chips will be comparatively priced to the 32-bit ones
So, they're going to be twice as much?
heh.
re: Skip to the last page for the most interesting (Score:5, Interesting)
"eWEEK: What does it mean to you personally, though, when a Gateway or an IBM not just stop, but announce that they'll no longer be offering AMD as an option?
Ruiz: I think it's terrible, obviously. It's terrible. I think if you were to talk with Ted Waitt at Gateway, and ask him, "Why'd you do that?" and if he would really tell you why, it's a question of he's being bribed to do it. Now, he's got to look out for his own hide and the company that's probably in great difficulty has got to listen to the huge amounts of money that can help him do that.
But you know what I find amazing, think about the power, is that despite all that, which obviously we really get emotional about the fact that somebody like Gateway gets bribed into doing that, is that despite that, according to Dataquest last week, we're still holding a 19 percent share of the market. That to me tells me we're in the throes of breaking this open"
Hey Intel, see you in court! Of course, now that Intel, along with Microsoft, is backing a group to outlaw open source in the government, I think it's time for the open source community to boycott Intel. Why should our money go to a company that is now attempting to hurt Linux and open source? Because of these recent actions, I will NEVER buy Intel again!
Re: Skip to the last page for the most interesting (Score:3)
Maybe, maybe not. When Standard Oil undercut all its competitors by pricing its products BELOW production costs in order to drive them to bankruptcy and buy them out, that was ruled A Bad Thing and led to SO being broken up. There is a point where offering "special deals" is considered anti-competitive. If Standard Oil got nailed simply for offering product for too low a price, it's not unreasonable that Intel should likewise be nailed for offering product for a super low price, but only to companies that don't buy from Intel competitors. That's kind of shady territory there.
For example: BobComp buys Intel and AMD CPUs, so they get P4s for $35 each. JoeComp buys only Intel, so they get the "deal" of $30 each. If BobComp buys 1000 CPUs a year from Intel and JoeComp buys only 500, then it's clearly not a "bulk discount"; it's a "helping us ace out the competition" discount. Now if $30 represents a significant loss of profit margin over $35 for Intel, then I'd say Intel is edging into some pretty anti-competitive territory.
Re: Skip to the last page for the most interesting (Score:3, Insightful)
The flaw in your logic is that Intel's actually making a profit, while AMD is still, I believe, in the red. Seeing as how it tends to be difficult to turn a profit when your primary product is sold at a loss, I'll take a stab in the dark and guess that they're not actually selling any chips for below the production cost.
Also, don't forget that Intel's manufacturing technology is about three years ahead of AMD's. Their production costs are half of AMD's per unit.
A big problem with AMD chips, and something that I suspect is a not insignificant factor for the big OEMs, is that AMD builds fragile chips! If I need to build and ship x thousands of computers per day and half the chips I buy get cracked during installation, I'm effectively paying double the unit cost.
Remember the Alpha (Score:3, Interesting)
Remember the Alpha? 64 bit goodness all the way. Has been running Linux for years.
And for those old enough to remember... Microsoft did support Win NT on the Alpha just a few years ago.
As far as the software goes, both Linux and Microsoft are ready for 64 bit computing.
NT on Alpha (Score:2)
They still do (Score:3, Interesting)
Just to remind people why more bits is good.. (Score:4, Insightful)
2^32 addressing limits addressable HD space to 2 terabytes. "2 terabytes? But that's way larger than even enthusiasts use in their PCs, despite their larger-than-average needs." This ignores the fact that many companies have storage arrays that are already at 2 terabytes. Some work went into the 2.5 Linux kernel to increase the number of blocks that could be addressed, by moving internally to 64 bits. Storage needs are always increasing. If we're hitting 2 TB today, isn't it a good thing that we're moving to more bits?
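The 2 TB figure falls straight out of the arithmetic, assuming the traditional 512-byte disk sector; a one-liner sketch in Python:

```python
# 32-bit block numbers x 512-byte sectors = the 2 TB ceiling.
sector = 512
limit_32 = 2**32 * sector
print(limit_32 / 2**40, "TiB")    # 2.0 TiB

# The same arithmetic with 64-bit block numbers:
print(2**64 * sector / 2**70, "ZiB")   # 8.0 ZiB
```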
2^64 addressing is not the only benefit of the change. FPUs see additional benefit when they have more bits. More bits means more precision; this is very important and desirable, especially when working with numbers that have fractional components. For proper 3D rendering, physics models, and anything else that involves computing numbers that have fractional parts, more is better. When the FPU can handle a double in one clock cycle because it works natively on 64-bit IEEE floating point numbers, you will notice a performance boost in addition to the increased accuracy.
64-bit word operations mean that data buses can be slower, since each clock tick sends more data. 64 bits means you can do more, more flexibly, with your computer.
There will always be people who resist change, even when there is no reason to resist it. The same people are posting comments on Slashdot about how 32 bits is enough, and how happy they are with 32-bit applications. These are the same people who had to be carried, kicking and screaming, from their 286s to the new 386 and 486 machines, which had 32-bit addressing and data operations. Don't let these people hold back your exploration of new technology!
For those of you who are asking, "What about 64 bits? Will 64 bits be enough?": 2^64 is 2^32 times bigger than 2^32, nearly ten decimal orders of magnitude. 2^32 is roughly 4.3 billion (unsigned). 2^64 unsigned is 18,446,744,073,709,551,616, or roughly 18.4 quintillion. 4.3 billion goes into that number 4.3 billion times. 2^64 is certainly enough for at least a hundred years.
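The magnitudes, checked in Python (arbitrary-precision integers make this painless):

```python
import math

# How much bigger is the 64-bit space, really?
ratio = 2**64 // 2**32
print(ratio)                          # 4294967296 -- about 4.3 billion
print(round(math.log10(ratio), 1))    # 9.6 -- decimal orders of magnitude
print(f"{2**64:,}")                   # 18,446,744,073,709,551,616
```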
Re:Just to remind people why more bits is good.. (Score:5, Funny)
Famous last words?
Re:Just to remind people why more bits is good.. (Score:5, Informative)
Um, all current x86s already handle 64-bit IEEE double-precision floats natively (actually more like 80 bits, for "extended double-precision"). The FP register file has been this wide for quite a while.
There will be no performance or precision boost for floating-point math from moving the rest of the chip to 64-bit registers/datapaths.
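This is easy to confirm even from Python, whose float type is the platform's C double underneath on every mainstream system, 32-bit x86 included:

```python
import struct
import sys

# Python floats are IEEE 754 double precision regardless of whether
# the interpreter runs on a 32- or 64-bit CPU -- the x87 FPU has
# handled 64-bit doubles since long before 64-bit integer registers.
print(struct.calcsize('=d'))      # 8 bytes: a 64-bit double
print(sys.float_info.mant_dig)    # 53 mantissa bits, i.e. double precision
```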
Re:Just to remind people why more bits is good.. (Score:2, Informative)
I wouldn't be too sure about the 100-years part either. But it ought to be good for at least 10.
Re:Just to remind people why more bits is good.. (Score:3, Informative)
Miraculously, someone at Intel stowed the x86 crackpipe, preventing some sort of segmented/overlay nightmare like the one you describe.
Order of magnitude. (Score:3, Interesting)
Now, I don't know about you, but I only own a couple-hundred movies, and I only own a couple-hundred games. Even if they were the mega, mega high res I mention above, I'd still not use up more than a miniscule fraction of what I had available. That's why I think it'll last at least a century.
Desktop advantages of 64 bit (Score:2, Interesting)
Remember, many (most?) open source developers are private individuals and not huge corporations. Allowing individual open source developers to own an affordable 64 bit desktop machine will allow them to more effectively develop and debug the code that runs on the 64 bit servers.
It only seems natural that a developer, given a 64-bit system to develop and debug code on, is going to produce better 64-bit code. And we all want Linux (and the BSDs!) to be the best 64-bit platforms they can be, right??
Alas, the memory... (Score:2, Interesting)
However, each chip is only going to get a single DDR333 memory path. With all of this time and effort, and so much at stake for AMD, you'd think that they'd make sure that they did it right, and move to a dual-channel solution, or at the very least, a DDR400 solution - which will be a pretty standard offering when the Opteron/Hammer/Athlon64/Whatever is released.
Sure, it'll perform pretty well with a single channel of DDR333. But I'll bet it would perform MUCH better with more bandwidth. And compared to all of the design and development that they've already done, implementing a dual-channel memory controller really wouldn't have been any significant challenge.
So, I'm not nearly as optimistic. On the other hand, I'm not a skeptic yet. When they come out, I'll see how they perform. But I'm certainly not as excited as I used to be.
steve
How do you build one? (Score:2)
32 bits != 4 gig max (Score:5, Informative)
For example, the z80 and 6502 were 8-bit processors, but they supported more than 256 bytes of RAM (2^8 bytes). The 68000 and 80286 were 16-bit processors, but they supported more than 64k of RAM (2^16 bytes). That's because the 8-bit processors had 16-bit address busses, and the 16-bit processors often had 24-bit address busses.
The current Pentium 4 Xeon chip supports 64 gig of RAM, despite being a 32-bit processor.
64-bit computing means that you can hold a 64-bit quantity (a long long or a double) in a register. Also, you can load, store, or perform arithmetic on such quantities using one instruction, often in one clock cycle.
This offers very few benefits for the end consumer. Mostly it's about perception: consumers will perceive that a 64-bit chip is twice as good as a 32-bit one.
Re:32 bits != 4 gig max (Score:3, Informative)
One thing I'd like to point out, though: I've noticed that an awful lot of mathematics is being done using doubles (i.e., 64-bit floats) these days. It's partially laziness, but it's also really the case that 32-bit IEEE floats only give you 24 bits of accuracy. Doing math with doubles really cuts down on roundoff errors, so a lot of people switch to doubles and forget about it.
Re:32 bits != 4 gig max (Score:4, Informative)
However, the P4 actually has 32-bit addressing, with hacks (PAE) to address a 36-bit physical space, but that's what it is: a hack. The extra address space is not directly available to apps, and there is also likely to be a performance hit when using it.
Re:32 bits != 4 gig max (Score:3, Insightful)
Yes, that's true, but it's horribly hacky. Addressing your RAM in 4gb segments? It's enough to make any old-skool DOS coder cry.
There is no "desktop" market for 64 bit CPUs (Score:4, Interesting)
If you find you need that sort of mega addressing, chances are the app you need already runs on 64-bit Solaris. After that point it's up to the vendor (think Avanti Corp / Apollo) whether it's worth their while.
Remember, you need their application. Unless your app is home grown or you have some significant pull with a vendor, the port isn't going to happen.
The desktop is an afterthought. This chip was designed to be sold in quantities of 8 and higher in single large servers. Once they cut into that market, the economies of scale just happen to make it cheap enough for the desktop market to pick it up. They have a much better chance of getting it down there, with their built-in backwards compatibility keeping costs down. Alpha never hit that "sweet spot" for the volume to really bring down the price.
Now, don't think Intel is going to sit on its hands while AMD eats their lunch. They're more likely to drop an Itanium instruction decoder into an Alpha EV7 core and push that than follow with an x86-64 processor line. Itanium is just too big and costs too much at this stage of development to make inroads fast enough to stop AMD from gaining marketshare, but more importantly, mindshare. Intel would never take up x86-64; doing so admits defeat to the industry, i.e. you're not the leader anymore.
So to sum it up, Intel will either:
2 and 3 are much more likely than 1. You know which one I'd rather see happen :).
Either way it'll be a boon for the OS community and certainly make our (the Alpha community's) lives easier. The way I see it, even if Hammer is moderately successful, you guys will 'clean' most of the popular source code out there to be 64-bit clean, reducing our maintenance work by something like 80%. The only things we'll have to worry about are firmware, the toolchain, libc, X Windows, and the kernel. So please buy a *hammer and learn the joys of porting to 64 bits. If it proves too painful, please see the ld manpage for the "-taso" flag :).
Peter
Re:There is no "desktop" market for 64 bit CPUs (Score:3, Insightful)
After looking at your title and seeing no relation to your first paragraph, I knew I wouldn't have to read the rest of your post to know exactly where you went wrong. The "market" for a computer is not necessarily defined by what new applications it can offer. It can be defined by Joe Average coming home to his house carrying a huge box, telling his wife, "You've gotta check this out. It's got sixty-four bits!"
(Chances are he's never worked with a "computer" in his life, and thinks he'll have to assemble all 64 pieces manually.)
Re:Big deal. (Score:2, Informative)
Yeah, and I have a 128-bit graphics card. (I know, they have like 100 Mbit Ethernet cards now. :) ) However, the GPU and the CPU are totally different. The graphics card has more bits, but obviously it doesn't run as fast as the CPU. All it does is make your fragfest a little more purty by letting you see the giblets all over. Having a 64-bit CPU is quite different: security-wise, code-wise, and speed-wise. If you have a 64-bit 2 GHz processor and a 32-bit 2 GHz processor, the 64-bit processor is going to be much faster. This speeds up the whole system, not just the rate at which you make giblets fly.
Re:Big deal. (Score:5, Insightful)
Ehrmm, no. If it were that easy we would all be using 64-bit by now. 64-bit chips have historically been faster because they belonged to a better group of architectures called RISC; the new AMD 64-bit parts will be faster not because they have more bits but because AMD has upgraded the architecture and added more registers.
The number of bits is as meaningless as counting the number of seats in a car: twice as many seats doesn't make a faster car. In fact it makes the car harder to design to be fast, and the same goes for 64-bit processors.
Re:Big deal. (Score:5, Informative)
Opteron's extra registers help.
64-bit calculations are easier; they don't have to be split into multiple 32-bit parts.
So...a 32-person bus is just as good as a 64-person bus? It may be harder to design and build, but when you have to move >32 people it's nice to have that big of a bus running around.
What I'm saying is, being 64-bit DOES make you faster. Not twice as fast, but definitely faster and more powerful.
Re:Big deal. (Score:5, Insightful)
That's not exactly accurate. A 64 bit processor has a large data pathway, and is more comparable to a roadway than a car. The cars are the data, and a 64-bit roadway has twice the space for cars (data) on it, which is where the extra speed is. But I do agree with you otherwise.
Re:Big deal. (Score:3, Funny)
As far as I know, SISC (single instruction set computing), typically embodied by the instruction SBN (subtract and branch if negative), is only used as a joke, in the same manner as Intercal and Malbolge.
Oh, you probably meant CISC, never mind...
Re:Big deal. (Score:4, Informative)
RISC = Reduced Instruction Set Computer
CISC = Complex
The basic idea of (most) RISC chip designs, such as the MIPS, Alpha, PowerPC & Sparc, was to have a large number of general purpose registers, fixed length instructions that could only refer to those registers, and only a handful of instructions that specifically read/wrote to main memory (which is why they're also referred to as 'load/store' architectures). This simple design allowed them to push clock speeds without too much trouble. RISC processors also adopted superscalar designs (having multiple execution units, allowing the execution of multiple instructions 'simultaneously') before their CISC counterparts.
In contrast to the simplicity of the RISC systems, there are the CISC chips, such as the x86 and the old VAX processors, which tried to make their instructions resemble high-level languages, as well as having a smaller number of registers, many of them with a special purpose. With variable length instructions, and many different modes of operation for each instruction, the CISC methodology generally resulted in much larger, more complex chip designs that were harder to speed up, pipeline & make superscalar.
To compare the two, let's take a simple operation, such as taking two numbers from memory and adding them together. A generic RISC system would do something like:
1) load 1st number into Register 1
2) load 2nd number into Register 2
3) add the value in R1 to R2, putting the value in R3
4) copy the value from Register 3 to memory
where a CISC chip would more likely do something like:
1) add the value at memory location 1 to the value at memory location 2, and store it in a special Accumulator register
2) copy the Accumulator register back to memory
The difference being that where the RISC machine only had one addition operation (register+register->register), the CISC machine would have a handful of them, depending on where the data came from (memory (using multiple forms of reference), registers, constants, and various combinations).
In the early 80s, the RISC/CISC debate was a hot one in academia, and RISC won out there, by virtue of its simplicity and ease of improvement. By the mid 80s, the debate was starting again in industry, as a number of RISC chips entered the marketplace, where Intel's x86 architecture won by virtue of the IBM PC.
The whole debate is pretty much a moot point now, since Intel's new x86 chips have RISC cores wrapped by a thin layer that translates the complex instructions. As an added bonus, the new 64-bit x86 systems should be adding a bunch of extra registers, further negating the penalty of the architecture.
Re:Big deal. (Score:5, Informative)
No. That's a myth. As it stands, Pentiums for many years now have sported 64-bit buses and 64-bit FPUs (well, 80-bit FPUs actually), so we're not talking about bus size and FPU width. We're talking about:
1. All addresses being 64-bits.
2. All internal integer registers being 64-bits.
For #1, realize that this is going to greatly increase the data size of many applications. The larger the data size, the higher the chance of cache misses. In general, this is a loss, not a win.
For #2, realize that some integer operations are O(N) where N is the number of bits involved. 64-bit multiplication and division are slower than the same 32-bit operations. Period.
The gain with 64-bit processors is one of address space and nothing more.
Re:Big deal. (Score:2, Interesting)
Wouldn't the chance of cache misses depend on the caching policy? How does the data size matter? If your policy is good, then your misses will be rare. Otherwise you're screwed even if it is 8-bit.
The gain with 64-bit processors is one of address space and nothing more.
Which includes better behaviour for those programs that have to fake larger address space. That would be a speed increase.
Re:Big deal. (Score:5, Informative)
wouldn't the chance of cache misses depend on the caching policy? How does the data size matter?
Data size matters because a program will typically access a fixed number of working variables, not a fixed amount of data. If a program's working set size stays at, say, 1000 words, and you move from a 32-bit to a 64-bit architecture, you need a cache with twice as much storage space to hold the working set without thrashing.
There's easily enough die area to double the sizes of the L1 and L2 caches; the problem is that it slows down cache access (more latency cycles fetching something from L1 is a Bad Thing).
Certain types of load work with constant size instead of constant word count, but most of those deal with working sets large enough that you'll thrash no matter what.
The gain with 64-bit processors is one of address space and nothing more.
Which includes better behaviour for those programs that have to fake larger address space. That would be a speed increase.
Nothing running on x86 will do that. Unless you're running old DOS programs in real mode, you're already working with a flat address space. Typically 2 gigs of this is available to user programs (with the rest being mapped to kernel or device space). If you have a problem with a working set larger than 2 gigabytes, you already have a Sun/$other_vendor machine to solve it on.
Larger address space targets the _future_ problem of desktop users who want many gigabytes of memory.
A fringe benefit is being able to more efficiently map multi-gigabyte files into memory space, but performance for this kind of task is limited by disk latency and controller bandwidth, not memory architecture.
FUD disguised as a technical comment. (Score:5, Informative)
For #1, realize that this is going to greatly increase the data size of many applications. The larger the data size, the higher the chance of cache misses. In general, this is a loss, not a win.
Furthermore, measurements by AMD indicate that op-code size did not increase with the expanded instructions, but actually *decreased*, because the additional registers reduced the typical amount of spill/fill code emitted.
Therefore there is no additional cache pressure. The "code bloat" problem remains solely in the hands of the software developer, and is *NOT* worsened in any way by Hammer.
For #2, realize that some integer operations are O(N) where N is the number of bits involved. 64-bit multiplication and division are slower than the same 32-bit operations. Period.
The reason AMD is able to do this is that arithmetic and logic operations can largely be implemented in a "more gates for more speed" fashion. They are closer to O(ln(N)) than O(N). But at this level of circuit design, you don't necessarily think in those terms (since N is constant, everything just looks like O(1)); these high-speed circuit designers worry about other technical things, like "latch speed".
The 64-bit integer divide may be a little slower; however, again, you need to explicitly use 64-bit ints in your software, and division is a comparatively uncommon operation.
Although I don't know that it's related to SSE, it should be pointed out that Epic (as in the video game company) has ported the Unreal engine to x86-64! Like most people, I was quite surprised that they did this; however, they apparently found it worthwhile.
Do not underestimate the upside of going to 64 bits in the way that AMD has done it. They have literally made it a no-lose scenario -- that alone should spur (mostly new) application developer interest.
Re:Big deal. (Score:2, Funny)
gotta luv it (Score:2)
Re:Bug deal (Score:2, Funny)
Re:Bug deal (Score:2)
64 bit AGP what huh? (Score:2)
Re:Microsoft Quote, and Kernel Dev Question (Score:2, Informative)
Look for SuSE's Andi Kleen in the release-notes.
fpg
Re:Well... there was Alpha (Score:3, Insightful)
Really they should have continued the Alpha instead of creating a new architecture... The Alpha is the cleanest of all the 64-bit architectures and has always been the most performant; plus, by using an existing architecture you would already have a software and user base.