Yet Another Serial Graphics Bus From Intel 193
ottotto writes: "Techweb has a story about Intel's High Speed Graphics Initiative. After discussing another doubling of the AGP, VP Pat Gelsinger said "The next part of that road map is AGP8x, an evolutionary step from AGP4x, to be followed by a future serial graphics bus." ANOTHER serial graphics bus? Isn't the upgrade path to IEEE 1394b (800 Mbps FireWire) and beyond sufficient? Is this, along with the USB 2.0 spec, another
way around giving any credit or royalties to Apple?" I suppose companies have to make their plans somehow, and new products are better than living in the 1960s forever. But sometimes these "roadmaps" (which often turn out to be more like directions scribbled on the backs of napkins) seem to smack of planned obsolescence. Do you ever skip the current latest/greatest because you know what's around the corner?
A move by Intel to high clock, serial technologies (Score:2)
Re:blech! (Score:1)
The most controversial part of the headline has virtually no information to back it up.
no need for video cards (Score:2)
Unfortunately this does mean that a standard programming API for the video hardware on this bus would be needed to avoid making monitors not universally useful due to device driver issues.
At 133 MB/sec, my current PCI Matrox card is working just great and will be for a couple more years. The good old HD-15 connector is here to stay forever on CRTs.
Re:i never buy the latest and greatest (Score:2)
Are you deaf? The Harman Kardon speakers can't touch the Klipsches on some Compaqs and are appreciably worse than my ACS495s. (Not to mention the fact that they lack a subwoofer!)
Could it be the.. (Score:1)
Re:FireWire is not affected... (Score:1)
USB 1.0 and 2.0 require that a computer is somewhere on the bus. FireWire does not. The idea behind FireWire is that you will eventually unpack your new home entertainment system and plug the DVD player into the TV with a FireWire cable, and the TV into the amp, and the amp into the speakers. Then you're done and they all talk to each other. If you screw up and go TV-amp-speakers-DVD it doesn't matter. Adding a device just means plugging the new one into the last one (until you reach 63). You also don't need to add a hub every four devices or whatever.
> Due to availability of devices?
That's another reason. Every DV camcorder and VCR on the planet has a 1394 port. It is THE way that DV moves between devices, including computers.
> I'm a big fan of Apple and their technology, but
> USB 2.0 looks more attractive than the FireWire.
> Apple should do something on this.
Not sure why you think USB 2.0 looks attractive, other than the fact that it speeds up the USB bus so you can print and scan at the same time. Things are more exciting in the FireWire world with the DV stuff and with 1394b coming out soon. Apple will add USB 2.0 to future Macs just like they added USB 1.0 to past Macs. They are very aggressively behind USB. In 1998, a Mac user could choose from a handful of ADB joysticks, but now you have your pick of any USB joystick in the industry, without even requiring a special Mac driver (USB Overdrive on the Mac just works with all keyboards, joysticks, and mouses). It's been fantastic for Apple.
Re:All Computer Companies Hate Us (Score:2)
It's obvious once you understand basic economics - I'll explain it for you.
First, in order to simulate the effects of years of economic training, drink a couple of shots of Whiskey (I'm using Knob Creek bourbon).
Ok - ready?
Let's say Intel sells the good chips at P1, and has a demand of D1 at that price. They are making a total of (D1)*(P1)=$1.
There is a set of people (D2) who aren't willing to pay P1 for the good chips - they want something at a lower price, P2.
Intel could drop the price of the good chips to P2 (where P2 < P1), and make (D1+D2)*P2 = $2. In some cases $2 > $1, and this is the action they should take.
Now, there is a third option - Price Discrimination:
Intel releases a new chip, which they have crippled in some way.
For the majority of the people who are willing to pay full price for the good chip, it is of no use to them, but for many of the people who want something cheaper, it is barely usable.
Now the equation for how much money they make looks something like this:
(D1 - D1*e1)*P1 + (D2 + D1*e1)*P2 = $3
(e1 is the fraction of people who will buy the inferior chip instead of the good one)
Now, the trick is, by keeping e1 low (by making the chip inferior to the good chip) and by adjusting the prices carefully, $3 can be made much greater than $1 and $2.
Make sense to you? If not, take another drink and read it again...
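For the arithmetic-inclined, the three pricing strategies above can be plugged into a quick script. Every number below (prices, demand, the defection rate e1) is made up purely for illustration — these are not Intel's actual figures:

```python
# Toy price-discrimination model with illustrative (made-up) numbers.
P1, D1 = 300.0, 1000   # price of the good chip, and demand at that price
P2, D2 = 150.0, 1500   # lower price point, and the extra buyers it attracts
e1 = 0.10              # fraction of full-price buyers who defect downward

revenue_one_price = D1 * P1                     # $1: sell only the good chip
revenue_cut_price = (D1 + D2) * P2              # $2: cut everyone to P2
revenue_discriminate = (D1 * (1 - e1)) * P1 + (D2 + D1 * e1) * P2  # $3

print(revenue_one_price)     # 300000.0
print(revenue_cut_price)     # 375000.0
print(revenue_discriminate)  # 510000.0
```

With these numbers, selling a deliberately crippled cheap chip alongside the good one beats both single-price strategies, which is the whole point of keeping e1 low.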
--
Re:Memory Mapped Graphics interface is obselete. (Score:2)
Actually, this is quite possible with today's hardware using texture compression. The problem is that most people do not have the most current generation of 3D cards, and game designers need to make games that run on as many machines as possible.
A 1024x1024 16-bit texture is two megabytes.
If you want 32-bit it's four megabytes.
Apply S3TC texture compression, and voila, it's under 700k. If you use 3dfx's texture compression scheme, your four-megabyte texture is a mere 512k. Most of the time you won't even need textures as large as this, since you can do trickery with detail textures.
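Those sizes are easy to sanity-check. A minimal sketch, assuming a DXT1-style S3TC rate of 4 bits per texel (the exact ratio depends on the format variant):

```python
# Back-of-the-envelope texture sizes for a 1024x1024 texture.
texels = 1024 * 1024

uncompressed_16 = texels * 2   # 16 bits/texel -> 2 MB
uncompressed_32 = texels * 4   # 32 bits/texel -> 4 MB
s3tc_dxt1 = texels // 2        # assumed 4 bits/texel -> 512 KB

print(uncompressed_16 // (1024 * 1024), "MB")  # 2 MB
print(uncompressed_32 // (1024 * 1024), "MB")  # 4 MB
print(s3tc_dxt1 // 1024, "KB")                 # 512 KB
```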
I don't think we'll ever see any gamer-level graphics cards pushing all textures over the bus, but I don't think we'll be seeing many cards with more than 128MB of RAM either. It might even go down a bit, but graphics cards will always have a bit of RAM that's faster and more expensive than system RAM, for the same reason CPUs will always have cache.
A penny for your thoughts.
Re:AGP all over again (Score:1)
Personally, I believe the capalert site is a misguided attempt by Christians that only ends up being used as a tool to mock Christians.
Why a site designed to promote choosing safer programs for children even bothers to review "R" movies is beyond me; that rating was designed to help exclude the underaged, and in the last year or so it has been rigorously enforced (some areas have enforced it for quite some time).
Re:All Computer Companies Hate Us (Score:1)
> if they could
Although everything else in the comments to this story seems to be misinformed bullshit, at least this much is true.
Well said, and thank you!
Re:Underestimating AGP (Score:1)
AGP sucks (for me) because there is just one fucking AGP slot. I have three video cards in my PC. What the fuck am I supposed to do?
Maybe rethink your video strategy? <ducking>
I would have thought that something like an ATI All-in-Wonder plus a decent 3D card would get you all you wanted. Hell, I think that both Matrox and NVIDIA have combo cards, and if not, I am positive that there are very expensive commercial-grade frame grabbers which could capture your choice of composite/S-Video/maybe even RF-modulated input at XSVGA resolutions; tack that on the AGP and add your Voodoo3 for 3D.
AGP is yet-another-kind-of-driver adding to the general instability of PC hardware.
That's bullshit, plain and simple. AGP is a wonderful idea compared to some of the alternatives from a cost perspective.
I cried out loud when moto did the altivec.
Why, because you would never use it? I can think of a dozen things I could use extremely fast vector mathematics for. Yeah it takes code to implement it, but that's the problem with computers... you can't think of everything beforehand.
Same story then with the NeXT DSP.
I don't know anything about this but it sounds like they put the DSP on a slow bus or crippled it somehow. A decent DSP (those new TI DSPs look pretty hot) will blow the shit out of your 1.5GHz PIII in doing straight mathematics, which was why DSPs were created in the first place... Fast on math, not so fast everywhere else.
PCs are general-purpose hardware. They should stay that way. Each time something specific is done, it is doomed to failure (VLB, anyone?).
I disagree. PCs should cater to the ubiquitous "general user". That doesn't necessarily mean general-purpose hardware. And FYI, I am certain that VLB was actually a better bus than PCI, but since PCI was smaller and had more money behind it, there were certain politics involved in quashing VLB.
Re:Standard? What is this strange word? (Score:1)
I would strongly disagree that it is "much more powerful". One of the many reasons why I like my macs is that Apple is a "widget" company. They sell you the whole widget and it's a fully-integrated widget. Don't get me wrong, I love my linux box... but it's a Toshiba 4220 SatPro and it is so insanely non-standard in everything that the Linux install is an all day affair of driver-hunting, PnP-disabling and kernel-recompiling. As a geek(tm) I'm okay with that. But I'm also a "regular user" a lot of the time and the joy of knowing that the hardware and software are designed for each other in harmonious widget-ship can be wonderful. And, really, it's beefy hardware too. Seriously. Head to your local mac shop and open up a G4. The easy-access case alone makes me scream in joy (they don't let me into a lot of computer stores any more)... and they're more expandable than my dad's Dell...
Re:Intel sucks (Score:1)
why not just use fast DDR RAM as main memory? (Score:1)
NVidia should take a whack at Intel by integrating an x86-compatible CPU onto the next GeForce-series cards. Make it possible for OEMs to ship a system that doesn't have a traditional motherboard, but rather just a video card with a CPU on it to handle the general-purpose computing tasks.
The 'motherboard' would then be a compact passive backplane with disk controllers, Ethernet, and USB ports, and you'd upgrade your CPU when you upgrade your graphics card.
I'm sick of watching Intel dick around. The whole 'Slot' debacle is laughable - they deliberately make a 'slotted' CPU to try and slow down AMD/Cyrix, then they go back to making socketed CPUs because the slotted versions are too expensive.
Now it's Rambus - they try to force everyone to use Rambus, but then they go back to SDRAM because Rambus is too expensive.
The Celeron has exactly the same cache-on-die technology as the supposedly high-end Xeons, yet the Xeon is supposed to be expensive because it offers higher performance. Why? It has exactly the same improvement over the P2 as the low-end Celeron does. 'Oh yeah, but the Xeons don't have the SMP support burnt out of their dies.'
Intel is stumbling blindly with the P4 and the Itanium. Even Joe Average knows that the P3 does not give anyone a 'better internet experience', and the difference between a 600MHz Celeron and a 1.3GHz P3 is unnoticeable during day-to-day tasks.
The Itanium will probably be outperformed by whatever the latest Celeron is when it finally gets released, and the P4 is just another 'Pentium'. Personally, I haven't noticed any difference in speed between the P2-450 I used to have and the P3-500 I have now at work.
Clock for clock, I doubt the P3 is significantly faster than the P2, and I doubt that, clock for clock, the P4 will be either.
The last two PCs I purchased had Intel CPUs. The next one definitely won't.
Re:AGP all over again (Score:1)
AGP is cool whether you like it or not. Sure, the AGP texturing thing kinda busted, but the extra bandwidth is always welcome. The best part of AGP, however, is that it allows you to blit from system to graphics memory using the hardware blitter. (A major turn-on for DirectDraw freaks like me.)
Intel really doesn't care about your segment. In the consumer arena, gamers are a very powerful segment. They're the ones who buy the 1GHz procs and the $500 graphics cards. Intel figures that if you keep the gamers on your bandwagon, with little cost to everyone else, then why the hell not?
Newsflash! (Score:3)
The hardware you buy will in fact become obsolete before you can "give" it to your kids. Sorry to disappoint you after that pep talk you got from the local big-box salesman about how "that top-end computer will last you for 5 years!"
Re:it's written 'Obsolescence' (Score:1)
Re:i never buy the latest and greatest (Score:3)
2) They support Linux.
3) Yeah, but today's mediocre is tomorrow's Mac hardware.
4) If you can live with a $400 system, then more power to you. Some people actually NEED the extra power, and will pay for it. Question: Would you buy a Porsche?
5) It usually does. Either way, you get bragging rights.
6) I'm a consumer swayed by 40fps in 1600x1200 Quake III.
7) To fill my needs, my system needs a lot of power. Live with it.
PS> Before y'all get all hot and bothered about that Mac comment, consider it. Right now, Macs are up to AGP2x and Radeon cards while PCs are at 4x GeForce2 Ultra cards.
Re:wrong focus (Score:2)
If you can find me a RAID array that does 500+MB/sec (the speed of 66/64 PCI) on a PC system, then call me. Otherwise, go talk to the Sun people.
Gamers don't have dual monitors or RAID. The practicality of the matter is, that 1GHz procs, AGP8x, and GeForce2Ultras are aimed at gamers. Gamers run the high-end of the processor business and that's just the way it is.
The `serial graphics bus' is not like firewire. (Score:1)
The PCI bus committees are planning to convert to a serial bus as clock speeds increase. (Timing skew becomes unmanageable even for the short runs inside a single computer, and pins and connectors cost money.) AGP is mostly a specialized PCI and will have many of the same issues as speed increases.
This `serial graphics bus' will likely be a couple wires from your CPU (or bridge chip if they still exist) to the graphics processor. IEEE 1394 will still be there moving realtime video between video devices and maybe data to and from external devices.
USB 4.0 will be there on your wired keyboard and Intel will be swearing they will replace everything else `real soon now'.
the short answer is... no (Score:2)
Re:Why faster graphic busses are needed (Score:2)
Re:Firewire? (Score:2)
Re:The AGP4X issue (and others) in a nutshell (Score:2)
Until Intel can revoke their contract with Rambus (or until Rambus finally kicks the bucket), we will be damned to partially faulty chipsets until the wave of corporate greed and the proprietarization of standards has ended. Right now, P3 builders are stuck with either the Rambus 820 ($1500 just for the board, chip, and ram) or a legacy of partially faulty chipsets from VIA.
The only non-Intel chipset I would ever consider buying would be NVidia, if they actually do make one for the P4. They got it more than right with video chipsets, just wait and see how they do with the mainboard chipsets.
As the original author's signature states ("Let's put out [a] sh!tload of new technologies, let's fix the hardware incompatibilities or bugs issues later."), companies are infatuated with releasing new hardware in order to maximize profits. They then decide to fix the arising problems with virtualized drivers which lag worse than Java.
Just think about it: the US Embassy would never think of hiring a team of German, Ukrainian, Swahili, Spanish, and Russian translators to aid in a conversation with French diplomats.
So why are driver engineers using a horde of inefficient languages and subroutines to build drivers that would work better if the compile job was centralized under one style of coding and geared toward efficiency?
These are the things that companies like Creative, AMD, and Adobe don't want us to think about.
Re:i never buy the latest and greatest (Score:1)
Of course, I am a liberal environmentalist, but I don't think that has anything to do with it (how many liberal environmentalists buy computers OR cars to begin with?)
Vote Nader [votenader.com]
Re:Perspective Re:i never buy the latest and great (Score:1)
(Note: Don't think I'm attacking you, just the opposite. I'm just very opinionated and I had no idea who to reply to, to say this.)
Re:Memory Mapped Graphics interface is obselete. (Score:1)
Re:Underestimating AGP (Score:2)
BTW, a switched-fabric bus should be coming to PCs in the next few years - it's called InfiniBand.
read between the lines (Score:2)
High speed graphics, right? What they really mean is "faster porn".
Re:All the criticism (Score:2)
I know linux runs on other chips. The trick is to get the software companies to port to other chips... that is a battle that hardly ever can be won. In some cases Linux, without the apps, is just another whiz-bang-isn't-that-cool sort of OS.
Re:All Computer Companies Hate Us (Score:1)
--
enough with the graphics already.... (Score:2)
For the majority of computer users, graphic speed is *not* an issue anymore. If you want to, you can have Q3 running in 1600x1200 at very playable frame rates....
What we need is faster disk I/O, but as Intel knows, it's just not as sexy of a thing to put in press releases.
Re:Serial Graphics interface - It will never compe (Score:1)
The point I am trying to make, which isn't getting across, is that the Slashdot editor thought 800 Mbps FireWire (IEEE 1394b) made sense and was fast.
Serial for graphics does not make sense, and is not fast.
All Computer Companies Hate Us (Score:3)
Just about everyone would be a "Microsoft" if they could: Intel intentionally crippling Celeron 2s so as not to offer a reasonable price point vis-a-vis the Pentium 3; Rambus; Pentium Pros could go 4- or 8-way, while with the P2 or P3 you can only go 2-way - you need to pay the Xeon tax to go more than 2-way SMP. IBM used to be the "Microsoft". Sun's silly Java tricks (standardize, or not... let us get back to you), etc.
So, this is mildly on topic, but it just shows how we shouldn't be surprised by stupid market leader tricks.
matt
i never buy the latest and greatest (Score:1)
1: Drivers sux
2: no multi os support
3: todays greatest is tomorrows junk
4: its way overpriced - im not paying $400 for a graphics card, i can build a computer for that
5: it rarely lives up to the hype
6: im not a consumer to be swayed by flashy FUD-filled ads
7: my 4 systems fill my needs
Re:Firewire (Score:3)
Re:Give me a break (Score:1)
Serial Graphics Bus (Score:1)
Think about it: Intel wouldn't slip up details of something so far away. If they were working on a competitor for 800 Mbps FireWire, why on earth would they call it a Serial Graphics Bus? The only reason to call a bus a graphics bus is if the bus will be doing high-performance 3D graphics, and that suggests a new interface for graphics accelerators, not an interconnect to compete with FireWire on any plane. (FireWire is far too slow, even at 800 Mbps, to do 3D graphics. For instance, AGP 8x is 16384 Mbps, and even that is a huge bottleneck on the GeForce. If texturing starts happening over AGP when your video card runs out of RAM, you're in for some big slowdowns.)
Sorry, but I don't think they are that dumb. It's like USB 2.0: it is designed to reduce the strain on shared bandwidth when you have many device connections, not to provide a DV connection to compete with FireWire. I've never heard of a camera with a USB 2.0 connection on it, but I have had all my USB devices slow to a crawl when I print a page and scan at the same time.
Anyway, I thought Intel was friendly to IEEE 1394? Why would they really want to compete?
My thoughts anyway. Don't take my words for gospel; I've been known to be wrong.
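The bandwidth gap being described here can be roughed out numerically. The figures below are nominal peak rates (and the AGP entries assume a 32-bit bus at a 66 MHz base clock), so treat them as ballpark, not gospel:

```python
# Nominal peak bandwidth of a few buses, in Mbit/s (rough figures).
buses_mbps = {
    "USB 1.1":          12,
    "FireWire (1394a)": 400,
    "FireWire (1394b)": 800,
    "AGP 1x":           32 * 66,      # ~2,112 Mbit/s
    "AGP 4x":           32 * 66 * 4,  # ~8,448 Mbit/s
}

for name, mbps in sorted(buses_mbps.items(), key=lambda kv: kv[1]):
    print(f"{name:18s} {mbps:6d} Mbit/s (~{mbps // 8} MB/s)")
```

Even 1394b at 800 Mbit/s is an order of magnitude short of AGP 4x, which is the poster's point: a peripheral bus and a graphics bus live in different performance classes.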
Serial Graphics interface - It will never compete (Score:1)
Why do you think the graphics chips boast 256-bit-wide memory interfaces? AGP is only 32 bits wide. Serial cannot compete.
Re:AGP all over again (Score:1)
What Intel should be working on are new chipsets for motherboards. I have seen nothing but crap come out of Intel since the BX.
Re:Excuse my lack of ignorance... (Score:2)
"Serial graphics bus" in this case means a peripheral interface. (I think - we're basing the entire thread on a flippant one-sentence remark in a press release that mainly talks about 8x AGP...)
Re:Do I ever skip? (Score:1)
Actually, it's not a gamble at all, precisely because you know that the price per unit of power is always decreasing (except with some commodities like RAM). You just have to be smart enough to buy only as much power as you actually need at the moment, because if you buy extra in advance, you're losing the time value of your money and you're losing the difference between what it cost you when you bought it and what you could pay for it when you finally need it. Yes, it often pays off to pay extra for a box that can be expanded at a future date (e.g., with extra slots for that purpose), but that's about as far as you should take it.
The only wrinkle is that people are very poor judges of how much they need, as opposed to how much they merely want.
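The buy-only-what-you-need argument can be made concrete with a toy calculation. Both assumptions below - a price that halves every 18 months and 5%/year interest on money you keep - are purely illustrative, not market data:

```python
# Cost of buying a $400 part today vs. waiting until you actually need it.
price_now = 400.0
months_until_needed = 12

# Assumption: hardware price halves every 18 months.
price_later = price_now * 0.5 ** (months_until_needed / 18)

# Assumption: money you didn't spend earns 5%/year in the meantime.
cash_if_you_wait = price_now * 1.05 ** (months_until_needed / 12)

print(f"pay later: ${price_later:.2f}")
print(f"cash in hand if you wait: ${cash_if_you_wait:.2f}")
```

Under these assumptions, waiting a year leaves you paying around $250 for the same part while your $400 has grown to $420 - exactly the two losses (price difference plus time value) the post describes.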
Why not use InfiniBand? (Score:1)
waiting.......( a bit off-topic ) (Score:1)
My advice is this: make a stand, bite-the-bullet and purchase sensibly, because you could be waiting forever for your time to "strike it big"!
Re:Should the world pay for Apple's "Standard"? (Score:1)
This is as standard as it can get; it's nice that all these manufacturers can agree on anything! I would certainly pay 2-5 dollars more for a TV/video/stereo/DV camera that has a FireWire connection.
It's sad that other hardware manufacturers are too proud to just join in on a standard, so they make their own, and the consumer loses. Again.
Re:AGP all over again (Score:1)
You're certainly correct about the Voodoo2 not supporting more than a 256x256 texture size -- I believe in this case that they simply tiled textures to simulate said test conditions.
That said, swapping small textures should actually be easier than swapping one large texture, and only still serves to prove that the PCI bus is limiting in this case.
Moreover, LOD isn't always a suitable answer, and neither is compression. Both are compromises that reduce quality in some way, which is what companies like 3dfx and nVidia are trying to avoid with newer and newer generation cards.
Re:Fr0$T3d Flak3s.... FUCK YOU!@!! (Score:1)
Re:It's not about Apple... (Score:2)
---
Re:Should the world pay for Apple's "Standard"? (Score:1)
Re:AGP all over again (Score:1)
Intel can't keep growing based solely on CPUs. They have to continually expand their offerings (busses, RAM models, networking, hubs, video cams...) in order to continue their growth and give a basis for their stock price.
Interestingly enough, Intel's main purpose in being in business is not to produce processors or computer components, but to turn a profit for investors.
gotta keep things in perspective here.
Re:Firewire rocks your nuts. (Score:1)
Re:Great!.... (Score:1)
"standard"
ah so you're the one using XGA
Re:They like to keep it easy... (Score:2)
The guys at intel seem to be far more comfortable sticking with what they "know" than moving ahead. This explains why the ISA bus has lasted far longer than it should have, why they haven't substantially changed CPU architectures in ages, and why they have decided that the next big graphics bus will be, again, serial.
And then you go on to praise AMD. Intel invented the microprocessor, semiconductor memory, and the IEEE floating point format, has plenty of new technologies such as MMX and SSE, and is the main innovator of the newest technologies such as AGP, USB, and PCI. What has AMD done comparable? AMD uses the technologies which Intel invented (X86 architecture, Socket 7, PCI, etc.) and has yet to invent a new technology (e.g. the Athlon bus was bought from Compaq).
Intel has not done a new architecture in years? Merced is the most revolutionary new architecture since the advent of RISC (20 years ago), yet gets dogged because it's too different. Instead, people praise the Sledgehammer, which is merely a 64-bit extension but nothing revolutionary. Make up your mind!
Re:All Computer Companies Hate Us (Score:1)
oh really. don't lump me in with your capitalist, corporatist, elitist friends please
Skipping the latest/greatest..... (Score:1)
I never skip the current/latest offerings because I know what's around the corner.
I skip the current latest/greatest offerings because I don't know what's around the corner!
I haven't played games in years, because the current batch of games don't go where I want to go, and RAM costs too much to write the kind of sim I want to play. (With RAM prices coming down, and Linux clustering becoming commonplace... drool.) Anyway, I'm still running a little 4MB PCI card for video, and the last game I sampled was Need for Speed III.
When they release Ultimate RealWorld Sim (or whatever it will be called), I'll think about a new video card.
crosstalk? (Score:1)
Re:Credit royalties only apply to Firewire 1 (Score:1)
Firewire 2, the 800 Mbps version, and later versions are royalty free.
Lord Pixel - The cat who walks through walls
Re:AGP all over again (Score:2)
Moving the video onto a separate bus gets your low latencies back, which is a Good Thing.
Re:no need for video cards (Score:2)
No need for a standard... I'm sure every manufacturer will thoughtfully include a floppy with both Win95 and Win98 drivers on it.
--Jeremy the embittered BeOS user
Re:wrong focus (Score:1)
AGP 4X, on the other hand, has been proven to be of limited utility. For most cards, the bottleneck is between the GPU and onboard RAM; it has nothing to do with the system bus. The jump from AGP 2X to 4X increases performance by the slightest margin, that is all.
Grrrr... yes, this is slightly OT etc. - but I had to get it out of my system.
-Smitty
Re:Serial Graphics Bus (Score:1)
Quite the opposite. Intel is IEEE 1394's biggest enemy.
Maybe it is just post DVI, PanelLink, etc... (Score:3)
The new "Serial Graphics Bus" referred to in this article may actually be a follow-on to the serial digital standards being pushed today for flat panels. Digital Visual Interface, Plug and Display, etc., are all in various states of acceptance, and then there is always the fact that Silicon Image [siimage.com] owns the patent for PanelLink, which is the link-layer protocol that runs on the DVI/PnD connection. (Intel is intimately involved in this effort, but does not own anything outright.)
Also interesting is the fact that the MPAA is emphatic about an encrypted link from the source (DVD, for example) right to the display... they want to disable any possibility of copying pristine digital content - you may have heard about this elsewhere... When I worked on flat panel stuff (at Philips, go figure), Intel was definitely getting behind such an encrypted link (which would be serial).
Supporting larger, higher-density digital displays with a digital input stream will require better and better connections as well, and if a P4 is needed for better peer-to-peer networking, then certainly Intel will find itself getting involved in some "critical" way here as well.
But all of this is just a guess
FireWire is not affected... (Score:2)
Because of these differences, I don't think this is a sign of Intel putting up another front against Firewire. It's simply what they see as the "next generation" after their AGP design tops out at 8x.
I'm somewhat disappointed that there will be *yet another* type of card out there. Luckily, it probably won't be out for at least 2 years... so we have time to enjoy what we have now and worry about this new thing when it shows up.
And as for the whole "1394 vs. USB 2" thing... god... we've gone over this so much. It's obvious that FireWire has a very strong foothold right now... and USB 2 will still not be suitable for DV, so what's the problem? Judging by the sheer number of companies now producing 1394 products, it's obviously not the licensing fees!
Re:All Computer Companies Hate Us (Score:2)
Pope
Freedom is Slavery! Ignorance is Strength! Monopolies offer Choice!
Serial video = Encrypted content to output device. (Score:2)
It allows encrypted DVDs and broadband streams to be sent directly to the output devices without an easily accessible decrypt stage in the general-purpose processor.
So next it'll require an e-beam prober or another key-management 'accident' to expose the raw digital content to the immoral ravages of fair use, all subject to felony charges courtesy of the DMCA.
Re:This IS needed (by 3D graphics)... (Score:3)
Either you just accidentally contradicted yourself, you forgot to add a final sentence, or you're confused about RDRAM and SDRAM.
SDRAM is parallel. Rambus is hard to manufacture because it has to run at high clock speeds - very high clock speeds - to beat out the parallel technology. A normal PC100 DIMM doesn't have to be very fast (thus avoiding the more severe effects of crosstalk) because it can push 64 bits per clock cycle. It can achieve a peak bandwidth of 800 MB/s. This is half of what an 800MHz RDRAM RIMM can do. But the "simpler" RDRAM RIMM is much, much more expensive. Adding extra data pins at slow speeds isn't too hard compared to cranking even the smallest number of pins to incredibly high speeds.
Based on your assertion that crosstalk is a bad thing, one would think that RDRAM would be easier to manufacture, since it uses the simpler serial approach. This isn't completely true; you've oversimplified the case and sound like you contradicted yourself ("RDRAM is hard to mfr... serial RDRAM is easy to mfr").
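The peak-bandwidth arithmetic behind that comparison is straightforward. A minimal sketch, assuming nominal ratings (PC100: 64-bit bus at 100 MHz; PC800 RDRAM: 16-bit bus at 400 MHz, double data rate):

```python
def peak_mb_per_s(bus_bits, clock_mhz, transfers_per_clock=1):
    """Nominal peak bandwidth in MB/s: bytes per transfer x transfers per us."""
    return bus_bits // 8 * clock_mhz * transfers_per_clock

pc100_sdram = peak_mb_per_s(64, 100)     # 8 bytes x 100 MHz = 800 MB/s
pc800_rdram = peak_mb_per_s(16, 400, 2)  # 2 bytes x 800 MT/s = 1600 MB/s

print(pc100_sdram, pc800_rdram)  # 800 1600
```

Same arithmetic, opposite strategies: SDRAM gets its bandwidth from a wide, slow bus; RDRAM from a narrow bus clocked very high.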
---
Re:A move by Intel to high clock, serial technolog (Score:2)
(Note that the "Reduced instruction set complexity" expansion of that acronym came later and represents the strategy which actually works.)
Does AGP accomplish anything? (Score:2)
Actually, AGP bypasses the PCI bus. That was the whole point of AGP -- to bypass the (sometimes multiple) bus bridge chips and abstractions that the PCI design uses, and thus streamline the pipe between the graphics adapter and main system memory.
Unfortunately, every benchmark I've seen only gave about a 10% performance boost for AGP over PCI. It appears that system memory is always going to be dog slow when compared to local graphics card memory. The only design to actually try to offload graphics memory to AGP was the Intel i740, which was an abysmal failure. So there is some question as to whether AGP accomplished anything except generating patent revenues for Intel.
Re:AGP all over again (Score:2)
You missed the point completely. AGP wasn't free to switch to. It had a lot of behind the scenes costs that *were* transferred to the consumer, though you may not care. And it required all motherboard manufacturers to modify their existing products. Was this worth it? No.
A better example is the vector instruction set of the Pentium III (aka the Katmai instructions). These have proven to be almost completely irrelevant, and yet that was Intel's big push behind the Pentium III. The processor prices went through the roof as a result. And for what? Nothing.
Hardware features are not free. Period.
Re:Lack of ignorance, says Sherlock Holmes (Score:2)
However, as expected, non-serial interfaces (I loathe saying "parallel interfaces" because then everyone thinks about that 8-bit geezer, the LPT port) tend to perform much faster, with the only drawback being the limited scalability. But seriously, does ANYONE use 127 USB devices at once, or 63 IEEE 1394 devices at once? If so, the transfer rate must be slow as hell.
I would personally like to see the IEEE renamed to the SICC: Sextoy Interface Connector Consortium. I've had enough of serial devices; why can't we have something with 68 pins at the signaling rate of 1394? That might break the land speed record for data transfer!
This IS needed (by 3D graphics)... (Score:4)
There are really two questions people seemed to be confused over:
There is only one reason for more bandwidth: 3D graphics. AGP was originally designed so that graphics cards could use main memory for textures. Alas, it was too slow, and graphics card vendors began to just pile RAM onto the video card. However, most people don't realize that textures aren't the only thing sent down the bus; polygon data needs to be sent down every frame as well. Enter hardware transform and lighting. Suddenly games are starting to be designed with a LOT more polygons. Due to things like animation and dynamic level of detail, this polygon data can't just be stored on the graphics card the way textures are - it has to be downloaded every frame. If you extrapolate the curve of supported polygons/second from the history of Nvidia cards, you'll see that AGP4X and AGP8X will be saturated relatively soon. All graphics card vendors (except maybe 3dfx; we'll see what they say after the Rampage ships) are clamoring for more bandwidth.
The reason for going to serial is physics oriented. Parallel wires switching at high speeds can generate electrical fields that cause signals in nearby wires to change. This is known as crosstalk (or noise). This is the reason that Rambus (16-bit 400 MHz) is so hard to manufacture compared to SDRAM. Moving to a serial bus allows the clock speed to be cranked much, much, MUCH higher without worrying as much about data errors.
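To see what "cranking the clock" actually means, here is a rough sketch of the serial clock rate a single lane would need just to match a parallel bus. The figures are assumptions for illustration: AGP 4x as a 32-bit bus at an effective ~266 million transfers per second, and a serial lane carrying one bit per clock:

```python
# Clock rate one serial lane would need to match a 32-bit parallel bus.
# Assumed figures: AGP 4x = 32 bits wide at ~266 MT/s effective.
width_bits = 32
effective_mt = 266                         # million transfers per second
parallel_gbit = width_bits * effective_mt / 1000
serial_ghz = parallel_gbit                 # one bit per clock on the lane

print(round(parallel_gbit, 1), "Gbit/s parallel ->",
      round(serial_ghz, 1), "GHz serial clock")
```

Roughly 8.5 GHz for one lane, which is why a practical serial design would presumably use several lanes and fancy signaling rather than one heroic wire.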
Re:Should the world pay for Apple's "Standard"? (Score:2)
Second, Apple doesn't charge royalties for FireWire devices beyond a very reasonable 50-cent-per-port charge, which I'll bet is less than M$ charges for USB or plans to charge for USB 2.0. The main royalties Apple demands are for use of its "FireWire" brand name, which is trademarked. Anyone can just label their products as IEEE 1394 (and many companies do) and get around this restriction. Since 1394 is cross-platform anyway, and FireWire is identified with the Macintosh product line, companies can advertise compatibility with 1394 on Windows by labelling them as such.
Also, don't dis a good standard. FireWire is very convenient: it allows easy hot-plugging. I've never heard of trouble getting FireWire to work properly, while just about everyone I know who sets up a lot of computers has "plug-n-cry" horror stories about fussy USB. Microsoft's choice of putting only 500 mA (about 2.5 W) of power through USB is pointless, and means you need [to buy, and pay royalties to M$ for] powered hubs to use any reasonable number of devices. FireWire carries 20W, enough to actually power devices, and it allows daisy-chaining so you don't need hubs. Do we want to replace this standard with another USB? Do we want a specialized serial graphics bus from Intel to split everything into a bunch of different standards so we have to buy more equipment, increasing royalties? I think not. Let Apple (and Sony, Panasonic, JVC, Western Digital, etc.) be.
Fsck this hard drive! Although it probably won't work...
foo = bar/*myPtr;
Re:Great!.... (Score:2)
or worse, rambus
This post wasn't meant to be flamebait, but a true statement. I didn't want to jump on the 4x bandwagon after seeing the poor integration: most boards had to be switched down to 2x, or to keep 4x you had to turn off some of the options that make the speed increase worthwhile. Which is totally pathetic. It's like buying a Pentium III and having to shut down the cache because the supply current is too borderline. Stupid? Well, it's about the same thing, except for a graphics card.
I hate it when they rush standards that aren't "standard" or need a patch to work, and with the recent chipset fiasco from Intel, I can't believe AGP8x will be any better than the current 4x incarnation.
On the bright side, there's always the fact that they did it with NVIDIA, and they are the #1 graphics card company for now. So the next chipset + NVIDIA boards will probably work properly.
Underestimating AGP (Score:3)
A lot of people misunderstand and underestimate the benefits of AGP. It is not particularly great at making video faster; this is because very few graphics cards actually bother to use the primary function of AGP, texture caching in main memory.
What AGP is very good at is improving total system performance by taking graphics data off the main PCI bus.
The PCI bus is a shared, non-switched bus. All PCI devices have to share the bandwidth of the bus, and a graphics card can consume almost all of it. By moving graphics data off the PCI bus, you can significantly improve total system performance. In that sense, it is roughly the equivalent of segmenting a network.
A better fix for this problem would be a switched-fabric main bus, like the kind that high-end workstations use. Unfortunately, that solution is very expensive and AGP is relatively cheap.
All the criticism (Score:2)
New standards need to be made for things to advance. You may not need the power, but things that you enjoy need the power.
Think of the animation industry. We are either paying big bucks for SGI boxes (for this kind of bandwidth) or using NT boxes that don't have quite the pep.
The next step allows us to get that much closer to matching SGI boxes. When the Intel boxes can match the SGI boxes, then the Linux movement in the Animation industry can really take hold.
Trust me... this isn't a bad thing.
Re:Memory Mapped Graphics interface is obsolete. (Score:3)
There's a big problem with using a purely serial interface to the video card. For really good 3D graphics, the serial interface would need to be horrifically fast, or you would need an awful lot of RAM on the video card.
Most games will run in 16 MB or less of video RAM for textures, but this is using fairly low-res / small textures. I play a lot of 3D games, and they all have the same visual problems: lighting and shading anomalies, low resolution textures tiled over surfaces, and low polygon counts. These are some of the main problems preventing current 3D games from looking "real". John Carmack and the other gods of 3D programming do a lot of work to get around these problems and disguise them, but it would be better to just fix them.
The use of really high resolution textures would make games look dramatically better, but is difficult or impossible with current video cards. A single high resolution, 24-bit texture, covering a wall or floor for instance, might easily be over 1 MB. And a typical 3D game needs dozens of textures all at once.
For example, assume that you are running a game at 60 frames per second. Suppose your screen resolution is 1280 x 1024. Then for really realistic graphics in a big outdoor area you might want to have about 32 high-res (1024x1024), 24-bit static textures. That's 96 MB right there. Add in some mip-mapped downsampled versions and it's at least 128 MB.
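The texture budget above checks out. A quick sketch, taking the post's own assumptions (32 static textures, 1024x1024 texels, 24-bit color, and a mipmap chain adding roughly one third on top):

```python
# Back-of-the-envelope check of the texture memory budget.
# Assumed: 32 textures x 1024x1024 texels x 3 bytes (24-bit color),
# plus mipmaps adding about a third more.
base_bytes = 32 * 1024 * 1024 * 3
with_mipmaps = base_bytes + base_bytes // 3

print(base_bytes // 2**20, "MiB base,",
      with_mipmaps // 2**20, "MiB with mipmaps")
```

That reproduces the 96 MB and 128 MB figures quoted above.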
Suppose you've "only" got 64MB of RAM on your video card. For a really rough approximation, you will have to transfer about 64MB of textures through your bus, 60 times a second. Do the math - it's 30 gigabits per second. I'm no hardware engineer, but I don't think it's that easy to even make silicon that switches at 30 gigahertz, and that's what you would need for a serial bus.
Even AGP 4x only manages 8 gigabits per second, and it's a parallel bus.
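The transfer-rate arithmetic is just as quick to verify. Using binary megabytes, 64 MB per frame at 60 frames per second comes out slightly above the "about 30" figure quoted above:

```python
# Pushing 64 MB of texture data across the bus 60 times per second.
bytes_per_frame = 64 * 2**20
bits_per_sec = bytes_per_frame * 60 * 8

print(round(bits_per_sec / 1e9, 1), "Gbit/s")
```

Roughly 32 Gbit/s, about four times what AGP 4x delivers, per the comparison above.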
Even if you have enough texture RAM on the video card, many games use procedurally generated textures, usually for things like water, fire, and other effects. These change every frame, and look dramatically better than a simple repeating loop. Unfortunately, procedurally generated textures must be uploaded to the video card every frame. Even one high-res procedural texture could suck up 500 Mbps of bandwidth.
Hardware supported texture compression helps a lot, but can't completely solve the problem.
Really, I think the best thing for high speed graphics would be for the video card and the CPU to just share a big whack of high-speed DDR-DRAM. Interestingly enough, this is the approach that Microsoft's X-Box is taking.
Using shared memory between the CPU and the video card would also make it much easier to experiment with more esoteric forms of 3D graphics generation, like hardware support for voxels.
That could lead to some gaming breakthroughs. I'm getting tired of 3D games with worlds built out of perfectly flat triangles and rectangles with blurry textures plastered across them.
Torrey Hoffman (Azog)
My upgrade schedule (Score:2)
Do you ever skip the current latest/greatest because you know what's around the corner?
For some odd reason, I find my upgrades are synchronized to the release schedule of Id Software. :)
--
Re:All Computer Companies Hate Us (Score:2)
IEEE1394 card + IEEE 6pin to 4pin cable + DV video camera with 4pin connector.
If you look at the size of the 6-pin end, it's twice the height of a USB connector (the rectangular one); the 4-pin one, however, is about a quarter its size. Portable electronics that don't need to be driven by external power can then easily put the connector in an accessible place.
I'm still waiting for a fibre optic cable (2.4 Gbit/s = 300 MB/s, or 6x faster than FireWire) to just replace EVERYTHING that comes out of the computer.
(Think about it: get a pair or two of fibre cables and you are already faster than a computer's expansion buses.)
I want to see every house connected to the Internet via High-speed Fibre... then we can get rid of conventional Telephones, and Cable and replace it with a Peer to Peer and Client Server Audio/Video uni and multicasts.
(After all, current "CABLE" wastes 95% of its bandwidth.)
Re:Firewire (Score:2)
Maybe something that looks more vivid than real life. Then I won't have to go out in the Big Blue Room anymore. The savings on sunscreen alone should pay for my new Super-Ultra-Neato-Keen-AGP 32x video card...
Credit royalties (Score:2)
Re:Firewire (Score:2)
Re:AGP all over again (Score:3)
Most games don't even take full advantage of AGP 2x! Because video RAM is so cheap nowadays, with 32 meg cards being common, the AGP bus is going to waste most of the time now. There are benchmarks showing very insignificant gains going from AGP 2x to AGP 4x... and they're talking about AGP 8x now? Big waste of money.
And yes, serial lines *ARE* much easier to make fast than parallel ones. Parallel links suffer from one problem: every data line needs to be locked 'solid' before the next logic level can be sent. A slight skew in arrival times can corrupt data. The limit is just how powerful your parallel line drivers are at quickly propagating all the lines. (This also explains all the zig-zags you see on motherboards and RAM modules - they act as delay lines so that no signal arrives too early relative to the others.)
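The skew problem can be made concrete with some assumed numbers: a 533 MT/s bus (the AGP8x rate), signal propagation in FR-4 board material of very roughly 15 cm/ns, and a skew budget of 10% of the bit time. None of these figures come from the post; they only illustrate why traces get length-matched with zig-zags:

```python
# How much trace-length mismatch a parallel bus can tolerate.
# Assumed: 533 MT/s transfers, ~15 cm/ns propagation in FR-4,
# skew budget of 10% of a bit time.
bit_time_ns = 1 / 0.533            # ~1.88 ns per transfer
max_skew_ns = 0.1 * bit_time_ns
mismatch_cm = max_skew_ns * 15

print(round(mismatch_cm, 1), "cm of allowable trace-length mismatch")
```

A few centimeters across 32 data lines is a tight layout constraint; a serial link with one data path sidesteps it entirely.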
Yes, I have skipped planned obsolescence (Score:2)
When Intel decided to come out with their own proprietary slot interface for the MB-CPU connection, I vowed never to tie myself into it.
So I never bought a slot CPU - I went from Pentiums to AMD K6, to K6-2, to K6-III, to K6-2+
BTW, I think I must be one of the few people to have bought a K6, K6-2, K6-III, AND K6-2+
Re:They like to keep it easy... (Score:2)
Er, ISA survived because people demand it, not because of Intel conservatism. Intel has tried to kill ISA for five years, much like the consortium of PC makers behind EISA ten years earlier.
ISA survived not because the bus and chip makers liked it, but because nobody wanted to throw away an old but solid peripheral in favor of a new one that did the exact same thing no better; and in low-bandwidth situations, that was what was being asked.
Steven E. Ehrbar
It's not about Apple... (Score:2)
Intel would rather promote their own technologies and earn royalties than pay someone else. That's why they are trying to position USB 2.0 against FireWire.
wrong focus (Score:2)
--
Do I ever skip? (Score:2)
Oh hell yes.
The history of computers in my life looks something like this:
... and that's where I am today. I still have the last three. Why haven't I upgraded to a PIII (or AMD equivalent)? First, because all the programs I run regularly run fast enough for me. I would probably upgrade to 256Mb RAM before I upgraded the PII to a PIII.
Second, the PIII is old news. In the next year or so, Intel will release something even better. (When I bought my PII, the PIIIs had already been out for half a year.)
Purchasing computer equipment is a gamble. It's kinda like buying a new car, except obsolescence is measured in weeks rather than years. Remember the old Tandy computers sold at Radio Shack? I think they became obsolete the moment you walked out of the store...
More on topic, a third reason I don't upgrade is because I don't have a need for "new connectivity abilities" that may or may not be around in a few years. I had to hack USB support on my PII, yet within the next year or so USB 2.0 devices will be out. And then there's FireWire (or whatever IEEE it is); sounds nice, but there are already competing standards (like USB 2.0).
This could turn out to be an IDE vs. SCSI debate -- one is common in lower-end models, one is more common in higher-end models -- but I think consumers are becoming a bit more savvy (or is it a bit more wary?) regarding multiple standards for the same devices. How many types of Zip drives can you get? A quick look reveals ATAPI, SCSI, parallel, USB, and FireWire. Good God. This kind of multiple-connection production cannot be good for a company's bottom line.
Re:i never buy the latest and greatest (Score:2)
Re:Standard? What is this strange word? (Score:2)
Re:A move by Intel to high clock, serial technolog (Score:2)
Firewire? (Score:2)
Re:read between the lines (Score:2)
An Intel initiative designed to double the graphics performance for next-generation PCs and workstations was detailed on Thursday at the chip maker's developer forum.
Santa Clara, Calif.-based Intel Corp. (stock: INTC) offered a glimpse of its new PORN8x road map for future graphics applications in desktops.
The PORN8x is an updated version of its previous graphics initiative, called PORN4x. Like PORN4x, the new PORN8x specification implements a 32-bit-wide bus. But the new specification doubles graphics performance to 533 MHz and supports a data-transfer rate of 2 Gbytes per second.
Intel said the PORN8x specification is tuned for its upcoming Pentium 4 processor.
"The forthcoming introduction of the Intel Pentium 4 processor means that the external graphics attach point must advance to take advantage of higher processor and bus speeds and meet the need for better 3-D visualization in games and on the Internet," said Pat Gelsinger, vice president and chief technology officer of the Intel Architecture Group.
"We are focusing on a unified approach that embraces all high-end PC desktop and workstation market segments," Gelsinger said. "The next part of that road map is PORN8x, an evolutionary step from PORN4x, to be followed by a future serial graphics bus."
The PORN8x specification also received endorsements from leading graphics vendors, such as ATI Technologies Inc. (stock: ATYT), Matrox Graphics Inc., and Nvidia Corp. (stock: NVDA).
"ATI has been working closely with Intel to develop a robust PORN8x bus specification, and is pleased with the increased bandwidth enabled in this new graphics attach port. ATI will offer future members of the RADEON family that fully exploit PORN8x," said Henry Quan, vice president of corporate development at ATI.
"Collaborating with Intel on the development of the PORN8x spec is particularly exciting since the extra PORN bandwidth will benefit the many innovative technologies being developed for future Matrox products," said Jean-Jacques Ostiguy, chief architect at Matrox, Quebec.
blech! (Score:4)
We can't even know what they mean by a "Serial Graphics Bus," but I would bet it's not a replacement for FireWire or USB. Please save the mindless speculation for the comment area.
If slashdot wants to be a rumor site, how about you post some real rumors?
Re:Credit royalties (Score:5)
Re:wrong focus (Score:2)
The first reason cards with AGP 4x don't do better than AGP 2x is that all cards concentrate on keeping textures on-board, thus almost never using AGP texturing; that's why the speed of AGP doesn't show up in the benchmarks. If we had a GeForce 2 GTS with only 8 megs onboard, I bet we would see a difference between AGP 2x and 4x, because the card would be forced to use it.
The other reason is that the memory bandwidth of most PCs is not big enough, so an AGP 4x video card has to wait for the memory, reducing the advantage of the faster bus. PC100 SDRAM has only 800 MB/s of bandwidth, which is far from enough to feed a CPU AND a hungry AGP 4x video card.
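The mismatch above is easy to tabulate. A sketch with assumed peak figures (PC100 SDRAM as a 100 MHz x 64-bit bus, AGP 4x as ~266 MT/s x 32 bits; real sustained numbers would be lower):

```python
# Peak-bandwidth comparison behind the "memory can't feed AGP 4x" claim.
# Assumed: PC100 = 100 MHz x 8 bytes; AGP 4x = 266 MT/s x 4 bytes.
sdram_mb_s = 100 * 8       # 800 MB/s peak
agp4x_mb_s = 266 * 4       # ~1064 MB/s peak

print(sdram_mb_s, "MB/s SDRAM peak vs", agp4x_mb_s, "MB/s AGP 4x peak")
```

AGP 4x alone can demand more than the whole memory system can supply, before the CPU has touched a single byte.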
AGP all over again (Score:4)
Intel needs to stop taking niche products and pushing them to everyone as a necessity. How about just focusing on decent processors, okay?
Re:Memory Mapped Graphics interface is obselete. (Score:2)
Yes, the graphics card and CPU both need to access all that memory one way or another. But look at how graphics cards do it: private interfaces, often to high-speed DDR DRAM over 256-bit-wide pathways. They have huge bandwidth.
So, one option is to stop using buses and use a point-to-point system. I think some of the Athlon and DEC Alpha systems already do this: the CPU has a dedicated private pathway to the memory, and so does the video card, and so does the chipset which provides the PCI bus and other interfaces in the system.
Anyway... yes, you are right, both the CPU and the graphics processor will need some sort of connection to the hypothetical shared memory, and it would have to be extremely fast. The graphics processor would probably need a cache, just like the CPU. Given that, it might be better just to give up the shared memory idea altogether, use AGP 8X and texture compression, and design applications to deal with it and work around the problems.
Sigh.
Idea for a Slashdot interview: John Carmack on video card architecture!
Torrey Hoffman (Azog)
Yes!!! (Score:2)
I have not bought a new computer since 1996 because I have been waiting for the Merced--doh! It seems Intel was only able to give the project a new name (Itanium) by the scheduled release date--go figure.
Memory Mapped Graphics interface is obsolete. (Score:3)
Your 2D graphics card (let's say a bog-standard Matrox Millennium) takes a list of commands (i.e., draw a few lines, do a few bitblts) and processes them independently of the CPU. In fact, you can pretty much bet that your video driver hardly reads or writes the frame buffer directly. This is pretty much extended to 3D, except they take it a step further with geometry engines and so forth.
As things have gotten to this stage, it pretty much makes sense to just have a really fast serial link to your graphics card, which would use up fewer tracks on the motherboard.
I personally like the way things have come full circle. X is designed from a serial point of view, and all the naysayers and critics should take a step back and see why it is such a good design (after you take away some of the bloated "extensions").
Give me a break (Score:2)