Intel

Yet Another Serial Graphics Bus From Intel

ottotto writes: "Techweb has a story about Intel's High Speed Graphics Initiative. After discussing another doubling of AGP, VP Pat Gelsinger said 'The next part of that road map is AGP8x, an evolutionary step from AGP4x, to be followed by a future serial graphics bus.' ANOTHER serial graphics bus? Is not the upgrade path to IEEE 1394b (800 Mbps FireWire) and beyond sufficient? Is this, along with the USB 2.0 spec, another way around giving any credit or royalties to Apple?" I suppose companies have to make their plans somehow, and new products are better than living in the 1960s forever. But sometimes these "roadmaps" (which often turn out to be more like directions scribbled on the backs of napkins) seem to smack of planned obsolescence. Do you ever skip the current latest/greatest because you know what's around the corner?
This discussion has been archived. No new comments can be posted.

  • It seems that Intel wants to move to an entirely serial world. First it was USB. Then USB2, then Serial ATA, and now a serial graphics bus? As in one bit? The clock-speed of this bus is going to have to be in the dozens of gigahertz! Or am I missing something here?
  • by Anonymous Coward
    I have to agree. Could we get a link to the roadmap that indicates serial?

    The most controversial part of the headline has virtually no information to back it up.
  • Future computers may not have video cards at all. With a standard graphics bus port, the current concept of a video card could exist within the display (probably a flat panel of some sort, but it doesn't matter).

    Unfortunately, this does mean that a standard programming API for the video hardware on this bus would be needed; otherwise device-driver issues would keep monitors from being universally useful.

    At 133 Mbytes/sec, my current PCI Matrox card is working just great and will be for a couple more years. The good old HD-15 connector is here to stay forever on CRTs.
  • The Radeon is slower than a GeForce2GTS. Given the scary small price differential between the two (none) I don't see how you can consider bundling them a good thing.

    Are you deaf? The Harman Kardon speakers can't touch the Klipsches on some Compaqs, and are appreciably worse than my ACS495s. (Not to mention the fact that they lack a subwoofer!)
  • ..288th post?
  • > Well, but why is USB 2.0 not suitable for DV?

    USB 1.0 and 2.0 require that a computer is somewhere on the bus. FireWire does not. The idea behind FireWire is that you will eventually unpack your new home entertainment system and plug the DVD player into the TV with a FireWire cable, and the TV into the amp, and the amp into the speakers. Then you're done and they all talk to each other. If you screw up and go TV-amp-speakers-DVD it doesn't matter. Adding a device just means plugging the new one into the last one (until you reach 63). You also don't need to add a hub every four devices or whatever.

    > Due to availability of devices?

    That's another reason. Every DV camcorder and VCR on the planet has a 1394 port. It is THE way that DV moves between devices, including computers.

    > I'm a big fan of Apple and their technology, but
    > USB 2.0 looks more attractive than the FireWire.
    > Apple should do something on this.

    Not sure why you think USB 2.0 looks attractive, other than the fact that it speeds up the USB bus so you can print and scan at the same time. Things are more exciting in the FireWire world with the DV stuff and with 1394b coming out soon. Apple will add USB 2.0 to future Macs just like they added USB 1.0 to past Macs. They are very aggressively behind USB. In 1998, a Mac user could choose from a handful of ADB joysticks, but now you have your pick of any USB joystick in the industry, without even requiring a special Mac driver (USB Overdrive on the Mac just works with all keyboards, joysticks, and mice). It's been fantastic for Apple.
  • What I don't understand is what kind of mentality could result in doing that.

    It's obvious once you understand basic economics - I'll explain it for you.

    First, in order to simulate the effects of years of economic training, drink a couple of shots of whiskey (I'm using Knob Creek bourbon).

    Ok - ready?

    Let's say Intel sells the good chips at P1, and has a demand of D1 at that price. They are making a total of (D1)*(P1) = $1.
    There is a set of people (D2) who aren't willing to pay P1 for the good chips - they want something at a lower price, P2.

    Intel could drop the price of the good chips to P2 (P2 < P1), and make (D1+D2)*P2 = $2. In some cases $2 > $1, and then this is the action they should take.

    Now, there is a third option - Price Discrimination:
    Intel releases a new chip, which they have crippled in some way.
    To the majority of the people who are willing to pay full price for the good chip, the crippled one is of no use; but for many of the people who want something cheaper, it is just barely usable.
    Now the equation for how much money they make looks something like this:
    (D1 - D1*e1)*P1 + (D2 + D1*e1)*P2 = $3
    (e1 is the fraction of people who will buy the inferior chip instead of the good one)

    Now, the trick is, by keeping e1 low (by making the chip inferior to the good chip) and by adjusting the prices carefully, $3 can be made much greater than $1 and $2.

    Make sense to you? If not, take another drink and read it again...
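
    A quick Python sketch (with made-up prices, demand figures, and defection rate -- nothing here is Intel's actual data) makes the three revenue cases concrete:

        # Hypothetical numbers for illustration only.
        P1, P2 = 300.0, 150.0   # price of the good chip vs. the crippled one
        D1, D2 = 1000, 2000     # buyers at each price point
        e1 = 0.05               # fraction of P1 buyers who defect to the cheap chip

        rev_good_only    = D1 * P1                                  # $1
        rev_price_drop   = (D1 + D2) * P2                           # $2
        rev_discriminate = (D1 - D1*e1)*P1 + (D2 + D1*e1)*P2        # $3

        print(rev_good_only, rev_price_drop, rev_discriminate)
        # 300000.0 450000.0 592500.0 -- with these numbers, $3 beats both.
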
    --
  • ...really high resolution textures would make games look dramatically better, but is difficult or impossible with current video cards. A single high resolution, 24-bit texture, covering a wall or floor for instance, might easily be over 1 MB.

    Actually, this is quite possible with today's hardware using texture compression. The problem is that most people do not have the most current generation of 3D cards, and game designers need to make games that run on as many machines as possible.

    A 1024x1024 16-bit texture is two megabytes.
    If you want 32-bit it's four megabytes.
    Apply S3TC texture compression, and voila, it's under 700k. Use 3dfx's texture compression scheme, and your four-megabyte texture is a mere 512k. Most of the time you won't even need textures as large as this, since you can do trickery with detail textures.
    I don't think we'll ever see any gamer-level graphics cards pushing all textures over the bus, but I also don't think we'll be seeing many cards with more than 128MB of RAM. It might even go down a bit, but graphics cards will always have a bit of RAM that's faster and more expensive than system RAM, for the same reason CPUs will always have cache.
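
    The arithmetic is easy to check; a quick Python sketch, treating the compression ratios as rough vendor-claimed figures (S3TC ~6:1, 3dfx FXT1 ~8:1) rather than exact constants:

        def texture_bytes(width, height, bits_per_pixel):
            """Uncompressed texture size in bytes."""
            return width * height * bits_per_pixel // 8

        t16 = texture_bytes(1024, 1024, 16)
        t32 = texture_bytes(1024, 1024, 32)
        print(t16 / 2**20, "MB uncompressed 16-bit")   # 2.0 MB
        print(t32 / 2**20, "MB uncompressed 32-bit")   # 4.0 MB
        print(t32 // 6, "bytes after ~6:1 S3TC")       # 699050 -- "under 700k"
        print(t32 // 8, "bytes after ~8:1 FXT1")       # 524288 -- the 512k figure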

    A penny for your thoughts.

  • I doubt that's your page. You've also posted offtopic, and anonymously.

    Personally, I believe the capalert site is a misguided attempt by Christians that only ends up being used as a tool to mock Christians.

    Why a site designed to promote choosing safer programs for children even bothers to review "R" movies is beyond me; that rating was designed to help exclude the underaged, and in the last year or so it has been rigorously enforced (some areas enforced it for quite some time).
  • > Just about everyone would be a "Microsoft"
    > if they could

    Although everything else in the comments to this story seems to be misinformed bullshit, at least this much is true.

    Well said, and thank you!
  • AGP sucks (for me) because there is just one fucking AGP slot. I have three video cards on my PC. What the fuck am I supposed to do?

    Maybe rethink your video strategy? <ducking>

    I would have thought that something like an ATI All-in-Wonder plus a decent 3D card would get you all you wanted. Hell, I think that both Matrox and NVIDIA have combo cards, and if not, I am positive that there are very expensive commercial-grade frame buffers which could capture your choice of composite/S-Video/maybe even RF-modulated video at XSVGA resolutions; tack that on the AGP and add your Voodoo3 for 3D.

    AGP is yet-another-kind-of-driver adding to the general instability of PC hardware.

    That's bullshit, plain and simple. AGP is a wonderful idea compared to some of the alternatives from a cost perspective.

    I cried out loud when moto did the altivec.

    Why, because you would never use it? I can think of a dozen things I could use extremely fast vector mathematics for. Yeah it takes code to implement it, but that's the problem with computers... you can't think of everything beforehand.

    Same story then with the NeXT DSP.

    I don't know anything about this but it sounds like they put the DSP on a slow bus or crippled it somehow. A decent DSP (those new TI DSPs look pretty hot) will blow the shit out of your 1.5GHz PIII in doing straight mathematics, which was why DSPs were created in the first place... Fast on math, not so fast everywhere else.

    PC is general purpose hardware. They should stay this way. Each time something specific is done, it is doomed to failure (VLB anyone?).

    I disagree. PCs should cater to the ubiquitous "general user". That doesn't necessarily mean general purpose hardware. And FYI, I am certain that VLB was actually a better bus than PCI, but PCI was smaller and brought in more money, so politics quashed VLB.

  • On the other hand I can go get a cheap $400 PC that is much more powerful. Go figure.

    I would strongly disagree that it is "much more powerful". One of the many reasons why I like my macs is that Apple is a "widget" company. They sell you the whole widget and it's a fully-integrated widget. Don't get me wrong, I love my linux box... but it's a Toshiba 4220 SatPro and it is so insanely non-standard in everything that the Linux install is an all day affair of driver-hunting, PnP-disabling and kernel-recompiling. As a geek(tm) I'm okay with that. But I'm also a "regular user" a lot of the time and the joy of knowing that the hardware and software are designed for each other in harmonious widget-ship can be wonderful. And, really, it's beefy hardware too. Seriously. Head to your local mac shop and open up a G4. The easy-access case alone makes me scream in joy (they don't let me into a lot of computer stores any more)... and they're more expandable than my dad's Dell...

  • Very colorful.. I'm nearly embarrassed to say that I concur with what you said. Yup, that's about it. Later.
  • Why is it that my video card will soon have more RAM, a much faster bus and probably a more powerful CPU than the ones on my motherboard?

    NVidia should take a whack at Intel by integrating an x86-compatible CPU onto the next GeForce series cards. Make it possible for OEMs to ship a system that doesn't have a traditional motherboard, but rather just a video card with a CPU on it to handle the general purpose computing tasks.

    The 'motherboard' would then be a compact passive backplane with disk controllers, Ethernet and USB ports, and you'd upgrade your CPU when you upgrade your graphics card.

    I'm sick of watching Intel dick around. The whole 'Slot' debacle is laughable - they deliberately make a 'slotted' CPU to try and slow down AMD/Cyrix, then they go back to making socketed CPUs because the slotted versions are too expensive.

    Now it's Rambus - they try and force everyone to use Rambus, but then they go back to SDRAM because Rambus is too expensive.

    The Celeron has exactly the same cache-on-die technology as their supposedly high-end Xeons, yet the Xeon is supposed to be expensive because it offers higher performance. Why? It has exactly the same improvement over the P2 as the low-end Celeron does. 'Oh yeah, but the Xeons don't have the SMP support burnt out of their dies.'

    Intel is stumbling blindly with the P4 and the Itanium. Even Joe Average knows that the P3 does not give anyone a 'better internet experience', and the difference between a 600MHz Celeron and a 1.3GHz P3 is unnoticeable during day-to-day tasks.

    The Itanium will probably be outperformed by whatever the latest Celeron is when it finally gets released, and the P4 is just another 'Pentium'. Personally, I haven't noticed any difference in speed between the P2-450 I used to have and the P3-500 I have now at work.

    Clock for clock, I doubt the P3 is significantly faster than the P2, and I doubt that, clock for clock, the P4 will be either.

    The last two PCs I purchased had Intel CPUs. The next one definitely won't.

  • Then don't USE it. AGP is just an extension of PCI, and as such, it adds very little to the cost of the system.

    AGP is cool whether you like it or not. Sure, the AGP texturing thing kind of went bust, but the extra bandwidth is always welcome. The best part of AGP, however, is that it allows you to blit from system to graphics memory using the hardware blitter. (A major turn-on for DirectDraw freaks like me ;)

    Intel really doesn't care about your segment. In the consumer arena, gamers are a very powerful segment. They're the ones who buy the 1GHz procs and the $500 graphics cards. Intel figures that if you keep the gamers on your bandwagon, with little cost to everyone else, then why the hell not?
  • by onyxruby ( 118189 ) <onyxruby&comcast,net> on Saturday August 26, 2000 @02:48PM (#825455)
    Hardware companies like to continuously upgrade the hardware available to the consumer. They do this because their competition does this. I understand that hardware companies that don't do this tend to have negative reflections listed in their stock price. Sometimes they get really sneaky and get together on things. This can result in things like the development of the PCI standard, or waiting until a standard is released before selling hardware for that standard. These conspiracies are often wrapped up into coded documents called "RFCs".

    The hardware you buy will in fact become obsolete before you can "give" it to your kids. Sorry to disappoint you after that pep talk you got from the local big-box salesman about how "that top-end computer will last you for 5 years!"

  • Instead of countering with something as equally bigoted as "Wow, you commies are so obviously deprived of sanitary facilities," I'll just say this.
    Learn to speak "sans accent."
  • by be-fan ( 61476 ) on Saturday August 26, 2000 @02:51PM (#825457)
    1) NVIDIA GeForce2 drivers are rock solid
    2) They support Linux.
    3) Yeah, but today's mediocre is tomorrow's Mac hardware.
    4) If you can live with a $400 system, then more power to you. Some people actually NEED the extra power, and will pay for it. Question: Would you buy a Porsche?
    5) It usually does. Either way, you get bragging rights.
    6) I'm a consumer swayed by 40fps in 1600x1200 Quake III.
    7) To fill my needs, my system needs a lot of power. Live with it.

    PS> Before y'all get all hot and bothered about that Mac comment, consider it. Right now, Macs are up to AGP2x and Radeon cards while PCs are at 4x GeForce2 Ultra cards.
  • Right focus. It's incredible how many people don't realize sexy techs rule the roost. AGP is sexy, PCI is not.

    If you can find me a RAID array that does 500+MB/sec (the speed of 66/64 PCI) on a PC system, then call me. Otherwise, go talk to the Sun people.

    Gamers don't have dual monitors or RAID. The practicality of the matter is, that 1GHz procs, AGP8x, and GeForce2Ultras are aimed at gamers. Gamers run the high-end of the processor business and that's just the way it is.
  • I think as the years drag on we will find that this serial graphics bus is in no way like IEEE 1394.

    The PCI bus committees are planning to convert to a serial bus as their clock speed increases. (Timing skew becomes unmanageable even for the short runs inside a single computer; pins and connectors cost money.) AGP is mostly a specialized PCI and will have many of the same issues as speed increases.

    This `serial graphics bus' will likely be a couple wires from your CPU (or bridge chip if they still exist) to the graphics processor. IEEE 1394 will still be there moving realtime video between video devices and maybe data to and from external devices.

    USB 4.0 will be there on your wired keyboard and Intel will be swearing they will replace everything else `real soon now'. :-)
  • 1394, in no current or planned incarnation, has anywhere near enough bandwidth for 3D graphics. I think there's plenty of room in the market for two standards that DON'T EVEN COMPETE. I can't speculate too much since there wasn't any information provided, but I would guess it tries to deal with some of the issues arising from limited main memory bandwidth. Don't ask me how; so long as it's faster, I'll be happy to buy it, or whatever it displaces from the high price point, depending upon how much money I have at the time.
  • Do some math, buddy. The only time a video stream goes over the graphics bus (except in cases like ATI's) is the final display. With a 1080 x 720 x 32 bpp x 30 fps HDTV stream, that works out to around 93MB/sec. A paltry sum even for PCI. Face it, the only reason AGP exists is for gaming and modeling.
  • What are the problems with AGP? Aside from the one card limitation, I can't think of any. It's fast, it's cheap, AGP cards don't fall out of their socket, it allows direct access to RAM, what more do you want?
  • The reason why AGP4X isn't so dominant in the market right now is Intel's inability to create a chipset as reliable as the 82443BX (Case in point: the 820 with the MTH issue, and its pricey replacement, the 820 with RDRAM).

    Until Intel can revoke their contract with Rambus (or until Rambus finally kicks the bucket), we will be damned to partially faulty chipsets until the wave of corporate greed and the proprietarization of standards has ended. Right now, P3 builders are stuck with either the Rambus 820 ($1500 just for the board, chip, and ram) or a legacy of partially faulty chipsets from VIA.

    The only non-Intel chipset I would ever consider buying would be NVidia, if they actually do make one for the P4. They got it more than right with video chipsets, just wait and see how they do with the mainboard chipsets.

    As the original author's signature states ("Let's put out [a] sh!tload of new technologies, let's fix the hardware incompatibilities or bugs issues later."), companies are infatuated with releasing new hardware in order to maximize profits. They then decide to fix the arising problems with virtualized drivers which lag worse than Java.

    Just think about it: The US Embassy would never think of hiring a team of German, Ukrainian, Swahili, Spanish, and Russian translators to aid in a conversation with French diplomats.

    So why are driver engineers using a horde of inefficient languages and subroutines to build drivers that would work better if the compile job was centralized under one style of coding and geared toward efficiency?

    These are the things that companies like Creative, AMD, and Adobe don't want us to think about.

  • You seem to be saying that the cheapest system that fits your needs is the system for everybody. Maybe our needs are different? The only games I play are Worms: Armageddon, Starcraft, and AOE, etc. I don't like FPSes, for the sole reason that there's no point behind them. Luckily, that lets me buy cheaper hardware. Speaking as someone who bought a computer for $550, monitor included, ~$1200 sounds like a lot. It may not be a lot in the scheme of things, but what's not?

    Of course, I am a liberal environmentalist, but I don't think that has anything to do with it (how many liberal environmentalists buy computers OR cars to begin with?)

    Vote Nader [votenader.com]

  • Thanks for your support. :) I can certainly understand the pleasure of driving down the Pacific Coast in a 1956 Porsche Speedster; that's just not the kind of thing I'm going to do with money. I also have some reservations about whom I buy from (i.e. you're not making me drive an American-made car anytime soon, any more than making me use a large-scale OEM computer). Personally, I love small cars, and I don't care about any numbers that say I shouldn't get one. Given my environmentalist nature (as previously stated :) I'm more concerned about fuel economy, which is why I'm leaning toward the Prius. Seems like a perfect combination: small car, great economy, good styling, reputable company... And it costs $20K MSRP, more than most mid-size cars.

    (Note: Don't think I'm attacking you, just the opposite. I'm just very opinionated and I had no idea who to reply to, to say this.)
  • We're moving into an era where your Wintel CPU is really a sort of front-end processor which "drives" a pixel-pushing graphics subsystem which is almost a computer in itself. Actually we're well entrenched in this era and now we're trying to fold the high-speed transfer functions previously handled by specialized hardware, back into our main CPU (viz., MMX, AltiVec, the entire design of the PlayStation 2). It's called the "cycle of reincarnation" and it's happened many times before in hardware (remember when FPUs were sold separately?).
  • Thanks for clearing this up, that is one point I always wondered about.

    BTW, a switched fabric bus should be coming to PCs in the next few years - it's called InfiniBand.
  • by Anonymous Coward
    Techweb has a story about Intel's High Speed Graphics Initiative.

    High speed graphics, right? What they really mean is "faster porn".

  • yeah yeah... troll away...

    I know linux runs on other chips. The trick is to get the software companies to port to other chips... that is a battle that hardly ever can be won. In some cases Linux, without the apps, is just another whiz-bang-isn't-that-cool sort of OS.
  • Ahhhh, ok, so the chips were pretty much useless as P3s. I thought they had intentionally disabled perfectly good P3s, but it makes sense if they were forced to do something with slightly defective chips. Smart move, actually...
    --
  • Let's have some R&D on disk I/O...

    For the majority of computer users, graphics speed is *not* an issue anymore. If you want to, you can have Q3 running at 1600x1200 at very playable frame rates.

    What we need is faster disk I/O, but as Intel knows, it's just not as sexy a thing to put in press releases.
  • Okay, I stand corrected on the PCI speed. (But there is some overhead on the PCI bus, so it will be a little slower than 1056Mbps in practice.)
    The point I am trying to make, which isn't getting across, is that the Slashdot editor thought 800Mbps FireWire (or IEEE whatever) made sense and was fast.
    Serial for graphics does not make sense, and is not fast.
  • by ostiguy ( 63618 ) on Saturday August 26, 2000 @02:16PM (#825473)
    Flipping through a computer catalog today, I saw that there were 4-pin and 6-pin FireWire cables. How on earth, after twenty years of personal computer incompatibilities, have we developed an ultrafast serial transport, and yet still have incompatible cables?

    Just about everyone would be a "Microsoft" if they could: Intel intentionally crippling Celeron 2s so as not to offer a reasonable price point vis-a-vis the Pentium 3; Rambus; Pentium Pros could go 4- or 8-way, while with the P2 or P3 you can only go 2-way and need to pay the Xeon tax for more than 2-way SMP. IBM used to be the "Microsoft". Sun's silly Java tricks (standardize, or not... let us get back to you), etc.

    So, this is mildly on topic, but it just shows how we shouldn't be surprised by stupid market leader tricks.

    matt
  • a few reasons:
    1: Drivers sux
    2: no multi-OS support
    3: today's greatest is tomorrow's junk
    4: it's way overpriced - I'm not paying $400 for a graphics card when I can build a computer for that
    5: it rarely lives up to the hype
    6: I'm not a consumer to be swayed by flashy, FUD-filled ads
    7: my 4 systems fill my needs
  • by be-fan ( 61476 ) on Saturday August 26, 2000 @03:06PM (#825475)
    Actually, broadcast-quality fullscreen full-motion video is pretty low-res by PC standards. A 1600x1200 32bpp 60fps video chews up around 460MB/sec of bandwidth, significantly more than FireWire's paltry 50MB/sec. I have no idea why the hell you'd pump a game through FireWire, but higher quality video IS beyond FireWire. (In fact, it would seem that FireWire can only barely keep up with a 30fps HDTV 1080i clip.)
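
    The arithmetic, as a quick Python sketch using the frame sizes quoted above (decimal megabytes):

        def display_MB_per_sec(w, h, bytes_per_pixel, fps):
            """Uncompressed display bandwidth, in (decimal) megabytes per second."""
            return w * h * bytes_per_pixel * fps / 1e6

        print(display_MB_per_sec(1600, 1200, 4, 60))   # ~461 MB/sec, the figure above
        print(display_MB_per_sec(1920, 1080, 4, 30))   # ~249 MB/sec for 30fps 1080
        # FireWire at 400Mbps moves roughly 50 MB/sec -- nowhere close to either.
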
  • Yes, but Apple (and some others, like Sony) developed it and own the patents to it, so if you want to include a FireWire (IEEE 1394) port on your product you have to pay the royalties to Apple. Even though the price is only something like 52 cents per port, that adds up to a lot of money that Intel would have to pay Apple.
  • I don't think they would work on another serial bus. I think they mean another AGP revision, or a new bus altogether.
    Think about it: Intel wouldn't slip up and leak details of something so far away. If they were working on a competitor for FireWire 800, why on earth would they call it a serial graphics bus? The only reason to call a bus a graphics bus is if the bus will be doing high-performance 3D graphics, and that suggests a new interface for graphics accelerators, not an interconnect to compete with FireWire on any plane. (FireWire is far too slow, even at 800Mbps, to do 3D graphics. AGP, for instance, runs at up to 16384Mbps, and it is still a huge bottleneck on the GeForce. If texturing starts happening over AGP when your video card runs out of RAM, you're in for some big slowdown.)

    Sorry, but I don't think they are that dumb. It's like USB 2: it is designed to reduce the strain on the shared bandwidth when you have many device connections, not to provide a DV connection to compete with FireWire. I've never heard of a camera with a USB 2 connection on it, but I have had all my USB devices slow to a crawl when I print a page and scan at the same time over it.
    Anyway, I thought Intel was friendly to IEEE 1394? Why would they really want to compete?
    My thoughts anyway; don't take my word as gospel, I've been known to be wrong.

  • A serial graphics bus at 800Mbps is at best about as fast as PCI 1x. The AGP8x that is mentioned is 2 Gbytes per second, or about 16,000 Mbps.
    Why do you think the graphics chips boast 256-bit-wide memory interfaces? AGP is 32 bits wide. Serial cannot compete.
  • Actually, power gamers are a small, weak segment. A couple hundred people buying new processors is nothing compared to an OEM deal.

    What Intel should be working on is new chipsets for motherboards. I have seen nothing but crap come out of Intel since the BX.
  • You don't generally connect your DV camcorder through AGP....

    "serial graphics bus" in this case means a peripheral interface. (i think--we're basing the entire thread on a flippant one-sentence remark in a press release that mainly talks about 8x AGP...)

  • Purchasing computer equipment is a gamble.

    Actually, it's not a gamble at all, precisely because you know that the price/power ratio is always improving (except with some commodities like RAM). You just have to be smart enough to buy only as much power as you actually need at the moment, because if you buy extra in advance, you're losing the time value of your money and you're losing the difference between what it cost you when you bought it and what you could pay for it when you finally need it. Yes, it often pays off to pay extra for a box that can be expanded at a future date (e.g., with extra slots for that purpose), but that's about as far as you should take it.

    The only wrinkle is that people are very poor judges of how much they need, as opposed to how much they merely want.
  • Infiniband is similar to Sun's UPA and SGI's XIO. Direct, switched connection to memory. Makes tremendous sense for a graphics connection, unless it has a weakness I do not know about.
  • Back in the mid-seventies, I became interested in stereo equipment -- high-end stereo equipment that a friend "exposed" me to. I dreamt of the day when the price of this stuff would drop, and it did. But newer-and-cooler technology came along in the meantime. To make a long story short, it's now the year 2000 and I'm still waiting for the coolest tech to drop in price! Twenty-five years and I have YET to purchase a stereo system!
    My advice is this: make a stand, bite-the-bullet and purchase sensibly, because you could be waiting forever for your time to "strike it big"!
  • As someone pointed out earlier in this thread, it's not only Apple who is making this standard. Sony, Canon and others are into FireWire (or iLink or whatever) too.
    This is as standard as it can get; it's nice that all these manufacturers can agree on anything! I would certainly pay 2-5 dollars more for a TV/video/stereo/DV camera that has a FireWire connection.
    It's sad that other hardware manufacturers are too proud to just "join in" on a standard, so they make their own, and the consumer loses. Again.
  • Believe it or not, I have a Voodoo2 board, and understand how it works quite well, thanks.

    You're certainly correct about the Voodoo2 not supporting more than a 256x256 texture size -- I believe in this case that they simply tiled textures to simulate said test conditions.

    That said, swapping small textures should actually be easier than swapping one large texture, and this still serves only to prove that the PCI bus is limiting in this case.

    Moreover, LOD isn't always a suitable answer, and neither is compression. Both are compromises that reduce quality in some way, which is what companies like 3dfx and nVidia are trying to avoid with newer and newer generation cards.

  • Am I wrong? I thought that Apple only got royalties on the "FireWire" name, not IEEE 1394?
    ---
  • It isn't about pride, it's about money.
  • Intel needs to stop taking niche products and pushing them to everyone as a necessity. How about just focusing on decent processors, okay?

    Intel can't keep growing based solely on CPUs. They have to continually expand their offerings (busses, RAM models, networking, hubs, video cams...) in order to continue their growth, and give a basis for their stock price.

    Interestingly enough, Intel's main purpose in being in business is not to produce processors, or computer components, but to turn a profit for investors.

    gotta keep things in perspective here.
  • Have you ever used FireWire? If you have, you WOULD want it near your nuts to rock them.
  • I hate it when they rush standards that aren't
    "standard"


    ah so you're the one using XGA
    .oO0Oo.
  • The guys at intel seem to be far more comfortable sticking with what they "know" than moving ahead. This explains why the ISA bus has lasted far longer than it should have, why they haven't substantially changed CPU architectures in ages, and why they have decided that the next big graphics bus will be, again, serial.

    And then you go on to praise AMD. Intel invented the microprocessor, semiconductor memory, and the IEEE floating point format, has plenty of new technologies such as MMX and SSE, and is the main innovator of the newest technologies such as AGP, USB, and PCI. What has AMD done comparable? AMD uses the technologies which Intel invented (X86 architecture, Socket 7, PCI, etc.) and has yet to invent a new technology (e.g. the Athlon bus was bought from Compaq).

    Intel has not done a new architecture in years? Merced is the most revolutionary new architecture since the advent of RISC (20 years ago), yet it gets dogged because it's too different. Instead, people praise the Sledgehammer, which is merely a 64-bit extension and nothing revolutionary. Make up your mind!

  • at least this much is true

    Oh really. Don't lump me in with your capitalist, corporatist, elitist friends, please.
    .oO0Oo.
  • Do you ever skip the current latest/greatest because you know what's around the corner?

    I never skip the current/latest offerings because I know what's around the corner.

    I skip the current/latest-greatest offerings because I don't know what's around the corner!

    I haven't played games in years, 'cause the current batch of games don't go where I want to go, and RAM costs too much to write the kind of sim I want to play. (With RAM prices coming down, and Linux clustering becoming commonplace... drool.) Anyway, I'm still running a little 4MB PCI card for video, and the last game I sampled was Need for Speed III.

    When they release Ultimate RealWorld Sim (or whatever it will be called), I'll think about a new video card.
  • Isn't that why they always alternate data & ground lines? Is that no longer effective with today's high clock speeds & transfer rates?
  • AFAIK the royalties Apple receives for Firewire/1394 only apply to version 1, the current 400Mbps technology.

    Firewire 2, the 800 Mbps version, and later versions are royalty free.


    Lord Pixel - The cat who walks through walls
  • I'll have to say that AGP is cool for audio people (musicians, content creators) too! PCI video cards have the nasty habit of mastering the bus for several milliseconds at a time for burst transfers. This messes up latency big time, which is an issue for hard disk recording and live performance.

    Moving the video onto a separate bus gets your low latencies back, which is a Good Thing.

  • Unfortunately this does mean that a standard programming API for the video hardware on this bus would be needed to avoid making monitors not universally useful due to device driver issues.

    No need for a standard... I'm sure every manufacturer will thoughtfully include a floppy with both Win95 and Win98 drivers on it.

    --Jeremy the embittered BeOS user

  • And you trashed the Mac in your above post since it used AGP 2X? Know your facts. You claimed the Mac uses yesterday's mediocre technologies, while the hardware is actually every bit as up-to-date as most x86 hardware. G4s have 66 MHz PCI on all slots, which doubles bandwidth for cards that support it. This is obviously useful.

    AGP 4X, on the other hand, has been proven to be of limited utility. For most cards, the bottleneck is from the GPU to onboard RAM. It has nothing to do with the system bus. The jump from AGP 2X to 4X increases performance by the slightest margin, that is all.

    Grrrr... yes, this is slightly OT etc. - but I had to get it out of my system.

    -Smitty
  • Anyway, I thought Intel was friendly to IEEE1394?

    Quite the opposite. Intel is IEEE 1394's biggest enemy.

  • by philipsblows ( 180703 ) on Saturday August 26, 2000 @03:37PM (#825501) Homepage

    The new "Serial Graphics Bus" referred to in this article may actually be a follow-on to the serial digital standards being pushed today for flat panels. Digial Video Interface, Plug n Display, etc, are all in various states of acceptance, and then there is always the fact that Silicon Image [siimage.com] owns the patent for PanelLink, which is the link layer protocol that runs on the DVI/PnD connection (Intel is intimately involved in this effort, but does not own anything outright)

    Also interesting is the fact that the MPAA is emphatic about an encrypted link from the source (DVD, for example) right to the display... they want to disable any possibility of copying pristine digital content -- you may have heard about this elsewhere. When I worked on flat panel stuff (at Philips, go figure), Intel was definitely getting behind such an encrypted link (which would be serial).

    Supporting larger, higher-density digital displays with a digital input stream will require better and better connections as well, and if a P4 is needed for better peer-to-peer networking, then certainly Intel will find itself getting involved in some "critical" way here as well.

    But all of this is just a guess

  • Just to clarify one of things that was said in the original note... AGP and FireWire (and indeed AGP/USB) have nothing to do with each other. One (1394) is a bus designed specifically to transfer data from a huge variety of internal and external devices that can be daisy chained together. The other (AGP) is a standard meant for expansion cards that serve a very specific purpose (Graphics) and provides a very direct and large link to the CPU.

    Because of these differences, I don't think this is a sign of Intel putting up another front against Firewire. It's simply what they see as the "next generation" after their AGP design tops out at 8x.

    I'm somewhat disappointed that there will be *yet another* type of card out there. Luckily, it probably won't be out for at least 2 years... so we have time to enjoy what we have now and worry about this new thing when it shows up.

    And as for the whole "1394 vs. USB 2" thing... god... we've gone over this so much. It's obvious that FireWire has a very strong foothold right now, and USB 2 will still not be suitable for DV, so what's the problem? Judging by the sheer number of companies now producing 1394 products, it's obviously not the licensing fees!
  • Dude, there are also 2 different types of USB plugs. I was mighty pissed when I brought my printer home from the store and saw that I needed to go back and get a special flat to square USB cable!

    Pope

    Freedom is Slavery! Ignorance is Strength! Monopolies offer Choice!
  • The reason behind this architecture has been mentioned in a number of trade journals. Encrypted monitors and speakers are coming, to better "protect" digital content. A stream cipher on a serial bus does not degrade the signal quality, and so is the preferred technical means to accomplish this protection (especially for video).

    It allows encrypted DVDs and broadband streams to be sent directly to the output devices without an easily accessible decrypt stage in the general purpose processor.
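
    To make the stream-cipher point concrete: encrypting a serial stream is just XORing the payload bits with a keystream, so the link's electrical characteristics are untouched. A toy Python sketch -- the keystream generator here is an illustrative stand-in, not the cipher used by any real content-protection scheme:

        import hashlib

        def keystream(key: bytes, n: int) -> bytes:
            """Toy keystream from iterated SHA-1 -- illustrative only, not secure."""
            out, block = b"", key
            while len(out) < n:
                block = hashlib.sha1(block).digest()
                out += block
            return out[:n]

        def xor_cipher(data: bytes, key: bytes) -> bytes:
            """XOR data with the keystream; the same call encrypts and decrypts."""
            return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

        frame = b"raw digital video bytes"
        on_the_wire = xor_cipher(frame, b"link key")   # what the serial bus carries
        assert xor_cipher(on_the_wire, b"link key") == frame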

    So next it'll require an e-beam prober or another key-management 'accident' to expose the raw digital content to the immoral ravages of fair use, all subject to felony charges courtesy of the DMCA.
  • by Inoshiro ( 71693 ) on Saturday August 26, 2000 @07:44PM (#825518) Homepage
    "This is the reason that Rambus (16-bit 400 MHz) is so hard to manufacture compared to SDRAM. Moving to a serial bus allows the clock speed to be cranked much, much, MUCH higher without worrying as much about data errors."

    Either you just accidentally contradicted yourself, you forgot to add a final sentence, or you're confused about RDRAM and SDRAM.

    SDRAM is parallel. Rambus is hard to manufacture because it has to run at high clock speeds... very high clock speeds, to beat out the parallel technology. A normal PC100 DIMM doesn't have to be very fast (thus avoiding the more severe effects of crosstalk) because it can push 64 bits per clock cycle. It can achieve a peak bandwidth of 800 Mbytes/sec. This is half of what an 800MHz RDRAM RIMM can do. But the "simpler" RDRAM RIMM is much, much more expensive. Adding extra data pins at slow speeds isn't too hard, compared to cranking even the smallest number of pins to incredibly high speeds.

    Based on your assertion that crosstalk is a bad thing, one would think that RDRAM would be easier to manufacture, since it uses the simpler serial approach. This isn't completely true; you've oversimplified the case and sound like you contradicted yourself ("RDRAM is hard to manufacture... serial RDRAM is easy to manufacture").
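
    The bandwidth arithmetic behind this, sketched in Python with the commonly quoted PC100 and PC800 figures:

        def peak_bandwidth_MB(bus_bits, effective_mhz):
            """Peak bandwidth in megabytes/sec: bus width in bytes times clock rate."""
            return (bus_bits // 8) * effective_mhz

        print(peak_bandwidth_MB(64, 100))   # PC100 SDRAM: 800 MB/sec -- wide but slow
        print(peak_bandwidth_MB(16, 800))   # PC800 RDRAM: 1600 MB/sec -- narrow but fast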
    ---
  • Parallel:CISC::Serial:RISC. Which is better?
    The way you mean it, the first two. Parallel ports were simply better. There are no real RISC processors in use today - current PowerPC processors have more instructions than any of the CISC processors which existed back when the RISC wave started. Intel's P4 will introduce something like 100 new instructions just for SSE2.

    (Note that the "Reduced instruction set complexity" expansion of that acronym came later and represents the strategy which actually works.)

  • AGP is just an extension of PCI, and as such, it adds very little to the cost of the system.

    Actually, AGP bypasses the PCI bus. That was the whole point of AGP -- to bypass the (sometimes multiple) bus bridge chips and abstractions that the PCI design uses, and thus streamline the pipe between the graphics adapter and main system memory.

    Unfortunately, every benchmark I've seen only gave about a 10% performance boost for AGP over PCI. It appears that system memory is always going to be dog slow compared to local graphics card memory. The only design to actually try to offload graphics memory to AGP was the Intel i740, which was an abysmal failure. So there is some question as to whether AGP accomplished anything except generating patent revenues for Intel.
  • Then don't USE it. AGP is just an extension of PCI, and as such, it adds very little to the cost of the system.

    You missed the point completely. AGP wasn't free to switch to. It had a lot of behind-the-scenes costs that *were* passed on to the consumer, though you may not care. And it required all motherboard manufacturers to modify their existing products. Was this worth it? No.

    A better example is the vector instruction set of the Pentium III (aka the Katmai instructions). These have proven to be almost completely irrelevant, and yet that was Intel's big push behind the Pentium III. The processor prices went through the roof as a result. And for what? Nothing.

    Hardware features are not free. Period.
  • Damn right on the bit/byte argument. My popular example is USB: its signaling rate is 12 megabits per second, or 1.5 megaBYTES per second raw. However, due to USB's protocol overhead, the maximum sustained data rate is about 667K/sec. This equates to 4X CD-recording (okay, but 6X CD-reading is not), partial 10base-T speed compatibility, and 30 frames per second of YUV video at 352x288.

    However, as expected, non-serial interfaces (I loathe saying "parallel interfaces" because then everyone thinks about that 8-bit geezer, the LPT port) tend to perform much faster, with the only drawback being the limited scalability. But seriously, does ANYONE use 127 USB devices at once or 63 ieee1394 devices at once? If so, the transfer rate must be slow as hell.

    I would personally like to see the IEEE renamed to the SICC: Sextoy Interface Connector Consortium. I've had enough of serial devices; why can't we have something with 68 pins at the signaling rate of 1394? That might break the land speed record for data transfer!

  • First off, the comparison to USB2 and 1394B is WAY off base. The current AGP4X beats both of those quite easily. This is more related to Infiniband, but anyway...

    There are really two questions people seemed to be confused over:

    • Why do we need more bandwidth?
    • Why do we need a serial bus?

    There is only one reason for more bandwidth: 3D graphics. AGP was originally designed so that graphics cards could use main memory for textures. Alas, it was too slow, and graphics card vendors began to just pile up RAM on the video card. However, most people don't realize that textures aren't the only thing sent down the bus. Polygon data needs to be sent down every frame as well. Enter hardware transform and lighting. Suddenly games are starting to be designed with a LOT more polygons. Due to things like animation and dynamic level of detail, this polygon data can't just be stored on the graphics card like textures are; it has to be downloaded every frame. If you extrapolate the curve of supported polygons/second from the history of Nvidia cards, you'll see that AGP4X and AGP8X will be saturated relatively soon. All graphics card vendors (except maybe 3dfx; we'll see what they say after the Rampage ships) are clamoring for more bandwidth.
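
    To put a rough number on the polygon traffic: the vertex size and polygon count below are illustrative guesses for a hypothetical future game, not figures from Intel or Nvidia:

        def geometry_MB_per_sec(polys_per_frame, bytes_per_vertex, verts_per_poly, fps):
            """Bus traffic from re-sending geometry every frame, in MB/sec."""
            return polys_per_frame * verts_per_poly * bytes_per_vertex * fps / 1e6

        # Say 5 million triangles/frame, 32-byte vertices, strips (~1 vertex/tri), 60fps:
        print(geometry_MB_per_sec(5_000_000, 32, 1, 60))   # 9600 MB/sec
        # AGP4X peaks around 1066 MB/sec and AGP8X around 2133 MB/sec -- both saturated.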

    The reason for going to serial is physics oriented. Parallel wires switching at high speeds can generate electrical fields that cause signals in nearby wires to change. This is known as crosstalk (or noise). This is the reason that Rambus (16-bit 400 MHz) is so hard to manufacture compared to SDRAM. Moving to a serial bus allows the clock speed to be cranked much, much, MUCH higher without worrying as much about data errors.

  • First off, Apple didn't invent USB. That's an Intel-led standard (with Microsoft and others on board) that Apple popularized.

    Second, Apple doesn't charge royalties for FireWire devices beyond a very reasonable 50-cent-per-port charge, which I'll bet is less than M$ charges for USB or plans to charge for USB 2.0. The main royalties that Apple demands are for use of its "FireWire" brand name, which is trademarked. Anyone can just label their products as IEEE 1394 (and many companies do) and go around this limit. Since 1394 is cross-platform anyway, and FireWire is identified with the Macintosh product line, companies can advertise compatibility with 1394 on Windows by labelling them as such.

    Also, don't dis a good standard. FireWire is very convenient: it allows hot-plugging easily. I've never heard about trouble getting FireWire to work properly, while just about everyone I know who sets up a lot of computers has "plug-n-cry" horror stories about fussy USB. USB's meager bus power (500mA at 5V, about 2.5W) is limiting, and makes it such that you need powered hubs to use any reasonable number of devices. FireWire carries 20W, enough to actually power devices, and it allows daisy-chaining so you don't need hubs. Do we want to replace this standard with another USB? Do we want a specialized serial graphics bus from Intel to split everything into a bunch of different standards so we have to buy more equipment, increasing royalties? I think not. Let Apple (and Sony, Panasonic, JVC, Western Digital, etc.) be.

    Fsck this hard drive! Although it probably won't work...
    foo = bar/*myPtr;

  • Probably the moderator has some shares of Intel, or worse, Rambus :) and he's pissed off :)

    This post wasn't meant to be flamebait, but a true statement. I didn't want to jump on the 4x bandwagon after seeing the poor integration: most of the boards had to be switched down to 2x, or to keep 4x you had to turn off some of the options that make it worth the speed increase. Which is totally pathetic. It's like buying a Pentium III and having to shut down the cache because the supply current is too borderline. Stupid? Well, it's about the same thing, except for a graphics card.

    I hate it when they rush standards that aren't "standard" or need a patch to work, and with the recent chipset fiasco from Intel, I can't believe the AGP8X will be any better than the current 4x incarnation.

    On the bright side, there's always the fact that they did it with NVIDIA, and they are the #1 gfx card company for now. So probably the next chipset + nvidia boards will work properly :)

  • by bwoodring ( 101515 ) on Saturday August 26, 2000 @08:08PM (#825543)

    A lot of people misunderstand and underestimate the benefits of AGP. It is not particularly great at making video faster; this is because very few graphics cards actually bother to use the primary function of AGP, texture caching in main memory.

    What AGP is very good at is improving total system performance by taking graphics data off the main PCI bus.

    The PCI bus is a non-switched fabric bus. All PCI devices have to share the bandwidth of the bus, and a graphics card can consume almost the entire bus bandwidth. By moving graphics data off the PCI bus, you can significantly improve total system performance. In that sense, it is roughly the equivalent of segmenting a network.

    A better fix for this problem would be a switched-fabric main bus, like the kind that high-end workstations use. Unfortunately, that solution is very expensive and AGP is relatively cheap.
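
    A rough illustration of the segmentation argument, using the standard 33MHz/32-bit PCI figure and an ordinary desktop display mode:

        PCI_MB_PER_SEC = 33e6 * 4 / 1e6          # 33 MHz x 4 bytes = ~132 MB/sec, shared

        def framebuffer_MB_per_sec(w, h, bytes_per_pixel, fps):
            """Traffic if display updates travel over the shared bus."""
            return w * h * bytes_per_pixel * fps / 1e6

        gfx = framebuffer_MB_per_sec(1024, 768, 2, 60)   # ~94 MB/sec at 1024x768x16bpp
        print(f"graphics uses {gfx:.0f} of {PCI_MB_PER_SEC:.0f} MB/sec "
              f"({100 * gfx / PCI_MB_PER_SEC:.0f}%) -- little left for disk or network")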

  • I don't understand why everybody is against this.
    New standards need to be made for things to advance. You may not need the power, but things that you enjoy need the power.

    Think of the animation industry. We are either paying big bucks for SGI boxes (for this kind of bandwidth) or using NT boxes that don't have quite the pep.

    The next step allows us to get that much closer to matching SGI boxes. When the Intel boxes can match the SGI boxes, then the Linux movement in the Animation industry can really take hold.

    Trust me... this isn't a bad thing.

  • by Azog ( 20907 ) on Saturday August 26, 2000 @09:10PM (#825548) Homepage
    Actually, I think the opposite might happen - eventually, all graphics interfaces will use shared memory with the CPU. Neither serial nor parallel interfaces will be fast enough in the long run.

    There's a big problem with using a purely serial interface to the video card. For really good 3D graphics, the serial interface would need to be horrifically fast, or you would need an awful lot of RAM on the video card.

    Most games will run in 16 MB or less of video RAM for textures, but this is using fairly low-res / small textures. I play a lot of 3D games, and they all have the same visual problems: lighting and shading anomalies, low resolution textures tiled over surfaces, and low polygon counts. These are some of the main problems preventing current 3D games from looking "real". John Carmack and the other gods of 3D programming do a lot of work to get around these problems and disguise them, but it would be better to just fix them.

    The use of really high resolution textures would make games look dramatically better, but is difficult or impossible with current video cards. A single high resolution, 24-bit texture, covering a wall or floor for instance, might easily be over 1 MB. And a typical 3D game needs dozens of textures all at once.

    For example, assume that you are running a game at 60 frames per second. Suppose your screen resolution is 1280 x 1024. Then for really realistic graphics in a big outdoor area you might want to have about 32 high-res (1024x1024), 24-bit static textures. That's 96 MB right there. Add in some mip-mapped downsampled versions and it's at least 128 MB.

    Suppose you've "only" got 64MB of RAM on your video card. For a really rough approximation, you will have to transfer about 64MB of textures through your bus, 60 times a second. Do the math - it's 30 gigabits per second. I'm no hardware engineer, but I don't think it's that easy to even make silicon that switches at 30 gigahertz, and that's what you would need for a serial bus.

    Even AGP 4x only manages 8 gigabits per second, and it's a parallel bus.
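
    The numbers above check out; here is the arithmetic as a short sketch (decimal megabytes for the bandwidth figure, to match the 30-gigabit claim):

        textures = 32 * (1024 * 1024 * 3)      # 32 high-res 24-bit textures
        print(textures / 2**20, "MB")          # 96 MB; ~1/3 more for mipmaps -> ~128 MB

        # Re-sending 64 MB of textures 60 times per second:
        print(64e6 * 60 * 8 / 1e9, "Gbit/sec")   # ~30.7 Gbit/sec, as claimed
        # AGP 4x peaks near 1066 MB/sec, i.e. about 8.5 Gbit/sec -- far short of that.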

    Even if you have enough texture RAM on the video card, many games use procedurally generated textures, usually for things like water, fire, and other effects. These change every frame, and look dramatically better than a simple repeating loop. Unfortunately, procedurally generated textures must be uploaded to the video card every frame. Even one high-res procedural texture could suck up 500 Mbps of bandwidth.

    Hardware supported texture compression helps a lot, but can't completely solve the problem.

    Really, I think the best thing for high speed graphics would be for the video card and the CPU to just share a big whack of high-speed DDR-DRAM. Interestingly enough, this is the approach that Microsoft's X-Box is taking.

    Using shared memory between the CPU and the video card would also make it much easier to experiment with more esoteric forms of 3D graphics generation, like hardware support for voxels.

    That could lead to some gaming breakthroughs. I'm getting tired of 3D games with worlds built out of perfectly flat triangles and rectangles with blurry textures plastered across them.

    Torrey Hoffman (Azog)
  • Do you ever skip the current latest/greatest because you know what's around the corner?

    For some odd reason, I find my upgrades are synchronized to the release schedule of Id Software. :)


    --

  • You might also call it the "right-side up" type of cable.

    IEEE1394 card + IEEE 6pin to 4pin cable + DV video camera with 4pin connector.

    If you look at the size of the 6-pin end, it's twice the height of a USB connector (the rectangular one); the 4-pin one, however, is about 1/4 its size. Portable electronics that don't need to be driven by external power can easily put the connector in an accessible place, then.

    I'm still waiting for a fibre optic cable (2.4Gbit/sec = 300 megabytes per second, or 6X faster than FireWire) to just replace EVERYTHING that comes outside the computer.
    (Think about it: get a pair or two of fibre cables, and you are already faster than a computer's expansion buses.)

    I want to see every house connected to the Internet via high-speed fibre... then we can get rid of conventional telephones and cable, and replace them with peer-to-peer and client-server audio/video unicasts and multicasts.
    (After all, current "CABLE" wastes 95% of its bandwidth.)
  • Games always seem to keep pace. Very-high-res "real"-looking video could eat up most of what Firewire currently is capable of.

    Maybe something that looks more vivid than real life. Then I won't have to go out in the Big Blue Room anymore. The savings on sunscreen alone should pay for my new Super-Ultra-Neato-Keen-AGP 32x video card...

  • Actually, AFAIK, only the FireWire name is trademarked by Apple. So a company could conceivably use IEEE1394 without paying Apple, so long as they didn't call it FireWire. (Note that many digital video cameras call it "DV link", probably for this reason).
  • What are you talking about? As far as I can tell, 50MB/sec is the limit for FireWire (400Mbps). I'm not talking about the compressed rate, but an uncompressed image. True, most FireWire transfers are compressed, but this guy was talking about sending a video-game stream, and you wouldn't compress that.
  • by tlhIngan ( 30335 ) <slashdot.worf@net> on Saturday August 26, 2000 @05:28PM (#825588)
    Well, you know what?

    Most games don't even take full advantage of AGP 2x! Because video RAM is so cheap nowadays, with 32-meg cards being common, the AGP bus is going to waste most of the time now. There are benchmarks showing very insignificant gains going from AGP 2x to AGP 4x... and they're talking about AGP 8x now? Big waste of money.

    And yes, serial lines *ARE* much more able to be made fast than parallel ones. Parallel links suffer from one problem - the need for every data line to be locked 'solid' before sending the next logic level. A slight skew in arrival times can corrupt data. The limit is just how powerful your parallel line drivers are, to quickly propagate all the lines. (This also explains all the zig-zags you see on motherboards and RAM blocks - they act as delay lines so a signal doesn't arrive too early, before the other signals.)
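
    A back-of-the-envelope way to see the skew ceiling; the skew number below is a made-up but plausible board-level figure, not a measured one:

        def max_parallel_clock_mhz(skew_ns, skew_budget=0.5):
            """Clock ceiling if skew may consume at most this fraction of a bit period."""
            period_ns = skew_ns / skew_budget
            return 1000.0 / period_ns

        # Say trace mismatch plus driver variation yields ~1 ns of skew across 64 lines:
        print(max_parallel_clock_mhz(1.0))   # 500.0 -- a ~500 MHz ceiling for the bus
        # A single serial pair has no line-to-line skew, so it can be clocked far higher.
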
  • ... in the form of Intel's slot chips ... i.e. PII, PIII.

    When Intel decided to come out with their own proprietary slot interface for the MB-CPU connection, I vowed never to tie myself into it.

    So I never bought a slot CPU - I went from Pentiums to AMD K6, to K6-2, to K6-III, to K6-2+ ... with a couple of Cyrix chips thrown in there. Now that everything is going back to socket form, looks like I'll never have to buy a slot chip after all.

    BTW, I think I must be one of the few people to have bought a K6, K6-2, K6-III, AND K6-2+ ... go AMD!
  • The guys at intel seem to be far more comfortable sticking with what they "know" than moving ahead. This explains why the ISA bus has lasted far longer than it should have

    Er, ISA survived because people demand it, not because of Intel conservatism. Intel has tried to kill ISA for five years, much like the consortium of PC makers behind EISA ten years earlier.

    ISA survived not because the bus and chip makers liked it, but because nobody wanted to throw away an old but solid peripheral in favor of a new one that did the exact same thing no better; and in low-bandwidth situations, that was what was being asked.

    Steven E. Ehrbar
  • Look, just because Intel isn't hot on IEEE 1394 doesn't mean that they are intentionally thumbing their nose at Apple...the fact is, Apple gets royalties on IEEE 1394 and Intel doesn't.

    Intel would rather promote their own technologies and earn royalties than pay someone else. That's why they are trying to position USB 2.0 against FireWire.

  • For now, there's plenty of bandwidth for graphics. Where's the extra bandwidth for traditional PCI devices like RAID and secondary displays?

    --

  • Oh hell yes.

    The history of computers in my life looks something like this:

    • Commodore Plus/4 (the "4" was for its four built-in applications, not the RAM)
    • Leading 286/4MHz, 640K RAM
    • Cumulus 486SX/20MHz, 2MB (later 4MB) RAM
    • Built-my-own 486DX2/80MHz, 16MB RAM
    • CTX laptop Pentium/200 w/MMX, 40MB RAM
    • Paradigm PII/400, 64MB (later 128MB) RAM

    ... and that's where I am today. I still have the last three. Why haven't I upgraded to a PIII (or AMD equivalent)? First, because all the programs I run regularly run fast enough for me. I would probably upgrade to 256MB RAM before I upgraded the PII to a PIII.

    Second, the PIII is old news. In the next year or so, Intel will release something even better. (When I bought my PII, the PIIIs had already been out for half a year.)

    Purchasing computer equipment is a gamble. It's kinda like buying a new car, except obsolescence is measured in weeks rather than years. Remember the old Tandy computers sold at Radio Shacks? I think they became obsolete the moment you walked out the store...

    More on topic, a third reason I don't upgrade is that I don't have a need for "new connectivity abilities" that may or may not be around in a few years. I had to hack USB support onto my PII, yet within the next year or so USB 2.0 devices will be out. And then there's FireWire (IEEE 1394, or whatever it is); sounds nice, but there are already competing standards (like USB 2.0).

    This could turn out to be an IDE vs. SCSI debate -- one is common in lower-end models, one is more common in higher-end models -- but I think consumers are becoming a bit more savvy (or is it a bit more weary?) regarding multiple standards for the same devices. How many types of Zip drives can you get? A quick look reveals ATAPI, SCSI, parallel, USB, and FireWire. Good God. This kind of multiple-connection production cannot be good for a company's bottom line.

  • Of course, you can't deny that, in general, second-rate hardware gets put into first-rate Mac systems. Take, for example, the G3. This multiprocessor-unfriendly, floating-point-weak processor had no place in a Mac. The dual-G4 is seriously cool, and I'm actually considering it for my next purchase. However, it seems that all Macs come bundled with inferior hardware. Why put in a Radeon when you can get a GeForce2 GTS at the same price? What is up with those flimsy Harman Kardon speakers on the Cube? The less-than-spectacular CRT on the iMac... the list goes on. I have nothing against Macs and seriously want a dual G4. However, I have a problem with the fact that I can't use some top-flight hardware on it.
  • Except for gamers. (Awfully twitchy in Quake.)
  • You are. Serial enables higher clock speeds, and that's about it. A serial link running at 50MHz will always be slower than a 64-bit-wide link running at 50MHz. The thing is that serial links (generally) run at a much higher clock speed than parallel ones. Serial is murder on latency, though.
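    That arithmetic, spelled out in a trivial C sketch (the 50 MHz clock is just the example figure from the comment):

        #include <stdio.h>

        int main(void)
        {
            /* Same clock, different widths: width wins, and serial has to
               buy the difference back with raw clock speed. */
            double clock_mhz    = 50.0;
            double parallel_mbs = clock_mhz * 64.0 / 8.0;  /* 64 bits wide */
            double serial_mbs   = clock_mhz * 1.0 / 8.0;   /* 1 bit wide   */

            printf("64-bit @ %.0f MHz: %6.2f MB/s\n", clock_mhz, parallel_mbs);
            printf("serial @ %.0f MHz: %6.2f MB/s\n", clock_mhz, serial_mbs);
            printf("serial clock needed to match: %.1f GHz\n",
                   clock_mhz * 64.0 / 1000.0);
            return 0;
        }

    400 MB/s versus 6.25 MB/s; a single-bit link would need a 3.2 GHz clock to keep up, which is why serial buses only make sense when they can clock far higher than a wide bus could.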
  • FireWire is a general-purpose multi-device bus, i.e., NOT suited for 3D graphics. The protocol processing would generate considerable overhead, and even if it didn't, IEEE 1394b would be too slow even with its bandwidth increased tenfold. AGP does need to be replaced sooner or later, and considering its poor performance and other problems, I'd rather have it happen sooner.
  • (08/24/00, 9:47 p.m. ET)
    An Intel initiative designed to double the graphics performance for next-generation PCs and workstations was detailed on Thursday at the chip maker's developer forum.

    Santa Clara, Calif.-based Intel Corp. (stock: INTC) offered a glimpse of its new AGP8x road map for future graphics applications in desktops.

    The AGP8x is an updated version of its previous graphics initiative, called AGP4x. Like AGP4x, the new AGP8x specification implements a 32-bit-wide bus. But the new specification doubles the graphics performance to 533 MHz and supports a data-transfer rate of 2 Gbytes per second.

    Intel said the AGP8x specification is tuned for its upcoming Pentium 4 processor.

    "The forthcoming introduction of the Intel Pentium 4 processor means that the external graphics attach point must advance to take advantage of higher processor and bus speeds and meet the need for better 3-D visualization in games and on the Internet," said Pat Gelsinger, vice president and chief technology officer of the Intel Architecture Group.

    "We are focusing on a unified approach that embraces all high-end PC desktop and workstation market segments," Gelsinger said. "The next part of that road map is AGP8x, an evolutionary step from AGP4x, to be followed by a future serial graphics bus."

    The AGP8x specification also received endorsements from leading graphics vendors, such as ATI Technologies Inc. (stock: ATYT), Matrox Graphics Inc., and Nvidia Corp. (stock: NVDA).

    "ATI has been working closely with Intel to develop a robust AGP8x bus specification, and is pleased with the increased bandwidth enabled in this new graphics attach port. ATI will offer future members of the RADEON family that fully exploit AGP8x," said Henry Quan, vice president of corporate development at ATI.

    "Collaborating with Intel on the development of the AGP8x spec is particularly exciting since the extra AGP bandwidth will benefit the many innovative technologies being developed for future Matrox products," said Jean-Jacques Ostiguy, chief architect at Matrox, Quebec.

  • by dox ( 34097 ) on Saturday August 26, 2000 @02:31PM (#825609)
    I'm becoming disgusted with Slashdot's journalistic ability. For all practical purposes, the link contains no information whatsoever about a "Serial Graphics Bus" except that there might possibly be another one (and couldn't you assume as much?), and yet it's important enough for the title of the post?

    We can't even know what they mean by a "Serial Graphics Bus", but I would bet it's not a replacement for FireWire or USB. Please save the mindless speculation for the comment area.

    If slashdot wants to be a rumor site, how about you post some real rumors?
  • by yofal ( 168650 ) on Saturday August 26, 2000 @02:33PM (#825610)
    Actually, Apple shares all the royalties from IEEE 1394 licensees with a consortium of other manufacturers, including Sony (who brands it as i.LINK), Canon, etc. Read all about it: http://www.1394la.com/lic_agreement.html
  • Hmm, well, it's kinda true - and kinda not.

    The first reason cards don't do better with AGP 4x than AGP 2x is that all cards concentrate on having textures on-board, thus almost never using AGP texturing - which is why the speed of AGP doesn't show up in the benchmarks. If we had a GeForce 2 GTS with only 8 megs onboard, I bet we would see a difference between AGP 2x and 4x, because the card would be forced to use it.

    The other reason is that the memory bandwidth of most PCs is not big enough, so the AGP 4x video card has to wait for the memory, reducing the advantage of the faster bus. PC100 SDRAM has only 800 MB/s of bandwidth, which is far from enough to feed a CPU AND a hungry AGP 4x video card.
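    Those two numbers side by side, in a small C sketch (the AGP4x figure assumes the usual 66 MHz x 4 = ~266 million transfers per second):

        #include <stdio.h>

        int main(void)
        {
            /* 64-bit PC100 SDRAM vs. the AGP4x port's peak rate */
            double pc100_mbs = 64.0 / 8.0 * 100.0;  /* 800 MB/s        */
            double agp4x_mbs = 32.0 / 8.0 * 266.0;  /* ~1.06 GB/s peak */

            printf("PC100 SDRAM peak: %.0f MB/s\n", pc100_mbs);
            printf("AGP4x port peak:  %.0f MB/s\n", agp4x_mbs);
            return 0;
        }

    The port alone can outrun main memory before the CPU even takes its share, so the bottleneck is the RAM, not the bus.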
  • by Junks Jerzey ( 54586 ) on Saturday August 26, 2000 @02:35PM (#825622)
    The problem with these crazy hardware plans is that they're adding additional cost and waste for people who don't need it. AGP was nice... for gamers. Didn't make a bit of difference for anyone else. And in all honesty it didn't do anything for gamers either. All that talk about using main RAM for textures went away, and video card makers just started putting more and more VRAM on their cards. Everyone lost. Kinda like MMX.

    Intel needs to stop taking niche products and pushing them to everyone as a necessity. How about just focusing on decent processors, okay?
  • And how will the CPU communicate with the ``shared'' memory if you're not using either a serial or a parallel interface?
    Um. Oops, good point. I should have said serial or parallel bus, not interface. But you've still got me...

    Yes, the graphics card and CPU both need to access all that memory one way or another. But look at how graphics cards do it - private interfaces, often to high-speed DDR DRAM over 256-bit-wide pathways. They have huge bandwidth.

    So, one option is to stop using buses and use a point-to-point system. I think some of the Athlon and DEC Alpha systems already use this: the CPU has a dedicated private pathway to the memory, and so does the video card, and so does the chipset that provides the PCI bus and other interfaces in the system.

    Anyway... yes, you are right, both the CPU and the graphics processor will need some sort of connection to the hypothetical shared memory, and it would have to be extremely fast. The graphics processor would probably need a cache, just like the CPU. Given that, it might be better just to give up the shared memory idea altogether, use AGP 8X and texture compression, and design applications to deal with it and work around the problems.

    Sigh.

    Idea for a Slashdot interview: John Carmack on video card architecture!
    Torrey Hoffman (Azog)
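    Taking the parent's 256-bit DDR figure at face value, the gap is easy to see in a quick C sketch (the 150 MHz memory clock is an assumed, era-typical number, not a quote from any card's spec):

        #include <stdio.h>

        int main(void)
        {
            /* Private 256-bit DDR pathway vs. the shared AGP4x port */
            double ddr_mbs   = 256.0 / 8.0 * 150.0 * 2.0;  /* DDR: 2 per clock */
            double agp4x_mbs = 32.0 / 8.0 * 266.0;

            printf("256-bit DDR @ 150 MHz: %.0f MB/s\n", ddr_mbs);    /*  9600 */
            printf("shared AGP4x port:     %.0f MB/s\n", agp4x_mbs);  /* ~1064 */
            return 0;
        }

    Roughly an order of magnitude apart, which is why the private point-to-point pathway keeps winning over any shared bus.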
  • Do you ever skip the current latest/greatest because you know what's around the corner?

    I have not bought a new computer since 1996 because I have been waiting for the Merced--doh! It seems Intel was only able to give the project a new name (Itanium) by the scheduled release date--go figure.
  • by Anonymous Coward on Saturday August 26, 2000 @02:39PM (#825630)
    Well, the only reason graphics cards originally mapped their frame buffer into the computer's memory space was because, as you know, the graphics hardware relied on the CPU to do the grunt work.

    Your 2D graphics card (let's say a bog-standard Matrox Millennium) takes a list of commands (e.g., draw a few lines, do a few bitblts) and processes them independently of the CPU. In fact, you can pretty much bet that your video driver hardly ever reads or writes the frame buffer directly (see the sketch after this comment). This is pretty much extended to 3D, except they take it a step further with geometry engines and so forth.

    Now that things have gotten to this stage, it pretty much makes sense to just have a really fast serial link to your graphics card, which would use up fewer traces on the motherboard.

    I personally like the way things have come full circle. X is designed from a serial point of view, and all the naysayers and critics should take a step back and see why it is such a good design (once you take away some of the bloated "extensions").
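    Here's a minimal sketch in C of the command-list idea described above. The command set and ring-buffer layout are invented for illustration - they are not the Millennium's actual interface:

        #include <stdint.h>

        enum cmd_op { CMD_LINE, CMD_BITBLT, CMD_END };

        struct gfx_cmd {
            enum cmd_op op;
            int32_t x0, y0, x1, y1;  /* line endpoints or blit src/dst */
            int32_t w, h;            /* blit size; unused for lines    */
            uint32_t color;
        };

        /* The driver appends commands to a ring buffer; the accelerator
           drains it on its own, so the CPU never touches the framebuffer. */
        void push_cmd(struct gfx_cmd *ring, unsigned *tail, unsigned mask,
                      struct gfx_cmd c)
        {
            ring[*tail & mask] = c;  /* mask = ring size - 1 (power of two) */
            (*tail)++;               /* hardware consumes up to this index  */
        }

    A serial link fits this model naturally: the host just streams compact command packets, and the card does the pixel-pushing at the far end.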
  • "Is this, along with the USB 2.0 spec another way around giving any credit or royalties to Apple?" Hmm its a IEEE standard...look it up.
