NVIDIA Cg Compiler Technology to be Open Source

Jim Norton writes "This announcement from nVidia states that their Cg compiler technology for 3D applications will be Open Source and available under a free, unrestrictive license. The ETA for this is August, and it will be available here." The linked company release says it will be under "a nonrestrictive, free license," but does not give further details. BSD?

  • If you like it (Score:5, Insightful)

    by mc6809e ( 214243 ) on Tuesday July 23, 2002 @03:43PM (#3939395)

    Money talks. If you like what they are doing, tell them you like it by buying one of their cards.
    • Re:If you like it (Score:3, Insightful)

      by Anonymous Coward
      Now if only they would do the same with their drivers! It would be great to have open-source Nvidia drivers, so *BSD and non-X86 users can enjoy 3D acceleration, too.
      Until then, I won't buy one of their cards. Period.
      • Re:If you like it (Score:1, Informative)

        by The Rogue86 ( 588942 )
        They don't need to open the source to port it to *BSD, but it would make developing simpler. It seems you have fallen victim to the assumption that closed source is evil. It can be a good thing in a competitive system. Nvidia doesn't need to open their drivers, just port them. Last time I checked, Creative didn't have open drivers for the SoundBlaster cards, but they run just fine on Linux (I have not checked for BSD support and don't really plan on doing it). That is an example of a company with a great product that is cross-platform compatible. Open drivers are nice but ultimately not that important. It's the existence of drivers that is the issue.
        • Re:If you like it (Score:1, Interesting)

          by oddo ( 111788 )
          Creative has free software drivers for GNU/Linux, and as far as I know there are some specs that would make it easy to port to another platform.
        • Closed source is not evil, no. But in the case of these Nvidia drivers for *BSD, it's not exactly angelic. The problem is that the people at Nvidia just aren't too interested in BSD, so they don't put resources behind the driver. Sadly, they can't just let the code go and be done with it; they have to maintain the thing just like the Windows and Linux drivers.

          Anyway, they do have working drivers, which were done by 2-3 people who run FreeBSD there. But other than them, no one else there is interested, so the drivers are not likely to be released (2-3 people can't maintain the thing on their own while staying in sync with the other two drivers). Also, the two initiatives by non-nvidia people to get a working driver for FreeBSD seem to have died; one officially, the other just seems to have gradually slowed to a halt. :-/

          Now if the drivers had been open, *BSD would have working drivers, and all NVidia would have to do is look after the Windows driver. My next card will be a Radeon, since the only reason I run Linux is because of the lack of nvidia drivers for BSD :-(

          By the way, before anyone starts telling me I'm making things up, they really DO have working drivers, I found this out from an nvidia employee on OPN who joined #FreeBSD.

          • Well, the problem is the ball is mostly in NVidia's court WRT FreeBSD drivers. Supposedly the Linux drivers should be very similar to the FreeBSD drivers, but so far nothing has come out of NVidia.
      • Re:If you like it (Score:3, Informative)

        by be-fan ( 61476 )
        Theoretically, you don't need source to port to BSD. The kernel driver has an abstraction layer (which comes in source form) and is fully portable. The XFree86 module (by design of the XFree86 driver model) is platform independent and can be loaded on any x86 OS. As it stands, it is not in NVIDIA's best interest to release the driver code. First, parts of it are copyrighted by other parties. Second, you can bet that ATI and Matrox would love to get their hands on it. Remember, an OpenGL ICD is an entire OpenGL implementation, not just a hardware banger. That makes the situation rather unique. ATI has some hardware that could be seriously competitive with NVIDIA's if it had proper drivers. Why should NVIDIA jeopardize their company to placate 0.01% of its users?
      • "Now if only they would do the same with their drivers!"

        I know someone who once worked for MetroLink. He was part of the team that was writing the NVidia device driver for Metro-X. They were a source licensee, under NDA, yadda yadda, so they had access to the NVidia driver source.

        He said that the NVidia driver source is highly coupled with the chip design. Apparently, the NVidia driver people have intimate knowledge of the hardware design, and take advantage of it. This lets the driver exploit as much of the hardware's potential as possible. However, it also means that the driver has specific knowledge of the hardware design.

        Given that NVidia's sole business is chip design, you can bet that they will never release source for that driver. It contains too much of their business. (No, it is not a chip schematic, but that isn't the point. It contains enough to make their lawyers unhappy.)

        For better or worse, that is the way it is with NVidia. If you do not like it, do not buy their cards.
    • I bought the GF4 Ti4600 from evga.com during the pre-order phase (and am very happy with it). The cool thing is evga's upgrade program.. Within 2 years I can trade it in for the latest/greatest card, and have the full amount I bought it for applied to the new card. :)
    • Re:If you like it (Score:3, Insightful)

      by scott1853 ( 194884 )
      Is there really anybody here that hasn't bought one? Personally I buy their cards because they're the best. I really don't see the need to turn the purchase of a certain video card into a political statement. If you want to support open source, donate toward Blender, or sign up for an account with the makers of your favorite Linux distro.
      • Saying "nVidia is the best" got a lot harder on July 18. Their new 9000 is much nicer than nVidia's crippled 4MX series, and the 9700 is 6 months ahead of nVidia. Which is amazing, because nVidia used to be consistently 6 months ahead of ATI.

        Of course, ATI is missing a strong competitor to the 4Ti that the original poster referred to.

        Bryan
        • It's great to see some real competition in the video card market again, but in all fairness I don't think it is accurate to say ATI is 6 months ahead here. I haven't seen any 9700s on the shelves.
        • From what I've seen, I thought the 9700 series wasn't going to be available for several months, and that by then or soon after, nvidia should have their next-gen card out (if they can overcome the .13micron fabrication problem [businessweek.com]).

          In any case, saying that ATI blows nVidia's GF4 away with the 9700 series card (when it's not even out yet) is like comparing a Pentium 4 3GHz processor to an AMD Athlon 2200 even though the 3GHz version isn't out yet...

          • They claim August, which isn't that far away. They aren't a software company, so they might make the date, but I'd need good odds to lay money on it...

            You're right, the comparisons are similar. Lots of people have 3GHz p4's: either early samples or overclocks, but the average Joe doesn't.

            Bryan
      • Is there really anybody here that hasn't bought one? Personally I buy their cards because they're the best.

        me

        but then again, what do I know.

        NVidia's cards might be the best, if you define "best" as "most FPS in Quake". They're not "best" if you care about things like accurate color, stable drivers (several of my cow-orkers have shiny new laptops with NVidia chipsets/drivers that bring the things down every hour or two), etc. ATI still has them beat there, as do other manufacturers.

        And yes, money talks. If people like something but nobody buys it, that something is usually considered a failure. In this case, sending a friendly thank-you note to NVidia along with your order is probably a good course of action...

        • Huh? ATI is famous for crappy drivers, both in Windows and in X. As for me, I've never had any problems with NVIDIA's drivers (Windows 95 -> Linux 2.5*), and I've only used NVIDIA cards in all my machines since my PII-300.
          • Well, I guess we've each different experiences then. Of the several dozen gfx cards I've come in contact with over the years, only the NVidia cards have given me any shit. ATI's drivers have been butt-slow, but they worked.
            • I think you're going to find your experience in the minority. ATI's driver reputation is really, really bad compared to nvidia.
            • Unfortunately I have had the same experience.
              My NVidia cards (TNT and TNT2) used to crash ALL THE TIME under xfree86 (3.3.x and 4.x.x), so I finally thought "Enough of this shit" and bought myself a Radeon, and I haven't looked back since. The drivers are very fast (I used to play Quake 3 and Tribes 2 fine on my Athlon 500; now I have an Athlon 1600XP), and it's nice to watch the DRI drivers mature and get faster & more feature complete.
          • Don't know about you - but until a few months ago I had a few problems with NVidia cards + VIA chipset and AMD Athlon. Those problems have disappeared lately.

            As for ATI - well, their X driver is very good (written by non-ATI people), but their Windows NT 4 drivers really suck!
        • stable drivers...ATI still has them beat there

          You have got to be kidding. nVIDIA is known for having rock solid drivers - I've never had a crash while running them, and most other people I know haven't either.

          ATI is known for its poor drivers, and has been for a long time.
          • Not only are nvidia drivers stable, but they squeeze every last drop of performance they can out of their cards on a consistent basis. ATI's drivers don't realize 100% of the potential performance of the hardware; why would I buy such a beast of a card if it effectively has a governor on it?
            • I'll second that. I bought an NVIDIA tnt2 ultra 32mb card a few years ago. During the time I had it, I saw about 4 revisions of drivers, with each revision the 3d output got more full-featured and faster. The very last driver release (while I was using it, upgraded last year) nearly doubled the framerate it was getting.

              I can only pray that they're able to squeeze performance like that out of my geforce4 ti4200 in the future. I can already overclock the hell out of it but it's just not wise :)
        • PowerVR writes great drivers for the Kyro 2 chips. The K2 boards also have beautiful internal 32-bit true color rendering, and the 2D graphics look incredible too. They also have good Linux drivers, though those are currently in a beta state.

          I totally recommend them to other Linux users if you just want high GeForce 2-level performance (they are almost 2 years old now), but they are great for games like RTC Wolfenstein. Very stable too, as they were in Windows 2000.
      • Yup, I've bought only ATI and Matrox myself. (I only really started caring about which card I had about a year ago; I then waited a few months and got an 8500. It's been pretty good, although the Linux drivers took forever to come out, and I still don't have a really good 3D driver.)
    • Amen to that. (Score:4, Informative)

      by gatesh8r ( 182908 ) on Tuesday July 23, 2002 @04:39PM (#3939823)
      Even though they don't have GPL'ed drivers for Linux, they have full support. They give me the source code for the kernel driver so I can compile my own kernel and not be stuck to a stock kernel. Yeah, the licence doesn't allow redistribution, but I'm not exactly concerned about that since 1) They're giving the program to me for free as in beer, and 2) I'm not a developer so issues about code modification aren't my concern.

      They have kick-ass products that officially support my platform of choice. 'Nuff said. :-)

      • "2) I'm not a developer so issues about code modification isn't my concern."

        I was born to a rich family in a rich country, so issues about growing hunger aren't my concern.

        I'm going to die within next hundred years, so it's not my concern how badly we pollute our planet.

        ...
      • Re:Amen to that. (Score:2, Insightful)

        by smiff ( 578693 )
        They give me the source code for the kernel driver

        Does that source code include a one MB file called "Module-nvkernel"? Tell me, what language was that file written in?

        They're giving the program to me for free as in beer

        Presumably you shelled out a fair sum of cash to buy an Nvidia card. That card will not work without drivers. I can assure you, you paid for the drivers when you bought the card.

        I'm not a developer so issues about code modification aren't my concern.

        Even though you might never exercise your right to modify code, it should still be a concern for you. You wouldn't be running Linux if it weren't for the ability to modify code. Developer or not, the ability to modify (and audit) code benefits almost everyone (it's debatable whether or not it benefits Nvidia more than keeping the source closed).

        What happens when someone restrains a freedom that you want to exercise? Should I support those restraints because they don't affect me? Even if the ability to modify code never benefits you, that doesn't mean you should disregard other people's freedoms.

        For the record, if Nvidia were to open source their driver, developers could port it to other operating systems, such as FreeBSD and AtheOS. The X11 side of their driver could be ported to other graphic systems, such as Berlin or the graphics system for AtheOS. The kernel side could be integrated and distributed with the Linux kernel. The X11 side could be integrated and distributed with XFree86. Their code could be used in research projects for new graphics systems. It is possible that Nvidia's GPU can perform operations that could accelerate other computations (perhaps image recognition, speech recognition, or some other project which the drivers were never intended for). Since Nvidia won't open the source, we may never know.

      • I was under the impression that they just gave you a binary module and sources to compile a loader against your current kernel version.
    • Since when do I need any sort of excuse - like encouraging Cg - to buy an Nvidia card? I needed an excuse?
    • well, actually this thing nvidia's doing is called 'promotion'.

      Open-sourcing the Cg stuff is a great way to get lots of people using it, and it gets into the press well, since OSS is all the rage right now.

      still, it's a nice thing...

    • The parent is 100% correct. They need to hear from real, genuine customers who are interested in their products because of their support for open source. Here's a letter I composed to the contact on the press release. Do NOT copy this word-for-word. It is only meant to give an idea of what we should probably be saying. (Incidentally, the letter is entirely true. That is also important.)
      Greetings!


      I have used an NVIDIA TNT2 for the past several years, and have been hesitant to buy a new 3D card lately. The offerings have (mostly) all been good, but I've been looking into which company was most willing to support open source initiatives. I choose open source because I believe the community is able to produce superior products compared to closed source alternatives. As a result, I look to purchase from companies who are willing either to make their products available and interoperable with open source technologies, or make contributions the open source community can use.

      Today, I read NVIDIA's announcement (http://www.nvidia.com/view.asp?IO=IO_20020719_7269 "NVIDIA Open Sources Cg Compiler Technology") and was immensely pleased. As a direct result, my decision was made. I took the plunge and picked up an NVIDIA GeForce 4 Ti based video card on my way home from work. I'm incredibly happy with the product (performance under XFree86 is excellent) and the company that produced its core technology.

      In conclusion, I want to make it clear that this purchase was triggered by NVIDIA's move to open source Cg. This makes a powerful technology available to those of us who chose not to be bound to one vendor's idea of how the industry ought to be shaped. I am eager to remain an NVIDIA customer as your company enables my platform of choice!
    • They make cards now? I thought they just made kick-ass chipsets; believe me, if they made a card I would purchase it.
    • Money talks. If you like what they are doing, tell them you like it by buying one of their cards.


      But first you'd better understand what they're doing and what they're not. They are NOT open-sourcing their video card drivers. Until they do or somebody manages to reverse engineer the binary ones, their products remain proprietary. IMHO, nobody that supports Free Software should buy proprietary hardware that requires closed-source drivers. So it seems instead this Cg thing is just a language for programming shaders so you don't have to use assembly. Big deal. It's a step in the right direction to have a standard, but it doesn't make their products any more friendly to Free Software.
  • I've read the article, but I believe I'm not enough of a graphics geek to understand it O:-) What's a "Cg compiler"? What's it for?
  • So are they looking to get this into OpenGL 2.0?

    There was some debate on which was better: 3DLabs' proposal, this, or an ATI solution.

    Anyone know more?

    But whatever happens, thank you. After all, the chip business needs a reason to sell more chips, and graphics is a big one: the faster people can use the new features, the more games/apps need powerful chips.

    regards

    john jones
    • "so are they looking to get this into OpenGL 2.0 ?"

      Maybe this is a move to help stave off the patent problems with MSFT related to OpenGL and provide an alternative system.

      Perhaps what I just said doesn't mean anything and I am confusing different things - is Cg an alternative to OpenGL or something else? I do not know enough about graphics to tell.

      • This was detailed in an article several weeks ago. Cg will compile for both OpenGL and DirectX, but people were worried that because nvidia has had such a close relationship with Microsoft, they would phase out OpenGL support or something of the like. Personally, I wouldn't worry about nvidia doing something like that.
  • license (Score:1, Interesting)

    by bsDaemon ( 87307 )
    As the article fails to spell out the details, other than that it is a "non-restrictive open source license," we can assume this means something like the BSD or X licenses, especially since MS co-developed it. There was a DaemonNews thread a while back about Bill Gates saying how governments should use BSD-style licenses for the absolute maximum effectiveness on stuff they develop. It just allows more embracing and extending to happen.
    • we can assume this means something like the BSD or X licenses, especially since MS co-developed it

      If MS helped develop it, no way will it be BSD.

      Bill Gates saying how governments should use BSD-style licenses

      Well DUH! Can't this be translated as "World's greediest person says 'other people should give me stuff!'"?

      Gates thinks that other organizations should release stuff under the BSD license, so that MS can profit from it... you'll note that he's never said Microsoft should use the BSD license.

      I'd think that it'll probably be something more like the MPL or Apple's version (i.e., if you release changes, you have to give nVidia license to bundle and sell it).
  • Nvidia's Cg (Score:2, Informative)

    by Anonymous Coward
    I've been coding in Cg for some time, and there have been a number of problems I've faced so far:

    1. The vertex engine calls are not logical. Sometimes you call passing a referenced pointer, other times you have to pass a referenced structure; some form of standardization of calls would have made it easier for developers to write function calls (more insane than POSIX threads).

    2. The lanugae is not truely Turning complete. Which could have been fixed by taking some more time and making the language more complete.

    3. The compiled bytecode is given a security mask that disables its use on chips that do not carry a complement decoder (to keep competitors away?).

    4. Confusing definitions of pointers/references. They could have made this easier by removing pointer usage entirely.

    5. Class calls outside of friend functions can at certain times reach memory outside of parent definitions (bad language design?! I think this is one of the most debated features/bugs, since you can piggyback this to implement vertex calls within lightmaps).

    6. No SMP support in the current implementation and no thoughts of future support (what about threading?!).

    7. Inlining support is bad and possibly unusable outside the scope of inlining Cg within C.
    • erm... yeah (Score:4, Funny)

      by lingqi ( 577227 ) on Tuesday July 23, 2002 @03:55PM (#3939519) Journal
      The lanugae is not truely Turning complete. Which could have been fixed by taking some more time and making the language more complete.

      i would really love to give some witty comments here -- but am at a loss of words. which could be fixed by thinking up a few words to form a witty comment with.

    • by Anonymous Coward

      Class calls outside of friend functions can at certain times reach memory outside of parent definitions (bad language design?! I think this is one of the most debated features/bugs, since you can piggyback this to implement vertex calls within lightmaps).


      Agreed. What I feel is that this is bad design, not a bug, nor is it a feature. I have been testing on Nvidia's TestA hardware interface with Cg, and that has been the most annoying to date (along with the pointer disaster).

      Cg should not have been done the way it was done. I for one would have welcomed them embracing an established language instead of creating a buggy one like this.

      There was thought given initially to using Lisp, but I guess the powers that be decided to create a new and totally confusing language instead.
      • There was thought given initially to using Lisp, but I guess the powers that be decided to create a new and totally confusing language instead

        Well, better a new and totally confusing one than an old and totally confusing one.

        Yes, I've coded lisp before. But I recovered.
    • Well if it's unrestrictive open source, it should be possible to write or reorganize the Cg language structure to something more complete. As long as it compiles compatible byte code... right?
    • by Anonymous Coward
      The lanugae is not truely Turning complete.
      I thought the whole reason they made a new language (Cg) is because the chipsets weren't Turing complete. If they WERE Turing complete, then it would be a complete waste of time to make a new language -- just make a new back-end for your favourite C compiler and write a bit of run-time.

      However, the chips themselves can't do very much -- they can't do a conditional branch, for example. This makes it quite difficult to make a C compiler target them :)

      It would be very cool to just be able to do gcc -b nvidia-geforce9 ... or what have you since you'd be able to take advantage of a rich existing toolchain. But, alas, it's not to be.
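
      To make that concrete, here is a rough Cg-style sketch (illustrative only; it assumes the compiler fully unrolls loops whose bounds are known at compile time, which is how a restricted C-like construct can survive on hardware that has no conditional branch):

        // Sketch: a constant trip count lets the compiler emit straight-line code,
        // so no branch instruction is ever needed on the chip.
        float4 accumulate(float4 base, float4 delta)
        {
            float4 sum = base;
            for (int i = 0; i < 4; i++) {   // bound known at compile time, so fully unrollable
                sum = sum + delta * 0.25;
            }
            return sum;
        }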

    • Re:Nvidia's Cg (Score:5, Informative)

      by ToLu the Happy Furby ( 63586 ) on Tuesday July 23, 2002 @04:27PM (#3939745)
      2. The lanugae is not truely Turning complete. Which could have been fixed by taking some more time and making the language more complete.

      This was done on purpose. Current (and next-generation) GPU shader hardware is not Turing complete, so it'd be quite silly for Cg to be. The problem is that while most extensions to Cg can be added with a vendor-specific profile, extensions which would make the language more general purpose (like pointers, full flow control, etc.) are apparently considered changes to the language design and only NVidia can add them. This isn't a problem now, but it would be if another vendor came out with a more general purpose shader implementation first. (Technically it may be possible to make Cg Turing complete through the extensions NVidia has made available, but probably not in a very friendly way.)

      3. The compiled bytecode is given a security mask that disables its use on chips that do not carry a complement decoder (to keep competitors away?).

      Well, supposedly anyone can write a compiler/interpreter for Cg bytecode to target and optimize for their hardware just like NVidia can. (Of course they would need to introduce new functionality to the language through extensions, but the point is any standard Cg bytecode should execute on any DX8+ GPU with a compiler.) Indeed, this is one of (perhaps the only) huge plus to Cg: because it can be interpreted at runtime, rather than just compiled to shader assembly at compile time, new GPUs can (assuming they have an optimizing compiler) optimize existing shader code. This will be nice, for example, in allowing the same shader bytecode to run optimized on DX8 parts (few shader instructions allowed per pass), upcoming DX9 parts (many but not unlimited instructions per pass), and probably future parts with unlimited program length shaders.

      Yes, it does require the other vendors to write their own backend for Cg, but NVidia has supposedly released enough for them to do that with no disadvantages. The question is whether they will want to, given that doing so would support a language that NVidia retains control over (as opposed to MS-controlled DX and by-committee OpenGL).

      6. No SMP support in the current implementation and no thoughts of future support (what about threading?!).

      Presumably this can be done via an extension, although it might get ugly to retain backwards compatibility.

      7. Inlining support is bad and possibly unusable outside the scope of inlining Cg within C.

      What about inlining shader assembly in Cg? And beyond that, what sort of inlining would you want?
    • 2. The lanugae is not truely Turning complete. Which could have been fixed by taking some more time and making the language more complete.

      On what basis do you make this claim? Turing (note spelling) completeness can be achieved in very simple languages (for example: Iota [ucsd.edu]) and judging by the Cg language spec. [nvidia.com] I can't see any reason to doubt that Cg is.

      Was there something specific you were thinking of?
    • by Anonymous Coward
      Mark,

      There's no point posting on /. as an AC; we know it's you because these are the points you always highlight. I know you were against the language redesign and I know you were one of the few who wanted Lisp in there, but the decision has been made and Microsoft has helped us create a good language that we can extend (hopefully with the help of the community out here).

      Now, please get back to work :)

      The guy sitting behind you with a real desk :P
      • by Anonymous Coward
        Both of you get back to work.

        The guy sitting behind you with a real office and a real desk.
        • It's time to get the telecom and security group over there to block these "time-wasting" sites from the coders' machines!

          The guy who owns NVidia stock (and owns you!)

    • Re:Nvidia's Cg (Score:1, Informative)

      by Anonymous Coward
      I have trouble following what you are saying. First of all, there are no pointers in Cg. Input variables have the qualifier "in" (this qualifier can be omitted), and output variables have the qualifier "out". All parameters are passed by value. There are no pointers, and there are no references in the style of C++.

      While there are no pointers explicitly in the language, you can effectively get pointer dereferencing by doing a dependent texture lookup. This is a common technique today with DX8 (e.g. reflective bump mapping) but so far it isn't commonly discussed as a "pointer dereference" or "indirection".

      Also, in your comment you seem to be confusing the Cg shading language and the Cg runtime environment API, which are two quite different things.

      Eric
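
      To illustrate what Eric describes, here is a small fragment-program sketch in Cg-style syntax (the sampler and parameter names are invented for the example, and whether a given hardware profile accepts this exact form is an assumption):

        // "in"/"out" qualifiers take the place of pointers, and a dependent texture
        // lookup (one tex2D feeding the coordinates of another) is the closest thing
        // the hardware has to dereferencing an address.
        void main(in  float2 uv       : TEXCOORD0,
                  out float4 color    : COLOR,
                  uniform sampler2D offsetMap,
                  uniform sampler2D baseMap)
        {
            float2 offset = tex2D(offsetMap, uv).xy;  // first lookup produces a value...
            color = tex2D(baseMap, uv + offset);      // ...used as coordinates for the second
        }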
    • Re:Nvidia's Cg (Score:5, Informative)

      by nihilogos ( 87025 ) on Tuesday July 23, 2002 @04:56PM (#3939923)
      I am pretty sure the above post is just rubbish someone made up to make a couple of moderators look stupid.

      AFAIK Cg is a C-like language designed to make writing vertex and pixel shaders easier. Real-time shaders for nvidia's and ati's cards are currently written in assembly. It is not supposed to be a new language like C or Python or insert-language-here. All it has to do is transform 3D vertex or pixel information.

      A vertex shader takes as input position, normal, colour, lighting and other relevant information for a single vertex. It performs calculations with access to some constant and temporary registers, then outputs a single vertex (this is what the chip is built for). It does this for every vertex in the object being shaded. Pixel shaders are a little more complex but similar.

      Points 1-7 have nothing to do with Cg.

      There is a very good article on vertex and pixel shaders here [gamedev.net]
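
      For anyone who has not seen one, a minimal Cg vertex program looks roughly like this (the parameter and matrix names are invented for illustration; the POSITION and COLOR semantics bind the inputs and outputs described above):

        // One vertex in, one vertex out: transform the position and pass the colour through.
        struct VertOut {
            float4 position : POSITION;   // clip-space position
            float4 color    : COLOR;
        };

        VertOut main(float4 position : POSITION,      // per-vertex inputs
                     float4 color    : COLOR,
                     uniform float4x4 modelViewProj)  // constant for the whole object
        {
            VertOut OUT;
            OUT.position = mul(modelViewProj, position);
            OUT.color    = color;
            return OUT;
        }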
  • this one should be called gcgc

    or gc^2 or gc**2 or pow(gc,2)

  • Nvidia's GLIDE (Score:1, Offtopic)

    by papasui ( 567265 )
    I can't help but think back to 3dfx and the Glide library and think that this may be Nvidia's goal. It would probably be a great way for Nvidia-optimized code to be developed if the compiler automatically did some special things for Nvidia cards, even if it did output a product that works on almost all video cards (OpenGL).
  • Will only a 'portion' of this compiler be free? Some of you may or may not know that their Linux modules require a proprietary 'binary stub'. Thus the difficulty in porting any of their stuff to other UNIXes such as FreeBSD.

    Hopefully that won't be the case with this.
  • Cooperation (Score:2, Interesting)

    Some of the best news is that they've openly said they'll include support for ATI and other large manufacturers of competing graphics products. I'm glad to see that Nvidia isn't being closed-minded or trying to undermine their own intentions for ease of development by playing the proprietary card.
  • by Anonymous Coward on Tuesday July 23, 2002 @03:51PM (#3939484)
    I'm in a field totally different from graphics programming and hardware, but:

    In my reading of earlier coverage of Cg, my understanding was that most people weren't concerned about Cg or its compiler being open source, but rather that Cg would depend to some extent on hardware specs that are proprietary. This would have the effect of driving other hardware competitors out of business because they can't implement Cg components because of hardware patents. Sort of similar to fears associated with MS open sourcing part of C# while keeping a good deal of it dependent on proprietary stuff. The fear is that Cg would lead to people saying things like "well, your video card is so crappy it doesn't even support a standard graphics programming language" (all the while being unaware that the card can't because of hardware patents). Just because the language and compiler are open source doesn't mean the hardware they will run on is.

    Anyone more knowledgeable care to comment? Am I misunderstanding this?
  • About the license (Score:1, Informative)

    by Anonymous Coward
    Please read here:
    http://www.cgshaders.org/contest/
    As you can see from the terms and conditions on that CG site, they favour and link to the ZLib license.

    I think that CG will be under the PHP/ZLib license.
  • Everyone is excited about this, but the thing is that it's really not that important, because you still have to have a compiler. Until other graphics card manufacturers make Cg compilers, this won't really be a standard; it will still be an nVidia solution.

    Besides, there are already C compilers that will turn your normal C code into vector code, for PS2 and 3DNow!/SSE instructions. Check out codeplay [codeplay.com] for more info. Yes, you have to pay for it. They don't have a compiler for the DirectX shading machine yet, but this proves that they could. It's not like we have to invent a new language for every machine.
    • Wow. This is so off base it's not even funny. This has nothing to do with vector code. Any given Cg program is highly non-vectorizable. They have a very limited number of inputs and only one output. Sure, multiple vertex units can run the same vertex program on different vertices (or pixel shaders can run the same pixel program on different pixels), but a vectorizing compiler has nothing to do with it. Besides that fact, a C compiler is quite a leap away from a Cg compiler. We're talking about a machine that can't do loops properly...
    • Until other graphics card manufacturers make Cg compilers, this won't really be a standard; it will still be an nVidia solution.

      "source code", "non restritctive licence" and stuff like that means that oter gc makers don't have to make their own compilers.

  • This dramatically increases the chances that CG will become somewhat of a standard. Right now, it looks as if this is a case of NV putting forward the technology merely in order to push their products forward. Any standard without industry acceptance would be dead in the water, and its failure would invalidate the 3-dTbufferAnisographotopically mapped TexSurfaces features they would be including on their next cards. This way they have a way to get past the slow-as-hell OpenGL board and actually retain some control over the standards they work on without either ceding control to BillyG or keeping their competitors out.

    It works like this: NV knows it does not have the force to push its own Glide-esque language on the industry. There are too many other cards out there *cough*R300*cough* that could potentially grab enough market share to lure developers away from anything proprietary into existing standards that work on everything. Open-sourcing CG is also a way of putting pressure on other companies to adopt it, as ATI seems a little reluctant [theinquirer.net] to adopt something that NV controls tightly. In the war between OpenGL 2.0 and DirectX 9.0, CG looked like it didn't have a chance to replace these venerable industry standards, but gaining a lot of developer support before either of these is released (by giving any potential CG programmer the source code for free) will validate it. It's explained pretty well in this article [theinquirer.net] about the impending split in developers' plans.
  • Practically, Cg is less useful than RenderMonkey, because RenderMonkey is readily integrated into popular graphics packages.

    However, there is some pretty good potential there to make a Cg plugin for everything under the sun.

    Controlling the Shader Language standard is almost as important as making a better video card, as you'll have a feature set your competitors have to follow - if Cg becomes the most popular language, then NVidia can say on their marketing material "GeForce 10: 100% Cg compatible, Radeon 50000: only supports 80%".
  • Nvidia, if you're reading this, please read.

    For as long as I remember, the #1 complaint from the open source community has been the lack of open source X drivers, and the lack of documentation for directly accessing the hardware.

    This still isn't direct access to the hardware is it? This is an API that goes through a compiler that translates things into machine code. Absolutely no real access to speak of.

    Sometimes I wonder if nvidia cards are truly the hardware marvels they are made out to be. Their implementation sort of reminds me of Play Incorporated's Snappy video snapshot, where the hardware functions and BIOS get loaded by an external program. I don't know if this is the exact case with nvidia hardware, but I'm pretty sure I'm not that far off the mark.

    If that really is the case, it means that TNT2 cards are capable of all the neat tricks GeForce cards are, only a lot slower. I can see why you wouldn't want it opened up to the public. What's to stop a competitor from using the same hardware/software implementation you are?

    I don't think it would seriously put a dent in the bottom line, however. People tend to keep their loyalties towards a company if it doesn't fuck its customers. Look at how many hits a day voodoofiles.com gets!

    So be bold and daring like the new Doritos. Let other companies mimic your techniques, and try not to worry about the bottom line so much. If you let a bunch of open source gurus hack on your code, you could fire a few of those internal programmers, thereby making up the cost. If you do this, anytime a relative, friend, or customer asks us what 3D card solution they should get, we will respond NVIDIA.

    yours truly

    --toq
    • If that really is the case, it means that TNT2 cards are capable of all the neat tricks GeForce cards are, only a lot slower.
      Well, duh. If that wasn't the case we'd not have had any computer graphics for the last few decades or so. If the hardware isn't doing it the CPU is. That's the whole *point* behind the GeForce line of cards.

      Yes, you can do pixel and vertex shaders on the CPU, but it will make the application so slow as to be unusable.

      Don't think that your 6 year old TNT2 card will become some magic speed demon if nVidia gives you driver source. It won't. Your argument is akin to saying, "Intel, give us the internals to the P4. I know I can make my 80286 run all new code if you do!"
      • Don't think that your 6 year old TNT2 card will become some magic speed demon if nVidia gives you driver source.

        Did I say that? I thought I said...
        If that really is the case, it means that TNT2 cards are capable of all the neat tricks GeForce cards are, only a lot slower.

        Please read comments before replying, thank you.

        --toq
        • I'll try to sum it up for you in simple terms, since you can't seem to grasp the concept.

          TNT2 doesn't have the transistors to do hardware transform and lighting. It can't do pixel and vertex shaders. Those can only be done in dedicated hardware or on the CPU. No amount of driver source code will change that.

          Via proper software drivers (OpenGL and/or DirectX) TNT2 cards can *already* run games that use pixel and vertex shaders. It's just that since the card is offloading all of those calculations to the CPU, the programs are intolerably slow.

          Please take your random thoughts to logical conclusions before posting insipid open letters to corporations.
          • Microcode updates? BIOS updates? Programmable gate arrays? Ever hear of any of these? Probably not.
            I won't flame you; you obviously don't know enough about hardware to make any logical conclusions yourself.

            • Would you PLEASE stop talking about stuff you obviously know nothing about? There is no PGA in the TNT. A BIOS/microcode update cannot make up for the lack of vertex and pixel shader silicon. In fact, you cannot even emulate the vertex and pixel shader path in software, because there is no way of inserting it into the correct rendering path on the TNT2.

              You cannot emulate rendering 16 textures at once by rendering several times either because there is not enough framebuffer alpha accuracy to do it.

              You're living in a land of make believe,
              with elves and fairies and little frogs with funny green hats!
    • "If that really is the case, it means that TNT2 cards are capable of all the neat tricks gforce cards only allot slower."

      Kinda sorta, but not really. An updated driver for a TNT2 board could emulate in software all the silicon a TNT2 is missing. That's true regardless of what card you have. There are software-only OpenGL drivers out there. ...but that's not really what you're after, is it? Maybe I just didn't understand you. I usually understand raving loonies just fine (professional courtesy and all that :P), but your thinking just strikes me as a little out of kilter.

      • Great another raving looney :P

        Well, I used to work with the guys from Play Inc. One of them basically explained how the Snappy video snapshot worked.

        Their custom chip was sort of a combination of ROM/RAM and logic. The ROM acted as a bootstrap for basic parallel port communications. The RAM would store code downloaded via the parallel port, and the logic would chew on that.

        Basically the Snappy never really got any REAL upgrades to the hardware (note this is where nvidia and Play differ; nvidia adds faster hardware). Versions 1, 2 and 3 of the Snappy were all nothing more than "soft upgrades".

        I think nvidia cards work in the same fashion; that's why we see such a performance increase between driver releases, because the actual chip logic is loaded at boot.

        1. Preboot, vga compatible mode
        2. Boot, load custom OS specific hardware register code
        3. Load OS specific driver for glue between the OS and the hardware (which is really software)

        There is only so much you can do from calls to the OS for speed. If, on the other hand, you could "soft upgrade" the hardware on boot, every time you optimized that boot software a little more, it would stand to reason that the card would run faster.

        So basically if you wanted to add that "gwhiz AA x4" feature to your card, you could write it in software, and load it into your card at boot.

        Like I said earlier, nvidia open sourcing it would probably lead to a lot of the newer cards' features being found on older cards, only a helluva lot slower. This, too, is a reasonable assumption, because the hardware is slower. It's no less capable of running the same code, though.

        Hope that clears things up.
        • So basically if you wanted to add that "gwhiz AA x4" feature to your card, you could write it in software, and load it into your card at boot.
          Total BS. The registers to do HW T&L don't exist on anything below a GeForce 1 card. The registers to do pixel and vertex shaders don't exist on anything below a GF3 card.

          There is no way you could write a new driver for a TNT2 card that would allow it to do those advanced features. Give up the pipe dream. A programmable pipeline graphics card != a simple video converter box. It doesn't matter how much you believe that the hardware/software design behind a Snappy can be transferred to a video card, it just isn't going to work.

  • The position from NVIDIA so far is that they licensed technology from someone else to construct their drivers and hardware, and so they are not at liberty to release open drivers. Fine, that's something I can accept for now.

    What I'd really like to know is, as they move forward to new hardware and new drivers and new technologies, will they do so with the free software philosophy in mind, so that they can be more open about their work, and help the community adopt their hardware on platforms other than Windows, Linux, and MacOS?

    Certainly, if they release this compiler under a free license, then that's a good first step, because it could mean that they recognize the value of free software and how it aids the spread of technologies to new platforms, not to mention how good free interfaces can become standards. It seems clear that NVIDIA would like to be the new SGI, setting the standard by which graphics innovation is defined.
    • Well, being a company with shareholders, they'll (continue to) do whatever they think will increase their revenues the most. Cg is one example, and if open-sourcing it guarantees it wide adoption, that's what they'll do. I doubt we'll see them open-sourcing their drivers until their quality and integration (one driver for all cards, etc) is no longer the significant competitive advantage that it is now.
  • ATI will be providing plug ins to compile Renderman or Maya code to run on its Radeon 9700 rather than on the central processor. Although not directly competing with Cg, this does seem to be a much better approach. Provided of course that you could take your 'binaries' from Renderman/Maya and use them in your video game or whatever.

    Bryan
  • Of course nvidia would make the compiler free, because they make hardware, not software. Think of the compiler as a "marketing vehicle." They make this really cool Cg compiler, so everybody uses it to make some sweet graphics, and consumers need to buy new hardware to get 8 billion (or 40) frames per second.
  • It's important to make the distinction here. nVidia has open-sourced the parser and compiler for CG, but they control the language. Look at it this way: nvidia needs something to show off NV30 with, and CG will be the thing to do it. This is in direct competition with the OpenGL 2.0 and DX9 HLSLs, though, and you can bet that they won't be steering CG in any direction favorable to their competition like ATI or 3DLabs. It's fine if nVidia wants to do their own thing, but realize that this Cg isn't nearly as open as the "open source" headline makes it sound.
  • C for graphics?

    Does this mean I can segfault my video card now?

    After all, it's not C if my first version of the code that compiles doesn't segfault immediately.
  • Cg technology could be a great step forward. By releasing it under a nonrestrictive license, companies like splutterfish [splutterfish.com] can accelerate their plans for a shading language in brazil [splutterfish.com].

    The standardization on a shading language is going to push renderers to a new level, creating a massive pool of competition to Pixar's PhotoRealistic RenderMan.

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...