Businesses

Nvidia Hits $2 Trillion Valuation (reuters.com) 65

Nvidia hit $2 trillion in market value on Friday, riding an insatiable demand for its chips that has made the Silicon Valley firm the pioneer of the generative AI boom. From a report: The milestone followed another bumper revenue forecast from the chip designer that drove up its market value by $277 billion on Thursday - Wall Street's largest one-day gain on record. Its rapid ascent in the past year has led analysts to draw parallels to the picks-and-shovels providers during the gold rush of the 1800s, as Nvidia's chips are used by almost all generative AI players, from ChatGPT-maker OpenAI to Google.
This discussion has been archived. No new comments can be posted.

  • What goes up must come down.

    • How to say "I missed out on AAPL" without saying "I missed out on AAPL"

    • by gweihir ( 88907 )

      What must rise, must fall...

      What is really fascinating is how every hype finds so many idiots who are sure that this time it must all pan out.

  • by Rosco P. Coltrane ( 209368 ) on Friday February 23, 2024 @12:21PM (#64262818)

    when it pops.

    • by Calydor ( 739835 )

      Consider how companies at a tenth of this value were considered 'too big to fail' back in 2008 and then wonder what'll happen when everyone pivots to the NEXT big thing after AI.

      • Consider how companies at a tenth of this value were considered 'too big to fail' back in 2008 and then wonder what'll happen when everyone pivots to the NEXT big thing after AI.

        I think the most hilarious outcome will be some breakthrough in AI where we can dump the concept of having to have tremendous GPU farms in order to do even the simplest tasks. There's bound to be a better way to do what current systems are doing that we haven't found yet. And if that happens to be stumbled over in the near future, NVIDIA will implode so fast it'll make a cartoonish squeak on the way out.

        And before anybody jumps into lecture mode about how it can't possibly happen, yeah, nothing can happen u

        • by gweihir ( 88907 ) on Friday February 23, 2024 @01:37PM (#64263028)

          There's bound to be a better way to do what current systems are doing that we haven't found yet.

          Not necessarily. Some problems are just hard, and this is probably one of them. Remember that the current hype is not a "breakthrough" but the result of small, incremental steps over something like 70 years. A ton of really smart people have invested a ton of time into this, and the hallucinating semi-morons we currently have are a very advanced end result. Many people think these are just some early results and there is a mass of low-hanging fruit still to be found. Not so.

        • by backslashdot ( 95548 ) on Friday February 23, 2024 @02:13PM (#64263116)

          The biggest issue I see is that nVIDIA is fabless. The speed of CPUs is far more limited by the affordability and physics of the silicon than by the engineers working on CPU design. There's very little gained in implementation algorithms. There are not many ways to stay ahead in GPU design .. especially for AI. Sure, there are process-specific optimizations that you can do during place and route and LVS (it'll soon be at a point where computers do that better than humans). If you compare CPUs from nVidia, AMD, Qualcomm, and Apple .. you'll notice that the die area (how much cache, how many processing units) and the process node are the most important predictors of performance.

          Take AMD's Radeon RX 7900 XTX and compare it to the GeForce RTX 4090 .. The RTX 4090 performs up to 50% better, but that's because it's built on a 4-nanometer process and gets 609 mm^2 of die area, whereas the AMD card had to use a 5-nanometer process and work with only 530 mm^2 of die space. That's a pretty significant difference: it's like comparing a CPU with 423 mm^2 of 4-nanometer die area to one with 609 mm^2, roughly a 50% difference in the number of transistors. And sure enough, the 4090 has 76 billion transistors versus 56 billion on the XTX. The XTX has fewer transistors and uses an inferior process node .. of course it will perform worse. It has nothing to do with nVIDIA having better engineers.
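
          As a rough back-of-envelope sketch (Python, using only the figures quoted above rather than official spec sheets), the density and transistor-count comparison works out like this:

```python
# Back-of-envelope check of the die-size argument, using the numbers from the
# comment above (609 mm^2 / 76B transistors vs 530 mm^2 / 56B transistors).
cards = {
    "RTX 4090 (TSMC 4N)":     {"die_mm2": 609, "transistors_b": 76},
    "RX 7900 XTX (TSMC 5nm)": {"die_mm2": 530, "transistors_b": 56},
}

for name, c in cards.items():
    # million transistors per mm^2
    density = c["transistors_b"] * 1e3 / c["die_mm2"]
    print(f"{name}: {density:.0f}M transistors/mm^2")

# The total transistor budgets differ by roughly a third before any
# architectural differences even come into play.
print(f"transistor count ratio: {76 / 56:.2f}x")
```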

          My point -- nVIDIA's lead comes almost entirely from them being able to afford a superior silicon process node from TSMC .. not from some genius-level GPU design chops.

          What happens when process node improvements stall, or AMD says fuck everything .. we're going to 2 nanometer?

          They would just need to get a TSMC executive drunk and have him agree to let AMD get that node at a discount.

          • There are not many ways to stay ahead in GPU design .. especially for AI.

            There is a way: you build a chip dedicated to AI [groq.com].

            That's also what Apple has been doing, and I think SnapDragon has been making custom model processors as well...

            Since the valuation of a company is forward looking, it's absolutely nuts to think Nvidia deserves the valuation it has when its profits come from the needs of the moment; before too long, large AI data centers will be using chips other than general-purpose GPUs.

          • The biggest issue I see is that nVIDIA is fabless. The speed of CPUs is far more limited by the affordability and physics of the silicon than by the engineers working on CPU design. There's very little gained in implementation algorithms.

            Hmm, isn't this idea obviously false? After all, every company has access to the same fabs. In fact, it wasn't so long ago that Nvidia was a second-class citizen at TSMC, falling behind Apple and arguably behind AMD. In fact, even today, would Nvidia get priority if Apple decided it wanted more wafers?

            If the fab is the important part of the design/manufacturing chain, then either companies are incompetent at finding fabs (i.e., the only two leading fabs in the entire world) or the companies are incompete

            • If every company has access to the same fabs, why did AMD use 5N while NVIDIA used 4N?

              I'm sure NVIDIA paid extra for 4N (and AMD could have done the same) .. but that's a business decision rather than their engineers being smarter.

          • My point -- nVIDIA's lead comes almost entirely from them being able to afford a superior silicon process node from TSMC .. not from some genius-level GPU design chops.

            Just my personal anecdote: for well over 20 years, every time I've had an AMD card I've had endless problems with drivers. nVidia's stuff always works perfectly, even with incredibly old games and stuff. Rag all you want about performance, but the only thing I care about is that shit works. Hence, I've been nVidia exclusive for years and probably will be for a long time.

            Yeah, algorithms matter.

        • NV has taken a circuitous route to dominance of the AI marketplace. A company with purpose built hardware and a decent software stack should be able to take them down a notch, if not eventually surpass them. Tenstorrent looks promising.

          In any case, such competition may bring power requirements down a bit for LLM training.

        • Qubits! QPUs! just kidding nobody knows when quantum computers will be useful. That's what's so exciting about them! Not to investors though.

      • Consider how companies at a tenth of this value were considered 'too big to fail' back in 2008

        The problem with bank failures wasn't just their 'big' size, but their central functional role throughout the economy. They're the middleman for everything.

        NVidia is not.

      • by bn-7bc ( 909819 )
        OK, so a tenth of today's $2 trillion is $200 billion, which according to the Fed's inflation calculator was about $164.82 billion in 2008. So which company that was considered too big to fail in 2008 (and at what date that year, since valuations dropped rather fast) had a market cap at or below $164.82 billion? Or did you mean a market cap of $200 billion in 2008 dollars? Ugh, inflation and the future/past value of money make things complicated.
    • Yet Bitcoin surges again. NVIDIA's forward PE rationalizes the valuation to some degree...

    • How can they meet expectations when they are set that high? And with so many others racing to meet the same demand... long term this isn't likely to be good as they fight to keep that valuation.

    • by gweihir ( 88907 )

      Indeed.

    • the cost savings from the mass layoffs will prevent that. Yes, the broader economy will collapse, but that's a you & me problem, not a them problem.

      The King didn't need to sell iPhones to make money. He just took it.
    • by UpnAtom ( 551727 )

      AI is a goldrush and they're all using proprietary CUDA.

      Even though AMD has a faster chip, it can't use CUDA.

      Does this monopoly of accelerated-through-parallelism maths justify Nvidia's share price?

  • 3dfx (Score:4, Interesting)

    by MBGMorden ( 803437 ) on Friday February 23, 2024 @12:23PM (#64262830)

    Somewhere in a bar there's a guy who started 3DFX thinking "We could have had it all and we just screwed up."

    They went from "best in the industry" to out of business within 5 years.

    • Maybe they should have used better Voodoo against nVidia.

    • Re:3dfx (Score:5, Informative)

      by Luckyo ( 1726890 ) on Friday February 23, 2024 @01:22PM (#64262984)

      Not really. 3DFX lost to Nvidia in the Riva TNT era and got the finishing blow from the first GeForce chip. Neither of these was a GPU in the sense of "GPUs as AI accelerators", i.e. shader compute units. The TNTs were all about rendering as many triangles as fast as possible, and the GeForce's main addition was to move transform and lighting from the CPU to the accelerator. Neither could do programmable shaders, which is what we define as a GPU nowadays. They were "graphics accelerators" before that.

      And the reason why 3DFX lost was twofold. First, it got utterly mauled by the Riva TNT and Riva TNT2 for refusing to include a 32-bit color depth mode in Voodoo cards. Yes, 3DFX basically went "640kB of memory is good enough for everyone", but unlike Microsoft, they actually stuck to this in hardware (yes, they did internal rendering at 22 bit and then did some fuckery to make the 16-bit output look better than native 16 bit; point stands regardless). And second, Nvidia successfully got enough games to actually support moving transform and lighting off the CPU onto the graphics accelerator. So games run on a GeForce 256 were noticeably better looking than the same games running on a Voodoo 3, if they could be run on it at all, because of 32-bit color vs 16-bit (no visible dithering vs visible dithering) and much more complex transform and lighting. The only real advantage the Voodoo 3 had was that some games were still better optimized for 3DFX's own Glide API than for OpenGL or Direct3D. And that wasn't much of an advantage anymore at the time, with everyone having moved to OpenGL or Direct3D implementations that were good.

      And the actual first mass-market programmable-shader GPU was the GeForce 3. 3DFX was the property of Nvidia at that point, having gone bankrupt earlier. At the time it was purchased, it was still working on its first graphics accelerator with a T&L unit on board.

      • Not really. 3DFX lost to Nvidia in the Riva TNT era,

        More like the Riva TNT2 era - the Riva TNT was still a little slower than the Voodoo2 it was competing against, but it did cement Nvidia as a maker of good cards that could actually compete (everything else was basically a toy next to 3dfx's cards).

        That was in 1998 though, 2 years after 3dfx released the original Voodoo, which was basically wiping the floor with anything else at the time. Back then you could still choose if you wanted 3d acceleration or the software rendering engine and it

        • by Luckyo ( 1726890 )

          Not really. The Riva TNT introduced the fact that games looked way better on it thanks to its 24-bit color space vs Voodoo's 16 bit. So it was a choice between a better color space and being able to run many games in Glide, which was a significant performance boost for Glide-optimized games. Which was quite a few games back in the day, as this was before the full breakthrough of Direct3D and OpenGL, which were significantly less performant APIs. This was the API war era, where Glide had by far the best performance but only ran

    • Just look at AMD. They're valued at less than 20% of Nvidia. Their products are just a little bit worse than Nvidia's and they're just a little bit behind the curve when it comes to catering to the AI market.

      The technology business is brutal. Being just a little bit better can lead to winning by a huge margin.

      • I just did a quick Google search out of curiosity, and NVDA's data center revenue is 10x AMD's.

        Maybe that has something to do with it, and not your "a little bit worse products' claim.

        Anyone that uses any generative AI tools on their local machine knows NVDA devices are much faster than AMD at a similar price point.

    • 3DFX cards were really just used for gaming, though. You can only sell so many PC add-on cards and arcade machines.

      NVidia didn't really become a money-printing machine until they invented CUDA, which let their GPUs do "real" work like crypto mining and eventually AI acceleration.

      • by UpnAtom ( 551727 )

        CUDA was out a couple of years before the first Bitcoin client, which itself was years before Bitcoin was actually worth anything. CUDA was initially aimed at number crunching in science: weather forecasting, DNA folding et al.

        3DFX was badly mismanaged.
        https://en.wikipedia.org/wiki/... [wikipedia.org]

        They also missed the transition of Hardware T&L from the business to consumer market in DirectX 7. The first Geforce just destroyed it and Nvidia never looked back. It's likely Nvidia consulted with Microsoft to exclude

  • Maybe I am missing something, but compared to a GPU, AI chips seem much simpler (a toy illustration of the core operation follows the list):
    - No unpredictable conditional memory access patterns. It's mostly matrix operations, which are very linear and predictable
    - No complex shaders. It's primarily multiplication and addition
    - Extremely small and simple fixed functionality - just tensor operations. GPUs have to deal with many things like anisotropic filtering, texturing, z-buffering, rasterization, tessellation, anti-aliasing...
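
    As a toy sketch of that point (illustrative only; the tile size, dtype, and use of NumPy are my own assumptions, not a description of any particular accelerator), the core operation is just a multiply-accumulate on small tiles:

```python
import numpy as np

# The workhorse operation of an AI accelerator reduced to its essence:
# D = A @ B + C on a small fixed-size tile, repeated billions of times.
# Real hardware does this in low precision with dedicated units; the
# shapes and dtype here are purely illustrative.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16)).astype(np.float32)
B = rng.standard_normal((16, 16)).astype(np.float32)
C = rng.standard_normal((16, 16)).astype(np.float32)

D = A @ B + C  # one tile-sized multiply-accumulate
print(D.shape, D.dtype)
```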

    Besides the usual players, there are m
    • by Luckyo ( 1726890 )

      Depends on patents. Patents are the thing that locks certain companies in. For example x86 and then x64. Sure, you can build an ancient x86 compatible chip without intel's license nowadays. Not so much for x64 and AMD though. Cutting edge is locked down with patents. Old stuff you can make, but demand for that is small to non-existent while competitive cutting edge is present.

    • by SuperKendall ( 25149 ) on Friday February 23, 2024 @01:20PM (#64262976)

      there are multiple companies developing AI chips, much more numerous than GPU companies.

      I totally agree; so much money is being dumped into Nvidia, but what are the chances the dominance they have will last even five years??

      One example that is really interesting is Groq, who is producing hardware today [groq.com] that is MUCH faster at processing existing LLM models.

      Someone described it as the same shift that occurred in Crypto when people went from general GPU chips for computation to custom ASIC chips.

      NVidia does have an advantage with companies that absolutely need hardware for AI right now, but that wave is going to taper pretty soon as data centers with alternative hardware arise and offer shared AI resources at a fraction of the current cost.

    • >It's primarily multiplication and addition

      Oh my god, this is posted as a serious comment. As if there is just no benefit in using a fast linear algebra library instead of a slow one (or a fast architecture instead of a slow one).

      Hey, technically I can solve these back-propagation equations in a notebook, why do I even need to write code at all?
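
      To make that concrete, here's a quick sketch (timings are machine-dependent and the matrix size is arbitrary) of what an optimized linear algebra backend buys you over doing the same multiplications and additions in plain Python loops:

```python
import time
import numpy as np

# The same "just multiplication and addition", done two ways:
# a naive triple loop in Python vs. NumPy delegating to an optimized BLAS.
n = 200
A = np.random.rand(n, n)
B = np.random.rand(n, n)

def naive_matmul(A, B):
    rows, inner, cols = A.shape[0], A.shape[1], B.shape[1]
    C = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            s = 0.0
            for k in range(inner):
                s += A[i, k] * B[k, j]
            C[i, j] = s
    return C

t0 = time.perf_counter()
C1 = naive_matmul(A, B)
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
C2 = A @ B
t_blas = time.perf_counter() - t0

print(f"naive: {t_naive:.3f}s  BLAS: {t_blas:.5f}s  speedup: ~{t_naive / t_blas:.0f}x")
print("results agree:", np.allclose(C1, C2))
```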

  • by backslashdot ( 95548 ) on Friday February 23, 2024 @12:39PM (#64262884)

    $2 trillion for glorified If-Then-Else and matrix multiplication?

    • THIS!!!

      Now I just need to make some widget, market it as "Widget AI", and watch all the cash rolling in.

      Hmm, maybe "Pet Rock A.I."
    • by ceoyoyo ( 59147 )

      Hey, wait until you hear about the hundred trillion dollar valuation of glorified not-and.

  • 3DTV, E-Readers, MP3 players, Cloud Connected, Virtualization, Cyber Security, Linux, ...and now ... A.I. It's so funny that even a simple algorithm is now being called A.I. I remember the same crap when computers were all on the internet, but "they" had to say "cloud networked/computing" for some dumb reason. Nvidia better do something with this soon, it's likely to be as short-lived as 3DTVs were. Maybe they could update the Nvidia Shield real quick right now while they have money, I'd love a tech update
    • I know you can't list everything but the crypto craze was a pretty significant one to leave out.

    • I also found it amusing that they included Linux and Cyber Security as "fads". I don't think that those two are going away anytime soon.

      • Fads are just that: big deals until they aren't. Linux was a huge fad for a while and has faded away; that doesn't mean it's not still useful, but there was a time in the stock market when having Linux in your name meant the stock was valued way more than it should have been.

        Cyber Security is definitely another fad. Actually I think A.I. will likely take its place for system scanning and analysis, since it's cheaper and more effective (it's likely not going to miss anything a person would). However, you'll always
  • Nvidia has nothing that would justify that valuation. The current AI hype will collapse in a little while when it becomes blatantly obvious it cannot actually be used for much because of hallucinations and being pretty dumb. And their other products do not justify that valuation in any way either.

    • >cannot actually be used for much because of hallucinations

      https://googlethatforyou.com?q... [googlethatforyou.com]

      Consider a model that is correct 90% of the time. Oh, but I guess we can't use it because of 10% 'hallucinations.'

      The 'models don't work unless they are perfect' crowd needs to learn something about statistics.

      • by gweihir ( 88907 )

        You need to learn to read...

      • 90% doesn't cut it for many applications.

        If I know only one thing about statistics, it's that the only path to salvation is Six Sigma.

        • by gweihir ( 88907 )

          Exactly. 90% is nice for a demo, but not for getting actual stuff done. The cleanup of those 10% will be much more expensive than getting somebody to do it right the first time. There are exceptions where things are badly broken anyways and 90% will not make things worse, but there are not many of them.
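
          A quick sketch of why (assuming, purely for illustration, that errors are independent across the steps of a task):

```python
# If each step of a multi-step task is right 90% of the time and errors are
# independent (a simplifying assumption), the chance of an error-free result
# drops off quickly with the number of steps.
per_step = 0.90
for steps in (1, 3, 5, 10, 20):
    print(f"{steps:2d} steps -> {per_step ** steps:.0%} chance of a fully correct result")
```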

    • This kind of bubble happens because traders aren't in it for the long haul, they're just trading on the ups and downs of the market.

      I'd like to see changes that require stock owners to keep their stocks for an extended period of time, say, one year, before they are allowed to sell it again. That might put this kind of wild market swing to bed.

      • If you own stocks through a 401k, you get penalized for pulling your funds out early. So if you're a worker, you have to hold the bag, while the big players can move their money around at will.

        Funny how that "free market" is only free for some. Why don't we do retirement like grandpa did?

        • Every kind of market only works for some. Thankfully, the free market works for a larger portion of people than other types, such as socialism.

          As for Grandpa's pension, those went away because they were always financially unsustainable. They offered so much to retirees that companies literally went bankrupt because they couldn't continue to pay enough into the pension funds to keep them afloat. Pensions were really a kind of pyramid scheme, where the early retirees lived like kings, but the later ones got s

      • by gweihir ( 88907 )

        I.e. they destabilize a whole global economy. What a stupid system to have.

  • They're the 4th most valuable company by market cap, behind AAPL, MSFT, Aramco (https://companiesmarketcap.com/ [companiesmarketcap.com]). To me, this is a huge bubble waiting to pop, but they're still behind MSFT & AAPL for now.

    To me, generative AI is just crypto all over again....but there seem to be a lot more suckers this round.

    However, I take solace in knowing their chips are also used for useful functions, like machine learning, machine vision, pattern recognition, etc. So while everyone is pouring money into
    • by ceoyoyo ( 59147 )

      Machine learning is AI. Vision and pattern recognition are classic AI problems.

      • AI is a machine or computer that can mimic human cognition. Machine learning is limited to a system capable of identifying patterns, checking accuracy, and learning from that data to get better outcomes.

        Machine Learning is a subset of AI.
        • by ceoyoyo ( 59147 )

          Machine Learning is a subset of AI.

          Yes, that's what I said. Your definition of machine learning isn't really correct though.

          Early AI was divided into two main approaches. The symbols and rules people built databases of facts and the relations between them, and equipped their systems with the rules of logic so they could create new facts. Programming languages like Lisp and especially Prolog were developed in large part to do this. But these systems don't really learn. You program them.

          Machine learning was t

      • Machine learning is AI. Vision and pattern recognition are classic AI problems.

        you and I both know this is all about ChatGPT and the generative AI hype. I have no issues with ML/MV... it's the people overselling generative AI that are getting on my nerves and starting to worry me... and this hype bubble is driven by ChatGPT and its counterparts in the generative AI space. You and I both know this.

        • by ceoyoyo ( 59147 )

          Nvidia is happy to sell chips to people who want to train their own language model, but that's certainly not all their hardware does. Even then, reliable natural language processing is a long-time dream. Remember Scotty talking to the mouse in Star Trek? The world has an enormous amount of information that is stored in poorly structured language: electronic text, printed text, audio, etc. There's a massive industry just in document processing.

          This is not just about chatGPT, and equating AI with a particular

          • The massive nVidia stock spike is about ChatGPT and generative AI. Let's not kid ourselves. nVidia was doing well before, but this surge is related to Generative AI. If this was about other AIs, it would have spiked a few years ago. This is also what's fueling MSFT's rapid rise. Both nVidia and MS have diversified offerings, but they're hot among investors who think Generative AI will have practical commercial uses in the near future.
            • by ceoyoyo ( 59147 )

              Estimates are that training a GPT-3-like model costs about a million or two dollars in compute. Open AI says $100 million or whatever, but they're giving the biggest number they can: including all the research, people talking to themselves to generate training data, Sam Altman's stock options, etc.

              Most investors aren't dumb. They're not valuing Nvidia at $2 trillion because they think everybody's going to be training their own GPTs a million bucks at a time. Modern AI models have a lot of demonstrated utili

  • Microsoft and Meta are the primary buyers, spending $20B/yr each. Maybe Google and a few others. That doesn't justify this hockey-stick valuation, because they will eventually get tired of it, just like they did with VR goggles.
