Nvidia Hits $2 Trillion Valuation (reuters.com) 65
Nvidia hit $2 trillion in market value on Friday, riding insatiable demand for its chips that has made the Silicon Valley firm the pioneer of the generative AI boom. From a report: The milestone followed another bumper revenue forecast from the chip designer that drove up its market value by $277 billion on Thursday - Wall Street's largest one-day gain on record. Its rapid ascent over the past year has led analysts to draw parallels to the picks-and-shovels providers of the 1800s gold rushes, as Nvidia's chips are used by almost all generative AI players, from ChatGPT-maker OpenAI to Google.
gravity (Score:2)
What goes up must come down.
Re: (Score:2)
How to say "I missed out on AAPL" without saying "I missed out on AAPL"
Re: (Score:2)
What must rise must fall...
What is really fascinating is how every hype cycle finds so many idiots who are sure that this time it must all pan out.
This bubble will make a supersonic boom (Score:4, Insightful)
when it pops.
Re: (Score:3)
Consider how companies at a tenth of this value were considered 'too big to fail' back in 2008 and then wonder what'll happen when everyone pivots to the NEXT big thing after AI.
Re: (Score:2)
Consider how companies at a tenth of this value were considered 'too big to fail' back in 2008 and then wonder what'll happen when everyone pivots to the NEXT big thing after AI.
I think the most hilarious outcome will be some breakthrough in AI where we can dump the concept of having to have tremendous GPU farms in order to do even the simplest tasks. There's bound to be a better way to do what current systems are doing that we haven't found yet. And if that happens to be stumbled over in the near future, NVIDIA will implode so fast it'll make a cartoonish squeak on the way out.
And before anybody jumps into lecture mode about how it can't possibly happen, yeah, nothing can happen u
Re:This bubble will make a supersonic boom (Score:5, Interesting)
There's bound to be a better way to do what current systems are doing that we haven't found yet.
Not necessarily. Some problems are just hard and this is probably one of them. Remember that the current hype is not a "breakthrough", but small, incremental steps for something like 70 years now. A ton of really smart people have invested a ton of time into this and the hallucinating semi-morons we currently have are a very advanced end-result. Many people think these are just some early results and there is a mass of low-hanging fruit still to be found. Not so.
Re:This bubble will make a supersonic boom (Score:4, Interesting)
The biggest issue I see is that nVIDIA is fabless. The speed of modern chips is limited far more by the affordability and physics of the silicon than by the engineers working on the design. There's very little left to gain in implementation algorithms. There are not many ways to stay ahead in GPU design .. especially for AI. Sure, there are process-specific optimizations you can do during place-and-route and LVS (and it'll soon be at the point where computers do that better than humans). If you compare chips from nVidia, AMD, Qualcomm, and Apple .. you'll notice that the die area (how much cache, how many processing units) and the process node are the most important predictors of performance.
Take AMD's Radeon RX 7900 XTX and compare it to the GeForce RTX 4090. The RTX 4090 performs up to 50% better, but that's because it's built on a 4-nanometer process and gets 609 mm^2 of die area, whereas the AMD card had to use a 5-nanometer process and work with only 530 mm^2 of die space. That's a significant difference: it's roughly like comparing 423 mm^2 of 4-nanometer silicon against 609 mm^2 on the same node, i.e. a much bigger transistor budget. And sure enough, the 4090 has 76 billion transistors versus 56 billion on the XTX, roughly 36% more. The XTX has fewer transistors and uses an inferior process node .. of course it will perform worse. It has nothing to do with nVIDIA having better engineers.
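As a rough sanity check of those numbers, here's a small illustrative Python sketch using only the approximate figures quoted above (not official specs):

# Back-of-the-envelope check using the approximate figures quoted above.
rtx_4090 = {"area_mm2": 609, "transistors_b": 76}   # quoted: 4 nm class node
rx_7900xtx = {"area_mm2": 530, "transistors_b": 56} # quoted: 5 nm class node

density_4090 = rtx_4090["transistors_b"] / rtx_4090["area_mm2"]   # billions per mm^2
density_xtx = rx_7900xtx["transistors_b"] / rx_7900xtx["area_mm2"]

print(f"4090 density: {density_4090 * 1000:.0f} M transistors/mm^2")  # ~125
print(f"XTX  density: {density_xtx * 1000:.0f} M transistors/mm^2")   # ~106
print(f"Transistor budget ratio: {76 / 56:.2f}x")                     # ~1.36x
# Node density plus die area accounts for essentially the whole gap.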
My point -- nVIDIA's lead comes almost entirely from them being able to afford a superior silicon process node from TSMC .. not from some genius-level GPU design chops.
What happens when process node improvements stall, or AMD says fuck everything .. we're going to 2 nanometer?
They would just need to get a TSMC executive drunk and have him agree to let AMD get that node at a discount.
There is a way (Score:1, Troll)
There are not many ways to stay ahead in GPU design .. especially for AI.
There is a way: you build a chip dedicated to AI [groq.com].
That's also what Apple has been doing, and I think Snapdragon has been adding custom model processors as well...
Since the valuation of a company is forward looking, it's absolutely nuts to think Nvidia deserves the valuation it has when its profits come from the needs of the moment; before too long, large AI data centers will be using chips other than general-purpose GPUs.
Re: (Score:2)
The biggest issue I see is that nVIDIA is fabless. The speed of CPUs is far more limited by the affordability and physics of the silicon than the engineers working on CPU design. There's very little gained in implementation algorithms.
Hmm, isn't this idea obviously false? After all, every company has access to the same fabs. In fact, it wasn't so long ago that Nvidia was a second-class citizen at TSMC, falling behind Apple and arguably behind AMD. In fact, even today, would Nvidia get priority if Apple decided it wanted more wafers?
If the fab is the important part of the design/manufacturing chain, then either companies are incompetent at finding fabs (i.e., the only two leading fabs in the entire world) or the companies are incompete
Re: (Score:2)
If every company has access to the same fabs, why did AMD use N5 while NVIDIA used 4N?
I'm sure NVIDIA paid extra for 4N (and AMD could have done the same) .. but that's a business decision rather than their engineers being smarter.
Re: (Score:2)
My point -- nVIDIA's lead comes almost entirely from them being able to afford a superior silicon process node from TSMC .. not from some genius-level GPU design chops.
Just my personal anecdote: for well over 20 years, every time I've had an AMD card I've had endless problems with drivers. nVidia's stuff always works perfectly, even with incredibly old games and stuff. Rag all you want about performance, but the only thing I care about is that shit works. Hence, I've been nVidia exclusive for years and probably will be for a long time.
Yeah, algorithms matter.
Re: (Score:2)
NV has taken a circuitous route to dominance of the AI marketplace. A company with purpose-built hardware and a decent software stack should be able to take them down a notch, if not eventually surpass them. Tenstorrent looks promising.
In any case, such competition may bring power requirements down a bit for LLM training.
Re: (Score:2)
Qubits! QPUs! Just kidding, nobody knows when quantum computers will be useful. That's what's so exciting about them! Not to investors, though.
Re: (Score:2)
The problem with bank failures wasn't just their 'big' size, but their central functional role throughout the economy. They're the middleman for everything.
NVidia is not.
Re: (Score:2)
Re: (Score:3)
Yet Bitcoin surges again. NVIDIA's forward PE rationalizes the valuation to some degree...
Re: (Score:2)
How can they meet expectations when they are set that high? And with many others racing to get in on that demand... long term, this isn't likely to be good for them as they fight to keep that valuation.
Re: (Score:2)
Indeed.
I don't think it's going to pop (Score:2)
The King didn't need to sell iPhones to make money. He just took it.
Re: (Score:2)
AI is a goldrush and they're all using proprietary CUDA.
Even though AMD has a faster chip, it can't use CUDA.
Does this monopoly of accelerated-through-parallelism maths justify Nvidia's share price?
3dfx (Score:4, Interesting)
Somewhere in a bar there's a guy who started 3DFX thinking, "We could have had it all and we just screwed up."
They went from "best in the industry" to out of business within 5 years.
Re: (Score:2)
Maybe they should have used better Voodoo against nVidia.
Re:3dfx (Score:5, Informative)
Not really. 3DFX lost to Nvidia in the Riva TNT era, and got the finishing blow with the first GeForce chip. None of these were GPUs in the sense of "GPUs as AI accelerators", i.e. shader compute units. The TNTs were all about rendering as many triangles as fast as possible, and the GeForce's main addition was to move transform and lighting from the CPU to the accelerator. None of these could do programmable shaders, which is what we define as a GPU nowadays. They were "graphics accelerators" before that.
And the reason why 3DFX lost was twofold. First, it got utterly mauled by the Riva TNT and Riva TNT2 for refusing to include a 32-bit color depth mode in Voodoo cards. Yes, 3DFX basically went "640kB of memory is good enough for everyone", but unlike Microsoft, they actually stuck to this in hardware (yes, they did internal rendering at 22 bit and then did some fuckery to get the 16-bit output to look better than native 16 bit. Point stands regardless). The second blow was nvidia successfully getting enough games to actually support moving transform and lighting off the CPU and onto the graphics accelerator. So basically, games running on a GeForce 256 looked noticeably better than the same games running on a Voodoo 3, if they could be run on it at all, because of 32-bit color vs 16-bit (no visible dithering vs visible dithering) and transform and lighting being much more complex. The only real advantage the Voodoo 3 had was that some games were still better optimized for 3DFX's own Glide API than for OpenGL or Direct3D. And that wasn't much of an advantage any more at the time, with everyone having moved to solid OpenGL or Direct3D implementations.
And the actual first mass-market programmable-shader GPU was the GeForce 3. 3DFX was property of Nvidia at that point, having gone bankrupt earlier. At the time it was purchased, it was still working on its first graphics accelerator with a T&L unit on board.
Re: (Score:2)
Not really. 3DFX lost to Nvidia in Riva TNT era,
More like the Riva TNT2 era - the Riva TNT was still a little slower than the Voodoo2 it was competing against, but it did cement Nvidia as a maker of good cards that could actually compete (everything else was basically a toy next to 3dfx's cards).
That was in 1998 though, 2 years after 3dfx released the original Voodoo, which basically wiped the floor with anything else at the time. Back then you could still choose whether you wanted 3d acceleration or the software rendering engine and it
Re: (Score:2)
Not really. The Riva TNT's selling point was that games looked way better on it due to its 24-bit color vs Voodoo's 16-bit. So it was a choice between a better color space and being able to run many games in Glide, which was a significant performance boost for Glide-optimized games. And that was quite a few games back in the day, since this was before the full breakthrough of Direct3D and OpenGL, which were significantly less performant APIs at the time. This was the API-war era, where Glide had by far the best performance but only ran
Re: (Score:3)
Just look at AMD. They're valued at less than 20% of Nvidia. Their products are just a little bit worse than Nvidia's and they're just a little bit behind the curve when it comes to catering to the AI market.
The technology business is brutal. Being just a little bit better can lead to winning by a huge margin.
Re: (Score:2)
I just did a quick google search out of curiosity, and NVDA's data center revenue is 10x AMD's.
Maybe that has something to do with it, and not your "a little bit worse products" claim.
Anyone who uses generative AI tools on their local machine knows NVDA devices are much faster than AMD's at a similar price point.
Re: (Score:2)
3DFX cards were really just used for gaming, though. You can only sell so many PC add-on cards and arcade machines.
NVidia didn't really become a money-printing machine until they invented CUDA, which let their GPUs do "real" work like crypto mining and eventually AI acceleration.
Re: (Score:2)
CUDA was out a couple of years before the first Bitcoin client, which itself was years before Bitcoin was actually worth anything. CUDA was initially aimed at number crunching in science: weather forecasting, DNA folding, and the like.
3DFX was badly mismanaged.
https://en.wikipedia.org/wiki/... [wikipedia.org]
They also missed the transition of Hardware T&L from the business to consumer market in DirectX 7. The first Geforce just destroyed it and Nvidia never looked back. It's likely Nvidia consulted with Microsoft to exclude
Are AI chips really that complex... (Score:2)
- No unpredictable conditional memory access patterns. It's mostly matrix operations, which are very linear and predictable
- No complex shaders. It's primarily multiplication and addition
- Extremely small and simple fixed functionality - just tensor operations. GPUs have to deal with many things like anisotropic filtering, texturing, z-buffering, rasterization, tessellation, anti-aliasing...
Besides the usual players, there are m
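To make the "mostly matrix operations" point in the list above concrete, here is a minimal illustrative sketch (NumPy; the function name and sizes are made up for the example). Every step is a dense matrix product or an elementwise op, with no data-dependent branching and no scattered memory access, which is exactly the workload fixed-function tensor hardware is built for:

import numpy as np

def tiny_attention(x, Wq, Wk, Wv):
    # Single-head attention over a (seq, dim) activation matrix.
    # Nothing but dense, regular matrix products plus a softmax.
    q, k, v = x @ Wq, x @ Wk, x @ Wv               # three matmuls
    scores = (q @ k.T) / np.sqrt(x.shape[1])       # another matmul
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # softmax (elementwise)
    return w @ v                                   # final matmul

rng = np.random.default_rng(0)
d = 64
x = rng.standard_normal((16, d))
out = tiny_attention(x, *(rng.standard_normal((d, d)) for _ in range(3)))
print(out.shape)  # (16, 64)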
Re: (Score:2)
Depends on patents. Patents are the thing that locks certain companies in. For example, x86 and then x64. Sure, you can build an ancient x86-compatible chip without Intel's license nowadays. Not so much for x64 and AMD, though. The cutting edge is locked down with patents. You can make the old stuff, but demand for it is small to non-existent while a competitive cutting edge is available.
Totally agree, look at Groq (Score:4, Interesting)
there are multiple companies developing AI chips, much more numerous than GPU companies.
I totally agree. So much money is being dumped into Nvidia, but what are the chances the dominance they have will last even five years?
One example that is really interesting is Groq, which is producing hardware today [groq.com] that is MUCH faster at running existing LLMs.
Someone described it as the same shift that occurred in crypto when people went from general-purpose GPUs to custom ASICs.
NVidia does have an advantage with anyone who absolutely needs hardware for AI right now, but that wave is going to taper pretty soon as data centers with alternative hardware arise and offer shared AI resources at a fraction of the current cost.
Re: (Score:2)
>It's primarily multiplication and addition
Oh my god, this is posted as a serious comment. As if there is just no benefit in using a fast linear algebra library instead of a slow one (or a fast architecture instead of a slow one).
Hey, technically I can solve these back-propagation equations in a notebook, why do I even need to write code at all?
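The point is easy to demonstrate: the same math in a naive pure-Python loop versus a BLAS-backed library call gives identical results at wildly different speeds. A tiny illustrative benchmark (sizes and names chosen arbitrarily):

import time
import numpy as np

n = 200
a, b = np.random.rand(n, n), np.random.rand(n, n)

def naive_matmul(a, b):
    # Textbook triple loop: same arithmetic as a @ b, just slow.
    c = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i, k] * b[k, j]
            c[i, j] = s
    return c

t0 = time.perf_counter()
c_slow = naive_matmul(a, b)
t1 = time.perf_counter()
c_fast = a @ b
t2 = time.perf_counter()

print(f"naive loops: {t1 - t0:.2f}s, numpy/BLAS: {t2 - t1:.5f}s")
print("same answer:", np.allclose(c_slow, c_fast))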
No fucking way (Score:5, Funny)
$2 trillion for glorified If-Then-Else and matrix multiplication?
Re: (Score:1)
Now I just need to make some widget and market it as "Widget AI" and watch all the cash rolling in.
Hmm, maybe "Pet Rock A.I."
Re: (Score:2)
Hey, wait until you hear about the hundred trillion dollar valuation of glorified not-and.
Fads are awesome!!! (Score:2)
Re: (Score:2)
I know you can't list everything but the crypto craze was a pretty significant one to leave out.
Re: (Score:1)
Re: (Score:2)
I also found it amusing that they included Linux and Cyber Security as "fads". I don't think that those two are going away anytime soon.
Re: (Score:1)
Cyber Security is definitely another fad. Actually I think A.I. will likely take its place for system scanning and analysis, since it's cheaper and more effective (it's likely not going to miss anything a person would). However, you'll always
Bubble much? (Score:2)
Nvidia has nothing that would justify that valuation. The current AI hype will collapse in a little while, when it becomes blatantly obvious that it cannot actually be used for much because of hallucinations and being pretty dumb. And their other products do not justify that valuation in any way either.
Re: (Score:3)
>cannot actually be used for much because of hallucinations
https://googlethatforyou.com?q... [googlethatforyou.com]
Consider a model that is correct 90% of the time. Oh, but I guess we can't use it because of 10% 'hallucinations.'
The 'models don't work unless they are perfect' crowd needs to learn something about statistics.
Re: (Score:2)
You need to learn to read...
Re: Bubble much? (Score:1)
Re: (Score:2)
90% doesn't cut it for many applications.
If I know only one thing about statistics, it's that the only path to salvation is Six Sigma.
Re: (Score:2)
Exactly. 90% is nice for a demo, but not for getting actual stuff done. The cleanup of those 10% will be much more expensive than getting somebody to do it right the first time. There are exceptions where things are badly broken anyways and 90% will not make things worse, but there are not many of them.
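One illustrative way to see why 90% per step is often not enough: if a job chains several model calls and each is right 90% of the time (assuming independent errors, which is already optimistic), the end-to-end success rate drops off quickly:

# Illustrative only: per-step accuracy compounds across chained steps.
per_step = 0.90
for steps in (1, 3, 5, 10, 20):
    print(f"{steps:2d} steps -> {per_step ** steps:.1%} end-to-end success")
# 1 step -> 90.0%, 5 steps -> 59.0%, 20 steps -> 12.2%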
Re: (Score:2)
Imagine if the greeter at Wal-Mart hallucinated on 10% of customers. She'd be fired before the day's up.
Re: (Score:3)
This kind of bubble happens because traders aren't in it for the long haul, they're just trading on the ups and downs of the market.
I'd like to see changes that require stock owners to keep their stocks for an extended period of time, say, one year, before they are allowed to sell it again. That might put this kind of wild market swing to bed.
Re: (Score:2)
If you own stocks through a 401k, you get penalized for pulling your funds out early. So if you're a worker, you have to hold the bag, while the big players can move their money around at will.
Funny how that "free market" is only free for some. Why don't we do retirement like grandpa did?
Re: (Score:2)
Every kind of market only works for some. Thankfully, the free market works for a larger portion of people than other types, such as socialism.
As for Grandpa's pension, those went away because they were always financially unsustainable. They offered so much to retirees that companies literally went bankrupt because they couldn't continue to pay enough into the pension funds to keep them afloat. Pensions were really a kind of pyramid scheme, where the early retirees lived like kings, but the later ones got s
Re: (Score:2)
I.e. they destabilize a whole global economy. What a stupid system to have.
For context, they're well below MSFT & AAPL (Score:2)
To me, generative AI is just crypto all over again....but there seem to be a lot more suckers this round.
However, I take solace in knowing their chips are also used for useful functions, like machine learning, machine vision, pattern recognition, etc. So while everyone is pouring money into
Re: (Score:3)
Machine learning is AI. Vision and pattern recognition are classic AI problems.
Re: (Score:2)
Machine Learning is a subset of AI.
Re: (Score:2)
Yes, that's what I said. Your definition of machine learning isn't really correct though.
Early AI was divided into two main approaches. The symbols and rules people built databases of facts and the relations between them, and equipped their systems with the rules of logic so they could create new facts. Programming languages like Lisp and especially Prolog were developed in large part to do this. But these systems don't really learn. You program them.
Machine learning was t
Machine Learning has been around for decades (Score:2)
Machine learning is AI. Vision and pattern recognition are classic AI problems.
you and I both know this is all about ChatGPT and the generative AI hype. I have no issues with ML/MV... it's the people overselling Generative AI that are getting on my nerves and starting to worry me... and this hype bubble is driven by ChatGPT and its counterparts in the generative AI space. You and I both know this.
Re: (Score:2)
Nvidia is happy to sell chips to people who want to train their own language model, but that's certainly not all their hardware does. Even then, reliable natural language processing is a long-time dream. Remember Scotty talking to the mouse in Star Trek? The world has an enormous amount of information that is stored in poorly structured language: electronic text, printed text, audio, etc. There's a massive industry just in document processing.
This is not just about chatGPT, and equating AI with a particular
The massive stock spike most definitely is (Score:2)
Re: (Score:2)
Estimates are that training a GPT-3-like model costs about a million or two dollars in compute. OpenAI says $100 million or whatever, but they're giving the biggest number they can: including all the research, people talking to themselves to generate training data, Sam Altman's stock options, etc.
Most investors aren't dumb. They're not valuing Nvidia at $2 trillion because they think everybody's going to be training their own GPTs a million bucks at a time. Modern AI models have a lot of demonstrated utili
Driven by seasonal AI spending binge (Score:2)