Is OpenAI Becoming 'Too Big to Fail'? (msn.com) 96
OpenAI "hasn't yet turned a profit," notes Wall Street Journal business columnist Tim Higgins. "Its annual revenue is 2% of Amazon.com's sales.
"Its future is uncertain beyond the hope of ushering in a godlike artificial intelligence that might help cure cancer and transform work and life as we know it. Still, it is brimming with hope and excitement.
"But what if OpenAI fails?" There's real concern that through many complicated and murky tech deals aimed at bolstering OpenAI's finances, the startup has become too big to fail. Or, put another way, if the hype and hope around Chief Executive Sam Altman's vision of the AI future fails to materialize, it could create systemic risk to the part of the U.S. economy likely keeping us out of recession.
That's rarefied air, especially for a startup. Few worried about what would happen if Pets.com failed in the dot-com boom. We saw in 2008-09 with the bank rescues and the Chrysler and General Motors bailouts what happens in the U.S. when certain companies become too big to fail...
[A]fter a lengthy effort to reorganize itself, OpenAI announced moves that will allow it to have a simpler corporate structure. This will help it to raise money from private investors and, presumably, become a publicly traded company one day. Already, some are talking about how OpenAI might be the first trillion-dollar initial public offering... Nobody is saying OpenAI is dabbling in anything like liar loans or subprime mortgages. But the startup is engaging in complex deals with the key tech-industry pillars, the sorts of companies making the guts of the AI computing revolution, such as chips and Ethernet cables. Those companies, including Nvidia and Oracle, are partnering with OpenAI, which in turn is committing to make big purchases in coming years as part of its growth ambitions.
Supporters would argue it is just savvy dealmaking. A company like Nvidia, for example, is putting money into a market-making startup while OpenAI is using the lofty value of its private equity to acquire physical assets... They're rooting for OpenAI as a once-in-a-generation chance to unseat the winners of the last tech cycles. After all, for some, OpenAI is the next Apple, Facebook, Google and Tesla wrapped up in one. It is akin to a company with limitless potential to disrupt the smartphone market, create its own social-media network, replace the search engine, usher in a robot future and reshape nearly every business and industry... To others, however, OpenAI is something akin to tulip mania, the harbinger of the Great Depression, or the next dot-com bubble. Or worse, they see a jobs killer and a mad scientist intent on making Frankenstein.
But that's counting on OpenAI's success.
"Its future is uncertain beyond the hope of ushering in a godlike artificial intelligence that might help cure cancer and transform work and life as we know it. Still, it is brimming with hope and excitement.
"But what if OpenAI fails?" There's real concern that through many complicated and murky tech deals aimed at bolstering OpenAI's finances, the startup has become too big to fail. Or, put another way, if the hype and hope around Chief Executive Sam Altman's vision of the AI future fails to materialize, it could create systemic risk to the part of the U.S. economy likely keeping us out of recession.
That's rarefied air, especially for a startup. Few worried about what would happen if Pets.com failed in the dot-com boom. We saw in 2008-09 with the bank rescues and the Chrysler and General Motors bailouts what happens in the U.S. when certain companies become too big to fail...
[A]fter a lengthy effort to reorganize itself, OpenAI announced moves that will allow it to have a simpler corporate structure. This will help it to raise money from private investors and, presumably, become a publicly traded company one day. Already, some are talking about how OpenAI might be the first trillion-dollar initial public offering... Nobody is saying OpenAI is dabbling in anything like liar loans or subprime mortgages. But the startup is engaging in complex deals with the key tech-industry pillars, the sorts of companies making the guts of the AI computing revolution, such as chips and Ethernet cables. Those companies, including Nvidia and Oracle, are partnering with OpenAI, which in turn is committing to make big purchases in coming years as part of its growth ambitions.
Supporters would argue it is just savvy dealmaking. A company like Nvidia, for example, is putting money into a market-making startup while OpenAI is using the lofty value of its private equity to acquire physical assets... They're rooting for OpenAI as a once-in-a-generational chance to unseat the winners of the last tech cycles. After all, for some, OpenAI is the next Apple, Facebook, Google and Tesla wrapped up in one. It is akin to a company with limitless potential to disrupt the smartphone market, create its own social-media network, replace the search engine, usher in a robot future and reshape nearly every business and industry.... To others, however, OpenAI is something akin to tulip mania, the harbinger of the Great Depression, or the next dot-com bubble. Or worse, they see, a jobs killer and mad scientist intent on making Frankenstein.
But that's counting on OpenAI's success.
No (Score:2)
OpenAI on the other hand can be instantly and seamlessly replaced by just going to any of the other chatbots that do the same damned thing. Do not pass Go, do not collect $200 bailout money.
Measuring failure? (Score:2)
Pretty weak FP. I think it is some kind of pre-loaded rant against fiat currency. Or maybe it was an intended recursive joke about futures on futures? Insurance ^n as n approaches infinity? The Subject was certainly unhelpful. Maybe you care to clarify?
But I'm going to jump in a different direction: how do we tell if AI is failing? I think we are using the wrong metrics, so I would like to suggest a few candidates:
Best apologies: So far I think that one goes to Microsoft's Copilot for some stuff it said about
Translating (Score:3)
OpenAI spent a fortune on flawed technology, then spent even more hoping that the fix for the flaw would be to spend even more (probably the idea of someone in marketing, or of someone in the computer field who recently dropped out of a week-long course). Now that the bill is too big, it hopes to be “bailed out” by the government like the banks were, arguing that it is as important as the banks and therefore cannot be allowed to fail.
Re: (Score:2)
Let the company go into bankruptcy and investors eat the losses. If reorganization is impossible, sell off the pieces to the highest bidder.
Re: (Score:2)
Re: (Score:2)
but heaven forbid the FDIC bail the bank itself out with a loan that got paid back (which they don't pay for).
This is why the bankers win. All they have to do is create a paper trail complicated enough, and people like you think they paid it back. Where did they get the money to pay it back?
AI is a fraud until they get the I(intelligence) (Score:4, Insightful)
Re: (Score:2)
And nothing is too big to fail - not even Microsoft, even though they try their best to hold our computers ransom with Bitlocker and Microsoft accounts.
Many corporations should really start to think about this when they activate Bitlocker and start handling logins through Entra. One day those things might be inaccessible and your data nowhere to be found. Do you have a local backup and an emergency plan?
Re: (Score:1)
Re: (Score:2)
And when I need it, that key has for some reason been re-generated.
This happens quite often at my workplace, not sure why, but maybe it's because Bitlocker gets disabled before every BIOS update and re-enabled afterward. So don't depend on it.
Re: (Score:2)
This happens quite often at my workplace...
There are multiple ways for companies to centrally manage bitlocker keys, either on-prem or via 365. If you run into this frequently at your workplace then it's a skill issue with your IT dept.
Re: AI is a fraud until they get the I(intelligenc (Score:2)
His workplace probably does key escrow, and he's complaining because it's deliberately rotated, and because, if he doesn't use the machine for more than so many days (or whatever their policy is), he has to manually enter a new key. He likely also doesn't know that this feature has to be deliberately configured in a pretty drawn-out process just to work at all.
Re: (Score:2)
And nothing is too big to fail - not even Microsoft, even though they try their best to hold our computers ransom with Bitlocker and Microsoft accounts.
Despite the disdain for Windows by many slashdotters, if Microsoft were to vanish overnight, that would be a huge hit on the US, impacting both households and corporations that currently use Windows and would struggle to transition overnight to Linux. In that sense, Microsoft is too big to fail. OpenAI, on the other hand, garners nowhere near that level of dependence. If ChatGPT were to disappear overnight, it would be trivial to switch immediately to one of many alternatives. No, if all LLM companies w
Re: (Score:2)
Counterpoint: do you need automation or intelligence for the daily tasks? "Intelligence" is a nice lab experiment and will have a lot of philosophical implications, but for all current use-cases you need good working LLMs and image generators. You neither need intelligence for a reliable knowledge model (in particular, knowing what it doesn't know), nor for coding help, image generation, or other automation. You just need models that work well. I bet there are also a few more interesting architectures to be
Re: (Score:2)
We don't have reliable knowledge models built on LLMs. That's the core problem, which they're still unable to solve. There exists no LLM-based knowledge model which works well, so that's still science fiction.
LLMs have use cases, sure. But they're a lot less general than they're being sold as.
Re: AI is a fraud until they get the I(intelligenc (Score:2)
Were AlphaFold's advancements possible without transformers and the attention mechanism?
Re: (Score:2)
Re: (Score:2)
It's an apt description of what it's doing: allowing each given layer to pay attention to a small subset of its input at any given time instead of drowning in the noise of trying to process the whole thing at once.
Re: (Score:2)
It means new tokens attend to prior tokens. I think human attention is less specifically characterized than the transformer's mechanism. The latter is just a matrix of activations; the former isn't well understood so far. It might have a similar mechanism, but things are much more complicated in biological brains. And I think you're unfair to say the names are chosen to get attention (nice pun, though). They are chosen to communicate an abstract concept by analogy.
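For anyone following the thread who hasn't seen the mechanics: here is a minimal single-head sketch of that "matrix of activations" in Python/NumPy. The sizes and random weights are purely illustrative, not anyone's production code; the point is just that the "attention" is the softmax-normalized score matrix deciding how much each new token draws from each earlier one.

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """Minimal single-head self-attention: each token attends only to earlier tokens."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv            # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # similarity of every (new, prior) token pair
    future = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[future] = -np.inf                    # causal mask: no peeking at future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax -> the "attention" matrix
    return weights @ V                          # each output is a weighted mix of prior values

# Toy usage: 4 tokens, model width 8, random weights purely for illustration.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(causal_self_attention(x, Wq, Wk, Wv).shape)   # (4, 8)
```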
Re: (Score:2)
Define what you mean by "well". Vs. a database? No (but that's what RAG is for). Vs. humans? Absolutely yes. They achieve a much denser data representation than we do (albeit with a slower learning rate).
Re: (Score:2)
But there is nothing that needs Intelligence for reliable knowledge. That said, I wouldn't build that into the model. It is much more efficient to let the model access a knowledge base. The keyword is RAG (retrieval augmented generation) if you want to read on.
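To make the RAG idea concrete, here is a toy sketch in Python (the embed() function is a hypothetical stand-in for a real embedding model, and the three-line "knowledge base" is obviously illustrative): the model is handed retrieved text in its prompt rather than being trusted to recall facts from its weights.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

knowledge_base = [
    "OpenAI announced a simpler corporate structure after a lengthy reorganization.",
    "RAG retrieves relevant documents and feeds them to the model with the question.",
    "BitLocker recovery keys can be escrowed centrally, on-prem or via Entra.",
]
kb_vectors = np.stack([embed(doc) for doc in knowledge_base])

def retrieve(question: str, k: int = 2) -> list:
    """Return the k documents most similar to the question (cosine similarity)."""
    scores = kb_vectors @ embed(question)
    return [knowledge_base[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str) -> str:
    """The model answers from retrieved text instead of from whatever its weights recall."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What does RAG do?"))
```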
Re: (Score:2)
There it is!
Re: (Score:2)
Even a big step forward for automation is going to be extremely valuable. Not trillions perhaps, but a big shift.
So far though AI isn't really delivering that.
ah, get off them already (Score:1)
They’re trying to do something genuinely useful for everyone. It’s not really their fault that the markets are so eager for growth that expectations, and the money pouring in, are way over the top. At some point, there’ll be a correction and a lot of investors will probably be disappointed, but honestly, that’s just how capitalism works.
Re: (Score:2)
They're trying to do something genuinely useful for everyone.
Maybe they were; at this point they seem to be reduced to trying to invent a more compelling form of interactive pornography that they can sell subscriptions to. Color me underwhelmed.
Re: (Score:2)
The question for investors is really the correction timing, not whether it will happen. IMHO, as weird as it sounds, it likely has to do with highly visible inflation (groceries, fuel, etc). Inflation leads to voter rage, which leads to politicians pursuing anti-inflation strategies, which dry up capital in the market, which cause capital-hungry growth fields (like AI) to starve. Once investors catch wind that their previous growth field is no longer going to be in growth mode, they bail, causing a collaps
If all of AI went away today (Score:5, Insightful)
A lot of investors would lose their money, and a lot of students would mourn the loss of their homework "help", and a lot of people would have to work a little harder to find answers to their questions (like having to actually look at the Wikipedia page). But too big to fail? No, not hardly. OpenAI is *not* that deeply embedded in our lives. We still remember how to do things without AI, I mean, it's been maybe 3-4 years now?
If OpenAI were to fail, would a big bubble pop? Yes, I think so. Would we all be doomed? No, hardly.
Re:If all of AI went away today (Score:5, Funny)
We still remember how to do things without AI, I mean, it's been maybe 3-4 years now?
I asked chatgpt how long it has been part of culture, and it confidently said it's been 26 wonderful years.
Re: (Score:3)
We still remember how to do things without AI, I mean, it's been maybe 3-4 years now?
I asked chatgpt how long it has been part of culture, and it confidently said it's been 26 wonderful years.
At this rate that will become indistinguishable from truth in another 3-4.
Re: (Score:2)
I asked Gemini and it went the "Ancient Astronaut Theorists" route instead:
Re: (Score:2)
The concept of artificial intelligence (AI) has been part of human culture for thousands of years, appearing in ancient myths and legends.
Perhaps it was referring to golems [wikipedia.org]? That idea dates back to 400-500 BC, although really they behave more like traditional computer programs than anything we'd currently consider intelligent.
Re: (Score:2)
Way beyond golems - tons of old religions have notions of "craftsmen deities" making mechanical beings (like Hephaestus making Talos [talosgems.gr], the Keledones [theoi.com], the Kourai Khryseai [theoi.com], etc) or self-controlled artifacts (such as Vishvakarma making an automated flying chariot [wikipedia.org], Hephaestus making self-moving tripods to serve the gods at banquets, etc), or even things that (mythological) humans created, such as the robots that guarded the relics of the Buddha [theconversation.com], or a whole city of wooden robots made by a carpenter [wisdomlib.org] mentioned in th
Re: (Score:2)
The question isn't whether we can do without OpenAI's products: as you point out, it would be an inconvenience but not critical. The real risk is that with all those huge financial deals, when OpenAI fails, they'll take other companies with them, companies that produce goods and services that we cannot easily do without.
Re: (Score:2)
Yes, I agree. That was the part about the investors. Investors have plowed money into OpenAI that they *knew* was at risk. When a big bubble pops, it does take some innocent bystanders down with it. But that's not the same as "too big to fail" which would take down the whole economy.
Re: (Score:2)
The real risk is that with all those huge financial deals, when OpenAI fails, they'll take other companies with them, companies that produce goods and services that we cannot easily do without.
Which other companies are you thinking of that might fail if OpenAI fails? NVidia?
Re: (Score:2)
Turns out there's a bunch of companies that feed on the AI balloon.
Even airline engine manufacturers.
Imagine if those failed and we'd have to take the Shinkansen to our destination instead of the monorail to the airport...
Re: (Score:2)
Re: (Score:2)
For all that do business with the AI a lot will depend on how they've structured their operations in the face of absurdly high demand from the "AI sector". Most companies deal with large short-term spikes they wrongly perceive as long-term or permanent by heavy upfront borrowing. When the short-term bubble crashes and brings the inevitable credit crunch and re-evaluation, these usually go bust. There are so many well-known examples that I won't even bother to dig em up.
Who will survive and in what manner wi
Re: (Score:2)
I hear those things are awfully loud.
Re: (Score:2)
NVIDIA is bathing in money right now, so unless they over-extend, they should be fine.
The most useful company that I've seen invest in OpenAI would be AMD, but I'm not a financial expert so I don't know how bad it would be for them if their investment ends up losing its value.
I think Microsoft is large enough to be able to take the blow.
Oracle has announced a ridiculously large data center deal, which is unlikely to ever be realized, but I suspect they know that and wrote the contract in such a way they'll
Re: (Score:2)
And, unlike last time, there really *is* no money to save this industry. Just sayin', the Social Security Trust Fund will be in the red no later than 2030. A lot more people will care about that than about AI when it arrives at our doorsteps ... not that we haven't known about it for decades.
So yes, the bubble will pop, the useful part of "AI" will survive, the U.S. will be in recession, and we all lose a shiny and "free" toy.
Moving on ...
Re: (Score:2)
I think it's important to remember the "trust fund" is a bookkeeping entry. It holds special treasury notes that were loans of excess Social Security taxes to the general fund. Now that Social Security benefits cost more than the taxes collected, those loans are being repaid out of the general fund to pay for Social Security benefits. That money mostly comes from new loans in the form of regular treasury bonds. In short, the money the general fund used to owe to the "trust fund" is now owed to other investor
Re: (Score:2)
AI isn't going to disappear just because the stock prices of these companies crash, or even if they all close down together. It's too late. The models already exist, inference is dirt cheap to run (and can even be run on your own computer), and vast numbers of people demonstrably find it useful (regardless of whether you, reader, do).
It's funny, when you see "The AI bubble will collapse", you get two entirely different groups of people agreeing - one thinking, "AI is going to go away!", and the other thinking "Infe
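As a concrete illustration of the "run it on your own computer" point above, something like the following works today (a sketch using the Hugging Face transformers pipeline; the model name is just one example of a small open instruct model, not an endorsement):

```python
# Minimal local-inference sketch: downloads a small open model once, then
# generates text entirely on your own machine (CPU works, just slowly).
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
result = generator("Explain 'too big to fail' in one sentence.", max_new_tokens=60)
print(result[0]["generated_text"])
```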
No (Score:3)
OpenAI's technology, while used at large scale, is still mostly experimental. They also do not seem to have anything that is years ahead of their competition; wherever they are today is at best a few months ahead.
They do not really have a moat, and it is trivial for customers to switch.
"Too big to fail" doesn't mean "bubble too big" (Score:4, Interesting)
"Too big to fail" refers to big banks or businesses that are so big and so embedded in our lives, that if they were gone, our economy would literally unravel. Chase Bank comes to mind. OpenAI isn't even in the same league, in terms of impact, should it fail. Yeah, it would hurt, but life would go on.
Re:"Too big to fail" doesn't mean "bubble too big" (Score:4, Interesting)
"Too big to fail" refers to big banks or businesses that are so big and so embedded in our lives, that if they were gone, our economy would literally unravel. Chase Bank comes to mind. OpenAI isn't even in the same league, in terms of impact, should it fail. Yeah, it would hurt, but life would go on.
Also GM and Chrysler, as mentioned in the article, employ a lot of people. OpenAI employs relatively few, and if they are really good at what they do should employ less and less as time goes on.
Re: "Too big to fail" doesn't mean "bubble too big (Score:4, Insightful)
Are you ignoring contagion? If pets.com went down, why did that spread to GM and Chrysler? Were they spending too much on websites? If OpenAI goes down, could it trigger a panic where traders devalue perfectly safe assets in an irrational selling frenzy? So why wouldn't the Fed act as a value investor to set a floor on asset prices to avoid a depression or recession?
Re: (Score:2)
If pets.com went down, why did that spread to GM and Chrysler?
And why did it take 8 years?
Re:"Too big to fail" doesn't mean "bubble too big" (Score:4, Informative)
Currently, I think that is true. However, there are analyses suggesting that the stock market and the economy are only being propped up by AI. Were OpenAI to fail, it could cause a cascade. That could get serious quickly. I don't think we're at that point yet, but the more the C-suite banks on AI, the more of a problem the U.S. will have when the bubble pops.
Re: (Score:2)
I think there needs to be another thing; a market retraction by itself, I would say, is not deserving of the status. That's also what I would call it: not a collapse forever, a retraction. Yes, it will suck for a while, but beyond the inflated market values AI is not that valuable, and it's incredibly fungible; there's no lack of options that do similar things at similar quality.
Also a point against is the fact that if OpenAI goes out of business, other than the financials, the whole thing will k
Re: "Too big to fail" doesn't mean "bubble too big (Score:2)
Re: (Score:2)
Currently, I think that is true. However, there are analyses suggesting that the stock market and the economy are only being propped up by AI. Were OpenAI to fail, it could cause a cascade. That could get serious quickly. I don't think we're at that point yet, but the more the C-suite banks on AI, the more of a problem the U.S. will have when the bubble pops.
Are you saying the entire American economy is lifting itself up by its own bootstraps?
Re: (Score:2)
Yeah, how much of people's retirement savings is invested in AI companies through various funds? I would say a fair amount. Not to mention most of the growth in the market for years has been in these AI companies.
That doesn't make them too big to fail I think, but it does mean that when they fail it will be a pretty big disaster.
Years worth of investment that could have been poured into useful (and valuable) things will have just been spent on decaying, unused data centers.
Years worth of intellectual inves
Too big to fail (Score:1)
They can go by by and it won't matter in the slightest.
Now the fact that we're all basically held hostage by the banks is something we ought to deal with but we a scared of socialisms so that ain't happening
rSilverGun English Language Failure (Score:3)
You are definitely not a native English speaker but posing as one on this forum website. You have English language failures all over your posts. So all of the conspiracy theories about your account and your posting on every single story appear more true the more you post and expose yourself for astroturfing a Socialist or Communist agenda.
Native English speakers do not write sentences like that or make those mistakes in writing. It is also not an error of voice to text transcription since those get the
Re: (Score:1)
Re: (Score:2)
Nice to know who the pathetic loser is who has nothing better to do but follow one guy on Slashdot and slam every post, thinking he is providing some kind of useful public service to someone, but really just making his own life look sad to everyone.
Not a fan of rsilvergun, but JakFrost needs to find another hobby. He's not good at this one.
Re: (Score:2)
He's been doing it since before llms were widely available and he just seems a little more human now because he has access to them running on his GPU.
It's still laughably basic, but I suppose it's better than when it was just a simple chat bot.
Re: (Score:2)
The points I made are valid. Too big to fail means something that can take down the whole economy if you allow it to fail, and OpenAI is not that.
And we need to stop allowing ourselves to be held hostage by too big to fail companies. Because that's really what too big to fail means. It means powerful assholes set themselves up so that if they go down they take all of us with them so that they
Re: (Score:2)
They can go by by and it won't matter in the slightest.
Until they get utilities to sink billions into grid upgrades. And then the anticipated load and revenue fail to materialize. Guess who pays? The existing power customers. And the worst part of it is that the grid upgrades (new generation) will most likely be gas and coal. Not enough time to ramp up wind, solar or nuclear* capacity. And then we'll be stuck with that for the next few decades.
The worst possible outcome would be that AI works. But China has DeepSeek waiting in the wings. Cheap. Low power consu
I look forward to ... (Score:2)
Re: (Score:2)
I think the idea is that this is a huge bubble and a large number (or all) of the AI companies fail - it's hard to imagine they won't have vast quantities of GPUs they suddenly can't use. They can try to sell them to the surviving AI behemoths, yes, but it seems like a stretch to imagine those will want to buy all the old GPUs of their former competitors all at once. Yeah, a lot of these things aren't GPUs, which probably means the bankrupt companies will be more incentivised to sell the fungible GPUs rather than
Whenever this is asked (Score:4, Insightful)
when certain companies become too big to fail
If you have to ask the question, then "too big" has failed.
You've got a completely broken system whereby big for-profit corporations root themselves so deep that they then demand financial help from the public/government whenever they can't make enough money, even though they're supposedly valued in the trillions.
free market american hypocrisy (Score:4, Insightful)
nothing that has ever been called "too big to fail" has ever justified the name; by preventing their deserved and self-inflicted collapse, we have only delayed the inevitable and propped up the white-collar criminals who brought them to the brink.
The only actual solution is to create a government strong and competent enough to create a reliable social floor SO THAT THE FREE MARKET CAN OCCUR
but go ahead and give more handouts to the rich. I mean you handed the biggest grifter on the planet the keys to the nuclear codes and he's cleaning the place out entirely and knocking it down to boot, and there you are, still not stopping him, too afraid to give a shit about anything
If these big businesses truly depend on OpenAI... (Score:3)
OpenAI can increase their rates if they need more money, and their customer businesses can decide whether or not OpenAI is worth the increased fees.
Unlike the 2008 situation, people aren't going to lose their homes if OpenAI fails. And, frankly, they're not gonna lose their jobs - businesses have been using "AI" as an excuse for cutting jobs. In fact, one could reasonably argue that OpenAI going under might result in an increase in employment!
Altman: whether we burn $50b a year, I don't care (Score:2)
"2% of Amazon.com’s sales" (Score:2)
Amazon didn't post a profit for its first 6 years, and OpenAI is still working on its for-profit status. They reportedly have more than 700 million weekly users and climbing, with a sharp trend upwards this year, so I don't think there's anything for them to worry about. Like so many things in tech, this is foremost a battle for market share. Being profitable ultimately depends on winning that fight.
Personally I don't care much if OpenAI succeeds. I use Anthropic products, they are much better for my purpos
Re: (Score:2)
Re: (Score:2)
I dislike OpenAI, but costs will not kill them. Text inference is a pretty safe bet.
Development over the last few years has shown an exponential decrease in the cost of inference at the same level of quality, partly due to smaller models of the same quality, partly due to optimizations in the pipelines. GPUs also get more efficient over time.
If you now sell a person a $20 subscription for text generation and keep the price for the next years, your costs will go down while the subscription keeps paying you
Re: (Score:2)
I think you are correct about their operating expenses dropping off over time. I also think that most people post queries to the AI that do not require a very smart LLM to produce an acceptable answer. Probably they already have query classifiers that will route your easy questions to a cheaper LLM.
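A toy version of that routing idea might look like the following (pure illustration: real providers presumably use trained classifiers rather than a keyword heuristic, and the model names here are placeholders, not anyone's actual setup):

```python
def route_query(question: str) -> str:
    """Toy router: send obviously simple lookups to a cheap model, everything else to a big one."""
    hard_markers = ("prove", "derive", "debug", "step by step", "explain why")
    simple = len(question.split()) < 15 and not any(m in question.lower() for m in hard_markers)
    return "cheap-small-model" if simple else "expensive-frontier-model"

print(route_query("What is the capital of France?"))                           # cheap-small-model
print(route_query("Derive the softmax cross-entropy gradient step by step."))  # expensive-frontier-model
```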
But I read an interesting financial article recently about another expense to OpenAI and the others. Their hardware is going to depreciate rapidly. Today's billion-dollar data center will be full of obsolete gear
Re: (Score:2)
GPT-5 already does such routing, and people are already complaining, though funnily it's sometimes more that people don't want to talk to a STEM model about emotional issues. One might also talk about enshittification compared to prior model scales, but on the other hand one really does not need the huge STEM model to answer "What is the capital of France".
The hardware issue is real, on the other hand there will be some second hand market for the hardware (or full servers, possibly not sold but re
Open AI will likely succeed technically (Score:2)
They are doing excellent work and I suspect that the results will get better
Whether or not they ever achieve profitability is a different question
Investors are irrationally betting on achieving vast riches
It's entirely possible that AI will become very useful without generating the returns that investors expect
So, it's a financial bubble and solid, real technical progress
potential to disrupt the smartphone market (Score:2)
Re: (Score:2)
You do realize that:
1) There are these things called iPhones that have little if anything to do with Google.
2) There are third-party phones besides iPhones and Android.
GrapheneOS is liked for security and privacy, KaiOS exists for the Linux crowd, and Sailfish can run some (not all) Android apps while not using Android.
If something has never proven to have worked (Score:2)
If something has never proven to have worked can it ever be said to have failed? A riddle fit for Bilbo.
Is OpenAI too bloated to succeed? (Score:2)
The claimed (i.e. delusional) valuation of an IPO for OpenAI is $1 trillion. If that were a country's GDP, it would sit in the $1 Trillion Dollar Club [wikipedia.org] alongside Switzerland.
That's all you need to know.
No (Score:2)
Morons and Greedy Carpet Baggers (Score:2)
There are numerous facts and points about how stupid this all is, and I am not going to cite all of them, other than the fact that this thing is now a huge GDP impactor on the US economy.
Which, I would like to point out, is nothing but a search engine. The Emperor has no clothes, and there is nothing intelligent about it, artificially or otherwise.
If it fails or tanks we could be looking at a sudden death of entire sectors of technology and industrial capacity which won't recover for decades, if ever. I wish I could say
Claude, what is a good alternative for OpenAI? (Score:2)
Thinking...
A Bail Out Would be Excessive (Score:2)
Depends on how they fail (Score:2)
If they fail it will knock $1.TB off NVIDIA's market cap, but unless they do it with a bunch of unpaid debts for capital projects it is hard for me to imagine them destroying the economy. If I were working on mega data center projects, though, I would tighten my payment terms for sure, as getting overextended there is what really kills a company.
The bubble burst will likely be more of a deflation IMO; as one company fails the remaining ones will have a brief period of opportunity and the cycle will repeat fo
The sooner it fails, the better. (Score:2)
It's much better for this stupid AI bubble to burst now than in a year's time, and it'll be better to burst in a year's time than in two years' time.
The longer it lasts, the worse it will be.
It's too stupid to succeed (Score:2)
I mean, at least BitCoin can be used to buy drugs.
nope (Score:2)
No, it is not. "Too big to fail" is just bullshit bingo. The reason banks et al. managed to get saved by taxpayer money with that phrase wasn't that they actually were too big to fail. It was that they had a solidly entrenched lobby and connections at the highest levels. "Too big to fail" was simply the icing they coated the shit with to make the public swallow it.