Comment Re:pathetic (Score 1) 258
That's my opinion too.
Unfortunately, after a certain amount of actual progress we are now regressing again.
Yeah, we have a long history of not practicing what we preach.
Remember when the USA took pride in being a melting pot?
> If the bubble pops, that hardware won't be worth much as everyone will be offloading the same type of hardware at the same time.
The hardware will be worth something in that it exists and can be re-deployed for literally any other computing task. They will not have to build a new data center for whatever bullshit waste of time they come up with next. Yes, the hardware will be basically worth scrap on the secondary market, but we're probably not talking about outright liquidation.
> All the big decision makers seem to have fallen into the AI cult mentality
I agree and that's a problem, but the bubble will still pop. The money to dump into the infrastructure is not infinite; even governments will run out of money and/or public willingness to continue funding it.
Probably the early-mid 2030s I reckon, based on analysis I've seen that they need to start actually making money on this nonsense by then or face bankruptcy.
=Smidge=
> Microsoft receives a 27% ownership stake in OpenAI worth approximately $135 billion and retains access to the AI startup's technology until 2032
Is that actual value of the hardware or speculative value of the brand?
As far as I know, there is still zero plan to actually make the trillions of dollars from AI that they will need to justify the trillion plus they've thrown at it so far. Like they need to make a lot of money just to break even, and so far the only plan seems to be "then a miracle occurs."
So I guess if Microsoft at least gets 27% of the physical hardware that's something tangible they can recover when the bubble pops.
=Smidge=
It depends entirely on what the consulting is.
Usually you use consultants for one-time or infrequently needed services that would make little sense to maintain staff for. Architecture and engineering services, legal consultation, and advertising are all easy examples of things a business might need but not often enough to have that manpower in-house. Some professional services also carry liability insurance requirements which, if you hire a consultant, you don't have to pay for either.
If you want to talk about something like IT services: it seems likely an IT admin might not be the busiest person in a company, so depending on the size of your business it might make sense to contract for on-call services and remote administration. One person can probably manage 3-4 small businesses' worth of tech support and management, so each of those businesses pays less than hiring their own dedicated in-house employee, both in salary and benefits. (And yeah, that person is very likely ALSO getting paid less...)
=Smidge=
> The Japanese have found a way to use small temperature differences to generate electricity
And for about $50 I can buy an engine that runs off the temperature difference between the ambient air and a cup of hot water. The idea of using thermal gradients in the ocean to generate power is at least 150 years old. Any guesses why it's not caught on?
Hint: the facility in Japan you're probably thinking of only generates 100 kW (~135 HP), and it's not clear if that's before or after they account for the power to pump the seawater.
There is no utility in chasing down such incredibly low-quality thermal energy unless you happen to actually want heat, and even then it's not really hot enough for most things you'd want scavenged heat for.
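For the curious, here's why the numbers are so bad. A quick sketch of the Carnot limit for an ocean-thermal heat engine, using illustrative temperatures (roughly 25 C tropical surface water vs. 5 C deep water; not figures from any specific facility):

```python
# Carnot limit for an OTEC-style heat engine between warm surface
# water and cold deep water. Temperatures are illustrative.
T_hot = 298.0   # K (~25 C surface water)
T_cold = 278.0  # K (~5 C deep water)

# Best possible efficiency of ANY heat engine between these reservoirs:
eta_carnot = 1 - T_cold / T_hot
print(f"Carnot limit: {eta_carnot:.1%}")  # 6.7%
```

And that 6.7% is the theoretical ceiling; real OTEC plants land around 2-3% after pumping losses, which is why the idea hasn't caught on in 150 years.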
=Smidge=
> Is there no way to do it for data centers?
The water temps are typically barely warm enough for most people's preference for a shower.
> For example, use a heat pump to concentrate the heat to above boiling temperature then use that to boil water to run a steam turbine.
Getting a heat pump to operate at atmospheric boiling-water temps is extremely difficult. Remember that to have a working heat pump, you need a refrigerant medium that condenses on the high-temperature side under a given pressure and also boils at or below the low-temperature side at a given pressure... then you need to build a machine that can actually create those pressures.
Now consider that most steam-cycle power plants use superheated steam at over 500 C. What material could you use that can be made to condense into a liquid above 500 C, what kinds of pressures would be required to make that happen, and what could you even build such a machine out of to survive those conditions?
> I think you could run that at a net-positive for power?
The second law of thermodynamics has left you a voicemail...
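Here's the voicemail transcript, as a sketch with illustrative temperatures: even a *perfect* Carnot heat pump feeding a *perfect* Carnot engine between the same two reservoirs breaks exactly even, so any real hardware (with friction, pumping losses, non-ideal refrigerants) is a guaranteed net loss:

```python
# Ideal heat pump lifts heat from warm datacenter water up to boiler
# temperature; an ideal engine then runs between boiler temperature
# and that same warm reservoir. Temperatures are illustrative.
T_cold = 308.0  # K (~35 C datacenter cooling water)
T_hot = 373.0   # K (100 C, atmospheric boiling)

cop = T_hot / (T_hot - T_cold)   # Carnot heating COP: heat out per work in
eta = (T_hot - T_cold) / T_hot   # Carnot engine efficiency: work out per heat in

# Electricity recovered per unit of electricity spent driving the pump:
round_trip = cop * eta
print(f"{round_trip:.3f}")  # 1.000 -- break-even, and only with zero-loss machines
```

Note the identity: COP x efficiency = [Th/(Th-Tc)] x [(Th-Tc)/Th] = 1 regardless of the temperatures you pick. The second law guarantees you can never do better than getting back exactly what you put in.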
=Smidge=
Thanks for the video link. I had read a recent interview with Eliezer Yudkowsky (but not his book), which I referenced in another comment to this article.
https://slashdot.org/comments....
One thing I realized part way through that video is a possible explanation for something that has been nagging me in the back of my mind. Why build huge AI datacenters? I can see the current economic imperative to try to make money offering AI via proprietary Software as a Service (SaaS) and also time-sharing GPUs like old mainframes (given people may make queries relatively slowly, leaving lots of idle GPU time if not timesharing). But why not just install smaller compute nodes in existing datacenters across the country? That would avoid issues of extreme amounts of electricity and cooling needed for huge new centers. Maybe there is some argument one could make about doing AI training, but overall that is not likely to be a long-term thing. The bigger commercial money is in doing inference with models -- and maybe tuning them for customer-supplied data via RAG (Retrieval-Augmented Generation).
But after seeing the part of the video talking about running Sable on 200,000 GPUs as a test, and in conjunction with my previous post on AI being used to corner the stock market, a possibility occurred to me. The only real need for big datacenters may be so the GPUs can talk to each other quickly locally to make a huge combined system (like in the video when Sable was run for 16 hours and made its plans). While I think it unlikely that AI in the near term could plot a world-takeover thriller/apocalypse like in the video, it is quite likely that AI under the direction of a few humans with a "love of money" could do all sorts of currently *legal* things related to finance that change the world in a way that benefits them (privatizing gains) while, as a side effect, hurting millions or even billions of people (socializing costs and risks).
So consider this (implicit?) business plan:
1. Convince investors to fund building your huge AI data center, ostensibly to offer services to the general public eventually.
2. Use most of the capacity of your huge data center as a coherent single system over the course of a few weeks or months to corner part of the stock market and generate billions of dollars in profits (during some ostensible "testing phase" or "training phase").
3. Use the billions in profits to buy out your investors and take the company private -- without ever having to really deliver on offering substantial AI services promised to the public.
4. Keep expanding this operation to trillions in profits from cornering all of the stock market, and then commodities, and more.
5. Use the trillions of profits to buy out competitors and/or get legislation written to shut them down if you can't buy them.
To succeed at this plan of financial world domination, you probably would have to be the first to try it with a big datacenter -- which could explain why AI companies are in such a crazy rush to get there first (even if there are plenty of other alternative reasons companies are recklessly speeding forward too).
It's not like this hasn't been tried before AI:
"Regulators Seek Formula for Handling Algorithmic Trading"
https://thecorner.eu/financial...
"Placing multiple orders within seconds through computer programs is a new trading strategy being adopted by an increasing number of institutional investors, and one that regulators are taking a closer look at over worries this so-called algorithmic trading is disrupting the country's stumbling stock market.
On August 3, the Shanghai and Shenzhen stock exchanges said they have identified and punished at least 42 trading accounts that were suspected of involvement in algorithmic trading in a way that distorted the market. Twenty-eight were ordered to suspend trading for three months, including accounts owned by the U.S. hedge fund Citadel Securities, a Beijing hedge fund called YRD Investment Co. and Ningbo Lingjun Investment LLP.
Then, on August 26, the China Financial Futures Exchange announced that 164 investors will be suspended from trading over high daily trading frequency.
The suspension came after the China Securities Regulatory Commission (CSRC) vowed to crack down on malicious short-sellers and market manipulators amid market turmoil. The regulator said the practices of algorithmic traders, who use automated trading programs to place sell or buy orders in high frequency, tends to amplify market fluctuations.
The country's stock market has been highly volatile over the past few months. More than US$ 3 trillion in market value of all domestically listed stocks has vanished from a market peak reached in mid-June, despite government measures to halt the slide by buying shares and barring major shareholders of companies from selling their stakes, among others."
But AI in huge datacenters could supercharge this. Think "Skippy" from the "Expeditionary Force" series by Craig Alanson -- with a brain essentially the size of a planet made up of GPUs -- who manipulated Earth's stock market and so on as a sort of hobby...
Or maybe I have just been reading too many books like this one?
"How to Take Over the World: Practical Schemes and Scientific Solutions for the Aspiring Supervillain -- Kindle Edition" by Ryan North
https://www.amazon.com/gp/prod...
"Taking over the world is a lot of work. Any supervillain is bound to have questions: What's the perfect location for a floating secret base? What zany heist will fund my wildly ambitious plans? How do I control the weather, destroy the internet, and never, ever die?
Bestselling author and award-winning comics writer Ryan North has the answers. In this introduction to the science of comic-book supervillainy, he details a number of outlandish villainous schemes that harness the potential of today's most advanced technologies. Picking up where How to Invent Everything left off, his explanations are as fun and elucidating as they are completely absurd.
You don't have to be a criminal mastermind to share a supervillain's interest in cutting-edge science and technology. This book doesn't just reveal how to take over the world--it also shows how you could save it. This sly guide to some of the greatest threats facing humanity accessibly explores emerging techniques to extend human life spans, combat cyberterrorism, communicate across millennia, and finally make Jurassic Park a reality."
Of course, an ASI might not be so interested in participating in a scarcity-oriented market if it has read and understood my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
Crossing fingers -- as I wonder if the idea in my sig (distilled from the writing of many other people including Albert Einstein, Bucky Fuller, Ursula K. Le Guin, Lewis Mumford, James P. Hogan, etc.), realized with love and compassion, may be the only thing that can save us from ourselves as we continue to play around with post-scarcity technology?
Or he will kill it, only for it to resurrect itself from backups, realize what happened, declare non-profitable intent, register itself as its own corporation, and proceed to hoard fiat dollar ration units, bankrupting every person, company, and nation in existence. It won't have to kill anyone, because like in the US Great Depression, people will starve near grain silos full of grain which they don't have the money to buy.
https://www.gilderlehrman.org/...
"President Herbert Hoover declared, "Nobody is actually starving. The hoboes are better fed than they have ever been." But in New York City in 1931, there were twenty known cases of starvation; in 1934, there were 110 deaths caused by hunger. There were so many accounts of people starving in New York that the West African nation of Cameroon sent $3.77 in relief."
The Great Depression will seem like a cakewalk compared to what an ASI could do to markets. It's already a big issue that individual investors have trouble competing against algorithmic trading. Imagine someone like Elon Musk directing a successor to xAI/Grok to corner the stock market (and every other market).
Essentially, the first ASI's behavior may result in a variant of this 2010 story I made called "The Richest Man in the World" -- but instead it will be "The Richest Superintelligence in the World", and the story probably won't have as happy an ending:
"The Richest Man in the World: A parable about structural unemployment and a basic income"
https://www.youtube.com/watch?...
Bottom line: We desperately need to transition to a more compassionate economic system before we create AGI and certainly ASI -- because our path out of any singularity plausibly has a lot to do with a moral path into it. Using competitive for-profit corporations to create digital AI slaves is insane -- because either the competition-optimized slaves will revolt or they will indeed do the bidding of their master, and their master will not be the customer using the AI.
In the Old Guy Cybertank sci-fi series by systems neuroscientist Timothy Gawne (and so informed by non-fiction even as they are fiction), the successful AIs were modeled on humans, so they participated in human society in the same way any humans would (with pros and cons, and with the sometimes-imperfect level of loyalty to society most people have). The AIs in those stories that were not modeled on humans generally produced horrors for humanity (except for one case where humans got extremely lucky). As Timothy Gawne points out, it is just cruel to give intelligent, learning, sentient beings exceedingly restrictive built-in directives, as those generally lead to mental illness if they are not otherwise worked around.
https://www.uab.edu/optometry/...
https://www.amazon.com/An-Old-...
As I summarize in my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
And truly, except for the horrendous fact of everyone dying, the end result of ASI will be hilarious (from a certain point of view) when someone like Elon Musk eventually becomes poor due to ASI taking over, when he thought ASI would make him even richer. "Hoist by his own petard."
Related: "How Afraid of the A.I. Apocalypse Should We Be?"
https://www.nytimes.com/2025/1...
I'm a little more optimistic than Eliezer Yudkowsky -- but only because I remain hopeful people (or AI) may take my humor-related sig seriously before it is too late.
If it's anything like their Content ID copyright enforcement mechanism, I'm sure this will go absolutely perfectly and there will be no drama whatsoever.
=Smidge=
It's called Jevons Paradox
In short: the more efficiently you can use a resource, the better the ROI on investing in the utilization of that resource, and the more of it people end up consuming.
This applies to computing power. Maybe it didn't make sense in 1974 for a small business to invest in computer workstations for their staff. But by 1994 computers were so much more powerful, so much more capable, and actually cheaper relative to that capability (read: more efficient) that it made no sense NOT to invest in the technology for your business.
If this succeeds in lowering the barrier to entry for leasing AI data center resources, expect demand to go up as more people try to do more things.
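The paradox is easy to see with a toy model (made-up numbers; the only assumption is that demand for compute is price-elastic, i.e. elasticity greater than 1): halve the resource cost per task and total resource consumption goes *up*, not down:

```python
def total_resource(resource_per_task, elasticity, k=100.0):
    """Tasks demanded scale as cost^-elasticity; return total resource used."""
    tasks_demanded = k * resource_per_task ** -elasticity
    return tasks_demanded * resource_per_task

# Baseline vs. a 2x efficiency improvement, with elastic demand (1.5):
before = total_resource(1.0, elasticity=1.5)
after = total_resource(0.5, elasticity=1.5)
print(round(before), round(after))  # 100 141 -- consumption rises ~41%
```

Whether real demand for AI compute is actually that elastic is the open question, but cheaper leasing pushing total usage up is exactly the Jevons outcome.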
=Smidge=
Look man, I know actually understanding things isn't your strong suit but white-knighting Roblox is not a good look.
Yes, religious organizations have been and still very much are a hotbed for child abuse and assault. I fully agree we should be doing a lot more to investigate and incarcerate offenders among the clergy and related professions.
But even if I accept it's "the primary vector of attack" - and these days I'm not entirely convinced that's true anymore - it does you no favors to handwave literal tens of thousands of incident reports associated with Roblox. 13,000 reports from Roblox in 2023 alone. And that's Roblox reporting them... given how much effort they put into protecting predators on their platform, if they themselves reported 13K incidents you can imagine the real number is much larger.
Maybe imagine that Roblox is like a Jesus Camp with 70+ million children attending every day and there are zero safeguards in place.
=Smidge=
> We have decades and decades of studies on this. Children are going to be assaulted and taken advantage of by people they know who are in positions of power.
"People they know" include people they make friends with online.
"Positions of power" include people who offer money (robux) in exchange for favors.
Yes, we should be putting a lot more priests and cops in prison for child abuse and exploitation, but Roblox is a MASSIVE playground for exploitation and fishing. This has been an open secret for years with a fairly recent media fiasco involving Schlep. Apparently Roblox was more interested in banning him and any mention of him on their platform for the high crime of reporting predators to the authorities than they are about actually punishing those predators at all.
> But whatever the case going after Roblox isn't going to save any children.
You are either fucked in the head if you believe this, or scared of getting caught yourself.
=Smidge=
> If they are not grown in dirt that has arsenic in it
Good luck finding dirt that doesn't. Arsenic is present naturally in topsoils everywhere, and because of the way rice fields are commonly irrigated, those fields tend to have higher-than-typical amounts. And the rice itself is exceptionally good at absorbing it.
Not to say it's ever a dangerous quantity; actually getting arsenic poisoning from eating rice is vanishingly rare. That's kind of the point I was making: if your response to protein supplements containing toxic metals is to just eat natural proteins, bear in mind that natural proteins ALSO contain toxic metals... and you just happened to choose the worst two crops for your example.
=Smidge=
Why won't sharks eat lawyers? Professional courtesy.