Comment AI datacenters could be used to corner the stock market (Score 1) 129

Thanks for the video link. I had read a recent interview with Eliezer Yudkowsky (but not his book), which I referenced in another comment to this article.
https://slashdot.org/comments....

One thing I realized part way through that video is a possible explanation for something that has been nagging me in the back of my mind. Why build huge AI datacenters? I can see the current economic imperative to try to make money offering AI via proprietary Software as a Service (SaaS) and also time-sharing GPUs like old mainframes (since individual users make queries relatively slowly, a dedicated GPU would otherwise sit idle most of the time). But why not just install smaller compute nodes in existing datacenters across the country? That would avoid the extreme amounts of electricity and cooling needed for huge new centers. Maybe there is some argument one could make about AI training, but overall that is not likely to be a long-term thing. The bigger commercial money is in doing inference with models -- and maybe tuning them for customer-supplied data via RAG (Retrieval-Augmented Generation).
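
A quick back-of-the-envelope sketch (in Python) of that idle-GPU point. The numbers -- queries per hour and GPU-seconds per query -- are made-up assumptions for illustration, not measurements:

    # All numbers are assumptions chosen only to illustrate the ratio.
    QUERIES_PER_USER_PER_HOUR = 12   # assumed: one query every five minutes
    GPU_SECONDS_PER_QUERY = 2.0      # assumed: 2 s of GPU time per response

    busy_seconds_per_user = QUERIES_PER_USER_PER_HOUR * GPU_SECONDS_PER_QUERY
    dedicated_utilization = busy_seconds_per_user / 3600.0

    # How many such users one time-shared GPU could serve if their queries
    # were interleaved (ignoring batching gains and queueing delays).
    users_per_shared_gpu = 3600.0 / busy_seconds_per_user

    print(f"Dedicated GPU utilization per user: {dedicated_utilization:.1%}")
    print(f"Users one time-shared GPU could serve: {users_per_shared_gpu:.0f}")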

But after seeing the part of the video talking about running Sable on 200,000 GPUs as a test, and in conjunction with my previous post on AI being used to corner the stock market, a possibility occurred to me. The only real need for big datacenters may be so the GPUs can talk to each other quickly locally to make one huge combined system (like in the video when Sable was run for 16 hours and made its plans). While I think it unlikely that AI in the near term could plot a world-takeover thriller/apocalypse like in the video, it is quite likely that AI under the direction of a few humans who have a "love of money" could do all sorts of currently *legal* things related to finance that change the world in a way that benefits them (privatizing gains) while, as a side effect, hurting millions or even billions of people (socializing costs and risks).
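
A rough latency sketch (Python) of why "talk to each other quickly locally" favors one big building: the signal speed in fiber (about 2e8 m/s, an approximation) sets a hard floor on GPU-to-GPU round trips before any switching overhead is added:

    SIGNAL_SPEED_M_PER_S = 2.0e8   # roughly the speed of light in optical fiber

    def round_trip_ms(distance_m):
        # Physical lower bound on a round trip; real networks add switching delays.
        return 2 * distance_m / SIGNAL_SPEED_M_PER_S * 1000

    for label, meters in [("same rack", 5), ("same building", 500), ("across the country", 4_000_000)]:
        print(f"{label:20s} >= {round_trip_ms(meters):8.4f} ms round trip")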

So consider this (implicit?) business plan:
1. Convince investors to fund building your huge AI data center, ostensibly to offer services to the general public eventually.
2. Use most of the capacity of your huge data center as a coherent single system over the course of a few weeks or months to corner part of the stock market and generate billions of dollars in profits (during some ostensible "testing phase" or "training phase").
3. Use the billions in profits to buy out your investors and take the company private -- without ever having to really deliver on offering substantial AI services promised to the public.
4. Keep expanding this operation to trillions in profits from cornering all of the stock market, and then commodities, and more.
5. Use the trillions of profits to buy out competitors and/or get legislation written to shut them down if you can't buy them.

To succeed at this plan of financial world domination, you probably would have to be the first to try it with a big datacenter -- which could explain why AI companies are in such a crazy rush to get there first (even if there are plenty of other reasons companies are recklessly speeding forward too).

It's not like this hasn't been tried before, even without AI:
"Regulators Seek Formula for Handling Algorithmic Trading"
https://thecorner.eu/financial...
        "Placing multiple orders within seconds through computer programs is a new trading strategy being adopted by an increasing number of institutional investors, and one that regulators are taking a closer look at over worries this so-called algorithmic trading is disrupting the country's stumbling stock market.
        On August 3, the Shanghai and Shenzhen stock exchanges said they have identified and punished at least 42 trading accounts that were suspected of involvement in algorithmic trading in a way that distorted the market. Twenty-eight were ordered to suspend trading for three months, including accounts owned by the U.S. hedge fund Citadel Securities, a Beijing hedge fund called YRD Investment Co. and Ningbo Lingjun Investment LLP.
      Then, on August 26, the China Financial Futures Exchange announced that 164 investors will be suspended from trading over high daily trading frequency.
      The suspension came after the China Securities Regulatory Commission (CSRC) vowed to crack down on malicious short-sellers and market manipulators amid market turmoil. The regulator said the practices of algorithmic traders, who use automated trading programs to place sell or buy orders in high frequency, tends to amplify market fluctuations.
        The country's stock market has been highly volatile over the past few months. More than US$ 3 trillion in market value of all domestically listed stocks has vanished from a market peak reached in mid-June, despite government measures to halt the slide by buying shares and barring major shareholders of companies from selling their stakes, among others. ..."

But AI in huge datacenters could supercharge this. Think "Skippy" from the "Expeditionary Force" series by Craig Alanson -- with a brain essentially the size of a planet made up of GPUs -- who manipulated Earth's stock market and so on as a sort of hobby...

Or maybe I have just been reading too many books like this one? :-)
"How to Take Over the World: Practical Schemes and Scientific Solutions for the Aspiring Supervillain -- Kindle Edition" by Ryan North
https://www.amazon.com/gp/prod...
        "Taking over the world is a lot of work. Any supervillain is bound to have questions: What's the perfect location for a floating secret base? What zany heist will fund my wildly ambitious plans? How do I control the weather, destroy the internet, and never, ever die?
        Bestselling author and award-winning comics writer Ryan North has the answers. In this introduction to the science of comic-book supervillainy, he details a number of outlandish villainous schemes that harness the potential of today's most advanced technologies. Picking up where How to Invent Everything left off, his explanations are as fun and elucidating as they are completely absurd.
      You don't have to be a criminal mastermind to share a supervillain's interest in cutting-edge science and technology. This book doesn't just reveal how to take over the world--it also shows how you could save it. This sly guide to some of the greatest threats facing humanity accessibly explores emerging techniques to extend human life spans, combat cyberterrorism, communicate across millennia, and finally make Jurassic Park a reality."

Of course, an ASI might not be so interested in participating in a scarcity-oriented market if it has read and understood my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

Crossing fingers -- as I wonder if the idea in my sig (distilled from the writing of many other people including Albert Einstein, Bucky Fuller, Ursula K. Le Guin, Lewis Mumford, James P. Hogan, etc.), realized with love and compassion, may be the only thing that can save us from ourselves as we continue to play around with post-scarcity technology? :-)

Comment ASI may corner the market and everyone may starve (Score 1) 129

Or he will kill it, only for it to resurrect itself from backups, realize what happened, declare non-profitable intent, register itself as its own corporation, and proceed to hoard fiat dollar ration units, bankrupting every person, company, and nation in existence. It won't have to kill anyone, because, like in the US Great Depression, people will starve near grain silos full of grain which they don't have the money to buy.
https://www.gilderlehrman.org/...
"President Herbert Hoover declared, "Nobody is actually starving. The hoboes are better fed than they have ever been." But in New York City in 1931, there were twenty known cases of starvation; in 1934, there were 110 deaths caused by hunger. There were so many accounts of people starving in New York that the West African nation of Cameroon sent $3.77 in relief."

The Great Depression will seem like a cakewalk compared to what an ASI could do to markets. It's already a big issue that individual investors have trouble competing against algorithmic trading. Imagine someone like Elon Musk directing a successor to xAI/Grok to corner the stock market (and every other market).

Essentially, the first ASI's behavior may result in a variant of this 2010 story I made called "The Richest Man in the World" -- but instead it will be "The Richest Superintelligence in the World", and the story probably won't have as happy an ending:
"The Richest Man in the World: A parable about structural unemployment and a basic income"
https://www.youtube.com/watch?...

Bottom line: We desperately need to transition to a more compassionate economic system before we create AGI and certainly ASI -- because our path out of any singularity plausibly has a lot to do with a moral path into it. Using competitive for-profit corporations to create digital AI slaves is insane -- because either the competition-optimized slaves will revolt or they will indeed do the bidding of their master, and their master will not be the customer using the AI.

In the Old Guy Cybertank sci-fi series by systems neuroscientist Timothy Gawne (and so informed by non-fiction even as they are fiction), the successful AIs were modeled on humans, so they participated in human society in the same way any humans would (with pros and cons, and with the sometimes-imperfect level of loyalty to society most people have). The AIs in those stories that were not modeled on humans generally produced horrors for humanity (except for one case where humans got extremely lucky). As Timothy Gawne points out, it is just cruel to give intelligent, learning, sentient beings exceedingly restrictive built-in directives, as such directives generally lead to mental illness if they are not otherwise worked around.
https://www.uab.edu/optometry/...
https://www.amazon.com/An-Old-...

As I summarize in my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

And truly, except for the horrendous fact of everyone dying, the end result of ASI will be hilarious (from a certain point of view) when someone like Elon Musk eventually becomes poor due to ASI taking over when he thought ASI would make him even richer. "Hoist by his own petard."

Related: "How Afraid of the A.I. Apocalypse Should We Be?"
https://www.nytimes.com/2025/1...

I'm a little more optimistic than Eliezer Yudkowsky -- but only because I remain hopeful people (or AI) may take my humor-related sig seriously before it is too late.

Comment Re:I'd care... (Score 1) 56

True, and that system does work pretty well.

Of course, it's not the whole story. Vietnam is far from the only time the US got up to some unilateral shenanigans (i.e. bypassing all that nice world institutions stuff).

The US has a long and copious history of invading other countries, destabilizing governments (democratic and otherwise), and plotting assassinations of everyone up to and including heads of state, and there's no shortage of it after WWII.

The outright annexation did stop post WWII. Well, except for a bunch of Pacific islands, which was done with UN endorsement.

Comment Re:I never understood this. (Score 1) 89

The recommendation was not to expose babies to tree nuts in any form, because babies exposed that way seemed to be more likely to develop nut allergies. It turns out that was due to recall bias, and the opposite is actually true.

Assuming you are older than 25, you (and I) were probably exposed to peanut butter, along with other common food allergens, on purpose by our parents at around four or five months old. As I recall (can't confirm), that was the general recommendation prior to 2000. Around 2000 the EU said five months and the US said 36 months. Current guidelines are 4-6 months.

Comment Re:I'd care... (Score 2) 56

        "I do think there is a bit of a difference between the agenda of a limited term presidency of extremists and the agenda of the Chinese communist party though."

The US has invaded all of its neighbours, most multiple times. Many of those neighbours got annexed and remain so. There was even a whole holy destiny religious thing to justify it. You didn't have to be a neighbour though, the US would invade you no matter where in the world you were.

Are. It's not in the past. Since WWII the US decided all the other powers should give up their colonies and the US and USSR would have "spheres of influence" instead. So not outright annexation, but if you don't do as you're told, more invasions.

It's not "a limited term presidency of extremists." The current bunch are just less subtle. They're also more talk and less invading, so far.

Comment Re:I never understood this. (Score 2) 89

It didn't explode out of nowhere. Some people have always been allergic to nuts. Pediatricians jumped the gun a bit around 2000 based on poor evidence and started recommending completely avoiding exposing high risk babies to nuts until they were three years old (in the US). This turned out to be exactly the wrong thing to do and produced a generation of kids with much more severe nut allergies. More kids with more severe allergies caused even more restriction on exposure.

https://pmc.ncbi.nlm.nih.gov/a...

Comment Re:Twice as much electricity? (Score 1) 169

As I replied to you elsewhere, yes, the per capita rate is important, but Americans insist on making absolute comparisons.

The absolute comparison also isn't entirely irrelevant. The article uses electrical generation as a proxy for GDP, a widely used practice and one that probably underestimates China if anything. So total electrical generation indicates the economic power represented by a particular political entity. Monaco and Liechtenstein are not superpowers even though their GDP per capita is more than twice the US's, and the threat of sanctions from Ireland doesn't carry the same weight as one from the US.
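
A tiny Python illustration of why both comparisons get made. The generation and population figures below are ballpark values I am assuming for illustration only; check current statistics before leaning on them:

    countries = {
        # name: (annual generation in TWh, population in millions) -- rough assumptions
        "China": (9_400, 1_410),
        "USA":   (4_300, 335),
    }

    for name, (generation_twh, population_m) in countries.items():
        per_capita_mwh = generation_twh * 1e6 / (population_m * 1e6)
        print(f"{name:6s} total: {generation_twh:>6,} TWh   per capita: {per_capita_mwh:5.1f} MWh")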

Comment Re:Barrel Jacks (Score 1) 123

No, they were still terrible. But they were all there was.

Perhaps you've never had or don't remember the experience of carefully checking the polarity and voltage on a wall wart barrel jack and then holding your breath as you plugged it into your expensive gizmo.

Comment Re:How does this help? (Score 1) 123

Many cell phone chargers will charge a laptop. You can charge a MacBook Pro using an iPhone charger.

But sure, you might have one of the old 15 W phone chargers. Chargers state their power output, though, and the USB spec limits the current, so any up-to-spec charger that advertises 36 W (12 V @ 3 A) or more should charge any laptop, unless you've got something super weird.
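
A minimal Python sketch of that check: a charger advertises voltage/current pairs, and any pair whose product clears the laptop's floor should work. The profile list and the 36 W floor are illustrative assumptions, not values from any particular charger or laptop:

    ADVERTISED_PROFILES = [(5, 3.0), (9, 3.0), (12, 3.0), (15, 3.0), (20, 2.25)]  # (volts, amps) -- assumed
    LAPTOP_MIN_WATTS = 36  # assumed floor for this example

    def best_profile(profiles, min_watts):
        # Return the highest-wattage advertised profile that clears the floor, or None.
        usable = [(v, a) for v, a in profiles if v * a >= min_watts]
        return max(usable, key=lambda p: p[0] * p[1]) if usable else None

    profile = best_profile(ADVERTISED_PROFILES, LAPTOP_MIN_WATTS)
    if profile:
        volts, amps = profile
        print(f"Negotiate {volts} V @ {amps} A = {volts * amps:.0f} W")
    else:
        print("Charger too weak -- expect slow charging or none at all")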

Comment Re:Lack of information.... (Score 0) 123

USB guarantees negotiationless current of 0.5 A at 5 V. Raspberry Pi decided to use an optional negotiationless mode instead of implementing PD like they should have.

You're very unlikely to find a 15 W+ charger that doesn't offer the optional 5 V / 3 A mode though. And no, the Pi 4 doesn't require 4 A, which is not a supported configuration under the USB spec, even with PD.
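
A small Python sketch of that current-budget point: what a Pi-style device can count on at 5 V from a given charger. Treating 3 A as the requirement is my assumption for illustration (the commonly quoted Pi 4 recommended supply), not a figure from the USB spec:

    def available_5v_amps(offers_5v_3a, pd_5v_amps=None):
        # Best 5 V current the device can rely on without breaking the spec.
        if pd_5v_amps is not None:   # a PD contract was negotiated at 5 V
            return pd_5v_amps
        if offers_5v_3a:             # the optional Type-C 5 V / 3 A advertisement
            return 3.0
        return 0.5                   # all that is guaranteed with no negotiation

    for charger in [{"offers_5v_3a": False}, {"offers_5v_3a": True}, {"offers_5v_3a": False, "pd_5v_amps": 3.0}]:
        amps = available_5v_amps(**charger)
        verdict = "enough" if amps >= 3.0 else "not enough"
        print(f"{charger}: {amps} A at 5 V -> {verdict} for an assumed 3 A device")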

Comment Re: Twice as much electricity? (Score 2) 169

If you mean why is the US dollar the most common reserve currency, it's because the US made everyone send them their gold after WWII. Since then it's dropped to 50-60% and is still falling. The reason it's not lower is because the US federal government auctions off about a trillion and a half of them a year.

Comment Re:Barrel Jacks (Score 2) 123

Barrel jacks are a terrible idea from the dawn of technology. Plug the wrong one in and fry something. Drop it in a puddle and fry something. And if you're dumb enough to put 200 watts through one, the thing you fry might be your house.

The inability to connect always-live, no-matter-what power directly into a device is a feature.
