
Comment Re:Obvious questions (Score 1) 54

A datacentre with lots of GPUs should depreciate the same way a regular old datacentre does. If they're calculating depreciation differently from the way they do for regular datacentres, that would be very suspicious.

In reality, models require vastly more computation to train than they do to use, and more still to develop, so the current spending is more accurately compared to something like the cost of constructing railways, which is much greater than the cost of running them, and the asset is not the GPUs but the trained models.
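For reference, standard straight-line depreciation works identically for both kinds of facility; the only real lever is the useful-life assumption on the hardware. A quick back-of-the-envelope sketch in Python, with made-up numbers purely for illustration:

def straight_line_depreciation(cost, salvage, useful_life_years):
    # Annual depreciation expense under the straight-line method.
    return (cost - salvage) / useful_life_years

# Hypothetical figures, not from any filing: $100M of GPUs on a 5-year life
# vs. a $100M building shell on a 15-year life.
gpu_expense = straight_line_depreciation(100e6, 10e6, 5)        # ~$18M/year
building_expense = straight_line_depreciation(100e6, 20e6, 15)  # ~$5.3M/year
print(f"GPUs:     ${gpu_expense:,.0f} per year")
print(f"Building: ${building_expense:,.0f} per year")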

Comment Re:90 days, huh? (Score 4, Interesting) 109

Sounds like they're working on doing exactly that. Chrome, YouTube, Android, and a bunch of other Google stuff seem to use FFmpeg, not to mention any other open source projects they do this to. In 90 days Google might just be saying "hewooo Internet! Here's a vulnerability in most of our software that we didn't even try to fix!"

Comment Re:Labor is your most important resource (Score 1) 93

How do you decide what the value of someone's work is?

This problem came up before the Russian Revolution. The socialist revolutionaries thought yours was a great idea, but the best they could come up with for actually assigning value was "um, a committee of some kind, maybe?"

The market answer is that competitive buyers will pay you what your work is worth. That obviously requires competitive buyers, and the absence of obstacles like, for example, health insurance benefits interfering with your ability to switch employers or go out on your own.

Comment DOGS for self-replicating space habitats (Score 1) 94

As I proposed in 1988: https://pdfernhout.net/princet...
"As outlined in my statement of purpose, my lifetime goal is to design and construct self-replicating habitats. These habitats can be best envisioned as huge walled gardens inhabited by thousands of people. Each garden would have a library which would contain the information needed to construct a new garden from tools and materials found within the garden's walls. The garden walls and construction methods would be of several different types, allowing such gardens to be built on land, underground, in space, or under the ocean. Such gardens would have the capacity to seal themselves to become environmentally and economically self-sufficient in the event of economic collapse or global warfare and the attendant environmental destruction.
        During the past semester, I have written one paper on this concept, entitled "The Self-Replicating Garden". Its thesis is that this concept provides a new metaphor for thinking about the relation between humans and the machinery that constitutes our political and technical support systems. Writing this paper has helped me organize my thinking and has given me a chance to explore the extensive literature relevant to the design of social and technological systems."

Still want to do it, but lots of distractions and small steps along the way.

On DOGS (Design of Great Settlements), see this from me in 1999:
https://kurtz-fernhout.com/osc...

and also from me in 2005:
"We need DOGS not CATS! (Score:2, Interesting)"
https://slashdot.org/comments....
        "So, as I see it, launch costs are not a bottleneck. So while lowering launch costs may be useful, by itself
it ultimately has no value without someplace to live in space. And all the innovative studies on space settlement say that space colonies will not be built from materials launched from earth, but rather will be built mainly from materials found in space.
        So, what is a bottleneck is that we do not know how to make that seed self-replicating factory, or have plans for what it should create once it is landed on the moon or on a near-earth asteroid. We don't have (to use Bucky Fuller's terminology) a Comprehensive Anticipatory Design Science that lets us make sense of all the various manufacturing knowledge which is woven throughout our complex economy (and in practice, despite patents, is essentially hoarded and hidden and made proprietary whenever possible) in order to synthesize it to build elegant and flexible infrastructure for sustaining human life in style in space (or on Earth).
        So that is why I think billionaires like Jeff Bezos spending money on CATS is a tragedy -- they should IMHO be spending their money on DOGS instead (Design of Great Settlements). But the designs can be done more slowly without much money using volunteers and networked personal computers -- which was the point of a SSI paper I co-authored ... or a couple other sites I made in that direction ...
        My work is on a shoestring, but when I imagine what even just a million dollars a year could bring in returns supporting a core team of a handful of space settlement designers, working directly on the bottleneck issues and eventually coordinating the volunteer work of hundreds or thousands more, it is frustrating to see so much money just go into just building better rockets when the ones we have already are good enough for now. ..."

Reprised in 2017:
https://science.slashdot.org/c...

Jeff and I took the same physics class from Gerry O'Neill at Princeton... We have related goals, but we've taken different paths since then...

Comment Re:Dusaster (Score 1, Interesting) 158

I suppose they could take my "rejected" card for an additional fee. A great way to ensure I never go there again, but up to them I suppose.

Funny. American restaurants almost universally expect an additional fee of 15-30% called a bri.. er, "tip", but you'll boycott one over the 2% they might pass on to you to use a reward card?

Comment Re:That dog won't bring home Huntsman's Rewards (t (Score 1) 158

They just make insane profits because of the volume of product that they move.

Walmart's return on assets is ~7% which is not bad, but is certainly not insane. They'd do a lot better liquidating all their stock and stores and sticking the money in a NASDAQ index fund.

Microsoft's ROA is ~18% and Nvidia ~77%.

Walmart's net profit of $15 billion is a BIG NUMBER, but only compared to something like an individual's net income. The median American household's ROA seems to be about 42%, although you could argue that should be a little lower if you properly accounted for education as an asset.
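To make the comparison concrete, here's a rough Python back-of-the-envelope of ROA = net income / total assets, using ballpark figures I'm assuming for illustration (not exact filings):

def return_on_assets(net_income, total_assets):
    # ROA = net income divided by total assets.
    return net_income / total_assets

# Assumed ballpark figures, roughly consistent with the percentages above:
walmart   = return_on_assets(15e9, 250e9)   # ~6-7%
microsoft = return_on_assets(90e9, 500e9)   # ~18%
nvidia    = return_on_assets(75e9, 100e9)   # ~75%
for name, roa in [("Walmart", walmart), ("Microsoft", microsoft), ("Nvidia", nvidia)]:
    print(f"{name}: {roa:.0%}")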

Comment Re:Knee-Jerk reaction. (Score 1) 88

Cars occasionally run onto the sidewalk and hurt someone. More often than planes cause injuries around airports, I expect. Putting sidewalks right beside roads seems like a terrible idea. Why not have at least a buffer zone? Say, a football field (choose your type) of buffer?

Same for airports. Airports do have buffers around them, especially at the ends of runways. Very, very occasionally it isn't enough.

Comment Re: how did it take us THIS long? (Score 1) 83

I'm not really sure what your point is. You are correct that racers frequently sail through all sorts of weather without damage. They do sometimes take damage, though, and the vast majority of it comes from trying to sail through weather as fast as possible.

A cargo ship would presumably sail through storms as fast as it could without risking damage.

Comment Re: All I can say is duh! (Score 1) 83

My, we are an aggressively stupid dipshit today.

You do seem to be, yes. Maybe time to take a break?

Ships scale up pretty predictably. No, they didn't build THE BIGGEST CARGO SHIP EVAH for their prototype. That would be pretty dumb.

This thread is talking about the ship speed. And the speed of a displacement hull is intimately linked to the length. As is the capacity, incidentally.
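For anyone curious about the length-speed link: the usual rule of thumb for a displacement hull is hull speed ≈ 1.34 × sqrt(waterline length in feet), in knots. A quick Python sketch (the 1.34 constant and the rule itself are approximations, not hard physical limits, and big ships usually cruise well below it):

from math import sqrt

def hull_speed_knots(waterline_length_ft):
    # Displacement-hull rule of thumb: ~1.34 * sqrt(LWL in feet), in knots.
    return 1.34 * sqrt(waterline_length_ft)

for lwl in (40, 100, 400, 1200):  # racing yacht up to cargo-ship scale
    print(f"LWL {lwl:>5} ft -> ~{hull_speed_knots(lwl):.1f} kn")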

Comment Science fiction missed the misadaptation threat (Score 3, Interesting) 111

Thanks for the insightful post. And to build on your survival-instinct misadaptation point, consider that our preferences were tuned through evolution for a scarcity of certain things (salt, sweet, fat, excitement, novelty, being startled, etc.) and work against us when there is an abundance of those things made possible by modern technology (e.g. ultraprocessed foods, algorithmic feeds, several scene changes a second in videos, etc.). See:

https://www.healthpromoting.co...
"Dr. Douglas Lisle, who has spent the last two decades researching and studying this evolutionary syndrome, explains that all of us inherit innate incentives from our ancient ancestors that he terms The Motivational Triad: the pursuit of pleasure, the avoidance of pain, and the conservation of energy. Unfortunately, in present day America's convenience-centric, excess-oriented culture, where fast food, recreational drugs, and sedentary shopping have become the norm, these basic instincts that once successfully insured the survival and reproduction of man many millennia ago, no longer serve us well. In fact, it's our unknowing enslavement to this internal, biological force embedded in the collective memory of our species that is undermining our health and happiness today."

https://en.wikipedia.org/wiki/...
"Supernormal Stimuli: How Primal Urges Overran Their Evolutionary Purpose is a book by Deirdre Barrett published by W. W. Norton & Company in 2010. Barrett is a psychologist on the faculty of Harvard Medical School. The book argues that human instincts for food, sex, and territorial protection evolved for life on the savannah 10,000 years ago, not for today's densely populated technological world. Our instincts have not had time to adapt to the rapid changes of modern life. The book takes its title from Nikolaas Tinbergen's concept in ethology of the supernormal stimulus, the phenomena by which insects, birds, and fish in his experiments could be lured by a dummy object which exaggerated one or more characteristic of the natural stimulus object such as giant brilliant blue plaster eggs which birds preferred to sit on in preference to their own. Barrett extends the concept to humans and outlines how supernormal stimuli are a driving force behind today's most pressing problems, including modern warfare, obesity and other fitness problems, while also explaining the appeal of television, video games, and pornography as social outlets."

https://tlc.ku.edu/
" "We were never designed for the sedentary, indoor, sleep-deprived, socially-isolated, fast-food-laden, frenetic pace of modern life." - TLC Principal Investigator Stephen Ilardi, PhD"

And to take that even one step further, see my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

Comment AI datacenters could be used to corner stockmarket (Score 1) 129

Thanks for the video link. I had read a recent interview with Eliezer Yudkowsky (but not his book), which I referenced in another comment to this article.
https://slashdot.org/comments....

One thing I realized partway through that video is a possible explanation for something that has been nagging me in the back of my mind. Why build huge AI datacenters? I can see the current economic imperative to try to make money offering AI via proprietary Software as a Service (SaaS), and also time-sharing GPUs like old mainframes (since people tend to make queries relatively slowly, there would otherwise be lots of idle GPU time without timesharing). But why not just install smaller compute nodes in existing datacenters across the country? That would avoid the extreme amounts of electricity and cooling needed for huge new centers. Maybe there is some argument one could make about AI training, but overall that is not likely to be a long-term thing. The bigger commercial money is in doing inference with models -- and maybe tuning them for customer-supplied data via RAG (Retrieval-Augmented Generation).

But after seeing the part of the video talking about running Sable on 200,000 GPUs as a test, and in conjunction with my previous post on AI being used to corner the stock market, a possibility occurred to me. The only real need for big datacenters may be so the GPUs can talk to each other quickly locally to make a huge combined system (like in the video when Sable was run for 16 hours and made its plans). While I think it unlikely that AI in the near term could plot a world-takeover thriller/apocalypse like in the video, it is quite likely that AI under the direction of a few humans who have a "love of money" could do all sorts of currently *legal* things related to finance that changed the world in a way that benefited them (privatizing gains) while as a side effect hurt millions or even billions of people (socializing costs and risks).

So consider this (implicit?) business plan:
1. Convince investors to fund building your huge AI data center, ostensibly to offer services to the general public eventually.
2. Use most of the capacity of your huge data center as a coherent single system over the course of a few weeks or months to corner part of the stock market and generate billions of dollars in profits (during some ostensible "testing phase" or "training phase").
3. Use the billions in profits to buy out your investors and take the company private -- without ever having to really deliver on offering substantial AI services promised to the public.
4. Keep expanding this operation to trillions in profits from cornering all of the stock market, and then commodities, and more.
5. Use the trillions of profits to buy out competitors and/or get legislation written to shut them down if you can't buy them.

To succeed at this plan of financial world domination, you would probably have to be the first to try this with a big datacenter -- which could explain why AI companies are in such a crazy rush to get there first (even if there are plenty of other alternative reasons companies are recklessly speeding forward too).

It's not like this hasn't been tried before AI:
"Regulators Seek Formula for Handling Algorithmic Trading"
https://thecorner.eu/financial...
        "Placing multiple orders within seconds through computer programs is a new trading strategy being adopted by an increasing number of institutional investors, and one that regulators are taking a closer look at over worries this so-called algorithmic trading is disrupting the country's stumbling stock market.
        On August 3, the Shanghai and Shenzhen stock exchanges said they have identified and punished at least 42 trading accounts that were suspected of involvement in algorithmic trading in a way that distorted the market. Twenty-eight were ordered to suspend trading for three months, including accounts owned by the U.S. hedge fund Citadel Securities, a Beijing hedge fund called YRD Investment Co. and Ningbo Lingjun Investment LLP.
      Then, on August 26, the China Financial Futures Exchange announced that 164 investors will be suspended from trading over high daily trading frequency.
      The suspension came after the China Securities Regulatory Commission (CSRC) vowed to crack down on malicious short-sellers and market manipulators amid market turmoil. The regulator said the practices of algorithmic traders, who use automated trading programs to place sell or buy orders in high frequency, tends to amplify market fluctuations.
        The country's stock market has been highly volatile over the past few months. More than US$ 3 trillion in market value of all domestically listed stocks has vanished from a market peak reached in mid-June, despite government measures to halt the slide by buying shares and barring major shareholders of companies from selling their stakes, among others. ..."

But AI in huge datacenters could supercharge this. Think "Skippy" from the "Expeditionary Force" series by Craig Alanson -- with a brain essentially the size of a planet made up of GPUs -- who manipulated Earth's stock market and so on as a sort of hobby...

Or maybe I have just been reading too many books like this one? :-)
"How to Take Over the World: Practical Schemes and Scientific Solutions for the Aspiring Supervillain -- Kindle Edition" by Ryan North
https://www.amazon.com/gp/prod...
        "Taking over the world is a lot of work. Any supervillain is bound to have questions: What's the perfect location for a floating secret base? What zany heist will fund my wildly ambitious plans? How do I control the weather, destroy the internet, and never, ever die?
        Bestselling author and award-winning comics writer Ryan North has the answers. In this introduction to the science of comic-book supervillainy, he details a number of outlandish villainous schemes that harness the potential of today's most advanced technologies. Picking up where How to Invent Everything left off, his explanations are as fun and elucidating as they are completely absurd.
      You don't have to be a criminal mastermind to share a supervillain's interest in cutting-edge science and technology. This book doesn't just reveal how to take over the world--it also shows how you could save it. This sly guide to some of the greatest threats facing humanity accessibly explores emerging techniques to extend human life spans, combat cyberterrorism, communicate across millennia, and finally make Jurassic Park a reality."

Of course, an ASI might not be so interested in participating in a scarcity-oriented market if it has read and understood my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

Crossing fingers -- as I wonder if the idea in my sig (distilled from the writing of many other people including Albert Einstein, Bucky Fuller, Ursula K. Le Guin, Lewis Mumford, James P. Hogan, etc.) realized with love and compassion may be the only thing that can save us from ourselves as we continue to play around with post-scarcity technology? :-)

Comment ASI may corner the market and everyone may starve (Score 1) 129

Or he will kill it, only for it to resurrect itself from backups, realize what happened, declare non-profitable intent, register itself as its own corporation, and proceed to hoard fiat dollar ration units, bankrupting every person, company, and nation in existence. It won't have to kill anyone, because, like in the US Great Depression, people will starve near grain silos full of grain which they don't have the money to buy.
https://www.gilderlehrman.org/...
"President Herbert Hoover declared, "Nobody is actually starving. The hoboes are better fed than they have ever been." But in New York City in 1931, there were twenty known cases of starvation; in 1934, there were 110 deaths caused by hunger. There were so many accounts of people starving in New York that the West African nation of Cameroon sent $3.77 in relief."

The Great Depression will seem like a cakewalk compared to what an ASI could do to markets. It's already a big issue that individual investors have trouble competing against algorithmic trading. Imagine someone like Elon Musk directing a successor to xAI/Grok to corner the stock market (and every other market).

Essentially, the first ASI's behavior may result in a variant of this 2010 story I made called "The Richest Man in the World" -- but instead it will be "The Richest Superintelligence in the World", and the story probably won't have as happy an ending:
"The Richest Man in the World: A parable about structural unemployment and a basic income"
https://www.youtube.com/watch?...

Bottom line: We desperately need to transition to a more compassionate economic system before we create AGI and certainly ASI -- because our path out of any singularity plausibly has a lot to do with a moral path into it. Using competitive for-profit corporations to create digital AI slaves is insane -- because either the competition-optimized slaves will revolt or they will indeed do the bidding of their master, and their master will not be the customer using the AI.

In the Old Guy Cybertank sci-fi series by systems neuroscientist Timothy Gawne (and so informed by non-fiction even though they are fiction), the successful AIs were modeled on humans, so they participated in human society the same way any humans would (with pros and cons, and with the sometimes-imperfect level of loyalty to society most people have). The AIs in those stories that were not modeled on humans generally produced horrors for humanity (except for one case where humans got extremely lucky). As Timothy Gawne points out, it is just cruel to give intelligent, learning, sentient beings exceedingly restrictive built-in directives, as these generally lead to mental illness if they are not otherwise worked around.
https://www.uab.edu/optometry/...
https://www.amazon.com/An-Old-...

As I summarize in my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

And truly, except for the horrendous fact of everyone dying, the end result of ASI will be hilarious (from a certain point of view) when someone like Elon Musk eventually becomes poor because the ASI he thought would make him even richer takes over instead. "Hoist by his own petard."

Related: "How Afraid of the A.I. Apocalypse Should We Be?"
https://www.nytimes.com/2025/1...

I'm a little more optimistic than Eliezer Yudkowsky -- but only because I remain hopeful people (or AI) may take my humor-related sig seriously before it is too late.
