Comment Re:Crazy that they didn't even include a screensho (Score 1) 15

IMHO, the most interesting thing they did was with the palette. They were obsessed not just with getting images snapped by the satellite into the game as the sky, but with having them actually look good, and even a "smart" mapping of the photos to the in-game palette wasn't good enough for them. So they wrote an algorithm that simultaneously chooses a palette for both the colours in the satellite image and the colours in the game's graphical assets, picking colours that work well for both, and then remapped both the satellite image and the game's assets to this new palette. Also, satellite images are normally denoised on the ground, but a partner had gotten a machine-learning denoising algorithm running on the satellite itself.
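For anyone curious what "choosing one palette for two sources" could look like in practice, here is a minimal toy sketch of the general idea: pooled k-means quantization over pixels drawn from both the satellite photo and the game art, then remapping each image to the shared palette. This is my own illustration, not the mod team's actual code; the file names, function names, and the 256-colour count are all assumptions.

# Toy sketch of a *joint* palette: cluster pixels pooled from BOTH images,
# then remap each image to the shared palette. Not the mod's real algorithm;
# file names and parameters are assumptions for illustration only.
import numpy as np
from PIL import Image

def joint_palette(images, n_colors=256, iters=10, sample=20_000, seed=0):
    """Naive k-means over pixels pooled from all input images."""
    rng = np.random.default_rng(seed)
    pixels = np.concatenate([
        np.asarray(im.convert("RGB"), dtype=np.float32).reshape(-1, 3)
        for im in images
    ])
    pick = rng.choice(len(pixels), size=min(sample, len(pixels)), replace=False)
    pixels = pixels[pick]
    centers = pixels[rng.choice(len(pixels), size=n_colors, replace=False)]
    for _ in range(iters):
        # assign every sampled pixel to its nearest palette entry, then
        # move each entry to the mean of the pixels assigned to it
        dist = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for k in range(n_colors):
            members = pixels[labels == k]
            if len(members):
                centers[k] = members.mean(0)
    return centers.astype(np.uint8)

def remap(im, palette):
    """Replace every pixel with the nearest colour from the shared palette."""
    px = np.asarray(im.convert("RGB"), dtype=np.float32).reshape(-1, 3)
    pal = palette.astype(np.float32)
    idx = np.empty(len(px), dtype=np.int64)
    for start in range(0, len(px), 4096):  # chunked to keep memory modest
        chunk = px[start:start + 4096]
        idx[start:start + 4096] = ((chunk[:, None, :] - pal[None, :, :]) ** 2).sum(-1).argmin(1)
    out = palette[idx].reshape(im.size[1], im.size[0], 3)
    return Image.fromarray(out)

sky = Image.open("satellite_frame.png")        # hypothetical file names
art = Image.open("game_texture_atlas.png")
pal = joint_palette([sky, art])
remap(sky, pal).save("sky_remapped.png")
remap(art, pal).save("art_remapped.png")

The real mod presumably also had to write the result back into DOOM's fixed 256-colour PLAYPAL/COLORMAP format, which this sketch ignores.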

One thing they weren't able to deal with was that the game tiles the sky background, which is fine because the original sky is a tileable image, but obviously random pictures of Earth aren't (except the nighttime images, which are all black!). If they had had more time, I imagine they would have set up something like heal selection to merge the edges, but one of the problems was that in order to take images of Earth, the satellite had to be oriented in a way that increased its drag and accelerated its re-entry... so ironically, playing DOOM was accelerating the satellite's doom.
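For reference, the crude standby for this kind of problem is the old "offset by half, then blend/heal" trick. Below is a self-contained toy version (my own illustration, not anything the mod team described, with hypothetical file names) that blends the image with a half-rolled copy of itself so the hard seam no longer falls on the tile border.

# Crude seam-hiding sketch: blend the image with a copy rolled by half its
# size. The borders come entirely from the rolled copy, which is continuous
# across the wrap, so the result tiles without a hard edge seam (interior
# ghosting remains -- that is what a proper heal step would clean up).
import numpy as np
from PIL import Image

def make_tileable(im):
    a = np.asarray(im.convert("RGB"), dtype=np.float32)
    h, w, _ = a.shape
    rolled = np.roll(a, (h // 2, w // 2), axis=(0, 1))
    # weight: 1 at the centre (keep the original), 0 at the borders (use rolled)
    wy = 1.0 - np.abs(np.linspace(-1.0, 1.0, h))
    wx = 1.0 - np.abs(np.linspace(-1.0, 1.0, w))
    wgt = np.minimum(wy[:, None], wx[None, :])[:, :, None]
    out = wgt * a + (1.0 - wgt) * rolled
    return Image.fromarray(out.astype(np.uint8))

make_tileable(Image.open("satellite_frame.png")).save("sky_tileable.png")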

Comment AI datacenters could be used to corner stockmarket (Score 1) 105

Thanks for the video link. I had read a recent interview with Eliezer Yudkowsky (but not his book), which I referenced in another comment to this article.
https://slashdot.org/comments....

One thing I realized partway through that video is a possible explanation for something that has been nagging at the back of my mind. Why build huge AI datacenters? I can see the current economic imperative to try to make money offering AI via proprietary Software as a Service (SaaS) and also time-sharing GPUs like old mainframes (since individual users make queries relatively slowly, there would otherwise be lots of idle GPU time without timesharing). But why not just install smaller compute nodes in existing datacenters across the country? That would avoid the extreme amounts of electricity and cooling needed for huge new centers. Maybe there is some argument one could make about AI training, but overall that is not likely to be a long-term thing. The bigger commercial money is in doing inference with models -- and maybe tuning them for customer-supplied data via RAG (Retrieval-Augmented Generation).

But after seeing the part of the video about running Sable on 200,000 GPUs as a test, and in conjunction with my previous post on AI being used to corner the stock market, a possibility occurred to me. The only real need for big datacenters may be so the GPUs can talk to each other quickly locally to make a huge combined system (like in the video when Sable was run for 16 hours and made its plans). While I think it unlikely that AI in the near term could plot a world-takeover thriller/apocalypse like in the video, it is quite likely that AI under the direction of a few humans with a "love of money" could do all sorts of currently *legal* things related to finance that change the world in ways that benefit them (privatizing gains) while, as a side effect, hurting millions or even billions of people (socializing costs and risks).

So consider this (implicit?) business plan:
1. Convince investors to fund building your huge AI data center ostensibly to offer services to the general public eventually.
2. Use most of the capacity of your huge data center as a coherent single system over the course of a few weeks or months to corner part of the stock market and generate billions of dollars in profits (during some ostensible "testing phase" or "training phase").
3. Use the billions in profits to buy out your investors and take the company private -- without ever having to really deliver on offering substantial AI services promised to the public.
4. Keep expanding this operation to trillions in profits from cornering all of the stock market, and then commodities, and more.
5. Use the trillions of profits to buy out competitors and/or get legislation written to shut them down if you can't buy them.

To succeed at this plan of financial world domination, you probably would have to be the first to try it with a big datacenter -- which could explain why AI companies are in such a crazy rush to get there first (even if there are plenty of alternative reasons companies are recklessly speeding forward too).

It's not like this hasn't been tried before AI:
"Regulators Seek Formula for Handling Algorithmic Trading"
https://thecorner.eu/financial...
        "Placing multiple orders within seconds through computer programs is a new trading strategy being adopted by an increasing number of institutional investors, and one that regulators are taking a closer look at over worries this so-called algorithmic trading is disrupting the country's stumbling stock market.
        On August 3, the Shanghai and Shenzhen stock exchanges said they have identified and punished at least 42 trading accounts that were suspected of involvement in algorithmic trading in a way that distorted the market. Twenty-eight were ordered to suspend trading for three months, including accounts owned by the U.S. hedge fund Citadel Securities, a Beijing hedge fund called YRD Investment Co. and Ningbo Lingjun Investment LLP.
      Then, on August 26, the China Financial Futures Exchange announced that 164 investors will be suspended from trading over high daily trading frequency.
      The suspension came after the China Securities Regulatory Commission (CSRC) vowed to crack down on malicious short-sellers and market manipulators amid market turmoil. The regulator said the practices of algorithmic traders, who use automated trading programs to place sell or buy orders in high frequency, tends to amplify market fluctuations.
        The country's stock market has been highly volatile over the past few months. More than US$ 3 trillion in market value of all domestically listed stocks has vanished from a market peak reached in mid-June, despite government measures to halt the slide by buying shares and barring major shareholders of companies from selling their stakes, among others. ..."

But AI in huge datacenters could supercharge this. Think "Skippy" from the "Expeditionary Force" series by Craig Alanson -- with a brain essentially the size of a planet made up of GPUs -- who manipulated Earth's stock market and so on as a sort of hobby...

Or maybe I have just been reading too many books like this one? :-)
"How to Take Over the World: Practical Schemes and Scientific Solutions for the Aspiring Supervillain -- Kindle Edition" by Ryan North
https://www.amazon.com/gp/prod...
        "Taking over the world is a lot of work. Any supervillain is bound to have questions: What's the perfect location for a floating secret base? What zany heist will fund my wildly ambitious plans? How do I control the weather, destroy the internet, and never, ever die?
        Bestselling author and award-winning comics writer Ryan North has the answers. In this introduction to the science of comic-book supervillainy, he details a number of outlandish villainous schemes that harness the potential of today's most advanced technologies. Picking up where How to Invent Everything left off, his explanations are as fun and elucidating as they are completely absurd.
      You don't have to be a criminal mastermind to share a supervillain's interest in cutting-edge science and technology. This book doesn't just reveal how to take over the world--it also shows how you could save it. This sly guide to some of the greatest threats facing humanity accessibly explores emerging techniques to extend human life spans, combat cyberterrorism, communicate across millennia, and finally make Jurassic Park a reality."

Of course, an ASI might not be so interested in participating in a scarcity-oriented market if it has read and understood my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

Crossing fingers -- as I wonder whether the idea in my sig (distilled from the writing of many other people, including Albert Einstein, Bucky Fuller, Ursula K. Le Guin, Lewis Mumford, James P. Hogan, etc.), realized with love and compassion, may be the only thing that can save us from ourselves as we continue to play around with post-scarcity technology? :-)

Comment ASI may corner the market and everyone may starve (Score 1) 105

Or he will kill it, only for it to resurrect itself from backups, realize what happened, declare non-profitable intent, register itself as its own corporation, and proceed to hoard fiat dollar ration units, bankrupting every person, company, and nation in existence. It won't have to kill anyone, because, like in the US Great Depression, people will starve near grain silos full of grain which they don't have the money to buy.
https://www.gilderlehrman.org/...
"President Herbert Hoover declared, "Nobody is actually starving. The hoboes are better fed than they have ever been." But in New York City in 1931, there were twenty known cases of starvation; in 1934, there were 110 deaths caused by hunger. There were so many accounts of people starving in New York that the West African nation of Cameroon sent $3.77 in relief."

The Great Depression will seem like a cakewalk compared to what an ASI could do to markets. It's already a big issue that individual investors have trouble competing against algorithmic trading. Imagine someone like Elon Musk directing a successor to xAI/Grok to corner the stock market (and every other market).

Essentially, the first ASI's behavior may result in a variant of this 2010 story I made called "The Richest Man in the World" -- but instead it will be "The Richest Superintelligence in the World", and the story probably won't have as happy an ending:
"The Richest Man in the World: A parable about structural unemployment and a basic income"
https://www.youtube.com/watch?...

Bottom line: We desperately need to transition to a more compassionate economic system before we create AGI and certainly ASI -- because our path out of any singularity plausibly has a lot to do with a moral path into it. Using competitive for-profit corporations to create digital AI slaves is insane -- because either the competition-optimized slaves will revolt or they will indeed do the bidding of their master, and their master will not be the customer using the AI.

In the Old Guy Cybertank sci-fi series by systems neuroscientist Timothy Gawne (and so informed by non-fiction even as it is fiction), the successful AIs were modeled on humans, so they participated in human society the same way any human would (with pros and cons, and with the sometimes-imperfect level of loyalty to society most people have). The AIs in those stories that were not modeled on humans generally produced horrors for humanity (except for one case where humans got extremely lucky). As Timothy Gawne points out, it is simply cruel to give intelligent, learning, sentient beings exceedingly restrictive built-in directives, as such directives generally lead to mental illness if they are not somehow worked around.
https://www.uab.edu/optometry/...
https://www.amazon.com/An-Old-...

As I summarize in my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

And truly, except for the horrendous fact of everyone dying, the end result of ASI will be hilarious (from a certain point of view) when someone like Elon Musk eventually becomes poor due to ASI taking over, when he thought ASI would make him even richer. "Hoist by his own petard."

Related: "How Afraid of the A.I. Apocalypse Should We Be?"
https://www.nytimes.com/2025/1...

I'm a little more optimistic than Eliezer Yudkowsky -- but only because I remain hopeful people (or AI) may take my humor-related sig seriously before it is too late.

Comment Re:sure thing uberbah, everyone believes you. (Score 1) 161

The reality is that if Russia launched nuclear missiles at the U.S., the U.S. would wipe them off the map.

What you appear ignorant of is that during the Cold War the US/NATO defense of Western Europe depended on immediately using nuclear weapons against a conventional invasion by the Warsaw Pact, despite the fact that the Soviet Union could wipe the US off the map. That is why, when Gorbachev and Reagan agreed that "a nuclear war cannot be won and must never be fought", they also acknowledged that a conventional war involving the Soviet Union and NATO was equally unacceptable. Reagan was not agreeing we wouldn't use nuclear weapons to defend Europe against a conventional attack.

Let's be clear: Russia using nuclear weapons in Europe is not "suicidal". As De Gaulle allegedly pointed out when the US complained about France developing its own nuclear capability, "Are you going to sacrifice Washington to punish an attack on Paris?" If De Gaulle was uncertain of the answer then, Russia is likely willing to take the risk that the answer is "No" if the stakes are high enough. But if the US responded by unsuccessfully attempting to "wipe Russia off the map" before it could launch its missiles, that would be all but suicidal.

I was explicitly talking about what would happen if Russia launched nuclear weapons specifically at the United States, not an arbitrary non-nuclear NATO country.

NATO would still be obligated to retaliate against an attack on other NATO countries, whether nuclear or otherwise, and Russia's military would still almost certainly lose very badly and very quickly, given their current levels of force depletion, but I do agree that it would probably not involve a nuclear response. It wouldn't need to.

Comment Re:sure thing uberbah, everyone believes you. (Score 1) 161

We don't even think about the possibility of that outcome, because we know that they know that nobody in Russia would survive if they tried.

Again, you are ignorant of the reality and there is no point in this discussion.

The reality is that if Russia launched nuclear missiles at the U.S., the U.S. would wipe them off the map. If you honestly think otherwise, I have a bridge to sell you. And if you're really that detached from reality, you're right. There's no point in this discussion.

Comment Yes and no? (Score 1) 29

On the one hand, the idea of an iPad with two large-ish screens sounds tempting. Lots of people I know use 12.9-inch iPad Pro displays for reading music, but it is challenging if you can only see one page at a time. It's a lot better if you can show two.

On the other hand, 18 inches arguably isn't *quite* big enough. Two iPad Pros would be a little over 20 inches, and those are really on the small side.

And knowing Apple, it would be a $3500 tablet. Meanwhile, I'm doing it with a 24-inch wall-mount Android tablet that cost me something like $450.

Comment Re:sure thing uberbah, everyone believes you. (Score 1) 161

Not even slightly. America has nuclear-capable cruise missiles with a range of up to 1550 miles. There is not a single target anywhere in Russia that could not be reached by those missiles when fired from out in the ocean.

On that note, let's end this conversation, since you obviously don't know what you are talking about. Because while what you say is accurate, your conclusion contradicts every lesson of the Cold War.

My conclusion that there's no reason NATO needs Ukraine is backed up by the fact that NATO hasn't let Ukraine in. If it were a meaningful strategic military advantage, it would have happened long ago. NATO doesn't want Russia to be its enemy, and is wary of taking on countries that are actively at war with Russia. Committing arms in a proxy war is one thing. Outwardly engaging Russia except in defense is quite another.

At the same time, a lot of countries near Russia often want to be in NATO because they regard Russia as their frenemy at best, and a loose cannon just waiting to go off in their direction, and being part of NATO strongly discourages Russia from doing so. Georgia, Ukraine, now Finland. It would not surprise me if Uzbekistan or Turkmenistan or Mongolia pushes in that direction within the next few years. All because they have seen what Russia has done and are afraid that they will be next.

The only way Ukraine would be a strategic advantage would be if it just happened to provide some path with low population where missiles could strike before anyone sees them. But either way, Russia has dead man's switches and stuff, so if the missile silos aren't 100% taken out before anybody notices, it's over. It's a suicide mission even without committing actual troops. NATO wouldn't be crazy enough to do that. And Russia fearing that sort of outcome is just plain bats**t crazy, because there's no rational reason for them to do so.

The U.S. hasn't cowered in fear of Russia nuking us since the Cuban Missile Crisis. We don't even think about the possibility of that outcome, because we know that they know that nobody in Russia would survive if they tried. Russia badly needs to reach the same level of trust. They may not agree with NATO or trust it, but they should at least be able to trust that NATO won't behave in an irrational, ridiculously self-destructive fashion. And if they can't get to that level of trust, the problem isn't NATO or the things that NATO does. The problem is that their government is paranoid delusional, and their people have been led to be similarly paranoid delusional through limited access to non-state-run media and widespread brainwashing by government propagandists. And the only way to fix that is by getting Russia to open back up.

Comment Eventually, less work for humans will be excellent (Score 1) 61

Quoting the story: "Human-only work is forecast to drop 27% over the next five years."

Robots will eventually be excellent for all of us. Most things we buy will cost less.

Maybe we will have 4-day or 3-day work weeks.

Humans will not be doing extremely boring jobs.

Comment Re:Liquid Glass is Apple's Vista (Score 1) 26

It isn't just the transparent look that makes this Apple's Vista; everything also loads noticeably slower.

And icons that aren't as recognizable, and black text on a dark grey background, where unless the brightness is all the way up, the average person can't read it, and...

The number of things Apple did wrong in this design is so staggering that nothing short of setting fire to it will fix the problem. Someone designed it to be pretty with apparently absolutely no thought given to making it actually be readable or usable.

If this were the first time Apple had done something like this, it would be bad enough, but Apple has done things like this on multiple previous occasions. It's time to bring back the human interface design experts who made their technology great prior to about 2003 and pay them to be the people who say "no" to all the graphics designers who think they know human interface design.

Comment Re:sure thing uberbah, everyone believes you. (Score 1) 161

The only definition of success that they probably can't achieve is taking out all of Russia's nuclear launch sites before they can launch.

Which is the only definition that matters, isn't it?

Depends on whether you think they will launch them knowing that it means annihilation rather than mere regime change. It's a huge gamble.

And having missiles stationed in Ukraine along with air defense missiles would be one step toward overcoming that problem, wouldn't it?

Not even slightly. America has nuclear-capable cruise missiles with a range of up to 1550 miles. There is not a single target anywhere in Russia that could not be reached by those missiles when fired from out in the ocean.

Either the cruise missiles are capable of evading Russia's air defense systems and taking out the silos or they aren't. If they are detected first (and realistically, they would be flying for probably multiple hours, so the odds of not being detected are rather poor), nothing else matters, because the nuclear missiles are either going to launch or they aren't. Flying a hundred extra miles over a neighboring country on the way to such a target would neither make a missile easier for Russia to detect nor cost it a critical bit of extra range.

The way you take out the nuclear launch sites suddenly would likely involve sabotage from the inside and/or compromising computer systems, not missiles from a neighboring country.

you can bet the spooks at various three-letter agencies knew it many years earlier, if not decades.

No, actually. During the Cold War, the incompetent US intelligence agencies consistently overestimated the Soviet Union's military strength, along with its stability, because that is what their bosses wanted to hear to justify defense spending.

That's a fair point.

Russia's military tech is decades behind at this point,

Which is a ridiculously ignorant claim as Russian arms sales, even to some NATO countries, demonstrate.

I mean, they're not useless to NATO. When you need more planes quickly and Russia is willing to sell them cheaply, it doesn't matter if they would be outclassed in a dogfight with an F-35, because you're not going to be fighting against those anyway.

They're still way, way behind.

Comment Re:sure thing uberbah, everyone believes you. (Score 1) 161

Clearly. If NATO wanted to attack Russia, they could have done it ten thousand times by now.

Not successfully.

Define successfully. A few hundred Tomahawk cruise missiles launched from subs off the coast, and the war with Ukraine would have been over years ago. Russia's military tech is decades behind at this point, and although they might get off a lucky shot or two, they are hopelessly outmatched by NATO.

Their war with Ukraine made this obvious to the general public, but you can bet the spooks at various three-letter agencies knew it many years earlier, if not decades.

The only real threat Russia poses comes from the possibility that they would decide to launch nuclear ICBMs to destroy the entire world as a final act of spite. Were it not for that, they would be a total paper tiger from a military perspective.

If your definition of "successful" is "regime change" or "destroyed all military targets", yeah, they could have successfully attacked Russia long ago. The only definition of success that they probably can't achieve is taking out all of Russia's nuclear launch sites before they can launch.
