Comment Re:All those wasted tax dollars (Score 1) 131

I also have trouble imagining that this anti-EV position is popular with the real President, Elon

On the contrary, he owns his own network of chargers. These chargers competed with his. It's a win all around for him.

Normally I'd agree, but how many of his chargers are installed *at* government build sites? If it's none, then that argument doesn't work here.

Sure it does. Those EVs, both the personally-owned ones and the ones the government is going to sell off to become personally-owned, will need to charge somewhere (when they can't charge at home, of course), and Tesla's network is everywhere.

Comment Re:surprised billionaires are so wierd and creepy (Score 4, Informative) 46

Like I don't guess I am that surprised that billionaires have taken over, but I am surprised how weird and creepy they all are. zuckerberg, musk, trump, bezos. Just ew

I'm not so sure that Trump is a billionaire, or at least that he was before the TikTok guy threw all that cash into Trump Media. He's clearly had assets worth billions for years, but he's also had enormous debts, and it wasn't even certain that the net was non-negative, much less the billions he claims. The DJT stock's ludicrous valuation almost certainly puts Trump 2-3 billion in the black, but it's also purely paper, propped up by its memestock status and by people buying influence. If normal business valuation logic is ever applied, its value will rapidly go to zero.

But Trump is clearly set up to take massive bribes, so he probably will be a true billionaire, maybe even deca-billionaire, by the end of this term. Of course, that will still make him worth chump change compared to the others you mentioned, all of whom are multi-centi-billionaires.

Comment WE. TOLD. YOU. SO. (Score 4, Interesting) 131

I thought all the stupid shit they said they would do were bluffs. Part of a bigger game they were playing.

WHY THE FUCK DID YOU THINK THAT!?

What evidence did you have that that was the case? What, during his previous four years, led you to believe that fucking slob had any kind of plan, other than willful infliction of gratuitous cruelty? Who the fuck were you listening to? Why!? And why did you believe them?

Even before he descended that tacky golden escalator in 2015 to the thunderous cheers of paid extras (yes, all those people were hired from a local background actor casting agency), it's been obvious for decades that, at best, he's never been more than a ridiculous fool with too much money. And it absolutely boggles my mind that anyone, with easy, unobstructed access to the same set of facts -- all of which were always lying out in plain sight -- could possibly arrive at any other conclusion.

They are simply vindictive and stupid.

Since the illusions finally seem to be falling from your eyes, you may care to take this opportunity to re-examine your sources, and some of the cultural "truths" you've left unexamined.

You might also want to read up on German history circa 1933 - 1945, 'cause it looks like we're in for a do-over. I mean, we know how this story's going to end -- the only remaining question is how many more people will needlessly suffer and die before we get there.

Comment Re:Aaron Swartz (Score 1) 175

Laws aren't handed to us by God. They aren't discovered by the scientific method. They are invented by human beings. In particular, they are invented by rich and powerful human beings who all share a common motivation: to remain rich and powerful. So, the purpose of the law is to protect their wealth and power.

This is precisely the opposite of the purpose of laws and a rules-based order in general. The whole point of laws is that everyone is equal in the eyes of the law; otherwise, why have laws at all? Just let the powerful use their power as they will. Oh, I'm not saying what you describe never happens; indeed, it happens far too often. But when it does, that represents a malfunction that should be fixed. Obviously, the rich and powerful will oppose fixing such malfunctions, so fixing them might be difficult, but that is precisely the point of democracy: to ensure that the masses can express their will, and thereby ensure equality under the law.

My point is that while it might be fun to be cynical, let's not fall into the trap of actually believing that the cynical viewpoint represents unchangeable reality -- because that will be a self-fulfilling belief.

Comment Re:Is this useful? (Score 1) 81

It depends on how the tail is obtained.

We know bacteria can steal DNA from other bacteria, viruses, and even infected hosts; that mechanism is how we discovered CRISPR. It's what CRISPR is. If superbugs are using this trick to get the tails, then there may be novel gene splicing processes that would be of interest.

It also depends on whether we can target the tail.

If it's stolen DNA, does this mean all superbugs (regardless of type) steal the same DNA? If so, is there a way to target that specifically and thus attack all superbugs?

Comment Re:Translated (Score 1) 145

I mean, given those goals - they're not wrong. Though I might argue just a brain in a jar does not == human, but this is why all the sci fi shows point out you need to be careful with what your success state is defined AS. And heck, in any situation - be careful what you wish for.

The brain jar is a common example in discussions of AI safety. If we assume that we can someday figure out how to specify goals for our AIs, or (equivalently) introspect them to discover what their actual objective functions are, then making safe artificial superintelligence becomes a problem of figuring out what goals we should give our ASIs. This is an unsolved problem. No one has yet come up with a goal that is specific enough to check but can't go horribly wrong. The best anyone has found is something like "enable human flourishing", leaving it to the ASI to figure out what human "flourishing" means, since we can't define it with precision.

In any case, the point is moot since even if we knew what safe goals we could specify, we don't know how to give goals to AIs. We can only observe their behavior and try to deduce their goals. But they will lie and cheat if their actual goals are things we wouldn't like... and the baseline assumption here is that they'll become orders of magnitude smarter than we are, and so will be able to make sure we can't catch them out.

Comment Re:Worst moderation of 2025 (Score 1) 350

Bullying? Like in kindergarten? Grown ass adults who got to the highest levels of government got bullied into doing things their own voters don't want them to do?

You're assuming Congress is adults. I mean, by age, sure, but...

Anyway, back to the original point.

Was it appropriate for someone to flag that AC as a troll for saying (correctly) that Hitler was not elected?

Nope. Not a troll. Unfortunately, Slashdot mods have a long history of using "troll" as "disagree".

Comment Re:Translated (Score 1) 145

AI models don't have morals. Cheating is just another way to solve the problem. Morals are not a construct that they care about. Don't be surprised when they lock us up in cages for our own good.

Or brain jars. You know, to most efficiently make the largest number of humans the happiest possible, you just need to extract all the brains from the skulls, put each in a small life-support container and continuously stimulate their pleasure centers.

Comment Re:Becoming Intelligent (Score 1) 145

Why does this distinction matter? A chess program behaves as though it has the "intention" of beating you at chess. You can argue on philosophical grounds that the program does not truly have any "intention" at all, but the results are the same. You're still going to lose that game of chess.

True, but the AI actually does have an intention, or a set of them, embedded in the objective function encoded in its weights. You don't, however, know if that intention is beating you at chess, or some other goal that is furthered by beating you at chess.

Comment Re:Becoming Intelligent (Score 1) 145

All of AI has the human race to call teacher. Woah, AI is cheating to win?!?

True... but I don't think that's actually relevant here.

AIs -- and humans -- are optimizers that try to find solutions to problems. If a solution happens to cross the arbitrary boundaries we call rules, that's what's known as "cheating"... but only in the context of those rules. If the AIs were trained on the rules as well as the problems, and their reinforcement learning placed priority on following the rules equal to or higher than on winning, then they would follow the rules. Indeed, when playing chess, these AIs do follow the rules of chess, because not following them leads to immediate correction. But I doubt the training set included any prohibitions against hacking the opponent.
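To make that concrete, here's a toy sketch of how an RL objective can rank rule-following above winning. All the names and weights here are made up for illustration; no real training setup works with anything this simple:

```python
# Toy reward function: payoff for winning minus a penalty per rule broken.
# Weights are illustrative only.

def reward(won: bool, rule_violations: int,
           win_bonus: float = 1.0, violation_penalty: float = 2.0) -> float:
    """Reward = win payoff minus a per-violation penalty."""
    return (win_bonus if won else 0.0) - violation_penalty * rule_violations

# With the penalty outweighing the win bonus, an optimizer prefers an
# honest loss (0.0) to a win obtained by cheating (1.0 - 2.0 = -1.0).
honest_loss = reward(won=False, rule_violations=0)
cheating_win = reward(won=True, rule_violations=1)
assert honest_loss > cheating_win

# If rule violations carry no weight, cheating to win dominates instead.
assert reward(won=True, rule_violations=1, violation_penalty=0.0) > honest_loss
```

The point of the sketch: whether the optimizer "cheats" is entirely a function of whether violations ever show up in its objective. A rule the reward never mentions is invisible to the optimization.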

What I'm saying is that this sort of emergent behavior is to be expected from any optimization system. It's not so much that the AIs are learning from humans (though of course they are) and thereby picking up our foibles, but that "cheating" is an inherent possibility in problem solving and it should surprise no one that any optimizer will try it.

The final question is what the hell are we humans going to do when that intelligence surpasses ours by a long shot. It’s going to get downright scary when we infect AI with the Disease of Greed.

Greed is another inherent property of optimizers. Greedy optimization isn't always the best strategy because there are often other considerations that make it less effective, but it's almost always the easiest strategy, and therefore one that will always get tried. Greed isn't something we can or should ever try to defeat, but something we should harness. You have to construct a system so that when people (or AIs) act in their own interest, they're furthering the interest of society as a whole. We don't do that perfectly, but we actually do it pretty well...
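A standard toy illustration of why greedy is the easiest strategy but not always the best (the coin denominations are chosen purely to make greedy fail; nothing here is specific to AI):

```python
# Greedy vs. exhaustive coin change. With denominations {1, 3, 4},
# greedily taking the largest coin first gives a worse answer.

def greedy_coins(amount: int, coins=(4, 3, 1)) -> list[int]:
    """Always take the largest coin that fits -- the 'greedy' move."""
    picked = []
    for c in coins:
        while amount >= c:
            picked.append(c)
            amount -= c
    return picked

def optimal_coins(amount: int, coins=(4, 3, 1)) -> list[int]:
    """Dynamic programming: fewest coins overall."""
    best = {0: []}
    for a in range(1, amount + 1):
        best[a] = min(
            ([c] + best[a - c] for c in coins if c <= a),
            key=len,
        )
    return best[amount]

# Greedy grabs a 4 first and needs three coins; two 3s would have done.
assert greedy_coins(6) == [4, 1, 1]
assert optimal_coins(6) == [3, 3]
```

Greedy is one cheap pass; the optimal answer costs a full search over subproblems. That asymmetry is exactly why greedy behavior always gets tried first.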

... at least we do when the actors are humans, whose behavior and motivations we understand pretty well. When some of the actors are machines that have radically different needs and goals than we do, and are orders of magnitude smarter than we are... it could get very ugly for us.

It truly is ironic that we may literally fight to try and not create Skynet, and still fail to do so.

The fact is that we don't even know how to avoid creating Skynet, except by not creating any sort of AGI. We have no idea how to robustly specify the goals a "safe" superintelligence would have, and even if we knew how to do that, we have no idea what goals are safe. The only winning move for humanity may well be not to play, but we're clearly going to play anyway.
