Comment Re:I remember what I was relieved... (Score 4, Insightful) 276

They're not stupid; they've grown up in an environment where obeying authority is an excellent survival tactic. And what they vote for is irrelevant, since regional administrators understand the consequences if they don't report a healthy majority for Putin. "Russia" is in fact essentially the last remaining European colonial empire: a vast area ruled by Moscow (and, to a degree, St. Petersburg), where most of the empire is plundered and has no say in the matter, while the core enjoys the fruits of that plundering and so is unwilling to change. (The conscripts Moscow throws at Ukraine are almost all from ethnic minorities.) Not too different from the middle class in the West.

Comment Some inaccuracies (Score 1) 35

"Current encryption will become obsolete" - only asymmetric crypto. Symmetric crypto, as well as hashes aren't affected (at least not much, you might want to double key sizes but you don't need fundamental changes.)

"Post-quantum key sizes are larger" - that depends. Basically, the algorithms are less efficient than conventional asymmetric algorithms, but there are quite a few different options out there with different inefficiencies. For example, SLH-DSA has tiny keys, but the signatures are huge and making them is very slow.

Comment Re:Creating FUD (Score 1) 84

Except for the difference between a law enforcement organisation and a private company, the difference between investigating whether someone has committed a crime and punishing them without trial, and the fact that taking away things someone has legitimately purchased is not a legal punishment for copyright infringement (and in this case, unlike copyright infringement, it is theft, at least morally, because they are losing access to their property)...

Comment Re:Productive compute (Score 3, Insightful) 76

It's definitely not productive when Google produces an "AI summary" which I do not want and will not read. Frequently, LLMs are used to deceive people into thinking a real person wrote something, or to give people who don't understand that LLMs are not AI a false impression of being informed, which is actively destructive. And then there are the nonexistent consent practices used in gathering training data. LLMs do have legitimate uses, but I'm betting there's more destruction than production coming out of them, even before you get to the environmental costs.

Comment Re:Nutshell (Score 2) 240

The trouble is, this is possible but not obvious, since artists' and authors' rights were defined at a time when LLM ingestion wasn't really a thing, so they weren't made clear with respect to it. Recording all the world's available media and spitting bits of it out whenever you want is clearly against the rules. Reading or watching it all with a human brain and then having your thoughts influenced by it is clearly in accordance with the rules. LLM training sits about halfway between these two things, not really similar enough to either to treat it as an example of either.

Comment My preference: exactly how it was when I bought it (Score 1) 35

Why does everyone have to relearn how the UI on their phone works just so that their usability people can feel like they're accomplishing something? Once I've got my phone working just how I want, I don't want it to change. Meddling with things which already work perfectly well is the bane of modern software. Add extra options? Sure. But optional options.

Comment Re:Why on Earth would you EVER announce it? (Score 2) 49

If someone is researching AGI, there's a good chance that "whatever the hell they want" is for everyone to have access to AGI; that's the most obvious motivation for doing the research. You are confusing intelligence with psychopathy. I suppose that comes from living in a society in which the media-political establishment worships obscenely rich psychopaths.

In any case, I don't know why you're so convinced that AGI would be smarter than human intelligence. I would expect the first one to be pretty basic; that's how technology normally goes.

Comment It's impossible to tell (Score 1) 105

It is literally impossible to tell whether any computer program experiences the same kind of consciousness that you do, that you can reasonably presume other people do, and that probably at least the more complex animals do (maybe even simple ones, at which point we're just guessing). To tell, you would need to work out that one observation results from the thing being genuinely conscious, while a different observation results from it merely following programming which makes it appear conscious. I cannot imagine how anyone could possibly do this. There's no way for humans to determine that a sci-fi 4000-IQ supercomputer which composes beautiful poetry and constantly talks about how it's feeling is conscious, and there's also no way for humans to determine that the browser I'm using to write this is not conscious. There is just no way to make the observations.

Comment In the distant future (Score 2) 49

Anyone who thinks we're 25 years or fewer away doesn't understand how hard the problem is, or that we're not making any progress towards it at the moment, with the use of LLMs to generate fictional capital distracting everyone. But that's no reason to think it's impossible. I'd expect it to depend mainly on whether we wreck civilisation or not.

Comment Re:They're right. What is the PLAN. (Score 2) 72

The right plan is to stop doing what the current ruling class tells us to do and distribute wealth fairly, instead of enforcing a system that redistributes more and more wealth to the super-rich. It's not the prevention of automation that we should want; it's more democracy (meaning actual political power in the hands of the masses, not just getting to choose between a few different bunches of rich guys as rulers every so often).

Comment Re:SHA-256 Purchase Receipts (Score 1) 37

There are two different cryptographic properties here. Collision resistance means it's not possible to create two different files with the same hash. The collision resistance of MD5 is broken, which is why that attack worked: the attackers were able to get Microsoft to sign something which contained their data, while at the same time producing different data with the same hash. It was an attack against collision resistance because they created one file and had a hand in the creation of the other.

"Here's a file and a hash, make your own file different to mine with this hash" is a different attack against a different cryptographic property, which as far as I know is still not possible for MD5. If you just make a file by yourself, and then tell someone else its MD5, nobody will be able to make a different file with that hash and so convince this person they have the right file. The attacker needs to have some input into your file in order to attack MD5. If they do, MD5 is broken.

Comment Re:How to infuriate, for less than $1 a day. (Score 4, Insightful) 134

Threatening to break someone's property if they don't give you money is extortion. Under any sane legal system, this would be clearly illegal. Being infuriated by extortionists is normal, even if you can easily pay the protection money. I know far better than to buy any of this crap, and I don't even live in the same country, but I'm still infuriated that the massive, generally unjust US prison system can't find room for a few people who actually deserve it, like the executives of corporations who do this.
