
Comment "Flying colors" describes a failure condition. (Score 1, Interesting) 42

It seems to me that one of the following must be true. Either true, human-level artificial intelligence will be achieved

  1. within the next 20 years, or
  2. sometime between 20 and 100 years from now, or
  3. not until at least 100 years from now, or
  4. never.

I doubt the last case is true, unless civilization destroys itself somehow before it happens. In the other cases, well, there's really no excuse for not planning for how we're going to safely integrate artificial intelligence into our civilization with a minimum of harm, and certainly without destroying ourselves.

Since the advent of LLMs, though, I have seen no evidence that any such planning is taking place. Indeed, those pushing for AI development seem intent on using it mostly to save money by displacing workers with automation. I don't think that sort of thing makes the survival of civilization more likely.

We're failing this test.

Comment Advanced Lateral Kerfliggening, 3d. Ed. (Score 2, Interesting) 31

From TFA:

"While computers are fundamentally deterministic systems, researchers discovered in the 1970s that they could enrich their algorithms by letting them make random choices during computation in hopes of improving their efficiency. And it worked. It was easier for computer scientists to start with a randomized version of a deterministic algorithm and then "de-randomize" it to get an algorithm that was deterministic."

"In 1994, Wigderson co-authored a seminal paper on hardness versus randomness with Noam Nisan, demonstrating that as useful as randomness can be, it is not a necessity."

That's all very vague. Who were those researchers in the 1970s? What domain, exactly, were they researching? How does one write the "randomized version" of an algorithm, and how does one subsequently "de-randomize" it?
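To be fair, there is at least a textbook illustration of what they might mean, though I'm guessing at the article's intent here: quicksort with a random pivot is the classic "randomized version" of a deterministic algorithm, and replacing the coin flip with a fixed rule (median-of-three, in this sketch) is a crude example of "de-randomizing" it. This is my own sketch, not anything from TFA:

```python
import random

def randomized_quicksort(xs):
    """Quicksort with a uniformly random pivot.

    The random pivot makes the *expected* running time O(n log n)
    on every input, instead of depending on the input's order.
    """
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)  # the random choice
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

def deterministic_quicksort(xs):
    """The same algorithm with the randomness removed: the pivot is
    chosen by a fixed rule (median of first, middle, and last element).
    """
    if len(xs) <= 1:
        return xs
    candidates = sorted([xs[0], xs[len(xs) // 2], xs[-1]])
    pivot = candidates[1]  # median-of-three, no coin flips
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return deterministic_quicksort(less) + equal + deterministic_quicksort(greater)
```

Whether that's what the 1970s researchers were actually doing, the summary doesn't say; the Nisan-Wigderson result is about something much deeper (trading circuit hardness for pseudorandomness), which a sorting example doesn't begin to capture.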

Comment Re:I was really hoping we'd withdraw head from ass (Score 1) 14

Since any likely countermeasures to hypersonic missiles would likely require giant frickin' laser beams, which in turn will require amazing new forms of storage and rapid transfer of electrical energy, maybe we can kill two birds (one almost literally) with one stone.

Comment Re:Sure, but what's the use-case at scale? (Score 1) 26

I'm not a copy editor, but I'm not sure that filtering outright, intricate, willful, quadruple-down fabulism out of "copy" is a traditional part of a copy editor's job. I think you may be underestimating how much work it will be to filter out this stuff, and how much of it will slip through. It's designed to look real, and to pass inspection. These systems inspect their own output to make sure its bullshit is as intractable as possible.

Comment Re:Won't help much until (Score 1) 119

"They simply have to prove Rand wrong no matter how painfully more evident it becomes every day they are going to end up vindicating her."

This very thing happened to me. Ayn was right; I was wrong. I didn't think being raped by a fascist was going to be much fun, but now it happens frequently, so I guess I must enjoy it.

Comment Re:This just in: (Score 2) 112

(Sorry sorry, after still more investigation, it turns out he IS an English major after all, and then he took his English major to a professorship at Georgia Tech, where they're so into English majors that the English major there isn't even called "English", it's called something like Society and The Human Languages of Yesteryear, or maybe The Thing You Certainly Shouldn't Have Come Here to Major In, Idiot.)
