It seems to me that one of the following must be true: either true, human-level artificial intelligence will be achieved soon, or it will be achieved at some later point, or it will never be achieved at all.
I doubt the last case is true, unless civilization destroys itself somehow before it happens. In the other cases, well, there's really no excuse for not planning for how we're going to safely integrate artificial intelligence into our civilization with a minimum of harm, and certainly without destroying ourselves.
Since the advent of LLMs, though, I have seen no evidence that any such planning is taking place. Indeed, those pushing for AI development seem intent on using it mostly to save money by displacing workers with automation. I don't think that sort of thing makes the survival of civilization more likely.
We're failing this test.
From TFA:
"While computers are fundamentally deterministic systems, researchers discovered in the 1970s that they could enrich their algorithms by letting them make random choices during computation in hopes of improving their efficiency. And it worked. It was easier for computer scientists to start with a randomized version of a deterministic algorithm and then "de-randomize" it to get an algorithm that was deterministic."
"In 1994, Wigderson co-authored a seminal paper on hardness versus randomness with Noam Nisan, demonstrating that as useful as randomness can be, it is not a necessity."
That's all very vague. Who were those researchers in the 1970s? What domain, exactly, were they researching? How does one write the "randomized version" of an algorithm, and how does one subsequently "de-randomize" it?
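For what it's worth, one textbook instance of that randomize-then-derandomize pattern is MAX-CUT. A random assignment of vertices to two sides cuts each edge with probability 1/2, so it cuts half the edges in expectation; the standard derandomization is the method of conditional expectations, which fixes vertices one at a time so the guarantee holds deterministically. A rough sketch (my own illustration, not from the article):

```python
import random

def randomized_cut(edges, n):
    # Randomized version: put each vertex on a random side.
    # Each edge is cut with probability 1/2, so E[cut size] = len(edges) / 2.
    side = [random.randrange(2) for _ in range(n)]
    return sum(1 for u, v in edges if side[u] != side[v])

def derandomized_cut(edges, n):
    # De-randomized via conditional expectations: place vertices one at a
    # time, always choosing the side that cuts more edges to neighbors
    # already placed. This deterministically cuts >= len(edges) / 2 edges.
    side = [None] * n
    for v in range(n):
        gain = [0, 0]  # gain[s] = edges cut if v is put on side s
        for a, b in edges:
            u = b if a == v else (a if b == v else None)
            if u is not None and side[u] is not None:
                gain[1 - side[u]] += 1
        side[v] = 0 if gain[0] >= gain[1] else 1
    return sum(1 for u, v in edges if side[u] != side[v])

# Example: a 4-cycle with one chord (5 edges); the guarantee is a cut of
# at least 5/2, i.e. at least 3 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(derandomized_cut(edges, 4))
```

The expected-value analysis of the random algorithm is what makes the greedy deterministic one work, which is the sense in which you "start with a randomized version and then de-randomize it."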
"They simply have to prove Rand wrong no matter how painfully more evident it becomes every day they are going to end up vindicating her."
This very thing happened to me. Ayn was right; I was wrong. I didn't think being raped by a fascist was going to be much fun, but now it happens frequently, so I guess I must enjoy it.
The one day you'd sell your soul for something, souls are a glut.