Comment Re:We need humility, not arrogance (Score 1) 137

I feel like your recent replies are written by an LLM. They are so predictable. Post with citations to articles or expert opinions? Ignore.

Other opinions that you disagree with? Respond with vague claims to authority ("my studies") and increasingly angry posts calling people "clueless," "stupid," "idiot," "deranged," "cultists," and "demented" -- and that's just within the last 24 hours.

It can be fun to engage in arguments on Slashdot and elsewhere, but are you OK? You seem to be increasingly agitated, and you're not getting into as much technical detail as you used to. I have enjoyed some past exchanges with you.

Cheers.

Comment Re:We need humility, not arrogance (Score 1) 137

Exactly. There is no credible theory and there is no known mechanism.

False. There is a known mechanism. The human brain.

but there is a lot of indicators that say we probably cannot. Obviously, stupid people do not understand indicators. But the fact of the matter is that we do not understand how general intelligence works physically, that we only observe it in some humans and that we do not even know how life works

More magical thinking from you. The idea that human-level intelligence is somehow ineffable, undefinable, unknowable is the height of mysticism. This is like some gnostic pastiche of feelings that human-level intelligence is unique and special.

I'll stand with technical advancement every single time as opposed to those who say "impossible."

Comment Re:We need humility, not arrogance (Score 1) 137

Well geez drinkypoo, that was my entire entrance to this conversation!

IMHO, the real magical thinking is the belief that human-type intelligence is unique and can never be replicated, simulated, or surpassed.

I object to the never part! I do happen to believe that human-level AI is possible, and I think there's a _chance_ that it arises during my lifetime, but I'll straight up say that the timing is just a guess that I have no confidence in.

Comment Re:We need humility, not arrogance (Score 1) 137

That's the opposite of insightful discussion, because it's the proponents of machine sapience who have the good press now... and it is universally bullshit.

Hah! I guess that is a matter of your perspective. Sam Altman is (rightfully so, as a huge huckster) public enemy #1, with people trying to attack his home with Molotov cocktails. Merriam-Webster's 2025 word of the year was "slop," as in AI slop. You have large crowds protesting data centers and AI across the country. The entire state of Maine just banned building more. The county I live in just put a total moratorium on new construction. A few weeks ago a bunch of anti-AI, pro-environment signs popped up all over the interstate entrances near me. Maybe within Silicon Valley or the tech industry bubble there is good press, but it's far from universal!

Regardless, I would again recommend reading the article. It was a fun read, especially considering it was written 30+ years ago.

Billions of years of evolution producing a human brain does not speak for or against our ability to simulate one. But so far, we can not do that, so the irrelevance of the question is overshadowed by the irrelevance of asking it. Maybe someday we can, but we can't yet. We don't know enough to even know whether or not we can. That's not an argument against trying, but it's evidence that we still lack enough information to do it, whether we otherwise have the technology or not.

I just don't agree with that. I think it's a fair proposition to say anything that exists can be built. (Ok, I'll admit I'm opening a massive can of worms with that one!)

Even starting from your position -- that we don't know enough -- I still stand with the side that says "never" is a weaker position than "possibly."

Comment Re:We need humility, not arrogance (Score 1) 137

I enjoyed his books very much, but no he was not on point.

Really? I thought the article I linked to was an insightful discussion of the topic. e.g.: "For a while yet, the general critics of machine sapience will have good press. After all, till we have hardware as powerful as a human brain it is probably foolish to think we'll be able to create human equivalent (or greater) intelligence. ... it's more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans' natural equipment." Etc. I would highly suggest reading it.

That is also magical thinking, but no more so than the idea that by throwing circuits with complexity similar to that which we have discovered in the human brain so far, we will inevitably create consciousness. That is not just wishful thinking, it's clueless. We keep finding more complexity in the brain, so it's still a moving target which is enough to defeat such an argument on its own, and transistors are not neurons which is also enough to prove it's a folly.

I think you're shifting the goalposts a bit here and not responding to what I actually said. I said that it is magical thinking to believe that "human-type intelligence is unique and can never be replicated, simulated, or surpassed."

For one thing, I think it is possible that human-level intelligence has evolved elsewhere. I don't see why we would have to be unique.

Secondly, I don't know how to define consciousness, and I don't know how to define it in an artificial context. I don't know if consciousness is necessary for intelligence.

I also don't know how long transistors will be our top computing technology. I guess we're within a decade of no longer being able to shrink circuitry, as we are coming up on physical boundaries that we don't know a way around. I have never claimed that silicon chips are going to lead to superintelligence, or that LLMs are going to lead to superintelligence.

What I do know is that it's an unimaginably massive universe out there. To me, it seems foolhardy to make claims that something can never happen. We are barely a century into the electric age. We are well under a century into the era of integrated circuits. Who knows what comes next? I don't feel comfortable saying "never" in that context!

I also know that exponential change is intuitively difficult to understand.

If billions of years of evolution can produce a human brain, why can't we simulate one? If not now, in 100 years? 500 years? 10,000 years?

Comment Re:We need humility, not arrogance (Score 2) 137

True. That "singularity" idea is completely disconnected from reality. It is essentially a belief that a machine will become God, and it is a belief with absolutely no supporting evidence.

When you just make stuff up and argue against a strawman, it becomes awfully easy to win arguments.

The term "singularity" used in a technological sense goes back to the early days of computing--Von Neumann (this was news to me!). Interestingly, in 1993 NASA held a conference on "cyberspace" and future issues. https://searchworks.stanford.edu/view/3001391. Link to the paper https://ntrs.nasa.gov/api/citations/19940022855/downloads/19940022855.pdf

Vernor Vinge:

"Within 30 years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable?"

Let's see: 1993 + 30 = 2023. A few months after ChatGPT 3.5 was released! A funny coincidence (or not?), and nobody would claim that ChatGPT is superhuman, but Vinge was on point. You might read the article; it's (deliberately) provocative, but it's interesting.

You frequently accuse those you disagree with of magical thinking. IMHO, the real magical thinking is the belief that human-type intelligence is unique and can never be replicated, simulated, or surpassed.

AI growth has more or less tracked computing capacity over the decades. I'm excited to see what comes next.

Comment Re:We need humility, not arrogance (Score 1) 137

For your information, the very definition of "bug" is "implementation does not match specification". There is no other one that makes the least bit of sense.

What hubris!

Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."

Here's an ACM article on the epistemology of bugs: https://dl.acm.org/doi/full/10.1145/3662730

What's your definition of "insight"?

Comment Re:Um...so what? (Score 1) 88

You can argue that's concerning for the future, and on that I'd agree, but speaking as a nerd, it's still fucking cool.

It kind of feels to me like it's hard to get excited about things these days. What I mean is: if you think a particular breakthrough is cool, well, you didn't consider problems X or Y. And it was done better by Z. And it's going to destroy the environment. And people on the opposite side of the political spectrum like it, so that's a problem too. Etc.

I'm not sure if we're really in an unusually negative-thinking period of history, or if it just feels that way to me, in the current political climate, in my own stage of life, etc.

Either way, as a GenXer it's kind of relaxing! Back to the days when caring about things was SO not cool. ;-)

Comment Re:Robot locomotion (Score 1) 88

David Brin is a science fiction author who wrote a collection of books in the "Uplift" universe. The 30-second summary is that there are countless sentient species around the universe and a galactic civilization spanning billions of years. Before humanity came along, no known example of evolution producing advanced intelligence had ever been found. Instead, advanced species "uplifted" primitive species to intelligence through gene modification, selective breeding, etc.

One of the species in the books was kind of like an organic wheelchair that had evolved in a close-to-weightless environment -- short stub legs to accelerate and hardened wheel-like structures with some kind of magnetism involved.

Very interesting books and very creative in exploring just how different evolution in radically different environments might be.

Comment Re:Robot locomotion (Score 1) 88

I agree that we shouldn't feel the need to confine robot designs to humanoid or humanoid-esque shapes and mechanics, but I would disagree that there are only a very few niche cases where walking on legs makes sense. Legs (limbs more generally) have millions of years of evolution behind them, and they work very well in many situations (walking on the ground, running, climbing trees, ascending a cliff, swimming, etc.). They may be considered jacks-of-all-trades, and they may be less efficient than wheels or treads when moving on flat, generally regular surfaces, but limbs are incredibly versatile.

I'm waiting for humanoid spider robots...

Comment Re:Um...so what? (Score 3, Informative) 88

Machine faster than human. They may also be physically stronger. The only thing this shows, is that humanoid robots have become reasonably efficient (assuming no recharge breaks).

I am so puzzled by this kind of reaction. This is the first time something like this has ever happened! Check out "WABOT-1" -- I'm not sure if it was the first bipedal robot to walk at all, but it's a product of the 1970s. Look at the progress in 50 years!

Then check out Honda's ASIMO from around 2010 -- only about 15 years ago.

I think going from WABOT to ASIMO to an autonomous robot that can run 26 miles in under 2 hours -- in half a century -- is absolutely amazing. The engineers who designed and built it should be applauded.
