I enjoyed his books very much, but no, he was not on point.
Really? I thought the article I linked to was an insightful discussion of the topic. For example: "For a while yet, the general critics of machine sapience will have good press. After all, till we have hardware as powerful as a human brain it is probably foolish to think we'll be able to create human-equivalent (or greater) intelligence. ... it's more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans' natural equipment." I'd highly recommend reading the whole thing.
That is also magical thinking, but no more so than the idea that by throwing together circuits of complexity similar to what we have discovered in the human brain so far, we will inevitably create consciousness. That is not just wishful thinking, it's clueless. We keep finding more complexity in the brain, so it's still a moving target, which is enough to defeat such an argument on its own; and transistors are not neurons, which is enough on its own to show the folly of it.
I think you're shifting the goalposts a bit here and not responding to what I actually said. I said that it is magical thinking to believe that "human-type intelligence is unique and can never be replicated, simulated, or surpassed."
For one thing, I think it is possible that human-level intelligence has evolved elsewhere. I don't see why we would have to be unique.
Secondly, I don't know how to define consciousness, and I don't know how to define it in an artificial context. I don't know if consciousness is necessary for intelligence.
I also don't know how long transistors will remain our top computing technology. My guess is we're within a decade of no longer being able to shrink circuitry, as we're approaching physical limits that we don't know a way around. I have never claimed that silicon chips are going to lead to superintelligence, or that LLMs are going to lead to superintelligence.
What I do know is that it's an unimaginably massive universe out there. To me, it seems foolhardy to make claims that something can never happen. We are barely a century into the electric age. We are well under a century into the era of integrated circuits. Who knows what comes next? I don't feel comfortable saying "never" in that context!
I also know that exponential change is intuitively difficult to understand.
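To make that concrete, here's a toy calculation (the ~100-billion-transistor chip figure is just an illustrative ballpark, not a precise claim): starting from one of anything, it takes only a few dozen doublings to reach numbers our intuition treats as astronomical.

```python
def doublings_to_exceed(start, target):
    """Count how many doublings it takes for `start` to exceed `target`."""
    count = 0
    value = start
    while value <= target:
        value *= 2
        count += 1
    return count

# From a single transistor to the roughly 100 billion on a modern chip
# takes only about 37 doublings.
print(doublings_to_exceed(1, 100_000_000_000))  # 37
```

A linear mindset expects 100 billion steps; the exponential reality is 37. That gap is exactly why people consistently underestimate compounding technological change.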
If billions of years of evolution can produce a human brain, why can't we simulate one? If not now, in 100 years? 500 years? 10,000 years?