Comment Re:Basically fraud (Score 1) 80
Let's stop using that fallacy, shall we?
Maybe you should lose your fixation on emissions, because reducing them has not worked, will not work even with the best will in the world, and will impoverish us all.
Instead, let's turn our attention to approaches that will actually cool the planet, won't cost trillions, and can be implemented quickly.
Time for some science and economics to be applied to solutions, not just the causes.
It's time for you to familiarize yourself with the wealth of science and economics that has already been done.
But fine, you're stubborn and need me to do your homework. To wit, the Wikipedia article:
Most AI researchers believe strong AI can be achieved in the future, but some thinkers, like Hubert Dreyfus and Roger Penrose, deny the possibility of achieving strong AI.
Following the references, we get this one on Penrose: "The mathematician Roger Penrose a few years later wrote two major books where he showed that human thinking is basically not algorithmic (Penrose, 1989, 1994)." Following those books, we get their arguments summarized as:
Deep sigh. And here we have it. Penrose is confused about... a great many things.
Well, charitably, the term "Strong AI" did put a good deal more weight on consciousness back then, rather than simply meaning human-level cognitive power, and "weak AI" implied its absence. Even so, it's not excusable to conflate the two as he does. Consciousness is orthogonal to intelligence. And if you are convinced they're the same thing, I can see why you're not getting it. I too believe (speculatively) that consciousness is incomputable, but that has naught to do with intelligence. So there are two of his claims busted by a modelling error on his part.

The Gödel thing is just a laughable set of false assumptions; it's hard to know where to begin. A formal system can VERY easily and algorithmically sidestep Gödel's incompleteness theorem, as can we. The theorem applies to one particular formal system, and a metalogic can add rules that patch the incomplete case with a new, tailored axiom (see the sketch below). And that's not even necessary: understanding and utility do not require proof, or completeness. I hope you're getting the picture. These are terrible, terrible, terrible arguments.
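To make that concrete, here is a minimal sketch in LaTeX notation, assuming F is any consistent, recursively axiomatized theory containing basic arithmetic:

\begin{align*}
G_F &:= \text{``}G_F\text{ is not provable in }F\text{''} && \text{(computable from }F\text{'s axiomatization)}\\
F &\nvdash G_F, \quad F \nvdash \lnot G_F && \text{(first incompleteness, given consistency)}\\
F' &:= F + G_F && \text{(the new, tailored axiom)}\\
F' &\vdash G_F, \quad F' \text{ still consistent} && \text{(the gap is patched, mechanically)}\\
F' &\nvdash G_{F'} && \text{(a fresh Gödel sentence appears; iterate } F \subset F' \subset \cdots)
\end{align*}

The theorem constrains each fixed system; it says nothing about the open-ended, still entirely mechanical process of extending systems, so it gives no leverage against algorithmic minds.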
It shouldn't take much to convince you (assuming some rationality and basic humility) that the wide diversity of opinion among experts in the field -- the full spectrum exists, including diametric opposites, if you are paying attention -- means you clearly cannot take one (extreme) opinion as conclusive. And that is before you even look into whether that expert's epistemic position is sound, which in this case it is not.
Since you bring up the Dunning-Kruger effect and believe it applies, I think you must be in a position where you cannot easily see the cognitive errors of luminaries. They abound. Every podcast, debate, and media appearance by one of the leading scientists reveals some thoroughly mistaken or fallacious thinking. Their opinions, therefore, are not strong evidence; there is a very good reason the appeal to authority is a named logical fallacy. And in that state, it seems, you barnacle on to some opinion you like and don't question it.
He is asserting a lower bound on the compute needed for ~human-level AGI, which no research has indicated, nor is it something you can easily treat theoretically, since "human-level AGI" is not a crisp concept.
All you have is weak evidence from our not yet having found algorithms that scale as efficiently as needed. By analogy: a prospector drills cores in his back yard, or even across his whole country, and, finding nothing, concludes that nobody will ever find gold anywhere in the world. Going from evidence that weak to claims that strong is quite bananas.
What observations would we expect if there were already AGI in labs?
LLMs are just a tool that improves efficiency, like any other.
Granted, for now. Non sequitur.
We hear this with every new invention and discovery.
Nope, not even close. And even if we did, it's not a good argument -- it's a correlation, not a causal story. Every year before 2024 has failed the 'x >= 2024' test, so will 2024 fail too? Of course it will, thousands of trials can't be wrong!
Not all things under the sun are equivalent and linear, the way this simplistic forecasting assumes.
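To see the inference pattern fail on its own terms, here is a toy sketch in Python (the year range is arbitrary; the "test" is the same x >= 2024 from above):

# Naive trend-counting: tally past trials, project the next one.
past_years = range(1900, 2024)

# Every prior year "failed" the test x >= 2024.
all_failed = all(not (year >= 2024) for year in past_years)
print(all_failed)    # True: 124 consecutive "failures"

# The trend-counter's forecast: the next trial must fail too.
# Reality:
print(2024 >= 2024)  # True: the streak breaks on the very next trial

Raw streak-counting carries no information about the mechanism, which is the entire point.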
Start here, then come up with some real counter arguments: https://www.lesswrong.com/post...
The current AI tech has too high an investment cost for it to be used for anything else.
What an absolute BS statement.
You're an idiot... Seriously. Almost every small business out there was started to fulfill a need, you clown. And every big business is just a small business that was super successful and grew. Yeah, even Walmart was just a single store in a fucking hick town in Arkansas at one point.
Irrelevant. Reread what 'need' we were talking about. You don't get to see a word and run with it in a different context. Also, wipe your mouth.
Let's hear all about how your great system will work... and it better not be some form of socialism. We already tried that at a cost of 100M+ lives.
Enforce anti-trust laws. Stop or slow mergers and acquisitions. Split corporations that are monopolies. Remove corporate moats with legislation.
Capitalism isn't the fucking problem. Government is...
Capitalism tends toward monopoly. NO SHIT. That's why it has to be managed. There isn't a single system humans have ever come up with that is "set and forget". Every hierarchy tends toward corruption and requires corrections. A government that permits, and actively encourages, monopolies is the problem.
You need to sit with these thoughts for a minute and work out the obvious inconsistency: governments are obviously needed to enforce that management, and therefore they are part of the solution.
Capitalism isn't driving inflation... A federal government that spends $3 to $4 trillion more PER YEAR than it fucking brings in, and resorts to PRINTING MONEY, is the problem.
Be wary of whatever sources are giving you these numbers. They're wrong, and also off topic.
Ma Bell is a mean mother!