Comment: Re:Yeah, but who's buying? (Score 1) 698

by klkblake (#37365912) Attached to: Is There a Hearing Aid Price Bubble?
No. America's health system is not capitalist. I don't even have words for what it is. I live in Australia, and while the government does offer health insurance, pretty much everyone is on private health insurance because (shock, horror) capitalism actually works. The health care is great, and private health insurance is not much more expensive than the government option. The American system is a clusterfuck of regulatory capture and perverse incentives. It's a miracle that monstrosity ever worked.

Comment: Re:Hang on a second (Score 1) 143

by klkblake (#35460812) Attached to: New Hardware Needed For Future Computational Brain

"If you simulate a neural network, there is no need to know how it really works."

That is NOT a good thing. If we ever want an efficient AGI, we need to figure out how intelligence actually works. The vast majority of current AGI attempts are based on reasoning like: "The human brain is a neural network. The human brain is intelligent. Therefore, all I need is a sufficiently large/fast neural network, and my AI will magically become intelligent."

If you cannot explain why your AI will be intelligent without resorting to comparing it to a human brain, you are effectively trying to fly by gluing feathers to your arms. Aeroplanes do not need feathers; if you actually understand how something works, you can change its superficial structure without losing the key attributes that make it work.

Neural networks, in the context of AGI, are a waste of time and computing power, and a convenient distraction from the damn hard problems we need to solve to actually get working AGI.


("You" in the above should not be construed to refer to the parent.)
