Dang! Posted in the wrong place!
You cannot change this by going into an office.
6) Noise. It will still be a problem when the rest are solved.
Flying directly towards him, I suspect.
"And because Blockchain is effectively run by a network of unrelated computers, it produces a permanent ledger of transactions with which no one can tamper. Until now."
“The invention is not designed for ‘permissionless’ systems, like the cryptocurrency system supporting Bitcoin, which is open and decentralised and where the absence of a single governing authority makes absolutely permanent, or ‘immutable,’ recordkeeping vital.”
The author of the article does not seem to understand that decentralized openness is an essential part of blockchain tamper-resistance: you can rewrite your own copy of the Bitcoin blockchain, but you cannot get it accepted by the network without controlling a significant share of all global Bitcoin mining.
I wonder how many blockchain-based solutions will be sold on the grounds of tamper-resistance, when the way they use the blockchain actually negates that assumption?
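To make the point above concrete: each block commits to the hash of the block before it, so changing any past entry invalidates every later link in your copy. Here is a minimal sketch in Python (the block structure and function names are my own illustration, not Bitcoin's actual format, and it shows only the hash-linking; the consensus/mining part that decides whose copy gets accepted is not modeled):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, data):
    return {"prev_hash": prev_hash, "data": data}

def chain_is_consistent(chain):
    # Every block must reference the hash of its predecessor
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

# Build a three-block chain
genesis = make_block("0" * 64, "genesis")
b1 = make_block(block_hash(genesis), "Alice pays Bob 5")
b2 = make_block(block_hash(b1), "Bob pays Carol 2")
chain = [genesis, b1, b2]

print(chain_is_consistent(chain))  # True

# Rewriting any earlier block breaks every later link in this copy
chain[1]["data"] = "Alice pays Mallory 5"
print(chain_is_consistent(chain))  # False
```

Anyone can edit their local copy like this; what they cannot do, without the mining power, is get the rewritten chain accepted by everyone else.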
His safety claims are made up from wishful and magical thinking, trying to justify the current technology by pretending it is what it is hoped to become.
Musk's somewhat more sophisticated attempt at the same thing has been exposed as bogus, e.g.:
The name is not the problem - if it were, it could easily be fixed. The problem is human nature - even well-intentioned people find it difficult to pay attention when, most of the time, they do not have to. This is not driven by the name, it is driven by experience.
If it is just a software upgrade away, then there is no harm in waiting a bit and doing it properly, is there?
Except the 'upgrade' is not developed yet, and so certainly not tested yet. By this 'argument', superhuman generalized AI is 'just' an upgrade away.
Besides that, there are a number of good arguments that the current state of software needs additional hardware support in order to do the job properly.
The auto industry should look to the aerospace industry, which has learned the hard way how to do safety, and that industry was not killed off by it (quite the contrary, in fact.) One thing learned was that wishful thinking and vague hand-waving arguments don't count for much.
You cannot justify the irresponsible use of *current* technology by pretending it is now what it will become.
Keeping your hands on the wheel doesn't mean much. The manufacturers that require this may want us to think it is a proxy for paying attention, but it is not.
Self-driving cars would be a great improvement, but, as Tesla keeps saying, these are not self-driving cars.
Training is not going to help. It's just not in human nature to pay attention when there's nothing to do most of the time, and it must be assumed that the driver will require at least a couple of seconds' warning, and probably more, before being able to take control. It is highly irresponsible of auto manufacturers to field systems that cannot reliably give that warning, even though it is technically the driver's responsibility to pay attention. Tesla's habit of calling it beta software is a cynical attempt to avoid responsibility, which may come back to haunt them, as it shows they know the system is not ready.
I am beginning to see a case here for active blocking that could be turned off or be tuned/smart enough to permit emergency-response signals.
All "new" ideas are really just reboots of old ideas.
You are just repeating one of Herodotus' tweets.
Shit happens, but my point is that this accident wasn't "Tesla's autopilot screwed up" as is being reported. It's "(1) transport truck made an illegal and dangerous turn, (2) Tesla driver was watching Harry Potter instead of driving, (3) Tesla autopilot failed to prevent accident caused by these two chuckleheads."
Point 1 is not relevant, because all these systems must be judged on how well they respond to situations without regard to cause: as you say, shit happens. Point 2 might be relevant if this wasn't entirely predictable behavior.
The real issue is that beta software with real safety concerns is being put into the hands of people who predictably (statistically speaking) cannot or will not treat it as such - that alone is a major WTF. The only thing Tesla can justifiably complain about is that this is being reported as a Tesla-only issue, when the other manufacturers are being equally irresponsible (if not worse - my understanding is that Tesla's system is better than most, if not all.)
"The truck probably didn't see the car either."
That seems to be an overlooked bit of this case.
It is not being overlooked; it is irrelevant. The issue is whether people in general are capable of operating these systems safely, given their limitations (both the people and the technology.) The case has not been adequately made so far, IMHO. Tesla's insistence that this is beta software is both an acknowledgement that this is so and an attempt to get around the fact (irresponsibly so, IMHO.)
I look forward to autonomous cars, but I am opposed to the practice of pretending that the state of the art is more advanced than it is.