Netflix does not have to pay ATT/Comcast/Verizon a single dime. All it needs to do is [...] buy proper transit
So they don't need to pay those three, but they must pay someone, for what amounts to transit to themselves. Transit was the arrangement where a small ISP bought connectivity from a large ISP to get its small number of users to The Internet across unequal networks. Peering is the arrangement where the networks are roughly even.
It was always from the consumer's point of view. Only recently did the concept of charging content providers for transit appear. If my ISP is charging content providers for transit, I want my rebate/discount; they are getting paid twice for the same thing.
Good luck finding enough programmers that can write code with that level of parallelism.
Just build an AI that programs AIs in a highly parallel fashion. What could possibly go wrong?
The question at the heart of the article is not about object avoidance... it's about choices between objects. And that requires identification.
Such dichotomies are not realistic. When defined, they are always spelled out like "The car is going 300 mph in a 30 mph zone, with lines of parked cars on both sides, and low visibility. A passing plane loses a set of seats, unoccupied, which land facing away from the car at a distance of 50 feet. At the same time, a kid runs out from between parked cars. The road is completely blocked by the child and the seats. What do you hit?"
Yeah, it takes magical couches appearing from nowhere and impossible starting parameters of wildly unsafe driving and such. Any "realistic" scenario leaves one clear course of action that is best for all. For 99.9% of cases, braking within your lane is the best action. That last 0.1% could always get the wrong answer and the sum total would still be much better than leaving humans in control.
This would not work, for the simple reason that there is no way to move safely on most roads if you assume that everyone else is a malevolent actor, waiting to slam into you as soon as you place yourself in a position from which you cannot avoid them.
You can assume rational actors. They aren't, but it's a workable assumption.
But sometimes the other car will blow a left tire, or the other driver will have a heart attack and slump onto the wheel, and an accident will be about to happen.
Unless you are on a narrow road with barriers on both sides and all lanes full, with no emergency lanes or available space at all, then your "worst case" is still trivial.
If I were programming it, I'd program it to minimize damage. That means avoiding a head-on, and not avoiding traffic with a small speed differential. If the person beside you swerves into you, that's trivial: you hold speed and steer into them, both cars traveling forward, and nobody is injured. The human response is to swerve away from them when it's not safe to do so, and kill themselves hitting a tree or wall, while the heart attack victim kills themselves on another tree ahead.
But, for some reason, saving multiple lives and minimizing damage is undesirable because "OMFG, the autonomous car deliberately hit the other car!!!!"
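The "minimize damage" idea above can be sketched roughly like this. Everything here is illustrative (the maneuver names, the speeds, the energy model), not any real vehicle's logic; the point is just that expected damage tracks closing speed, so a side-swipe with adjacent traffic beats a head-on or a tree.

```python
def closing_speed(own_v, other_v):
    """Relative speed along the line of impact (1-D simplification)."""
    return abs(own_v - other_v)

def pick_maneuver(maneuvers):
    """maneuvers: list of (name, own_v, other_v) for each possible impact.
    Damage is modeled as proportional to closing speed squared, i.e. the
    kinetic energy of the relative motion."""
    return min(maneuvers, key=lambda m: closing_speed(m[1], m[2]) ** 2)

# Car doing 60 mph; the driver beside us (also ~60 mph) drifts into our lane.
options = [
    ("swerve into tree",        60, 0),    # closing speed 60
    ("head-on with oncoming",   60, -55),  # closing speed 115
    ("steer into drifting car", 60, 58),   # closing speed 2
]
print(pick_maneuver(options)[0])  # → steer into drifting car
```

Under this toy model the "deliberately hit the other car" option wins by two orders of magnitude in relative kinetic energy, which is the commenter's point.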
When a collision is imminent, the car should try to avoid hitting anything. If it cannot, it will have a fail mode, which I bet dollars to donuts will be "Maintain heading and reduce speed". Why? Because that is the safest setting in many situations, because it is what you want everyone else to do, and because it is easy to mandate it by law.
It's also better than the human response in almost all situations. I know of multiple people who killed themselves avoiding animals on the road. I know of nobody who died from hitting one. The statistics aren't kept in a manner that makes it easy to see if my experience is typical or atypical.
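The fail mode described above ("maintain heading and reduce speed" when no clear path exists) amounts to a two-step priority rule. This is a toy sketch of that assumed logic, not any vendor's actual implementation:

```python
def collision_response(clear_paths):
    """clear_paths: steering options with no obstacle, e.g. ['left'].
    Returns the chosen action as a string."""
    if clear_paths:
        # First priority: avoid hitting anything at all.
        return f"steer {clear_paths[0]} and brake"
    # Fallback: predictable behavior that other road users can anticipate,
    # and that a law could plausibly mandate.
    return "maintain heading, maximum safe braking"

print(collision_response(["left"]))  # → steer left and brake
print(collision_response([]))        # → maintain heading, maximum safe braking
```

The value of the fallback is less in its crash physics than in its predictability: everyone else on the road can plan around a car that brakes in its lane.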
My concern here is how controlled that lab environment really is. I did my fellowship in an ID research group that had a BSL3 lab in the unit, and given the number of containment breaches they had, you should seriously question the wisdom of conducting that kind of research at all.
The real problem is that the security levels are lax: they are design rules, not operational ones. You can design for any level and get a certification, but even while the gear is working, if the processes and people don't work well you end up with a significantly reduced actual safety level.

Properly done, breaches are events like "a meteor struck the building, destroying the air handling systems and creating a large breach in the envelope." Triple-redundant power isn't uncommon, but no amount of redundancy can ever be 100%. 100% is impossible; there's always a chance of something almost impossible happening. But when the most likely breaches you can identify are an airplane crashing into the building during a hurricane, or an alien invasion, you're doing well.
the herpesviruses which have all kinds of special viral proteins that are designed [...]
Intelligently designed?
The rule on staying alive as a program manager is to give 'em a number or give 'em a date, but never give 'em both at once.