You are missing the point. The point is that unless the liability is with the car manufacturer, you're basically a riding scapegoat for whatever might go wrong, and realistically you'd have minimal ability to actually do anything about a sudden situation. If you're smart, you wouldn't want that as a car owner.
So it isn't a question of whether the driver-less cars can drive more safely or not; I'm sure they can. It's a question of who's in charge of the car. Is the car maker ready to accept responsibility for what the car maker has programmed it to do? If they're not confident enough to do that, then why would I let it drive for me, instead of just driving myself?
Of course it's technically possible to transmit packets with essentially 0% loss, and I'm sure there are set-ups that would work under the right circumstances. That's not the point. The point is that each and every component involved, from hardware through firmware to software, is designed under the premise that it is okay to drop a packet at any time for any reason, or to duplicate or reorder packets. Even if you get it to work, the replacement of any single component, or the triggering of some corner case you haven't tested for (some hardware counter wrapping around, or whatever else you can imagine), might suddenly blow everything up. It's just an insanely fragile system, and you need complete and total control over the implementation of every involved component, not just their specifications, in order to ensure that your system meets your spec.
Either switch protocol, or implement something on top of UDP that adds the reliability. There is no other sane way.
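If it has to stay on UDP, the usual shape of that extra layer is sequence numbers plus acknowledgements and retransmission. Here's a minimal stop-and-wait sketch in Python; the 4-byte sequence-number header, retry counts, and timeout are invented for illustration, and a real implementation would need sliding windows, RTT-based timeouts, and connection state:

```python
import socket
import struct

def send_reliable(sock, addr, payload, seq, retries=5, timeout=0.5):
    """Send one datagram and wait for a matching ACK, retransmitting on loss."""
    packet = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack[:4])[0] == seq:
                return True            # receiver confirmed this sequence number
        except socket.timeout:
            continue                   # datagram or ACK lost: retransmit
    return False

def recv_reliable(sock, expected_seq):
    """Receive one datagram, ACK it, and discard duplicates and reordered packets."""
    while True:
        data, addr = sock.recvfrom(65535)
        seq = struct.unpack("!I", data[:4])[0]
        sock.sendto(struct.pack("!I", seq), addr)   # always ACK what arrived
        if seq == expected_seq:
            return data[4:]            # deliver only the expected payload
```

Which is exactly the point: by the time this is robust, you've reimplemented the parts of TCP you actually needed.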
Honestly, why are people trying to do things that need guarantees with Python?
Oh, you got that far at least? What I wonder is, why are people trying to do things that need guarantees using UDP with no back-communication, no redundancy built into the protocol, and not even detection of lost packets? External requirement my ass; why do you accept a contract under those conditions? The correct thing to say is "this is broken, and it's not going to work". If they still want the turd polished, it should be under very clear conditions of not accepting responsibility for the end result, and those conditions should be known and understood by all decision makers at the customer. And even then I would be wary.
Otherwise, you're in a prime position for getting hit by the blame when shit hits the fan, either because it doesn't work, or because you didn't tell them that in the first place, since you are supposed to be the expert.
I suppose this is the next tick in Microsoft's equivalent of Intel's tick-tock development model. In Microsoft's case, they get redesign hubris with every other version, then spend the following version back-tracking and undoing all the things they did wrong.
Much like Windows 7 pretty much was a fix-up of Vista, Windows 9 appears to be a "corrected" Windows 8.
Obviously, you could say the same thing for any well-defined tax curve, more progressive ones as well. It doesn't have to be X%+Y. What you're really saying is that you want all incomes to be treated the same way, and that there should be no deductions.
The reality isn't going to be that simple; for starters you need to define income in some way that is both fair and can't be evaded easily. And if you are too strict about everything being level, you have lost a whole range of financial instruments that are sometimes useful for fine-tuning a market (e.g. internalising externalities like the cost of pollution). But the principle of making the system as simple and transparent as possible I can agree to.
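To make the "any well-defined curve" point concrete, here is a toy comparison of a flat "X% + Y" rule against a bracketed marginal-rate curve. All rates and thresholds are invented for illustration, not taken from any real tax code:

```python
def flat_tax(income, rate=0.30, allowance=10_000):
    """Flat rate above a fixed allowance. Algebraically this is the X%+Y
    form: rate*income + Y, with Y = -rate*allowance."""
    return rate * max(0.0, income - allowance)

def progressive_tax(income,
                    brackets=((10_000, 0.00),
                              (40_000, 0.25),
                              (float("inf"), 0.45))):
    """Marginal brackets: each slice of income is taxed at its own rate,
    so the average rate rises smoothly with income."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        tax += rate * max(0.0, min(income, upper) - lower)
        if income <= upper:
            break
        lower = upper
    return tax
```

Both are perfectly well-defined curves with no deductions; the flat one just happens to be a straight line.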
One reason why a progressive tax system is a good thing is the following: In general, you can get a higher appreciation on your assets if you have more of them. In other words, the richer you are, the faster you can increase your relative wealth. If you set up the differential equations for this, you will notice that the system is unstable, and will asymptotically reach a point where very few own almost everything. A progressive tax system counterbalances this effect, so that there can be a stable equilibrium where some are richer and some are poorer.
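A toy discrete version of that argument: two agents whose return rate rises with their share of total wealth, and a "progressive tax" that takes a fraction of each agent's gains proportional to their share, so the richer side loses more of its return. All parameters are invented for illustration, not calibrated to anything real:

```python
def wealth_shares(s1, steps, t0=0.0, r01=0.03, r02=0.02, k=0.05):
    """Return agent 1's share of total wealth after `steps` rounds.
    Each agent's raw return rate is r0 + k*share (richer => faster growth);
    a tax t0*share is then taken out of the gains (richer => bigger bite)."""
    s2 = 1.0 - s1
    for _ in range(steps):
        f1 = (1.0 - t0 * s1) * (r01 + k * s1)
        f2 = (1.0 - t0 * s2) * (r02 + k * s2)
        s1, s2 = s1 * (1.0 + f1), s2 * (1.0 + f2)
        total = s1 + s2
        s1, s2 = s1 / total, s2 / total   # renormalise shares to sum to 1
    return s1
```

With t0=0 the shares are unstable and agent 1 (who has the slightly higher base rate) ends up owning essentially everything; with a steep enough progressive tax the shares settle into a stable interior equilibrium where agent 1 is still richer, but boundedly so.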
Incidentally, this kind of concentration of wealth in a small elite is exactly what we have been seeing in recent decades. Because in practice, the tax systems in most countries aren't really progressive all the way up to the top. If you are rich enough, you either get your income through means other than a salary, which is then usually associated with a lower tax rate, or you escape taxes through more creative measures, like moving your assets to a tax haven.
What people are saying here is that those claiming that increased gun ownership leads to lower crime rates are using utterly flawed logic. That does not mean that they necessarily argue for the opposite position. In fact, the person you responded to explicitly said that "I'm not arguing for one side or the other".
Why is it so freaking hard for people on Slashdot to understand the difference between a counterargument and making the opposite claim? There is such a thing as "we do not know".
The burden of proof is on the one making a claim. So when someone wants to claim that gun ownership does not increase violent crime, that person has to prove that claim just as much as someone claiming it does. Pointing to other plausible confounding factors is a perfectly valid counterargument. You then have to eliminate those alternative explanations, just like Copernicus had to for his theory to survive.
You are of course correct that it is a very weak counterargument to simply say that "there could be explanations". But that is not what the GP did. The GP did propose a whole list of other factors that could affect the violent crime rate.
By the standard you are setting, nothing is ever disproved, because there could be variables which have not been taken into account.
It's more like "nothing is ever proven", which would be a more or less correct statement with regard to science. It is all about trying to come up with inconsistencies or alternative explanations. When you have thrown everything you've got at the theory and accounted for it all, it is usually accepted.
They use stereopsis for coarse scale depth and photometric stereo (three directions from the looks of it) for finer scale structures. And they seem to be using some tracking target to compensate for motion between these captures. Not a bad idea per se, but I don't think their numbers are particularly remarkable.
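For reference, the per-pixel core of three-light Lambertian photometric stereo (which is what three fixed light directions suggests they are doing) is just a 3x3 linear solve. This is a generic sketch, not their implementation; the light directions in the usage below are made up, and a real system additionally has to handle shadows, specular highlights, and calibration error:

```python
import numpy as np

def photometric_stereo(L, I):
    """Recover a surface normal and albedo from three intensity measurements.

    L: (3, 3) matrix whose rows are unit light directions.
    I: (3,) observed intensities under each light.
    Lambertian model: I = rho * L @ n, so solving the linear system
    gives g = rho * n; its length is the albedo, its direction the normal.
    """
    g = np.linalg.solve(L, I)
    rho = np.linalg.norm(g)
    return g / rho, rho
```

For a flat patch facing the camera (n = [0, 0, 1]) lit from three known directions, the solve recovers both the normal and the albedo exactly, which is why the method resolves fine-scale relief that stereopsis alone smooths over.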
I'm not aware of any 3D capturing technique that captures an object "from all sides", unless it's composed of multiple individual scanners whose data you then stitch into a single model, or a moving scanner (relative to the object's reference frame, so the object could be the one moving), in which case you're really building the model out of lots of tiny scans at different positions (e.g. sheets). In principle, either of these is more or less orthogonal to the choice of scanner. You could do it with this scanner; you'd just put them all in a box and calibrate the extrinsic parameters using some reference object.
The only things that I can think of that could be remotely considered scanning from "all sides" would be something that penetrates the object, like an x-ray CT scanner, ultrasonography or something of that sort, but that would be stretching it.
Also, calling their accuracy, by which they mean noise level on a perfectly flat surface, of 0.3 mm on a 35 cm (diagonal) field of view "extremely high resolution" is quite a stretch. High compared to other cheap scanners, possibly, but at least an order of magnitude worse than industrial scanners of similar format.
I think it is an interesting concept to combine photometric measurements with geometric stereo in a single handheld unit, trying to get the best of both worlds, so to speak. But it certainly feels like they are overselling it.
An authority is a person who can tell you more about something than you really care to know.