The system should _never_ _ever_ send "false alarm".
Sure, but how do you design a system where that never happens?
Adding ad hoc messages is an even worse idea. At some point a politician will get the bright idea of using the system for trivial notifications (like "air quality alerts") that are better handled via other channels.
And how do you design a system that cannot deliberately be misused?
Sure, it varies; about $100,000 is an average figure. But again the point isn't income, it's consumption levels.
I should have been clearer: it will bring the total cost for the war into the trillion dollar range, counting all downstream costs.
The problem with some notions of equality is that they don't take into account the fact that money, like everything else, has diminishing returns.
About ten years ago researchers looking at this question discovered that while perceptions of personal success do continue to rise with income, income above $70,000 ($98,000 in current dollars) has essentially zero impact on emotional well-being -- the actual quality of life experienced by the individual on a day-to-day basis. In other words, while our wants are flexible, our actual needs are quite modest.
This suggests to me that if you look at $98,000 as a consumption level, and factor in technologically mediated productivity changes, practical equality is achievable in the near future, historically speaking. A few countries are close to achieving this; if you look at the countries with the highest reported well-being, they're all wealthy countries with low Gini coefficients, which means they have the largest proportion of people with solidly middle-class incomes.
In general I don't see any reason to care if someone wants to become the next Elon Musk, except so far as such people are able to buy politicians with their money. When politicians are working for the super-wealthy they're putting their efforts into things that will make literally nobody happier.
Google has relaunched its map service in China after an eight-year absence, signaling a new era of cooperation between the American internet giant and local partners in fields such as artificial intelligence, reports Nikkei.
I beg to disagree, and must posit as below:
Google, like many other big [American] companies, blinked, period!
This does not surprise me at all, especially as the cost is arrived at by taking the total campaign cost divided by the number of soldiers.
Everything the US military does costs an eye-popping amount. The V-22 Osprey costs $64,000 per hour to operate. The Bradley AFV costs over $50 for every mile driven. Recoilless rifle ammunition runs between $500 and $3,000 per round. Every time an A-10 opens up its mighty 3,900-round-per-minute cannon, each of those rounds costs $150.
The current administration's plans for increases in troop levels in Afghanistan are expected to cost the US taxpayer over a trillion dollars when all the downstream costs are included. In return they hope to secure access to about a trillion dollars in mineral reserves for US companies.
First of all, Agile doesn't work in every situation unless you stretch the definition to include non-agile practices where warranted. Second, the distinction between users and testers isn't as clean as you suggest. Users *are* testers until they become habituated to the system.
And what material. Dice rolls could probably outperform slashdot readers on article summaries.
Given that some systems are worse than others at inviting operator error, you can't just assume it's not the tech because operator error was involved. However, even if the tech is as good as humans can possibly make it, that still wouldn't prevent operator error.
This kind of fault is hard to test for, because it's a non-functional requirement. You can't simply do a functional test and check off "prevent accidental message from being sent". At best you can simulate various scenarios, but those simulations are unreliable because you're dealing with testers, not people who are habituated to the system and who thus use it differently.
Clearly there were several kinds of operational faults here that may have been compounded by design flaws. But one of the operational mistakes was purely a matter of planning: not programming in a "false alarm" message to be sent after the inevitable operator error. This also suggests a design shortcoming in the system in that designers didn't anticipate the need to ever issue an ad hoc message on short notice.
I don't know. In my experience every design choice has unintended (although hopefully not unaccounted-for) consequences.
You have to add up all the foreseeable failure modes of a system with a mechanical switch -- including but not limited to a mechanical failure when you actually need to use it -- weighted by the probabilities of those failure modes. Just throwing a mechanical switch into a system because you had a failure is not engineering. In engineering you don't just focus on the desired result of a feature.
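To make that concrete, here's a minimal sketch of the weighted comparison I'm describing. All the failure modes, probabilities, and cost figures below are made-up illustrations, not data about any real alert system; the point is only that the design with the mechanical switch isn't automatically the safer one once you weight every failure mode by its probability.

```python
def expected_failure_cost(failure_modes):
    """Sum probability-weighted costs over all foreseeable failure modes."""
    return sum(prob * cost for prob, cost in failure_modes.values())

# Hypothetical design A: software-only arming, relatively prone to
# accidental sends.  Values are (probability per year, relative cost).
software_only = {
    "accidental send": (0.010, 100.0),
    "fails when genuinely needed": (0.001, 1000.0),
}

# Hypothetical design B: adds a mechanical arming switch.  Accidental
# sends become rarer, but the switch itself can jam or fail during a
# real emergency -- a new failure mode the software-only design lacked.
with_switch = {
    "accidental send": (0.001, 100.0),
    "switch jams in emergency": (0.002, 1000.0),
    "fails when genuinely needed": (0.001, 1000.0),
}

print(expected_failure_cost(software_only))  # 2.0
print(expected_failure_cost(with_switch))    # 3.1
```

With these (invented) numbers the switch actually makes things worse in expectation, because the rare-but-costly new failure mode outweighs the reduction in accidental sends. That's the sense in which bolting on a switch after one incident, without doing this accounting, isn't engineering.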
I'm not saying that a physical arming switch isn't the best option, but designing a solution to this problem is a job for someone with experience dealing with human factors in systems. I suspect having distinct armed/test modes is a good idea, but a switch alone isn't going to be enough; you'd need other indications that the system is live -- e.g. klaxons and flashing lights.
Remulak may be a small town in France, but Barcelona is neither of those things.
Your program is sick! Shoot it and put it out of its memory.