Don't disregard manufacturing and management savings. SpaceX seems determined to be by far the least expensive way to put stuff in LEO, and if we can put lots more stuff in LEO we can do a whole lot of things with spacecraft. As Stalin reputedly said about the Red Army in WWII, "Quantity has a quality all its own."
If there's a general perception that a stock is undervalued, people try to invest money by buying stock. This raises the price until people in general don't think it's undervalued any more.
A polygraph can be good at determining if somebody's nervous, which is not a good indicator of truthfulness. It might be useful in questioning or interrogation by telling the interrogator when to press and when to let slide. This assumes that the guy being interrogated can't manipulate the machine into inaccurate readings, such as showing nervousness when being asked about something basically innocuous.
Republic comes from the Latin "res publica", and normally means a government or country not headed by a hereditary monarch. That leaves a lot of room for all sorts of governing systems. The US is a republic and a democracy (or at least used to be, and can be again). The UK is a democracy but not a republic. Nazi Germany was a republic but not a democracy. North Korea is not a republic (with three Kims in a row, I'm calling it a monarchy) and not a democracy.
My observation is that, the more democratic-sounding adjectives (other than a people or place name) are tacked on to "Republic", the less likely it is to be a democracy. You do not want to live in a Democratic People's Republic.
Patents aren't the only problem. Cell phones are held to exacting standards that must be certified (at least in the USA), an expensive and time-consuming necessity. Take a hackable phone (I believe Nexus phones are) and design your new OS to run on that.
No. Avoid those hats! There is an MIT study showing that such hats actually amplify certain frequencies of potential mind-control radiation.
It looks like JackieBrown is paying for a capability that is mostly unused but occasionally wanted. That's considerably different.
FWIW, I haven't noticed adware on my iPhone, although there may be some in the App Store I could acquire if I wanted. There may be other smart phones with different OSes that don't support adware.
Unfortunately, smartphones are more restricted than general-purpose computers, so it's much harder to customize them.
So, Microsoft Research has developed a method to tell when a programmer is in a condition that tends to create bugs. That's nice. What happens with this?
I already know when I'm in a condition that tends to create bugs. It won't help there. It could be passed on to others, such as management.
Now, is management going to take action to reduce the amount of time I'm more vulnerable to causing bugs, by improving the office environment or discouraging overtime or making reasonable deadlines? Or is management going to find this a good evaluation tool and ding my performance reviews if I'm spending too much time in that condition? Is excessive stress going to become a thought crime?
The only way this would be useful is if management recognized that they needed to reduce buggy time and were rewarded partly on that basis. Anybody confident their employer will do that?
I am thinking of a company* that sold customizable software to large customers who demanded the ability to customize. It also sold new features, which would have to be created by the development staff. At one point, the sales and development staff wound up in different cities, so the technical people weren't able to sit on the overenthusiastic sales people.
The sales people found that selling new features (a) made it easier to close the sale, and (b) since custom development cost money, raised the dollar amount of the sale. Therefore, by overselling custom features, a salescritter could increase not only the probability of a big commission but the size of the commission. The development staff had different ideas about getting flooded by requests for custom features: it destroyed the ability to move the main product forward in any coherent fashion, and resulted in large delays in actually shipping and (this was important) getting paid. This continued until top management stepped in and did something that actually made sense in the situation (imagine my surprise).
Smart people will know how to game the reward system, no matter in what field.
*I would like** to formally announce that this is not a company I worked at and signed an agreement not to talk about.
**For reasons I'm not discussing here.
You do realize that we've done that over and over. We keep coming up with programs that do code generation. We introduced automatic programmers that represented machine codes with mnemonics and kept track of memory locations (these are now called assemblers). We introduced programs that would translate scientific calculations (FORTRAN) and business logic (COBOL) into machine code. We introduced "fourth generation languages" in the 1980s. All of these, and many more, were honest attempts to remove the need for programmers. They made life easier for programmers, but failed at removing the need for programmers. Windows Workflow Foundation isn't going to work any better.
There is a step in any programming process where somebody has to translate ambiguous specifications into some sort of formal, precise notation. Once we have the formal, precise notation, we can have programs take it the rest of the way. We need an actual human to do that translation, though, and that human is a programmer no matter what the title or position says. If architects can create formal specs, they're programming. If it takes code monkeys, they're programming.
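A toy sketch of the point above: once the spec is formal and precise, a program can mechanically take it the rest of the way. The spec format and names here are invented for illustration; the hard part, turning an ambiguous requirement into this spec, is the human's job.

```python
# A tiny "formal spec": a class name and its fields, precisely stated.
SPEC = {"name": "Point", "fields": ["x", "y"]}

def generate(spec):
    """Emit Python source for a class described by the spec.
    This is the mechanical step -- no human judgment required."""
    args = ", ".join(spec["fields"])
    body = "\n".join(f"        self.{f} = {f}" for f in spec["fields"])
    return (
        f"class {spec['name']}:\n"
        f"    def __init__(self, {args}):\n"
        f"{body}\n"
    )

source = generate(SPEC)
namespace = {}
exec(source, namespace)        # run the generated code
p = namespace["Point"](3, 4)
print(p.x, p.y)                # -> 3 4
```

The code generator removed the typing, not the programming: somebody still had to decide the spec said "a Point has an x and a y," and that decision is the translation step that every generation tool since the assembler has left to a human.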
There's the basic theory, in which I'm a consequentialist, and practice, in which I take certain virtues that seem to me to be wise under the circumstances. The theory does have practical difficulties (such as figuring out what the consequences are and how good or bad they are), just like each and every other halfway reasonable ethical theory on the frippin' planet.
Requiring that the ends must justify the means requires that I have absolute knowledge of how good all the consequences are. That simply isn't going to happen. By that criterion, I could essentially do nothing, except that I couldn't just do nothing. You are correct in suggesting that in practice it's easier to try to avoid directly harming people, but that isn't the theory. In practice, if you're looking at considerable predictable harm opposed to a probable good that may be great, you should go cautiously. I'd bet that it's more common for people to get their virtues or deontology screwed up and commit great wrongs than it is for consequentialists to do so by miscalculating consequences. Himmler thought that the people responsible for exterminating Jews were heroes. This was not the result of any consequentialism, but rather a virtue theory that assumed certain things about races that I disagree with.
So what do you do when you find out you screwed up? What do you do in any ethical system? You figure out what you did wrong so you can try to avoid doing it again, and you deal with the current situation.
If I were in the UK, and received such a letter, it would be interesting information for me, because I like to know how my connection is being used. It's only an informational letter, after all.
In most RPGs, there's a very big difference between a player lying about a die roll and a GM lying about a die roll. The GM is not the players' opponent, and the GM's primary responsibility is to manipulate the situation so everybody has fun, regardless of what dice he or she may roll. The player character has certain abilities, and it hurts the game if the player fudges them.
The fact that you cannot foresee all consequences is not a fundamental error of consequentialism, any more than not knowing all applicable virtues and which to prioritize is a fundamental problem or not knowing exactly what God wanted in every specific instance is a fundamental problem. It's a complication. All moral systems have to deal with human fallibility, and the lack of omniscience is one fallibility that consequentialism has. I've met some very, very good people, but none who were absolutely always acting morally. I'm not interested in arguments for ethical systems that require perfection, since everybody fails in that case.
Let's consider a situation in which somebody else will be badly hurt unless you lie. A consequentialist will weigh the harm done in each case. Somebody who believes in the virtues of telling the truth and helping others will have to make a choice of virtues. A deontologist will have to decide what God requires in that instance. A sanctimonious asshole will try to remain morally pure, disregarding the consequences to others. In this case, we see that the consequentialist has more philosophical support for ambiguous situations than a virtue ethicist or deontologist.
The Versailles peace treaty, while harsh, was not as punitive as German nationalists painted it. The social crisis had passed well before the Nazis came to power. The Nazis, notably Goering, courted the industrialists to establish an economic power base.