In my book respect has to be earned, even for the President.
That's a philosophy to live by. Courtesy is free, up to a point, but respect has to be earned. It doesn't come from a position, a title, a degree, age, or wealth.
Think of the Engineers and Scientists who made the A-Bomb.
1. Don't help and you will be the reason for a sustained war costing millions of lives of mostly military personnel.
2. Make the A-Bomb that will kill ten thousand civilians and end the war.
Far more than ten thousand civilians died from the atomic bombs.
On the other hand, typical estimates for civilian casualties associated with the battle for Okinawa place the total between 140,000 and 150,000 (including suicides), depending upon the reference you consult. Military casualties from this battle were about the same as the civilian casualties.
It was the most dreadful single battle of the war, by any standard I can think of. The deaths were by no means limited to "mostly military personnel".
The numbers for total casualties associated with the (conventional) battle of Okinawa and both atomic bombings are not all that different in magnitude.
There's a lot of uncertainty over the exact numbers in both cases, especially given that the word "casualty" may mean different things to different people, making it hard to know what is being counted.
Still, it's not unreasonable to assert as many people died during the fighting for this single island as from the use of one atomic bomb, and possibly as many died during this single battle as from the use of both atomic bombs put together.
If you like, as a thought exercise, look up the population of Okinawa in WW2, determine the ratio of civilians that became casualties to the total population, and extrapolate to the population of the remaining islands of Japan. This gives you one estimate of the civilian lives that might have been lost in the event of a conventional (non-nuclear) military campaign to finish Japan.
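That thought exercise is simple arithmetic, and can be sketched in a few lines. Every number below is an assumption pulled in purely for illustration: a rough wartime Okinawa population of 300,000, the midpoint of the 140k-150k civilian casualty range mentioned above, and a rough Japanese home-islands population of 71 million.

```python
# Back-of-the-envelope extrapolation; all figures are assumptions
# for illustration, not settled history.
okinawa_population = 300_000           # assumed civilian population of Okinawa, 1945
okinawa_civilian_casualties = 145_000  # midpoint of the 140k-150k range cited above
japan_population = 71_000_000          # assumed home-islands population, 1945

# Ratio of civilians who became casualties, then scale to the home islands.
casualty_ratio = okinawa_civilian_casualties / okinawa_population
extrapolated = casualty_ratio * japan_population

print(f"Okinawa civilian casualty ratio: {casualty_ratio:.0%}")
print(f"Extrapolated civilian casualties for a conventional campaign: {extrapolated:,.0f}")
```

Even with much more conservative inputs, the extrapolated total lands in the tens of millions, which is the point of the exercise.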
It is balanced against the rights of others, including, for example, the right to take photos or video recordings of public places
In the ancient past (i.e. the Pre-Digital Age) this was sometimes seen as a reasonable balance. Today we know better.
Even in the ancient past, this was a flawed policy. Even in public places there can be expectations of privacy. For example, if one steps away from a trail and goes behind a tree while hiking in a national forest, another person should not be able to take a picture of one relieving oneself, even though this is legally a public place.
Further, there have always been issues with the ability of advanced technology devices to spy into private lots or into private homes, or other clearly private spaces, even from locations far away, locations that may actually be public. Camera zoom capabilities, parabolic microphones, possibly even phased array or extended baseline systems can all potentially be used to intrude into spaces where people reasonably expect to be private. A seemingly empty beach, for example, creates a reasonable expectation of privacy, and a photographer hidden in the dunes nearby is violating that expectation when they take pictures of a woman sunning herself.
We can only expect sensor technology to get better over time, and who knows what new sensor modalities will be developed in the future.
Hence, a policy better suited to modern technology would be that one can only take pictures or make other recordings with the explicit permission of all the uniquely identifiable subjects in the sensory field of the recording device. An exception can be made for security cameras in appropriate settings, or for recording devices intended to capture other forms of sociopathic conduct (such as recording an attempt by a con artist to engage in fraud). Another exception can be made for recording government officials in the performance of their duties. Even in such cases, strict limits can be placed on what can be done with the recordings.
It is appropriate to recognize a right to privacy arising under the 9th Amendment, superseding the right to freedom of the press in many circumstances.
It is morally wrong because it requires intervention to prevent two parties from engaging in what both deem is a mutually-beneficial contract.
This is patent nonsense, once you understand that a "mutually beneficial contract" is a fiction that already needs intervention to enforce. Contracts are pieces of paper with no value. They only have meaning and value because some government intervenes by deploying police with guns, and books with laws written in them, to prevent two parties from treating contracts as meaningless, which is what they naturally are in the absence of said enforcement.
In reality, of course, people have to have money to live. They have to have jobs, and a place to stay. Their lifespan is finite and they don't necessarily have the time to become experts at all the obscure details of Contract Law. The whole philosophical concept of people freely entering into mutually beneficial contracts is thus flawed in many practical situations.
Further, in order to get a job in the modern technological world, people have to be able to learn to use modern technological tools, which in many cases means access to software and other electronic media. Many such tools are commercial, and even the ones that aren't often require other commercial tools (such as an operating system). As the use of commercial tools is often governed by "shrink-wrap" contracts, we run into yet another situation where the myth of two parties freely entering into a mutually beneficial agreement simply doesn't work in the real world.
If you listen to a typical Bar Review audio course on Contract Law, you'll probably hear the instructor plainly state that the vast majority of contracts are never read. Thus, the legal profession knows full well that these "mutually-beneficial contracts made with the full knowledge of the involved parties" are in reality nothing of the sort.
In the USA, understanding the law relating to contracts is extremely difficult. It's not simply a question of reading a textbook on Contract Law (which would be a challenging enough task in its own right, just from the length of the typical textbook). There's also a significant consideration most people don't think about (which might not even be mentioned in the textbook): in the USA the highest law in the land is the Bill of Rights. Clearly, as the Bill of Rights is the highest law in the land, it necessarily supersedes Contract Law when the two come in conflict.
To further complicate matters, James Madison deliberately gave the US an open-ended Bill of Rights, with unspecified rights "retained by the people" (9th Amendment) and "reserved to the people" (10th Amendment). He did this to deal with the objections made by the Anti-Federalists to the original Constitution, which the Bill of Rights would then supersede, namely that any Bill of Rights would be incomplete and would leave out really important rights that the people would sooner or later need to assert. This in turn has implications for Contract Law as it is ultimately up to the people to determine the rights retained by them, and those rights will supersede the established principles of Contract Law.
Consider the following: if any rights the people might want to assert as "retained by" or "reserved to" them could be taken away by the legal profession or by the government by any means, including some mechanism of Contract Law (or any court ruling or precedent), they would no longer be "retained by" the people: a contradiction.
For a concrete example, a right that might reasonably be asserted as being "retained by the people" is the right to long term oversight over business (we might also assert a parallel right to long term oversight over government, but that can be a discussion for another day). We don't, for example, want peanut butter companies letting rats get into the peanut butter they sell: this used to happen, and hopefully as a result of public oversight over the conduct of businesses it doesn't happen any more.
A more modern example would involve the need for long term oversight over the environmental impact of a business. We don't want businesses dumping toxic waste and hence poisoning the environment, for example.
Note that the right to public oversight is not the same as government oversight. The whole point of having the Bill of Rights is that governments can be incompetent, or they can become corrupt. The presence or absence of government action does not preclude private action.
It is common for the legal profession to put terms in work contracts that prohibit people from talking about their work (you might see the phrase "trade secrets") without any apparent consideration for whether or not, or to what extent, the presence of such terms infringes fundamental rights such as the right to long term oversight. This is a clear case of a conflict between Contract Law and the Bill of Rights (the existence of such terms does not say good things about the legal profession and its relationship to the Bill of Rights).
Hence, to really understand contracts, you must also understand the Bill of Rights. However, it isn't sufficient to study what legal professionals think about the Bill of Rights, because the unspecified rights granted by the 9th and 10th Amendments are specifically retained by the people, not by the legal profession, and thus the views of the legal profession with respect to such matters are not binding. Putting this in other words: a government of the lawyer, by the lawyer, and for the lawyer is clearly not the same thing as a government of the people, by the people, and for the people.
Once we add Constitutional Law to the mix of knowledge required to understand contracts, that in turn opens up a whole can of worms, not just from the complexity of having to master yet another area of law (and reading another long textbook), but also because it is far from clear that the US legal profession's current view of Constitutional Law is actually consistent with the Bill of Rights or with some basic concepts regarding the ethical practice of law.
Putting this in hopefully clearer terms, the legal profession, as a class in society, is in a position of ethical conflict of interest with respect to the nature, scope, and form of the legal system. Contract law poses particular problems as far as legal ethics is concerned, because a) so much of the business the average legal professional engages in is contract related, creating an incentive to do the wrong thing in situations involving ethical conflict of interest on the part of the legal profession with respect to what can legitimately be put into a contract, and b) there's this built-in propaganda inherent in contract law that both parties "deem" the contract to be "mutually beneficial", which in turn means there's a tendency to assume that if something is in a contract it must be "ok", which makes it really easy to conceal the legal issues (and the ethical conflicts of interest on the part of the legal profession) that often arise in contracts.
For example, in many "shrink-wrap" software contracts there are terms prohibiting reverse engineering. Does this not seem like a violation of the right to long term oversight of business (or even a violation of the right of the human mind to be curious)? If so, what should we conclude from the willingness of the US legal profession to put such terms in so many of these contracts?
Clearly, the legal profession is in a position of ethical conflict of interest with respect to determining whether particular terms present in a contract are legitimate or actually violate fundamental rights. The widespread presence of these "do not reverse engineer" terms suggests that the profession is not doing a very good job in handling the ethics issue in this particular case, which in turn raises questions regarding the ethical conduct of the profession as a whole (this will come as no particular surprise to those familiar with the issues involved in so many other areas of law, such as intellectual property law or tort law).
In short, to really understand the law regarding contracts, one must understand not only Contract Law, but also Constitutional Law, and Legal Ethics. How many people have the time to do all that, especially given that many of the conclusions they reach when they start to examine the whole mess are likely going to be different from the "party line" or the "official propaganda" presented by the legal profession as a class in society? It's far simpler for the average person to just assume that things will work out ok even if there was something bad hidden in the contract.
Twenty some years later, we developed a weapon that could destroy a modest size city with a single bomb.
Historically speaking, the famines and diseases that usually followed warfare often destroyed the populations of entire cities and ravaged entire "nations" (look up the Thirty Years War for an example). Until relatively recent times, far more people died of disease associated with warfare than in battle.
Then there were groups like the Mongols, who executed the populations of entire cities (presumably excepting those that became slaves) if the cities refused to surrender.
Sacks of conquered cities were pretty barbaric in general, and that phenomenon was not just limited to the Mongols.
Deaths in huge numbers are nothing new in warfare.
Estimates for the total deaths resulting from the Mongol conquests range from 30 million to 70 million people (including deaths resulting from disease and famine), although it's not clear how reliable the numbers can be for a period that long ago. If those numbers are reliable, the totals are not that different from what the modern world achieved in WW2, with all its advanced technology.
What struck people at the time about WWI is that, rather than having to kill people onesy-twosey, it could now be done on an industrial scale.
Knowledgeable observers of history already understood this, as the carnage of the US Civil War had demonstrated it well before WWI. Consider the battle of Antietam, for example, which produced over 23,000 casualties in a single day. Unfortunately, the leaders of Europe failed to draw the appropriate conclusions, which I'd call criminal negligence.
Overall, there is good reason to suppose far more human beings have been killed by pre-industrial warfare (and its consequences) over the centuries than by the recent wars using modern technological devices. The Mongols, for example, managed to kill well over a hundred times as many people as the two atomic bombs combined, even if we accept the lowest estimate.
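The comparison is easy to check with rough numbers. The Mongol figures are the 30-70 million range cited above; the combined atomic bomb toll of roughly 200,000 is an assumption on my part, chosen near the middle of commonly cited estimates.

```python
# Rough magnitude comparison; all figures are loose estimates.
mongol_deaths_low = 30_000_000    # lowest common estimate for the Mongol conquests
mongol_deaths_high = 70_000_000   # highest common estimate
atomic_bomb_deaths = 200_000      # assumed combined toll of both bombings

low_factor = mongol_deaths_low / atomic_bomb_deaths
high_factor = mongol_deaths_high / atomic_bomb_deaths
print(f"Mongol conquests vs. both atomic bombs: {low_factor:.0f}x to {high_factor:.0f}x")
```

The exact factor obviously moves with the inputs, but under any plausible choice it stays comfortably above one hundred.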
The real difference between modern industrial warfare and the historical kind is not that we are killing people in larger numbers, the real difference is we can do it faster than ever before.
But advertisers don't ruin everything about the Internet
The concept that commercial content providers should be able to make their content only available with advertisements is opposed by huge numbers of people (hence the common use of home recording systems to skip the ads on TV, and the massive use of ad-blockers in browsers).
This whole situation ultimately drives yet another nail into the coffin of the whole concept of a copyright system, which hurts people with a genuine stake in copyright (bad) as well as the sleazy entities (good). It also harms the legal system, by providing yet another justification for the public to view that system with contempt.
There has to be a middle ground (or another option) between the sites supported by ad-sponsored materials and the sites supported by the public (a lot of stuff gets mirrored by universities and government agencies, which ultimately is funded by our tax dollars), and a way to cleanly separate the ad-based content from the rest.
The system that some providers have, where people pay a small amount each year ($20) to have ad-free email seems like a good model.
If the content providers aren't smart enough to figure this out, eventually the public will start demanding that some of these "essential" services be treated like utilities, and regulated by the government. Intellectual property protection is a privilege, and if enough of the children continue to misbehave and abuse this privilege it will be taken away.
It is clear from Slashdot comments that Americans have a deep built-in phobia about anything done by government. No doubt this is rooted in the "free man" culture with which the USA was founded, the fact that the USA industrialised after the worst period of laissez faire capitalist exploitation in Europe, and the myth of British tyrannical governance which fuelled the revolution; but it does get beyond reason at times.
Wrong. Only a tiny minority of people in the USA would claim that the core services that government provides could or should be provided without government.
Regulating capitalism, maintaining the roads, public education, providing or regulating public utilities, the police, long term medical and scientific research, maintaining public libraries, protecting public lands, and protecting the environment: all of these (within reason) are legitimate functions of government (as are doubtless many other things I haven't listed), and you will find relatively few people in the USA that dispute this.
The problem is not that government is bad as an abstract concept, the problem is that government as realized in the USA is sometimes corrupt, is often excessively bureaucratic, and is sometimes just plain incompetent.
Hence, there is no phobia about "anything" done by government, but rather there is deep concern about the current state of government.
A government that is massively in debt, with an out-of-control legal system driven by ethical conflict of interest (and with an ever-increasing rate of abuse of fundamental rights), is not a government that generates much trust or goodwill.
No amount of propaganda by the mainstream political parties and politicians can keep people from seeing the problems, but the fact that the propaganda exists in the first place makes people very suspicious that anything useful will be done to correct the problems. That's not a phobia, just plain common sense.
The right wing in America is pro- all the personal freedoms enjoyed by a typical white male middle class homeowner in, say, Lexington Kentucky. Freedom to do what you want with your land? Check.
Actually, this whole issue is about the freedom to do what you want with your land. In particular, it is about having the freedom to breathe clean air on your land.
Just as an individual's right to wave their fist around ends at somebody else's personal space, so too does an individual's right to create sound, light, or chemical pollution end when the products of that pollution enter another person's property. This is merely a straightforward application of long recognized (but seldom enforced) legal principles.
Further, I would assert the right to be protected from unwanted pollution is a fundamental right arising under the 9th Amendment "rights retained by the people", and thus, in protecting those rights, the government is doing exactly what it should be doing here (for a change).
If somebody wants a wood burning stove, or a big Christmas light display, or a barking dog, or a noisy piece of machinery, that's fine, however, they have a responsibility to keep any chemical, light, or noise pollution generated by these sources from entering other people's property.
Find some way of filtering the pollutants, or get permission to do this from every other affected property owner and/or resident, or don't do it at all.
In order to prevent the "ex post facto" issue, the government might offer reasonable compensation for the depreciated value of the existing stoves that do not meet the emission requirement.
"I can't think of a way to measure contribution precisely, even though there have been loads of effective worker cooperatives across the world, therefore socialism as an ideal does not reward labour."
The issue is doing this on the large scale. It works fine for many small groups, especially those that are working on fairly simple stuff without a lot of dependencies, or even some particular complex tasks that human beings have thousands of years of experience at (such as small scale farming).
I'm not sure I understand this belief that capitalism best responds to demand with supply
Look into the reasons the Soviet Union failed. They tried to determine what needed to be produced, and when it needed to be produced. They couldn't do it. There were huge shortages of many things (except, perhaps, in the stores serving the privileged few, but even those were often inferior to what the average European or American can get in a quick trip to an ordinary store). On the other hand, alongside these shortages, they also ended up with warehouses full of other stuff that wasn't being used. Find some of the papers produced by post-Cold War Soviet economists for the details.
The problems of getting different organizations within the Soviet system to work with each other were so hard that many of the organizations built their own spare parts and tools because they couldn't rely on the other groups to produce what they needed, when they needed it. That's very inefficient. They couldn't focus on what they did well, but had to try to be good at everything, which wasn't possible (nobody can be good at everything).
The reason they couldn't manage production on a collective basis is that the economy is vastly more complex than most people realize. Outside of engineering, few people realize how complex the processes are to manufacture many of the things we take for granted every day. Even 1940's technology was a LOT more complex than most people realize: look at a book on wood or metal working with machine tools to get a feel for that (you might also look at the part count for a WW2 fighter, or even a WW1 warship: you'll be surprised!).
Modern technology is even more complex: the physics of small devices is very different than the physics of large devices, for a variety of reasons (such as quantum mechanics, critical to understanding modern semiconductor devices), and the physics of very fast devices (such as cell phones and computers) is different than the physics of slow devices (as speed goes up, lumped models become impractical and one has to use fundamentally different techniques: glance at a "microwave" book if you're interested).
A typical cell phone, for example, is built with at least one custom ASIC, and that will be an enormously complicated piece of technology. A typical ASIC fabrication process takes hundreds of steps and requires a multi-billion dollar facility, and there are so many different fabs, and processes, and details involved that no single engineer understands all the details of making a typical chip! Then we have the equally complex issues of packaging and testing the thing, each of which is a speciality in itself!
There are layers upon layers of technology that go into building the tools to refine the materials to build the products we use, and the mix is constantly changing.
Then we have the enormously complex issues of logistics, i.e. getting things where they're needed, when they're needed. It's such a complex subject that in military science it's said that amateurs study tactics, professionals study logistics. Again, doing things on a small scale is one thing. Doing them on a large scale (and in a timely manner) often involves a lot of complications one doesn't encounter in toy or small scale projects.
The mechanisms of cost and price and profit provide a means by which organizations can measure their performance, and that measurement becomes the basis for improving performance, and determining when something is or is not feasible. There's a lot of factors that can cause capitalism to fail in particular cases, for a wide variety of reasons, but that's true of any system. It's an imperfect system, but it's also responsible for most of the progress in the past few centuries (as well as a lot of awful stuff, but it's not unique in that).
This doesn't mean that everything needs to be done on a capitalist basis, or that unrestricted capitalism is a good idea (history clearly shows that it isn't). Capitalist systems are notoriously bad at funding some important things, such as long term research.
You mention the examples of health care and energy. In typical cases, these are only partly done on a socialist basis. The health care systems do not usually make their own chemicals, or test equipment, or surgical tools, or all the other thousands of different things they need to do their jobs. An MRI machine, or an Ultrasound machine, or a CAT scanner, is an enormously complex piece of equipment. The socialist health care systems typically depend very heavily upon a capitalist system operating in the background to produce these complex things (and tend to have far fewer machines available, and longer wait times for access to a machine, than one finds in non-socialist health systems).
Further, don't be misled by the low costs you often hear cited for producing drugs: the reason the cost is low is because a lot of complex and expensive equipment and processes are being used to make the cost per item low (and that doesn't even begin to consider the costs associated with developing the drug in the first place: lots of time and expensive test equipment is required). That doesn't mean a socialist health care system would be able to produce the same goods for the same price.
Similarly, the energy companies rely on capitalism to produce the machines and supplies needed to produce and distribute the electricity. Lots of complex stuff. Think about what's involved in mining the copper for all those miles (thousands of miles? tens of thousands of miles?) of wire, then shaping it, insulating it, and installing it, to say nothing of the generators and the control systems!
It is far from clear that either type of organization (energy or health care) could operate on its own as a socialist entity, with any degree of efficiency, without capitalism in the background to drive the complex processes needed to produce the fundamental things needed for the organization to operate.
Another issue to think about is the measurement issue with respect to these organizations. From a social science perspective, measuring the effectiveness of these organizations is a very difficult problem and many of the claims made are dubious at best, and often badly flawed or highly misleading.
To each according to his contribution
It's never worked. There is no reason to suppose it ever will work on the large scale. Somebody has to define how to measure contribution. It can't be done. There are too many people, too many trades, too many skills, too many goods, too many variables, too many potential exchanges, too many dependencies.
Many of the aspects of contribution overlap, and others are hard to measure in their own right. How do you measure leadership and determine the relative worth of it compared to other contributions? If a minor team member finds a fatal flaw in a product that nobody else saw, allowing it to be corrected before it goes out the door, what is their contribution? Is it equal to the contribution of somebody that has spent years of their life designing the thing? Is it worth more? Where do we draw the line? How do we come up with a system that people are willing to accept, given that we have to keep large numbers of people happy with the solution?
What about people skills, how important are they? Most work is done in teams today, but how does one measure the contributions of individual team members? That's a really hard problem, with lots of bad solutions that create more problems than they solve.
The measurement problem -- measurement being the basis of science -- is ultimately one of the hardest aspects of all science. It's particularly difficult in the areas to which the social sciences are applicable. At present, we don't have the tools. It's unlikely we will any time soon.
Capitalism, on the other hand, is not about selfishness. It's about markets expressing supply and demand to provide for the efficient exchange of goods and services whose value can not be measured any other way. The need for exact measurement goes away.
Some regulation of this process is needed, because any group of human beings contains sociopaths who will try to defraud others, or not honor their given word, or will do other things that cause long term harm to the environment or to society. Any system must deal with these people. It's not capitalism that's the problem; the real challenge is coming up with better ways to deal with these people.
Over-control is actually a bad thing, because it prevents markets from expressing supply and demand accurately or in a timely manner, causing inefficient or bad decisions to be made.
The selfish person robs others at the point of a weapon, or steals by cooking the books, or engages in cons, or plunders a retirement account. This isn't the norm in capitalism. Most people are decent and honest, and the best deals are those in which both parties walk away satisfied.
If anything, a lot of non-selfish behaviour happens as a result of capitalism. There is a long history of immigrants to capitalist places sending large amounts of money back to their families in their homelands to make life for them better (and that money in turn stimulated the economy in those far away places).
In the case of the USA, the history clearly shows the Irish did this, the Chinese did this, the Japanese did this, the Jews did this, and so forth. We have lots of accounts of people receiving money from their families. Many groups continue to do this. It was capitalism that made this extremely non-selfish behaviour possible, to the long term benefit of the world as a whole.
That's why the FairTax has a prebate, in the amount of tax which would be paid up to the poverty level.
One problem here is coming up with a single number to use for this prebate, as the cost of living differs significantly from area to area, and is often HIGHER in poor areas. In poor urban areas, this is because businesses operating in those areas must deal with lower availability of capital and higher losses of goods, both resulting from the higher crime rates, and thus must charge more to compensate for the higher interest rates on loans and higher loss of goods.
In poor rural areas, the higher cost is associated with the cost of transport to get the goods out into the boonies.
Then we have the difficulty of defining poverty
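For what it's worth, the prebate arithmetic itself is trivial; as noted above, the hard part is picking the numbers. In this sketch the 23% tax-inclusive rate matches commonly cited FairTax proposals, while the poverty-level spending figure is a purely hypothetical placeholder.

```python
# Hedged sketch of the FairTax prebate calculation. The 23% rate is the
# commonly cited tax-inclusive proposal figure; the poverty-level spending
# number is a hypothetical placeholder, since defining poverty (and picking
# one national number despite regional cost-of-living differences) is
# itself part of the problem discussed above.
tax_inclusive_rate = 0.23
annual_poverty_level_spending = 15_000  # hypothetical single-adult threshold

annual_prebate = tax_inclusive_rate * annual_poverty_level_spending
monthly_prebate = annual_prebate / 12
print(f"Annual prebate:  ${annual_prebate:,.2f}")
print(f"Monthly prebate: ${monthly_prebate:,.2f}")
```

Notice that the formula has no term for regional cost of living, educational purchases, or household circumstances beyond the single threshold, which is exactly where the objections below come in.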
Another problem with this system is that many types of purchases that have long term educational value to those looking to improve their lot in society will be taxed, such as non-fiction books, non-fiction DVDs, computers, software, internet access, tools needed to learn trades, and so forth, as the acquisition of these won't fall within the prebate amount.
Yet another set of problems with this system arise from the fact that supply and demand still operate. The higher the cost of the item, including taxes, the less the demand. Each item not sold reduces the profits of the merchant, the manufacturer, and everybody else involved in the system.
Thus a sales tax directly reduces the ability of people in poor neighbourhoods to get jobs, since the merchant will have less money to spend on hiring people (or on bonuses for good work).
You also get poorer selection: the higher the sales tax, the fewer items merchants can take a chance on selling. Stores in poor areas already have much more limited selections than those in other areas, which has an impact on things like health.
Similarly, due to the reduced profits, the availability of capital will be lower, for things like repairing or maintaining or expanding the buildings the merchants operate in.
You also negatively impact the willingness of people to produce educational materials to help the poor develop job skills, or to provide services that help with this.
All this is basic economics; unfortunately, it's not going away without massive government regulation that does more harm than good.
Most people do NOT stay in poverty their entire lives (most basic books on economics will discuss this at length), but the more you create obstacles to developing marketable skills (or getting low paid jobs that allow individuals to get skills) through legislation, the more you limit this mobility. Over the long run, you create a class system.
If you're going to tax goods or services, tax things like ESPN, which nobody needs.
Agreed, but by taxing income, you allow taxation to be avoided by deferring income, having income received outside the country, etc. Tax consumption, and you reduce opportunity to avoid the tax.
No, you simply shift the avoidance of taxes to different mechanisms. Black markets become more profitable, and you see an increase in smuggling and barter, as well as services provided "off the books" or quid pro quo. No tax system can prevent this: there will always be inefficiency in any system. Think of it as entropy if you like.
Having both income and sales taxes buys you the worst of both worlds, and there are all kinds of second- and third-order negative consequences that flow from having multiple types of taxes operating at once. Far better just to tax job and inheritance income.
There's a lot that could be done to clean up inheritance tax. For example, don't allow people to hide money in trusts or similar mechanisms (or at least clean up the rules for doing this). You might also set some reasonable maximum for the money one individual can receive through inheritance, gifts, or trusts over the course of their lifetime, such as one million dollars in today's money, which is roughly what one needs to retire today without being dependent on the government; over time that amount would need to be adjusted for inflation.
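The inflation adjustment suggested above can be sketched in a few lines. The 2% annual rate is an assumed figure for illustration:

```python
# Sketch of an inflation-adjusted lifetime receipt cap: a base cap of
# $1,000,000 in today's dollars, grown by an assumed annual inflation
# rate. Both numbers are illustrative, not proposed policy values.

def adjusted_cap(base_cap: float, inflation_rate: float, years: int) -> float:
    """Lifetime inheritance/gift cap expressed in future dollars."""
    return base_cap * (1 + inflation_rate) ** years

# The cap ten years out, assuming 2% annual inflation
print(round(adjusted_cap(1_000_000, 0.02, 10)))
```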
Answering your question from a historical perspective, several things are clear:
1. Warship design is a very complex field of endeavour. It makes rocket science look childishly simple. Even the design of WW I warships was enormously complex (take a look at some of the books by Norman Friedman to get a sense of this, and even then you won't get most of the engineering details!), and things have only gotten more complex since then. It's actually really hard to get good engineering information on warships, even those from a century ago, let alone today.
2. Lots of mistakes are made in the design of ships (and their associated equipment) by the naval officers responsible for these designs. Technology changes fast, and this can invalidate even good decisions. For example, in WW2, every single major navy in the world failed to design their ships with adequate defences to meet the air threat (aircraft became a lot faster, and hence more difficult targets, in a relatively short time), and cruisers were not designed to handle the torpedo threat (torpedoes deployed in WW2 by many navies were far more effective than had been the case in WW1).
This is not necessarily due to a lack of naval professionalism or competence. The problems involved in figuring out what needs to go on a ship, and keeping up with technology (while staying off the bleeding edge), and making this happen in a large, complex organization with many political constraints, are really, really hard problems.
Note that these failures didn't make naval units useless: they still played a critical role, but more lives were lost as a result.
3. Even more mistakes are made by politicians (including politicians in uniform) in the funding of construction and renovations (or lack thereof). For example, the British Navy told the politicians the Hood needed a refit long before WW2 started, but the politicians refused to pay for it (in part because the public didn't want to support the military). Also, there is the issue of pork-barrel politics.
4. Government agencies and contractors make their own share of mistakes. Lots of ships have had problems with their systems (e.g. engines that keep breaking down), or even their fundamental design (smokestacks that blind the crew, inability to operate in bad weather) with parts or designs from one particular contractor or government agency. My favorite example: the government agencies running the US Navy in WW2 managed to deploy dysfunctional torpedoes to the entire fleet at the start of the war, using a design that was not properly tested.
5. What "everybody knows" is often proved wrong, misleading, or incomplete in combat. Manufacturing problems, maintenance issues, morale issues, weather, organizational politics, lack of experience, inadequate training, and so forth, not to mention overconfidence or arrogance, can all result in bad or incorrect decisions, causing units that "should" in some weird theoretical sense be able to defeat an opponent (or at least inflict significant damage) to lose badly. Shit happens. Look at the Battle of Savo Island from WW2 for an enlightening example. Also look at the loss of the carrier Glorious in WW2.
6. Even decisions made for good reasons will get criticized after the fact, by people that don't understand the issues or the reasoning behind the decisions (i.e. the press, the politicians, the self-appointed "experts" amongst the public, and even naval professionals with a political agenda). A lot of myths and misleading information persist as a result. Consider the "controversy" about wooden flight decks on WW2 carriers, for example. Or the question of battlecruiser design and "vulnerability".
In short, history shows that it's very difficult to generalize or draw accurate conclusions about warship design. Most people get it wrong, including many professionals, and the only real test is combat. Even then, the assessment of how designs actually performed can be difficult.
This is a point that is probably a bit hard to grasp for people whose concept of naval matters is formed from playing computer games (which don't model most of the issues that make warship design hard). You won't learn anything useful about naval operations from the ships in popular games like Starcraft.
If you wish to argue that the US military is over-funded, this isn't the right way to go about doing it. There are better approaches.
I never understood the line of thought that "it's just for personal use, so that makes it OK". It doesn't. The copyright owners have the right to charge for their film, including 'personal use' (indeed, this is almost the entire point of releasing on DVD). Today's copyright law rightly grants them this prerogative.
The issue is considerably more complex than you seem to realize. Copyright law in the USA needs to handle things like loaning a copy out to another person, without giving the copyright owner authority to control this process. This is how public libraries work, and public libraries are an essential component of a free society, as they provide a means for any citizen to get at least some education (making up for the many deficiencies in the public education system, for those with the motivation to pursue lifetime learning). Public libraries often provide much of the information citizens need to participate in and sustain a free society.
Copyright law also needs to handle library loans of audio books and DVDs. Here the copyright owner only gets to charge once for a work, yet that work can then be temporarily copied into the memory of a multimedia player (this is how all multimedia systems work: you can't avoid a copying process if you want to display something on a TV screen or monitor, or play audio) by thousands of people who take the work home and play it in the privacy of their homes (you can't get more personal than that).
Nobody worried about this in the old days, of course, when the copies were analog and thus unreliable over the long term (like the analog copy your brain makes when you watch a movie: nobody cares about that even today, except some would-be thought police). Playing a DVD at home is certainly a personal use, but should the copyright owner be able to charge for every time a copy is made simply to play it? Historically that has not been the case.
Similarly, copyright law needs to handle the copying of works for non-commercial research purposes to the benefit of society, and for situations involving public oversight over businesses, over non-profit organizations, and over the government. There is a big problem with "paywall" setups in this respect. Many people believe they have a long term right to know what their tax dollars are being spent on, and thus on a personal basis have a right to read research papers that receive public funding.
All of these varied scenarios are accommodated by the section of copyright law that provides for "Fair Use" rights. The text in the law code is fairly short, but there are many precedents that one must read if you want to understand what the courts make of this. In essence, this section of the law provides for the freedom to make copies or other derivative works (such as satire) without the permission or knowledge of the copyright owner under certain not-well-defined circumstances.
To make matters even more complicated, in the USA we have an open-ended Bill of Rights. This is what the 9th Amendment creates, by providing for unspecified rights "retained by the people", and the 10th Amendment, with unspecified rights "reserved to the people". This potentially places limits on what can be put into the law, according to what the people decide those limits are, which means neither the letter of the law nor the courts have the final say on what is or is not legal, or what is or is not a right.
A right for live human beings to engage in reasonable individual conduct is certainly high on the list of rights that might reasonably be asserted as being protected under the 9th Amendment. In this case, we would have to decide when copying without the copyright owner's permission is reasonable conduct. Many arguments can and have been made with respect to this issue, which you can doubtless find in prior Slashdot discussions.
In this respect, one key point to think about is the idea, held by many people, that receiving intellectual property protection is a privilege society grants to businesses. In that sense there can also be costs associated with receiving that privilege, including the cost of lost potential sales in certain circumstances, or the privilege can be taken away. For example, unethical and shady business practices, or abuse of the legal system, on the part of copyright owners and the organizations representing them, are likely to be viewed by many as grounds for forfeiting these privileges.
Another right we might assert as a right "retained by the people" is the right to ethical practice of law and to ethical government, and this in turn has implications for the legitimacy of current copyright law. Certainly the long length of current copyright, and the complexity of the current system, are both ethically suspect.
Having said all this, in many circumstances I expect sharing of content should NOT be done without some form of reasonable compensation ("reasonable" in many cases, but not necessarily all, being determined by the market), and what a lot of people do in this respect is abusive. Equally, many special interest groups with an interest in copyright are themselves abusive of other people's rights. It's a complex situation, and significant reform of the law is needed.
Now how much is wasted? I don't know, but the result could be shocking.
There are fundamental limitations on systems. There is ALWAYS inefficiency in ANY system (see: thermodynamics, entropy).
Mechanical systems (engines) started out at about 1% efficiency (these were systems for pulling water out of mines). They slowly got better, with the development of improved definitions and measurement techniques (the two are closely related in science) eventually leading to an improved understanding of what was really going on, but even today it's really hard and often not practical to get high efficiency out of physical systems.
Now compare that history to the situation in current science research. Determining "waste" in this context is essentially a measurement problem in social science, since even when the research projects involve physical science, those projects are carried out by human beings. Here we DON'T have good measurement techniques, or even a good definition of what efficiency in this context means. That in turn implies we can expect any approach we come up with for improving "efficiency" in this context to be ad hoc and unreliable (and we may even do more harm than good by trying to change things).
Further, what's a "good" efficiency in this context? Is 1% ok? Those mine engines were enormously popular and useful even with just this low efficiency. How do we determine what is good?
The fact is, while reading is indeed an intellectual activity, it's one that appeals to people to varying degrees. Some people simply do not find intellectual nourishment in books. Perhaps that's because they are stunted in intellect or imagination, but often they simply stimulate their brains in other ways.
While intellectual nourishment may be a valid reason for reading, it is not the primary one. The primary reason reading is put on a pedestal is that, at present, good readers can absorb vastly more information about a lot of topics important to the long term development of society (the physical sciences, the social sciences, history, law, philosophy) from reading a book than is possible from interacting with any other medium. The assumption is that if we can get people to read more, they will benefit from this efficiency. This in turn can help make people informed citizens, as opposed to ignorant sheep.
A lot of political decision making is oriented towards things that sound like good ideas in the minds of ignorant people (of whom there are many), while actually being really bad ideas with lots of negative consequences over the long term. Most introductory economics books will provide lots of good examples of this phenomenon. Getting people to do more reading is potentially the most efficient way to change this state of affairs.
A college or university degree does not make somebody educated. At best, it lays a foundation for building one's education. The actual process of becoming educated requires far more years, and exposure to a much wider variety of material, than any degree program exposes students to. With the learning technology available today, getting this education will require a lot of reading. Hence the pedestal.
Clearly, reading books alone will not build skills (other than the skill of reading), but it can lay a foundation for learning a surprising variety of skills. Also, reading books does not necessarily build critical thinking skills (it can help with this, provided one already has an appropriate foundation). There is still a place in education, in many and perhaps most subject areas, for teachers and for hands-on training.
This is not to say that mathematics or juggling are without value, but you won't learn much about economics, for example, by doing either in isolation.
Yes, I know a lot of modern economics research is math-driven, but the math involved in the basics of the subject is simple enough that one can learn a lot just by reading about it, if one is a good reader and has basic grade-school mathematics skills. Then there's the issue that mathematicians tend to work in fantasy worlds, and sometimes that can be counter-productive for understanding the real world. The balance between application and abstraction, theory and practice, is critical for true understanding of the world.
Video and audio courses are now a good alternative (or at least a supplement) to reading for many subject areas. We're slowly moving away from the book as a dominant tool for lifetime learning. Even when such courses are available, however, it isn't necessarily the case that the course will be complete enough to stand on its own without being supplemented by outside reading.
A good plan for lifetime learning balances those things that can be learned through books with those things that can't.
Make some time to read good books, then go and get other forms of intellectual nourishment as you see fit.