My problem is that arithmetic is both a concept and a skill, and most teaching methods emphasize the concept. This would be like teaching you to ride a bike by offering constructive suggestions to improve your intuition about how it works, while minimizing the amount of time you actually get to be on the bike.
What is being lost is that basic math is a skill, and like all skills it needs repeated, constant practice sustained over multiple years. I see way too many students who can't do basic arithmetic after going through these "concept" oriented classes. Or to put it more strongly: if learning basic math isn't a boring, repetitive chore, then you aren't doing it right.
In the worst of all possible worlds, straightforward skill practice is replaced by repetitive practice of the "concept" building exercises -- so it's still boring, and you don't even get that win. I see this often enough that I would rather ditch the "concept" building and just drill the arithmetic than train mathematical illiterates. It would be much like doing repetitive practice sessions of "envisioning yourself on a bike" without ever being on a bike. Imagine if we treated reading like math. You wouldn't be allowed just to read the books; you would have to "read" them in the correct way, showing that you had built up your mastery of parsing text from letters, to syllables, to words.
A kid shouldn't be allowed out of sixth grade if they cannot quickly answer the following questions:
40 - 16
8 * 9
1/2 - 1/3
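For the record, the three drills above can be checked exactly with a few lines of Python and the standard-library `fractions` module (my own illustration, not part of the original list):

```python
from fractions import Fraction

# The three drill questions, checked exactly.
print(40 - 16)                          # 24
print(8 * 9)                            # 72
print(Fraction(1, 2) - Fraction(1, 3))  # 1/6
```

The point, of course, is that a sixth grader should produce 24, 72, and 1/6 from memory and pencil work, not from a computer.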
I can imagine situations where I could recommend any of the above. For example, if you are a large financial company with billions of rows, I would go with Oracle. If you have smarts but not money, and don't need somebody to sue if something goes wrong, then maybe Postgres would do. If I were building a simple web-based app with simple form submits, I would go with MySQL. If I had complex, unpredictable data blobs and unpredictable needs to run certain types of queries against the data, I might recommend Mongo. If I had large amounts of data on which I wanted to do analytics, I would use Cassandra.
Cassandra wins when you have a lot of data and not a lot of complex real-time queries against it. It is especially good at scaling up on cheap data storage (think 100s of terabytes). It also has an unreal "write" throughput (important for certain types of analytics which write out complex intermediate results), though that is not relevant for the case described.
The general problem with NoSQL solutions is that they increase the amount of storage needed to hold the equivalent information. You are essentially storing the schema design redundantly with each "record". This matters more than some might suspect, because when you can fit an entire collection into memory, read performance is much higher. You usually need 1/5th to 1/10th as much RAM to do the job with a traditional relational database (especially since MySQL and its brethren handle moving data in and out of memory better than Mongo). This isn't so much the case for Cassandra because of its distributed storage design, but Cassandra really isn't usable for real-time transactions.
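A toy sketch of that per-record overhead (field names and values below are invented purely for illustration): a self-describing JSON document repeats its field names in every record, while a fixed relational schema stores the names once in the catalog and packs each row.

```python
import json
import struct

# Hypothetical 3-field record; the field names are made up for this sketch.
record = {"user_id": 123456, "score": 98.5, "active": True}

# Document-store style: every record carries its own "schema" (the keys).
doc_bytes = len(json.dumps(record).encode("utf-8"))

# Relational style: the schema lives once in the catalog; each row is just
# a packed (int64, float64, bool) tuple -- 8 + 8 + 1 = 17 bytes.
row_bytes = struct.calcsize("<qd?")

print(doc_bytes, row_bytes)  # the JSON document is roughly 3x larger
```

Real engines add their own row headers and indexes, so the exact ratio varies, but the keys-repeated-per-record cost is the mechanism being described above.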
My recommendation: use a traditional database -- if you're in a Microsoft shop, use SQL Server; otherwise I like Postgres or MySQL. If, however, you have complex data storage needs that a NoSQL solution is perfect for, then I would go with that. If you are into back-end analytics, copy the data as it comes in and put it into Cassandra (or one of its similar brethren) as well.
When I was younger, I naively believed that patents demonstrated that the inventor was truly clever and original -- the lightbulb, the jet engine, the silicon chip, and so on. Now what I see is a world filled with patents that are a waste of everybody's time, and the few who actually invent something new no longer get the positive reputation that used to come with filing a patent.
The solution is simple. You make the patent filer pay a few thousand dollars, you use that money to pay world-class experts in the field, and then you ask those experts: is the invention truly original and of significant value -- so much so that keeping its details secret would actively harm mankind?
If the patent isn't worth paying a few thousand dollars to file, then why should we even be considering it?
Adding to my confusion is that there is no reference to articles, books, or other material that supports the general thesis. If the "mean deviation" is better than the "standard deviation", give some real, concrete examples and supporting mathematics.
Also, there seems to be no reference to "bell curve" distributions versus "non bell curve" distributions. Standard deviation computations are built around bell curve distributions for their mathematical soundness. For example, if I were to take every number and raise it to the fourth power, standard deviation would not work so well on the new set of numbers. Is the author suggesting that typical sampling distributions of sampled events tend not to be "bell curve" like?
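The fourth-power point is easy to demonstrate numerically. The sketch below (my own illustration, not from the piece under discussion) compares the standard deviation to the mean absolute deviation on a normal sample and on the same sample raised to the fourth power; for bell-shaped data the two track each other at a fixed ratio (about sqrt(pi/2) ~ 1.25), while the heavy-tailed transform inflates the standard deviation much more.

```python
import random
import statistics

def mean_abs_dev(data):
    """Mean absolute deviation from the mean."""
    m = statistics.fmean(data)
    return statistics.fmean(abs(x - m) for x in data)

random.seed(0)
normal = [random.gauss(0, 1) for _ in range(100_000)]
heavy = [x ** 4 for x in normal]  # no longer bell-shaped

# For a normal sample, stdev / MAD is close to sqrt(pi/2) ~ 1.25.
ratio_normal = statistics.pstdev(normal) / mean_abs_dev(normal)

# For the fourth-power data, outliers inflate the stdev far more than
# the mean deviation, so the ratio grows substantially.
ratio_heavy = statistics.pstdev(heavy) / mean_abs_dev(heavy)

print(round(ratio_normal, 2), round(ratio_heavy, 2))
```

So neither measure is "better" in the abstract; the question is which distribution your data actually follows, which is exactly what the article fails to address.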
Standard deviation is taught in 7th grade in my local school. It shows up constantly in any standard K-12 curriculum. To challenge this, you really should bring a lot more substance to any argument that we should do things differently.
For example, I could argue that we should use 1:2 to represent 1/2 because the slash (/) should be used for logical dependency arguments instead. I could create lots of examples and go into a diatribe about how people constantly misuse fractions and ratios because they use a slash in their construction. But I would still be spouting nonsense.
What if we made patents peer reviewed by a group of high-profile experts in the field in which the patent is filed? So notable software professionals would be consulted for software patents. This group would set a high bar on the "obviousness" and "prior art" tests, so that rewriting prior art in a different language with a slightly different spin would not make it past them. The group would be paid from the (likely to be substantial) fees charged to the person filing the patent. This is how research articles are handled at the best scientific journals. If a patent is laughably far from being publication-worthy for a reputable scientific journal, why are we letting it control millions (or billions) of dollars of commerce? Currently, we are forcing our higher courts to learn all types of arcana before they are able to kill a patent based on prior art and obviousness. Using a group of true experts (not the underpaid and overworked staff at the patent office) would do a lot to improve the situation. Patent lawyers are not a sufficient substitute.
Tolkien beats out a lot of other supposedly excellent authors in the way HTML beats PostScript (or any other complicated SGML you might propose). There is something about it that is fundamentally different and better. HTML appears trivial when compared to PostScript, but that is its strength, not its weakness. Tolkien is, in many ways, the same.
Now assume that at least some of the code in your company is under a proprietary license (maybe you bought a third-party library to incorporate into your deliverable -- or you just don't want competitors popping up with a codebase cloned from yours). Could you contemplate, even for a moment, using GPL (or even LGPL) code in your suite of products? Even if you assume that a large part of your product base could be shipped under a GPL license with no significant impact to your bottom line, would you still do it given the fallibility of programmers? Given the errors programmers tend to make, is it not highly likely that GPL code would end up incorporated into software projects that were meant to be closed source? Isn't it true that, given the error-prone nature of humans, the GPL is truly a virus in its ability to replicate and introduce itself into foreign hosts? No wonder legal departments at companies view it with such hostility.
Let's take two scenarios.
Scenario I - A man craftily and with active malice orchestrates the simultaneous hijacking of four planes, and then has three of them successfully crash into highly symbolic targets, killing lots of civilians (about 3000, for those who care about numbers). This man then glories in these deaths and uses the attack to recruit and motivate more like-minded individuals.
Scenario II - Small radicalized subgroups in a country attack another country and kill a few hundred people over a period of years. The attacked country responds by sending in its military and bombing suspected locations where the radicalized subgroups are harbored, and over a few years kills thousands of people and makes the lives of hundreds of thousands more miserable. Many of the thousands who die are not directly killed but die of disease, untreated wounds, and the general anarchy of the situation. Most of those thousands are not part of the radicalized subgroup; they are civilians. But many of these civilians harbor deep antipathy toward the attacking country, even going as far as believing it would be a moral good if that country were removed from the face of the earth. The originally attacked country justifies its aggressive response by saying that it is the only way it knows to deter radicalized subgroups from continuing their attacks, and that it has the right to defend itself.
There are some who argue that the man in scenario I and the originally attacked country in scenario II are essentially equivalent in the moral wrongness of their actions, and others who argue that they are fundamentally different. There are some who would argue that those who suffered in scenario II are justified in participating in actions similar to scenario I.
I believe that scenario I is much more representative of true evil than scenario II, even though the suffering in scenario II is greater. I see the difference as the one that separates first-degree murder in cold blood from lesser forms of murder. Each ends with people dying, but the first should put you in prison for life, while the second may only put you in jail for a few years. I am not saying that scenario II is not evil, but it is hard not to be sympathetic with those who respond to aggression with their own aggression, even if the response is disproportionately greater in magnitude than the provoking attack.
I will say one more thing about this. I have noticed that people's opinions about scenario II are very much dependent on their connections to and feelings about the people involved. The person in scenario I is pretty much universally despised.
Thinking about this, I believe there is one particular aspect of this discussion that needs more elaboration. Let's look at two ranges of the IQ test: the range from 80 to 120, and the range from 130 to 170. Both ranges span 40 points and imply a wide difference in intelligence between those at the bottom and those at the top. However, the IQ test does much better (in my opinion, and I suspect you can find independent literature to support this) on the range from 80 to 120. Usually somebody with an IQ of 80 is not destined for a college degree, while somebody with 120 has a good chance of finishing college. In this regard the test does fairly well. Whether it is actually measuring real mental talents of one type or another is a different issue.
Now look at the range from 130 to 170. People with IQs of 170 are a bit different in nature from those with 130. That seems fairly clear. But focused strengths in particular mental abilities are not well picked out, and the IQ test seems to do a terrible job of predicting future grandmasters in chess, future professors at elite schools, future engaging storytellers, or even future great repositories of interesting trivia. Also, when it comes to elite abilities, IQ tests at the high end of the range tend to discount the obsessive dedication required to become one of the best.
I think one of the issues is that IQ tests are good at finding deficiencies, places where somebody lacks the critical mental skills to learn what is required in our modern society, and poor at diagnosing elite mental talents. Those who praise the IQ test usually point to scenarios where it helped find people who needed additional resources to succeed. Those who criticize it tend to focus on how those with "genius IQs" do not necessarily produce great acts that measure up to their numerical IQ score.
Take the relatively simple problem of determining potential skill at chess. Chess makes for a nice example because skill at chess is only somewhat correlated with other mental abilities (making it possible to "isolate it" from other mental facets), and it is definitely measurable by competing with others. There is a clear-cut status of "grandmaster" which all fairly accomplished chess players agree is a statement of real elite capability. It is probably (I am extrapolating from my own anecdotal experience) not hard to create a test to determine whether somebody is going to play chess adequately, and I suspect such a test would be somewhat correlated with an IQ test. A person with an IQ of 80 probably will never play chess that well, while a person with an IQ of 120 will likely learn to play the game adequately (counterexamples are welcome). There are kids who clearly do not have much talent for the game, and I doubt even focused study would help them. For them, learning how to mate with K and Q against K is a bit of a stretch.
But is it possible to create a test which will determine who is likely to become a future grandmaster (or even master), as opposed to just playing "well"? I have recently been a chess coach for elementary school kids, and there is one trait I have found to be correlated with future ability: an obsessive interest in the game. Kids I thought were better natural talents quickly fell behind those who made it their life's mission to get better. In particular, I believe that an IQ test result of 170 is practically meaningless in predicting future great success in chess.
I use chess as an example because I believe much the same can be said about any elite mental talent. Every time I hear debates about IQ, I ask myself: how well does it predict chess failure, and how well does it predict elite chess success? I believe such an examination will produce results that are about as valid as when the IQ test is used to predict future greatness in scientists and writers.
What would be harmful is if the "best and brightest" were being hired just to aid this "amusement device" for the wealthy. It would be much like rich people hiring the best artists to create personal artworks that would not be available to the general public. It is wrong, but not terribly wrong, and in the long run it might not be that harmful. In the case of the artists, they might otherwise have given up doing art if not for funding from the wealthy. Likewise with engineers: some may find finance closer to their "true calling" than anything they can get outside of finance.
I agree with your assessment of the use of CDOs, but my spin on it is different. In the case of CDOs, the principal problem is that they disguised the risk of a big "negative event" (house prices ceasing to go up). Because of this, they offered returns that appeared attractive, and the regulators who monitored risk at our large institutions allowed transactions that should not have occurred. The "crime" here was that CDOs were advertised as a "safe" investment providing better returns than other "safe" investments, when the truth was that CDOs were far from safe. All the bad outcomes (banks using CDOs to give themselves more money to lend) are consequences of this basic fact. My question is how much of financial engineering goes into enabling these types of "crimes", and how much is for "gambling" (which in some cases can actually do good things)?