


Comment Re:danger will robinson (Score 1) 688

Every few years I see yet another "correct way to teach basic mathematics" come through with the promise that this new way will win where the old ways have failed. Common Core is, in some ways, yet another one of these.

My problem is that arithmetic is both a concept and a skill. Most of these teaching methods emphasize the concept. That is like teaching you to ride a bike by offering constructive suggestions to improve your intuition about how it works, while minimizing the amount of time you actually spend on the bike.

What is being lost is that basic math is a skill, and like all skills it needs repeated, constant practice sustained over multiple years. I see way too many students who can't do basic arithmetic after going through these "concept" oriented classes. Or to put it more strongly: if learning basic math isn't a boring, repetitive chore, then you aren't doing it right.

In the worst of all possible worlds, straightforward skill practice is replaced by repetitive practice of the "concept" building exercises -- so it's still boring, and you don't even get that win. I see this often enough that I would rather ditch the "concept" building entirely and just drill the arithmetic than train mathematical illiterates. It would be much like doing repetitive practice sessions of "envisioning yourself on a bike" without ever being on a bike. Imagine we treated reading like math: you weren't allowed just to read the books, you had to "read" them the correct way, demonstrating that you had built up your mastery of parsing words from letters, to syllables, to words.

A kid shouldn't be allowed out of sixth grade if they cannot quickly answer the following questions:

40 - 16
8 * 9
1/2 - 1/3
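(For the record, a quick check of the answers in Python -- the standard fractions module keeps 1/2 - 1/3 exact instead of a rounded decimal:)

```python
from fractions import Fraction

# The three fluency checks from above, with their answers
print(40 - 16)                          # 24
print(8 * 9)                            # 72
print(Fraction(1, 2) - Fraction(1, 3))  # 1/6
```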

Comment Re:I just can't get excited about SpaceX (Score 5, Insightful) 87

Hmm... The gist of this is essentially correct, except for one detail: cost. The only number that really matters in the end is how much money it takes to put one ton of stuff into orbit (or beyond) from the ground. Right now that appears to be $10,000,000 USD or even much higher (based on the numbers I see thrown around on Slashdot). Government subsidies (such as in Russia) can hide some of this, but this seems to be the essential economic truth. As long as that remains the case, mankind is not going to be a spacefaring race, and venturing into space will mostly be for kicks and bragging rights (and maybe a bit of good science, such as Hubble). What SpaceX offers, for the very first time, is a path where we may reduce these costs by a factor of ten or more. If we can start putting a ton of stuff into space for less than $500,000, it will radically change what is possible -- the cost of doing something real goes from $200 trillion to maybe $10 trillion, something we could spend over 100 years. Think real space stations, and large space ships with landing vessels.
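The back-of-envelope arithmetic here can be sketched directly. The per-ton figures are the rough ones from this comment; the project size (20 million tons) is simply the one implied by dividing $200 trillion by $10 million per ton, not a real mission estimate:

```python
# Launch economics sketch -- all figures are the rough numbers from the
# comment above, not authoritative costs.
cost_per_ton_today = 10_000_000    # USD per ton to orbit, roughly today
cost_per_ton_target = 500_000      # USD per ton, the hoped-for SpaceX price
tons_for_big_project = 20_000_000  # hypothetical "something real", implied by $200T / $10M

print(cost_per_ton_today * tons_for_big_project)   # $200 trillion
print(cost_per_ton_target * tons_for_big_project)  # $10 trillion
print(cost_per_ton_today // cost_per_ton_target)   # a 20x reduction
```

Note that $500,000 per ton is actually a factor of twenty below today's figure, which is why the total drops from $200 trillion to $10 trillion.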

Comment Depends on the situation (Score 2) 272

I have used Oracle, MySQL, and Mongo in production situations, and I have evaluated Cassandra for potential production use.

I can imagine situations where I could recommend any of the above. For example, if you are a large financial company with billions of rows, I would go with Oracle. If you have smarts but not money, and don't need somebody to sue if something goes wrong, then maybe Postgres would do. If I were building a simple web-based app with simple form submits, I would go with MySQL. If I had complex, unpredictable data blobs and unpredictable needs to run certain types of queries against the data, I might recommend Mongo. If I had large amounts of data on which I wanted to do analytics, I would use Cassandra.

Cassandra wins when you have a lot of data and not a lot of complex real-time queries against it. It is especially good at scaling up on cheap data storage (think hundreds of terabytes). It also has unreal "write" throughput (important for certain types of analytics which write out complex intermediate results), though that is not relevant for the case described.

The general problem with NoSQL solutions is that they increase the amount of storage needed to hold the equivalent amount of information. You are essentially redundantly storing the schema design with each "record" that you store. This matters more than some might suspect, because when you can fit an entire collection into memory, read performance is much higher. You usually need 1/5th to 1/10th as much RAM to do the job with a traditional relational database (especially since MySQL and its brethren handle moving data in and out of memory better than Mongo). This isn't so much the case for Cassandra because of its distributed storage nature, but Cassandra really isn't usable for real-time transactions.
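The per-record schema overhead is easy to see with a toy example. A document store repeats its field names in every record, while a relational row stores only the values (the schema lives once, in the table definition). The record shown is hypothetical, just to illustrate the size difference:

```python
import json
import struct

# A made-up document: every copy of it carries the key names along.
doc = {"user_id": 123456, "age": 42, "score": 7.5}

doc_bytes = len(json.dumps(doc).encode())  # keys + values, repeated per record
row_bytes = struct.calcsize("qid")         # int64 + int32 + double: values only

print(doc_bytes, row_bytes)  # the JSON form is several times larger
```

Multiply that ratio across billions of records and it decides whether the working set fits in RAM.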

My recommendation: use a traditional database -- if you're in a Microsoft shop, use SQL Server; otherwise I like Postgres or MySQL. If, however, you have complex data storage needs that a NoSQL solution is perfect for, then I would go with that. If you are into back-end analytics, copy the data as it comes in and put it into Cassandra (or one of its similar brethren) as well.

Comment Failure in obviousness testing (Score 2) 192

If I were to write a paper in medicine and try to get it published in one of the various reasonably reputable medical journals out there, an article titled "Algorithm for using instruments in surgery: nurse hands over knives handle first" would be rejected instantly. But the equivalent level of obviousness makes it through the patent office all the time. Software I have worked on has gotten patents more than once, and in every case I thought the patents obvious to the point of silliness.

When I was younger, I naively believed that patents demonstrated that the inventor was truly clever and original -- the lightbulb, the jet engine, the silicon chip, and so on. Now what I see is a world filled with patents that are a waste of everybody's time, and the few who actually invent something new no longer get the positive reputation that used to come with filing a patent.

The solution is simple. Make the patent filer pay a few thousand dollars, use that money to pay world-class experts in the field, and then ask the experts: is the invention truly original and of significant value -- so much so that keeping its details secret would actively harm mankind?

If the patent isn't worth paying a few thousand dollars to file, then why should we even be considering it?

Comment Bell Curve (Score 1) 312

I find this article quite confusing. Is the actual suggestion that we should be using the mean deviation as a way of capturing the general variance of our data sets? Or to put it another way, does he want "deviation" measures that do not give us a real sense of the larger deviations that might occur with some real probability? For example, with temperatures, standard deviation is more likely than a simple "mean deviation" to suggest that we can have periods of significantly higher and lower temperatures.

Adding to my confusion, there is no reference to articles, books, or other material that supports the general thesis. If the "mean deviation" is better than the "standard deviation", give some concrete examples and supporting mathematics.

Also, there is no discussion of "bell curve" versus "non bell curve" distributions. Standard deviation computations are built around bell curve distributions for their mathematical soundness. For example, if I were to take every number and raise it to the fourth power, standard deviation would not work so well on this new set of numbers. Is the author suggesting that typical sampling distributions of sampled events tend not to be "bell curve" like?
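The difference between the two measures is easy to demonstrate. A quick sketch (the temperature numbers are made up for illustration): because standard deviation squares each deviation before averaging, a single large outlier pulls it up much more than it pulls up the mean absolute deviation -- which is exactly the "sense of the larger deviations" I mean.

```python
import statistics

# Made-up daily temperatures with one unusually hot day.
temps = [10, 11, 9, 10, 30]
mean = statistics.mean(temps)

std_dev = statistics.pstdev(temps)                        # sqrt of mean squared deviation
mean_dev = statistics.mean(abs(x - mean) for x in temps)  # mean absolute deviation

print(std_dev, mean_dev)  # std_dev is noticeably larger, thanks to the outlier
```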

Standard deviation is taught in 7th grade in my local school. It shows up constantly in any standard K-12 curriculum. To challenge this, you really should bring a lot more substance to any argument that we should do things differently.

For example, I could argue that we should use 1:2 to represent 1/2 because the slash (/) should be used for logical dependency arguments instead. I could create lots of examples and go into a diatribe about how people constantly misuse fractions and ratios because they use a slash in their construction. But I would still be spouting nonsense.

Comment Fixing the patent system (Score 4, Insightful) 347

This is just another in a long series of Slashdot articles pointing out the broken nature of our patent system. What I have not seen is any serious proposal for fixing the issues beyond "throw it all out". I have to agree that making software (even software running on specific hardware) unpatentable would be superior to the current patenting situation. Something similar could be said about some of the pharmaceutical patenting going on as well (tweak the drug to last "seven days" instead of "one" and get to extend the patent).

What if we made patents peer reviewed by a group of high-profile experts in the field in which the patent is filed? Notable software professionals would be consulted for software patents. This group would set a high bar on the "obviousness" and "prior art" tests, so that rewriting prior art in a different language with a slightly different spin would not make it past them. The group would be paid from the (likely substantial) fees charged to the person filing the patent. This is how research articles are handled at the best scientific journals. If a patent is laughably far from being publication-worthy for a reputable scientific journal, why are we letting it control millions (or billions) of dollars of commerce? Currently, we force our higher courts to learn all types of arcana before they are able to kill a patent based on prior art and obviousness. Using a group of true experts (not the underpaid and overworked staff at the patent office) would do a lot to improve the situation. Patent lawyers are not a sufficient substitute.

Comment Tolkien fundamentally different (Score 1) 505

I am imagining a prize committee trying to decide whether to give an award to PostScript or HTML as the best page description language when HTML first came out. The criteria that seem to be used to choose "good literature" would pick PostScript every time. PostScript is far more sophisticated, allows far more options, and has a much richer vocabulary for describing positioning, graphing, fonts, scaling, and so on. By any judgment of functionality, PostScript would seem to destroy HTML.

Tolkien beats out a lot of other supposedly excellent authors in the way HTML beats PostScript (or any other complicated markup you might propose). There is something about it that fundamentally makes it different and better. HTML appears trivial when compared to PostScript, but that is its strength, not its weakness. Tolkien is, in many ways, the same.

Comment The imprecision of the real world (Score 1) 808

I have seen a few comments allude to this, but I thought I would focus on this particular issue. Most of the arguments about licensing assume that coding is an isolated act of creativity, with no ambiguities creeping in from human error. Let's say you are running a company with a software development group, and assume there are five errors per significant body of work (for those who want a precise stat: two hundred lines of code, five mistakes in logic or detail). In other words, assume your developers are human and make mistakes because of ignorance, losing track of details, or just the general confusion of working on such a large, complex project. Now, many of the mistakes that would be perceived by end users are scrubbed out (for the most part) by the QA department. But that still means that mistakes of every other conceivable sort remain in the code base.

Now assume that at least some of the code in your company is under a proprietary license (maybe you bought a third-party library to incorporate into your deliverable -- or you just don't want competitors popping up with a codebase cloned from yours). Could you contemplate, even for a moment, using GPL (or even LGPL) code in your suite of products? Even if you assume that a large part of your product base could be shipped under a GPL license with no significant impact to your bottom line, would you still do it given the fallibility of programmers? Given the errors programmers tend to make, is it not highly likely that GPL code would end up incorporated in software projects that were meant to be closed source? Isn't it true that, given the error-prone nature of humans, the GPL is truly a virus in its ability to replicate and introduce itself into foreign hosts? No wonder legal departments at companies view it with such hostility.

Submission + - The Uncertain Future of Mono (infoworld.com)

snydeq writes: "Fatal Exception's Neil McAllister sees an uncertain future for Mono in the wake of recent Attachmate Mono layoffs, one that may hinge ironically on help from Microsoft itself. 'To lose all of the potential of these tools now would be a terrible shame. But it seems unlikely that Mono will be able to keep up with the pace of .Net without some sort of commercial backing,' McAllister writes. 'The most likely candidate might be the least-expected one. Microsoft has been working to revise its stance on open source for the last few years, softening its rhetoric and even sponsoring open source projects through the Outercurve Foundation (née CodePlex). Maybe it's high time Microsoft put its money where its mealy mouth is.'"

Submission + - Star Wars MMO: EA's Big Bet to Cost $100M (industrygamers.com) 1

donniebaseball23 writes: EA's BioWare is developing its first-ever MMORPG in Star Wars: The Old Republic, and the publisher is betting big that the project will be a huge success. Wedbush analyst Michael Pachter says development alone cost an estimated $80 million, with marketing and distribution adding another $20 million. The good news is it shouldn't take much to break even. "We estimate that EA will cover its direct operating costs and break even at 500,000 subscribers (this is exceedingly conservative, and the actual figure is probably closer to 350,000), meaning that with 1.5 million paying subscribers, EA will have 1 million profitable subs," Pachter noted.

Comment Does how you kill matter? (Score 1) 1855

There is a common theme in some comments made about terrorism and the situation in the Middle East which I would like to examine.

Let's take two scenarios.

Scenario I - A man craftily and with active malice orchestrates the simultaneous hijacking of four planes, and three of them successfully crash into highly symbolic targets, killing many civilians (about 3,000 for those who care about numbers). This man then glories in these deaths and uses the attack to recruit and motivate more like-minded individuals.

Scenario II - Small radicalized subgroups in a country attack another country and kill a few hundred people over a period of years. The attacked country responds by sending in its military and bombing suspected locations where the radicalized subgroups are harbored, and over a few years kills thousands of people and makes the lives of hundreds of thousands more miserable. Many of the thousands who die are not directly killed but die of disease, untreated wounds, and the general anarchy of the situation. Most of those thousands are not part of the radicalized subgroup; they are civilians. But many of these civilians harbor deep antipathy toward the attacking country, some even believing it would be a moral good if that country were removed from the face of the earth. The originally attacked country justifies its aggressive response by saying it is the only way it knows to deter radicalized subgroups from continuing their attacks, and that it has the right to defend itself.

There are some who argue that the man in Scenario I and the originally attacked country in Scenario II are essentially equivalent in the moral weight of their actions, and others who argue that they are fundamentally different. There are some who would argue that those who suffered in Scenario II are justified in participating in actions like Scenario I.

I believe that Scenario I is much more representative of true evil than Scenario II, even though the suffering in Scenario II is greater. I see it as the difference between first-degree murder in cold blood and lesser forms of murder: each ends up with people dying, but the first should put you in prison for life, while the second may only put you in jail for a few years. I am not saying that Scenario II is not evil, but it is hard not to be sympathetic with those who respond to aggression with aggression of their own, even if the response is disproportionately greater than the provoking attack.

I will say one more thing about this. I have noticed that people's opinions about Scenario II depend very much on their connections to, and feelings about, the people involved. The man in Scenario I is pretty much universally despised.

Comment Talking past each other (Score 1) 488

I have been reading through the comments, and there does not seem to be much discussion about what IQ tests do well and what they do poorly. Generally there is an assertion that they are useful by some and an assertion that they are useless by others. As is typical in these cases, both sides are mostly wrong and only partially right.

Thinking about this, I believe there is one particular aspect of this discussion that needs more elaboration. Let's look at two ranges of the IQ test: the range from 80 to 120, and the range from 130 to 170. Both ranges span 40 points and imply a wide difference in intelligence between those at the bottom and those at the top. However, the IQ test does much better (in my opinion, and I suspect you can find independent literature to support this) on the range from 80 to 120. Usually somebody with an IQ of 80 is not destined for a college degree, while somebody with an IQ of 120 has a good chance of finishing college. In this regard the test does fairly well. Whether it is actually measuring real mental talents of one type or another is a different issue.

Now look at the range from 130 to 170. People with IQs of 170 are a bit different in nature from those with 130; that seems fairly clear. But focused strengths in particular mental abilities are not well picked out, and the IQ test seems to do a terrible job of predicting future grandmasters in chess, future professors at elite schools, future engaging storytellers, or even future great repositories of interesting trivia. Also, when it comes to elite abilities, IQ tests at the high end of the range tend to discount the obsessive dedication required to become one of the best.

I think one of the issues is that IQ tests are good at finding deficiencies -- places where somebody lacks the critical mental skills to learn what our modern society requires -- and poor at diagnosing elite mental talents. Those who praise the IQ test usually point to scenarios where it helped find people who needed additional resources to succeed. Those who criticize it tend to focus on how people with "genius IQs" often do not produce great acts that measure up to their numerical score.

Take the relatively simple problem of determining potential skill at chess. Chess makes a nice example because skill at chess is only somewhat correlated with other mental abilities (making it possible to isolate it from other mental facets), and it is definitely measurable by competing against others. There is a clear-cut status of "grandmaster" which all fairly accomplished chess players agree is a statement of real elite capability. It is (probably -- I am extrapolating from my own anecdotal experience) not hard to create a test to determine whether somebody is going to play chess adequately, and I suspect such a test would be somewhat correlated with an IQ test. A person with an IQ of 80 probably will never play chess that well, while a person with an IQ of 120 will likely learn to play the game adequately (counterexamples are welcome). There are kids who clearly do not have much talent for the game, and I doubt even focused study would help them. For them, learning how to mate with king and queen against king is a bit of a stretch.

But is it possible to create a test which will determine who is likely to be a future grandmaster (or even master), as compared to just playing "well"? I have recently been a chess coach for elementary school kids, and there is one trait I have found to be correlated with future ability: an obsessive interest in the game. Kids I thought were better natural talents quickly fell behind those who made it their life's mission to get better. In particular, I believe an IQ test result of 170 is practically meaningless in predicting future great success in chess.

I use chess as an example because I believe much the same can be said about any elite mental talent. Every time I hear debates about IQ, I ask myself: how well does it predict chess failure, and how well does it predict elite chess success? I believe such an examination will produce results that are as valid as when the IQ test is used to predict future greatness in scientists and writers.

Comment Re:Does Financial Engineering Help the Economy? (Score 1) 732

I actually don't have that much of a problem with "finance as gambling", because such people can help create a stable market for securities such as stocks. They are the people who will sell you a stock at a reasonable price when nobody else will, because they have "gambled" that the current negative opinion of the stock is wrong. Unless the "gamblers" are acting on illegally obtained information, the losers are not the people trying to use the financial markets for reasonable purposes; instead it is rich people gambling against other rich people, which may be a non-productive use of their time but is not necessarily harmful.

What would be harmful is if the "best and brightest" were being hired just to aid this "amusement device" for the wealthy. It would be much like rich people hiring the best artists to create personal artworks that would never be available to the general public. That is wrong, but not terribly wrong, and in the long run it might not be that harmful: the artists might otherwise have given up doing art if not for funding from the wealthy. Likewise with engineers -- some may find finance closer to their "true calling" than anything they could get outside of it.

I agree with your assessment of the use of CDOs, but my spin on it is different. In the case of the CDOs, the principal problem is that they disguised the risk of a big negative event (house prices ceasing to rise). Because of this, they provided returns that appeared attractive, and the regulators who monitored risk at our large institutions allowed transactions to occur which should not have occurred. The "crime" here was that CDOs were advertised as a "safe" investment providing better returns than other "safe" investments, when the truth was that CDOs were far from safe. All the bad outcomes (banks using CDOs to give themselves more money to lend) are consequences of this basic fact. My question is: how much of financial engineering goes into enabling these types of "crimes", and how much is for "gambling" (which in some cases can actually do good things)?

Comment Does Financial Engineering Help the Economy? (Score 2) 732

Unlike some of the posters, I do not have a clear opinion or understanding of exactly what finance does for us, especially the part of finance done by MIT graduates. I have heard two opposing claims, which I put into two opposing categories.

Is it:

* Finance is a fraudulent game designed to fleece others out of their money using complex financial instruments that cannot be understood by those who have the responsibility to prevent fraudulent activities in our financial institutions.


* Finance more efficiently distributes money into investments in our economy so that our resources are more efficiently organized to maximize productivity. Complex financial instruments are used to distribute risk and allow creators of goods and services to protect themselves against risks which would otherwise potentially destroy their ability to provide those goods and services.

The problem is that I believe each of the above statements is true, at least to some extent. What I don't know is the percentage to assign to each category, or to some new category between these two polar opposites. In particular, I do not know how mathematical financial engineering is distributed among these categories in terms of effective output.

If the best and brightest are being hired merely to create profit for the few, with no positive impact on the wealth of the many, then I believe that is wrong and I cannot see any justification for it as a moral good. I cannot see any essential difference between this and successful recruitment by the Mafia of new, well-paid enforcers. An enforcer's job might be fun, have good comradeship, let you work with the "best", and be well paid, but that still does not make it a morally acceptable occupation.

So for me, the key question is whether the mathematically complex part of finance is actually performing the way capitalism is intended to perform, or whether the complex algorithms are used to better enable parasites to enrich themselves at the expense of the larger body politic. Factual information on this is actually somewhat hard to come by. Certainly I have seen a lot of claims about CDOs, risky mortgages, investment pools, arbitrage, and the root causes of recent failures. But when I try to dig a little further, real information based on real data is quite hard to find.

I'll give an example. One typical trick for extracting unfair money from others is to design an investment that pays better than average as long as a seemingly unlikely event does not occur. You get others to put money into the investment by lying about, or disguising, the true risk of the event occurring. You then take a portion of the money in that investment as your own (as a "fee"), and then create a complex derivative to bet against the investment by buying insurance that pays off if the event occurs. How much of the profit made by financial companies comes from tricks of this sort?
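The trick can be put in expected-value terms with a toy model (all the probabilities and yields below are hypothetical numbers chosen purely for illustration): an investment pays a bit above market every year until a "rare" event wipes it out. If the seller understates the event's true probability, the buyer's expected return looks positive while actually being negative.

```python
# Toy model of the mispriced-tail-risk trick. All numbers are made up.
advertised_p_blowup = 0.01   # annual blow-up risk as sold to the investor
true_p_blowup = 0.10         # annual blow-up risk as the designer knows it
good_year_return = 0.08      # above-average yield in a non-event year

# Expected one-year return: gain in a good year, total loss in a blow-up.
advertised_ev = (1 - advertised_p_blowup) * good_year_return + advertised_p_blowup * (-1.0)
true_ev = (1 - true_p_blowup) * good_year_return + true_p_blowup * (-1.0)

print(advertised_ev)  # positive: looks like easy money
print(true_ev)        # negative: the buyer is being fleeced
```

The designer pockets the fee either way, and the derivative bet pays off precisely when the buyer loses.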

In particular, what percentage of the recent instability was caused by CDOs that packaged risky mortgages, and how well did some of the principal players understand the true nature of the risk? Again, I can get vociferously stated opinions on this, but I am finding it hard to find real facts. In defense of the financial industry, it seems very few were aware of the true risks of the mortgages, and many of them lost considerable money (maybe not as much as they should have) after the crisis. But there were some who knew what was going on, and many (even if ignorant of what was truly happening) profited while times were good and did not suffer proportionally when things went bad (the "private profit" and "socialized risk" that a couple of posters alluded to).

I do have one more thing to say. There is an old saying: "Democracy is the worst form of government, with the exception of all the others." I have a similar opinion about capitalism. Capitalism is prone to "bubbles" that grow and burst, and this seems inherent in its nature. Seen this way, the recent mortgage crisis is just another one of those bubbles, and it is not clear to me that the finance industry really deserves the blame being heaped upon it. It feels a bit as if they are being used as scapegoats for what is otherwise a fairly predictable phenomenon. Of course, many in the financial industry like to claim that they are smarter and wiser and know how to protect your investments against such risks, and for that they should be culpable when proven wrong.
