
## Comment Bell Curve (Score 1)

I find this article quite confusing. Is the actual suggestion that we should go around using the mean deviation to capture the general variance of our data sets? Or, to put it another way, does he want "deviation" measures to stop giving us a real sense of the larger deviations that might occur with some real probability? For example, with temperatures, standard deviation is more likely to warn us that we can have periods of significantly higher and lower temperatures than a simple "mean deviation" would.

Adding to my confusion is that there is no reference to articles, books, or other material that supports the general thesis. If the "mean deviation" is better than the "standard deviation", give some real concrete examples and supporting mathematics.

Also, there seems to be no reference to "bell curve" distributions and "non bell curve" distributions. Standard deviation computations are built around bell curve distributions for their mathematical soundness. For example, if I were to take every number and raise it to the fourth power, standard deviation would not work so well on this new set of numbers. Is the author suggesting that typical sampling distributions of sampled events tend not to be "bell curve" like?
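
To make the difference between the two measures concrete, here is a quick sketch in Python. The temperature numbers are entirely made up; the point is only how each measure reacts to a data set with occasional large swings:

```python
import statistics

# Hypothetical daily temperatures: mostly stable, with two large swings.
temps = [20, 21, 19, 20, 22, 18, 20, 35, 5, 20]

mean = statistics.fmean(temps)

# Mean (absolute) deviation: the average distance from the mean.
mad = sum(abs(t - mean) for t in temps) / len(temps)

# Standard deviation: the root of the average *squared* distance.
sd = statistics.pstdev(temps)

print(mad, sd)  # the squaring makes sd weight the two big swings much more heavily
```

On this data the mean deviation is 3.6 while the standard deviation is about 6.8: squaring before averaging is exactly what makes the standard deviation "more likely to suggest" the big excursions, which is the property the mean-deviation proposal would throw away.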

Standard deviation is taught in 7th grade in my local school. It shows up constantly in any standard K-12 curriculum. To challenge this, you really should bring a lot more substance to any argument that we should do things differently.

For example, I could argue that we should use 1:2 to represent 1/2 because the slash (/) should be used for logical dependency arguments instead. I could create lots of examples and go into a diatribe about how people constantly misuse fractions and ratios because they use a slash in their construction. But I would still be spouting nonsense.

## Comment Fixing the patent system (Score 4, Insightful)

This is just another in a long series of Slashdot articles pointing out the broken nature of our patent system. What I have not seen is any serious proposal for fixing the issues beyond "throw it all out". I have to agree that making software (even software running on specific hardware) something you cannot patent is superior to the current patenting regime. Something similar could be said about some of the pharmaceutical patenting that is going on as well (make it last "seven days" instead of "one", and I get to extend my patent).

What if we made patents peer reviewed by a group of high profile experts in the field in which the patent is filed? Notable software professionals would be consulted for software patents. This group would set a high bar on the "obviousness" and "prior art" tests, so that rewriting prior art in a different language with a slightly different spin would not make it past them. The group would be paid from the (likely to be substantial) fees charged to the person filing the patent. This is how research articles are handled at the best scientific journals. If a patent is laughably far from being publishable in a reputable scientific journal, why are we letting it control millions (or billions) of dollars of commerce? Currently, we are forcing our higher courts to learn all kinds of arcana before they are able to kill a patent based on prior art and obviousness. Using a group of true experts (not the underpaid and overworked staff at the patent office) would do a lot to improve the situation. Patent lawyers are not a sufficient substitute.

## Comment Tolkien fundamentally different (Score 1)

I am imagining a prize committee trying to decide whether to give an award to PostScript or HTML as the best page description language back when HTML first came out. The criteria that seem to be used to choose "good literature" would pick PostScript every time. PostScript is far more sophisticated, allows far more options, and has a much richer vocabulary for describing positioning, graphics, fonts, scaling, and so on. By any judgement of functionality, PostScript would seem to destroy HTML.

Tolkien beats out a lot of other supposedly excellent authors the way HTML beats PostScript (or any other complicated SGML descendant you might propose). There is something about it that fundamentally makes it different and better. HTML appears trivial when compared to PostScript, but that is its strength, not its weakness. Tolkien is, in many ways, the same.

## Comment The imprecision of the real world (Score 1)

I have seen a few comments allude to this, but I thought I would focus on this particular issue. Most of the arguments about licensing assume that coding is an isolated act of creativity with no ambiguities creeping in because people make mistakes. Let's say you are running a company with a software development group, and assume there are five errors per significant body of work (for those who want a precise stat: two hundred lines of code, five mistakes in logic or detail). In other words, assume your developers are human and make mistakes because of ignorance, losing track of details, or just the general confusion of working on such a large, complex project. Now, many of the mistakes that would be perceived by end users are scrubbed out (for the most part) by the QA department. But that still means mistakes of every other conceivable sort remain in the code base.

Now assume that at least some of the code in your company is under a proprietary license (maybe you bought a 3rd party library to incorporate in your deliverable, or you just do not want competitors popping up with a codebase cloned from yours). Could you contemplate, even for a moment, using GPL (or even LGPL) code in your suite of products? Even if you assume that a large part of your product base could be shipped under a GPL license with no significant impact to your bottom line, would you still do it given the fallibility of programmers? Given the errors programmers tend to make, is it not highly likely that GPL code would end up incorporated in software projects that were meant to be closed source? Isn't it true that, given the error prone nature of humans, the GPL is truly a virus in its ability to replicate and introduce itself into foreign hosts? No wonder legal departments at companies view it with such hostility.

## Submission + - The Uncertain Future of Mono (infoworld.com)

snydeq writes: "Fatal Exception's Neil McAllister sees an uncertain future for Mono in the wake of recent Attachmate Mono layoffs, one that may hinge ironically on help from Microsoft itself. 'To lose all of the potential of these tools now would be a terrible shame. But it seems unlikely that Mono will be able to keep up with the pace of .Net without some sort of commercial backing,' McAllister writes. 'The most likely candidate might be the least-expected one. Microsoft has been working to revise its stance on open source for the last few years, softening its rhetoric and even sponsoring open source projects through the Outercurve Foundation (née CodePlex). Maybe it's high time Microsoft put its money where its mealy mouth is.'"

## Submission + - Star Wars MMO: EA's Big Bet to Cost $100M (industrygamers.com)

donniebaseball23 writes: EA's BioWare is developing its first-ever MMORPG in Star Wars: The Old Republic, and the publisher is betting big that the project will be a huge success. Wedbush analyst Michael Pachter says development alone cost an estimated $80 million, with marketing and distribution adding in another $20 million. The good news is it shouldn't take much to break even. "We estimate that EA will cover its direct operating costs and break even at 500,000 subscribers (this is exceedingly conservative, and the actual figure is probably closer to 350,000), meaning that with 1.5 million paying subscribers, EA will have 1 million profitable subs," Pachter noted.
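
A back-of-envelope check on those figures (the monthly subscription fee below is an assumption, not something from the article; $15/month was typical MMO pricing at the time):

```python
# Figures from the submission.
dev_cost = 80_000_000
marketing = 20_000_000
total_outlay = dev_cost + marketing        # the headline $100M

# Assumed pricing -- not stated in the article.
monthly_fee = 15
breakeven_subs = 500_000                   # Pachter's conservative estimate

annual_gross = breakeven_subs * monthly_fee * 12
print(annual_gross)  # $90M/year of gross subscription revenue at break-even
```

Under that assumption, the 500,000-subscriber break-even point would gross on the order of $90 million a year, which is roughly the scale of the $100 million outlay and makes the "shouldn't take much to break even" claim plausible.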

## Comment Does how you kill matter? (Score 1)

There is a common theme to some comments made about terrorism and the situation in the Middle East which I would like to examine.

Let's take two scenarios.

Scenario I - A man, craftily and with active malice, orchestrates the simultaneous hijacking of four planes, successfully crashes three of them into highly symbolic targets, and kills lots of civilians (about 3000 for those who care about numbers). This man then glories in these deaths and uses the attack to recruit and motivate more like-minded individuals.

Scenario II - Small radicalized subgroups in a country attack another country and kill a few hundred people over a period of years. The attacked country responds by sending in its military and bombing suspected locations where the radicalized subgroup is harbored, and over a few years kills thousands of people and makes the lives of hundreds of thousands more miserable. Many of the thousands who die are not killed directly but die of disease, untreated wounds, and the general anarchy of the situation. Most of those thousands are not part of the radicalized subgroup; they are civilians. But many of these civilians harbor deep antipathy toward the country attacking them, even going as far as believing it would be a moral good if the attacking country were removed from the face of the earth. The originally attacked country justifies its aggressive response by saying it is the only way it knows to deter radicalized subgroups from continuing their attacks, and that it has the right to defend itself.

There are some who argue that the man in scenario I and the originally attacked country in scenario II are essentially equivalent in the moral weight of their actions, and others who argue that they are fundamentally different. There are some who would argue that those who suffered in scenario II are justified in participating in actions similar to scenario I.

I believe that scenario I is much more representative of true evil than scenario II, even though the suffering in scenario II is greater. I see it as the difference between first degree murder in cold blood and other, lesser forms of murder. Each ends up with people dying, but the first should put you in prison for life, while the second may only put you in jail for a few years. I am not saying that scenario II is not evil, but it is hard not to be sympathetic with those who respond to aggression with their own aggression, even if the response is disproportionately greater than the provoking attack.

I will say one more thing about this. I have noticed that people's opinions about scenario II are very much dependent on their connections to and feelings about the people involved. The person in scenario I is pretty much universally despised.

## Comment Talking past each other (Score 1)

I have been reading through the comments, and there does not seem to be much discussion about what IQ tests do well and what they do poorly. Generally there is an assertion by some that they are useful and an assertion by others that they are useless. As is typical in these cases, both sides are mostly wrong and only partially right.

Thinking about this, I believe there is one particular aspect of this discussion that needs more elaboration. Let's look at two ranges of the IQ test: the range from 80 to 120, and the range from 130 to 170. Both span 40 points and imply a wide difference in intelligence between those at the bottom and those at the top of the range. However, the IQ test does much better (in my opinion, and I suspect you can find independent literature to support this) on the range from 80 to 120. Usually somebody with an IQ of 80 is not destined for a college degree, while somebody with 120 has a good chance of finishing college. In this regard the test does fairly well. Whether it is actually measuring real mental talents of one type or another is a different issue.

Now, look at the range from 130 to 170. People with IQs of 170 are a bit different in nature from those with 130. That seems fairly clear. But focused strengths in particular mental abilities are not well picked out, and the IQ test seems to do a terrible job of predicting future grandmasters in chess, future professors at elite schools, future engaging storytellers, or even future great repositories of interesting trivia. Also, when it comes to elite abilities, IQ tests at the high end of the range tend to discount the obsessive dedication required to become one of the best.

I think one of the issues is that IQ tests are good at finding deficiencies, places where somebody lacks the critical mental skills to learn what our modern society requires, and poor at diagnosing elite mental talents. Those who praise the IQ test usually point out scenarios where it helped find people who needed additional resources to succeed. Those who criticize it tend to focus on how those with "genius IQs" do not necessarily produce great acts that measure up to their numerical IQ score.

Take the relatively simple problem of determining potential skill at chess. Chess makes a nice example because skill at chess is only somewhat correlated with other mental abilities (making it possible to "isolate" it from other mental facets), and it is definitely measurable by competing with others. There is a clear cut state of "grandmaster" which all fairly accomplished chess players agree is a statement of real elite capability. It is (probably -- I am extrapolating from my own anecdotal experience) not hard to create a test to determine whether somebody is going to play chess adequately, and I suspect such a test is somewhat correlated with an IQ test. A person with an IQ of 80 probably will never play chess that well, while a person with an IQ of 120 will likely learn to play the game adequately (counterexamples are welcome). There are kids who clearly do not have much talent for the game, and I doubt even focused study would help them. For them, learning how to mate with K and Q against K is a bit of a stretch.

But is it possible to create a test which will determine who is likely to be a future grandmaster (or even master), as compared to just playing "well"? I have recently been a chess coach for elementary school kids, and there is one trait I have found to be correlated with future ability: an obsessive interest in the game. I have had kids who I thought were better natural talents, but they quickly fell behind those who made it their life's mission to be better. In particular, I believe that an IQ test result of 170 is practically meaningless in predicting future great success in chess.

I use chess as an example because I believe much the same can be said about any elite mental talent. Every time I hear debates about IQ, I ask myself: how well does it predict chess failure, and how well does it predict elite chess success? I believe such an examination will produce results that are as valid as when the IQ test is used to predict future greatness in scientists and writers.

## Comment Re:Does Financial Engineering Help the Economy? (Score 1)

I actually don't have much of a problem with "finance as gambling", because such people can help create a stable market for securities such as stocks. They are the people who will sell you a stock at a reasonable price when nobody else will, because they have "gambled" that the current negative opinion of the stock is wrong. Unless the "gamblers" are acting on illegally obtained information, the losers are not the people trying to use the financial markets for reasonable purposes; instead it is rich people gambling against other rich people, which may be a non-productive use of their time but is not necessarily harmful.

What would be harmful is if the "best and brightest" were being hired just to aid this "amusement device" for the wealthy. It would be much like rich people hiring the best artists to create personal artworks that would not be available to the general public. It is wrong, but not terribly wrong, and in the long run it might not be that harmful. In the case of the artists, they might otherwise have given up doing art if not for funding from the wealthy. Likewise, some engineers may find finance closer to their "true calling" than anything they can get outside of it.

I agree with your assessment of the use of CDOs, but my spin on it is different. In the case of the CDOs, the principal problem is that they disguised the risk from a big "negative event" (house prices stop going up). Because of this, they provided returns that appeared attractive, and the regulators that monitored risk at our large institutions allowed transactions to occur which should not have occurred. The "crime" here was that CDOs were advertised as a "safe" investment providing better returns than other "safe" investments, when the truth was that CDOs were far from "safe". All the bad outcomes (banks using CDOs to give themselves more money to lend) are consequences of this basic fact. My question is how much of financial engineering goes into enabling these types of "crimes", and how much is for "gambling" (which in some cases can actually do good things)?

## Comment Does Financial Engineering Help the Economy? (Score 2)

Unlike some of the posters, I do not have a clear opinion or understanding of exactly what finance does for us, especially the part of finance done by MIT graduates. I have heard two opposing claims, which I put into two opposing categories.

Is it:

* Finance is a fraudulent game designed to fleece others out of their money using complex financial instruments that cannot be understood by those who have the responsibility to prevent fraudulent activities in our financial institutions.

or:

* Finance more efficiently distributes money into investments in our economy so that our resources are more efficiently organized to maximize productivity. Complex financial instruments are used to distribute risk and allow creators of goods and services to protect themselves against risks which would otherwise potentially destroy their ability to provide those goods and services.

The problem is that I believe each of the above statements is true at least to some extent. What I don't know is the percentage to assign to each category, or to some new category in between these two polar opposites. In particular, I do not know how mathematical financial engineering is distributed among these categories in terms of effective output.

If the best and brightest are being hired merely to create profit for the few, with no positive impact on the wealth of the many, then I believe that is wrong and I cannot see any justification for it as a moral good. I cannot see any essential difference between this and successful recruitment efforts by the Mafia for new well-paid enforcers. An enforcer's job might be fun, have good comradeship, involve working with the "best", and be well paid, but that still does not make it a morally acceptable choice of occupation.

So for me, the key question is whether the mathematically complex part of finance is actually performing the way capitalism is intended to perform, or whether the complex algorithms are used to better enable parasites to enrich themselves at the expense of the larger body politic. Factual information on this is actually somewhat hard to come by. Certainly I have seen a lot of claims about CDOs, risky mortgages, investment pools, arbitrage, and the root causes of recent failures. But when I try to dig a little further, real information based on real data is quite hard to find.

I'll give an example. One typical trick for extracting unfair money from others is to design an investment that pays better than average as long as a seemingly unlikely event does not occur. You get others to put money into the investment by lying about, or disguising, the true risk of the event occurring. You then take a portion of the money in that investment as your own (as a "fee") and create a complex derivative to bet against the investment by buying "insurance that pays off if the event occurs". How much of the profit made by financial companies comes from tricks of this sort?
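
To see why the trick works for the promoter regardless of outcome, here is a toy payoff calculation. Every number below is hypothetical, chosen only to illustrate the structure of the scheme:

```python
# Toy model of the "fee plus side bet" scheme (all numbers hypothetical).
fund_size = 100_000_000        # investors' money in the "safe" vehicle
fee_rate = 0.02                # promoter's fee on assets
insurance_cost = 1_000_000     # premium on the derivative betting against it
insurance_payout = 20_000_000  # what the bet pays if the "unlikely" event hits

fee = fund_size * fee_rate

# Promoter's profit in each outcome:
profit_if_fine = fee - insurance_cost
profit_if_blowup = fee - insurance_cost + insurance_payout

print(profit_if_fine, profit_if_blowup)
```

The promoter is profitable in both outcomes (modestly if nothing happens, enormously if the event hits), while the investors alone bear the loss when the "unlikely" event occurs. That asymmetry, not the particular numbers, is what makes the structure a trick rather than a trade.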

In particular, what percentage of the recent instability was caused by CDOs that packaged risky mortgages, and how well did some of the principal players understand the true nature of the risk? Again, I can get vociferously stated opinions on this, but I am finding it hard to find real facts. In defense of the financial industry, it seems very few were aware of the true risks of the mortgages, and many of them lost considerable money (maybe not as much as they should have) after the crisis. But there were some who knew what was going on, and many (even if ignorant of what was truly happening) who profited while times were good and did not suffer proportionally when things went bad (the "private profit" and "socialized risk" that a couple of posters alluded to).

I do have one more thing to say. There is an old saying, "Democracy is the very worst form of government, with the exception of all others." I have a similar opinion about capitalism. Capitalism is prone to "bubbles" that grow and burst, and this seems inherent in its nature. Seen this way, the recent mortgage crisis is just another one of those bubbles, and it is not clear to me that the finance industry really deserves the blame heaped upon it. It feels a bit as if they are being used as scapegoats for what is otherwise a fairly predictable phenomenon. Of course, many in the financial industry like to claim that they are smarter and wiser and know how to protect your investments against such risks, and for that they should be culpable when they are proven wrong.

## Apache Resigns From the JCP Executive Committee

iammichael writes "The Apache Software Foundation has resigned its seat on the Java SE/EE Executive Committee due to a long dispute over the licensing restrictions placed on the TCK (test kit validating third-party Java implementations are compatible with the specification)."

## Comment SSL and intranets are a bad fit (Score 1)

A lot of the responses I have seen to this question boil down to the following:

"Create your own CA (certificate authority) certificate and distribute it to the client workstations." Then they accuse the original poster of having asked an overly simple and uninteresting question.

I am going to say something nobody else seems to have said: SSL sucks big time for large workgroups inside a private intranet. It is an inappropriate solution being used for lack of anything better. IE will give AD-based authentication for browsers, but did not extend that to securing the communication channel itself.

This issue is much nastier and more complex than anybody has allowed for. SSL does a very good job of solving the problem of creating secure communications over untrusted, anonymous networks. However, it is a real pain when the only thing you want to do is create a secure channel between two machines in the same room. In that case, SSL comes with a lot of overhead that is really not needed. With two machines in the same room (or workgroup), the machines are already on internal corporate IP addresses, so many of the issues SSL was designed to solve (validating that the IP address really points to the expected entity) just do not apply. Usually the only reason you want to encrypt the data is so that somewhat private data won't be sniffed by other users. You are not trying to prove that you are a legitimate seller of goods or services.

What really astounded me were the claims that it would be easy to get users to accept company-controlled installs of browsers and tools. I have worked in such an environment, and it was actively resisted and foiled because the choices were so limiting. Those who say "it would work if it was done right" probably have not done cross-browser development where you had to test on Linux, Mac, and variants of Windows machines. Nor have they done Java development where the Java client has to communicate with the server (over https) as well (Java has its own client CA chain distribution).

Every place I have ever worked (big or small) has had http web sites that really should have been https, because of the pain of trying to use SSL. To blame this on bad IT management, I think, gets it wrong. SSL is a bad fit for this problem space, and browsers (and Java) need to support other security solutions. It would be nice to recommend Kerberos, but Kerberos has only really gotten a full implementation with AD and is even more painful for client adoption in most real world scenarios I have seen (with non-Microsoft machines in the mix). The state of intranet security is broken at its foundations, and the solutions suggested here would not work (in practical, reliable, real world usage) for many workgroups inside a much larger corporate entity.

## Comment Re:Oracle is Evil, C# Java (Score 1)

There is another big difference between C# and Java. In Java, you are strongly discouraged from making native calls. For example, instead of using the native desktop GUI widgets, Java wraps the desktop interface in a large library (Swing) and tries to port all that complexity from platform to platform. In C# on Windows, I make Windows-specific calls (through C#-friendly wrappers, but still Windows-specific) if I am trying to create a GUI. Similar points apply to certain other APIs such as ADSI, network pipes, the registry, HTTP, and encryption. In fact, if you look at the general low level Windows APIs, there is a lot of functionality there that is not captured in Java.

I agree that C# is better than Java at the basics, but C# is hard to separate from the platform that birthed it. That makes C# a difficult language to deal with if you are not writing for Windows. From my understanding, the port of C# to Linux is not meant to make applications portable from Windows to Linux (except maybe some server apps), but to make C# as productive a language for Linux as it is for Microsoft. I expect to call native Linux APIs from C# when I am running on Linux. Why not? The benefit of doing otherwise is not so clear, given the general lack of portability of C#. This approach does create stress points. The .NET API is a large collection of APIs, some of which run on C# on Linux and some of which don't. Some are protected by GPL-like licenses so you can use them without fear of a lawsuit; others are not. As an example, I think there are still some controversies about some of the fancier parts of the WebForms APIs (some of the complicated dynamic HTML tables, for example).

## Comment Java is important for the server side (Score 1)

This post is in response to posts that say "Java is not important -- if we kill Java it won't matter that much". I disagree; the fact that Oracle is preventing the language from growing and potentially killing its future is big news and should not be dismissed lightly.

There is a saying, "democracy is the very worst form of government with the exception of all others." I have a similar opinion about Java.

Let me list four key strengths that Java has:

1. If you write code using primitives (such as byte arrays and char arrays) you can write parsing and syntax processing code that has near C-like performance. This applies to other tasks that need high performance such as querying or processing data. It is why higher level scripting languages can be written on top of Java.

2. It eliminates a lot of the dangerous, painful, and unstable aspects of programming in a non scripting language like C. It does garbage collection and does not allow you to corrupt your application memory or your heap in hard to detect ways. It provides clean stack dumps when errors do occur and prevents the application from crashing from silly programming mistakes.

3. It has excellent threading and synchronization support that can be used in a flexible and high performing way.

4. It can run on more than one platform with some success.

Other alternatives do not provide all four of these features (C# misses out on #4; Ruby and Python miss out on #1 and #3; and so on). I am not much of a fan of some of the libraries that have been built on Java (such as J2EE). Google's and Eclipse's use of Java is much closer to how I think Java is supposed to be used for development projects. Because of the bad reputation that some Java libraries (such as J2EE and Swing) generate, some people associate Java with those libraries and rightfully believe the world would be better off without them. But Java is used for much more than that. A lot of the more recent scripting languages are now written in Java or have popular ports to Java. As an example, some large portal applications use a variant of PHP ported to Java. And of course, there is Android. Remember that Oracle is suing Google right now over Android's use of Java, so Oracle is quite aware of its importance for the future.
