
Comment How far does the taint go? (Score 2) 181

If you got a grant from the NSF for research to create new antibiotics, would that be wrong? The NSF works for the US government and so does the NSA. There is some evidence that politicians give the NSF more money than it might otherwise get because fundamental research in science and math is good for DARPA, and DARPA is good for the NSA.

Somebody already asked the question: would you take money from the NSA to feed the poor? If the answer is no, how far do you have to get from the NSA before you would take such money? I assume that the NSA, like most large organizations, has many sub-organizations, some of which probably do radically different things. I suspect that the mathematicians who work for the NSA are not involved with the data collection and were probably ignorant of it until Snowden came along. So I have some sympathy for their plight. But that sympathy only goes so far. The NSA is an off-budget secret organization, and when have such organizations ever been morally clean? I find it ironic (and hypocritical) that normally severely left-of-center political types appear to be willing to work for such an organization.

I personally don't think of NSA as evil -- generally those who are given a particular job to do (such as data collection) will do that job with a zeal that pushes them beyond sensible moral limits. Many Law & Order episodes deal with the problems caused by police pushing the bounds of legality in pursuit of a criminal. I don't see those police as evil either -- even if they have broken both moral codes and laws.

Comment Wicked Cool (Score 1) 248

I personally find this about as cool as anything I have seen in the last decade. What they are doing requires the very best engineering mankind currently offers -- I'll take this over building 2,000-foot-tall buildings or 50-mile-long bridges any day.

Comment Re:Advance to Go (Score 4, Insightful) 155

I feel I have to object to the comment that Monopoly is a terrible game. I know somebody who wrote their economics undergraduate thesis on the discount model for evaluating property values in Monopoly.

But what I really object to is the claim that the game takes hours. Yes, for unskilled players it takes hours. Top players, however, usually finish a game in 15 to 30 minutes (and many times even less than that). You buy, you trade, you mortgage everything to build as much as you can, and then somebody goes bankrupt within a few trips around the board after the house-building phase starts. Skilled players roll, move, buy, and pay rents in under five seconds each turn, so the game moves very fast until you reach the point where you have to think. You can play with a 10-minute clock per player for the whole game without compromising much in the way of skill. Players also usually agree to a draw if monopolies cannot be formed in a reasonable number of turns.

From what I have seen, the critical phase of the game is the trading that forms the monopolies -- and this requires a great deal of skill, some of it the art of persuasion. It is one of the reasons Monopoly is a cool game. Strategy and tactics sometimes matter less than being a great salesperson.

However, never bring such skilled players into a casual Monopoly game. Their style of play can leave all the other players bankrupt in less than an hour, wondering what just happened to their casual fun game.

Comment Science makes predictions (Score 1) 795

One of the problems I have with most definitions of science is the emphasis on making hypotheses and then testing them. The real power of science is when you make predictions that are natural products of your theory. In fact, not all science is equal in this respect. When you look at some phenomenon, come up with a theory that matches it, test the theory by observing further phenomena of the same type, and then publish your result, you are doing "weak" science. When you look at some phenomenon, come up with a theory for it, and with that theory make successful predictions about something you never thought about before, that is "strong" science. This is especially true if the predictions cross into other areas of scientific specialization. If you read up on the history of evolutionary theory, you will see multiple "strong" science moments involving crossovers into geology, physics, DNA, chemistry, medicine, physiology, and anatomy (and probably many more). That is what makes evolution not just a science, but a strong science.

Comment Re:danger will robinson (Score 1) 688

Every few years I see yet another "correct way to teach basic mathematics" come through with the promise that this new way will win where the old ways have failed. Common Core is in some ways yet another one of these.

My problem is that arithmetic is both a concept and a skill. Most of the teaching methods emphasize teaching the concept. This would be like focusing on teaching you how to ride a bike by trying to come up with constructive suggestions to improve your intuition about how it works, but minimizing the amount of time you actually get to be on the bike.

What is being lost is that basic math is a skill, and like all skills it needs repeated and constant practice sustained over multiple years. I see way too many students who can't do basic arithmetic after going through these "concept" oriented classes. Or to put it more strongly: if learning basic math isn't a boring repetitive chore, then you aren't doing it right.

In the worst of all possible worlds, straightforward skill practice is replaced by repetitive practice of the "concept" building exercises -- so it's still boring, and you don't even get that win. I see this often enough that if the outcome is going to be mathematical illiterates either way, I would rather ditch the "concept" building and just drill the arithmetic. It would be much like doing repetitive practice sessions of "envisioning yourself on a bike" without ever being on a bike. Imagine we treated reading like math: you weren't allowed just to read the books, you had to "read" them the correct way, demonstrating your mastery of parsing words from letters, to syllables, to words.

A kid shouldn't be allowed out of sixth grade if they cannot quickly answer the following questions:

40 - 16
8 * 9
1/2 - 1/3
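For anyone who wants to drill questions like these, the checks are mechanical. A minimal sketch in Python (the `fractions` module keeps the last answer exact instead of producing 0.1666...):

```python
from fractions import Fraction

# The three drill questions above, answered exactly.
print(40 - 16)                           # 24
print(8 * 9)                             # 72
print(Fraction(1, 2) - Fraction(1, 3))   # 1/6
```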

Comment Re:I just can't get excited about SpaceX (Score 5, Insightful) 87

Hmm... The gist of this is essentially correct, except for one detail: cost. The only number that is really going to matter in the end is how much money it takes to put one ton of stuff into orbit (or beyond) from the ground. Right now it appears to be $10,000,000 USD or even much higher (based on the numbers I see being thrown around on Slashdot). Government subsidies (such as in Russia) can hide some of this, but this seems to be the essential economic truth. As long as that remains the case, mankind is not going to be a spacefaring race, and venturing into space will mostly be for kicks and bragging rights (and maybe a bit of good science, such as Hubble). What SpaceX offers for the very first time is a path where we may reduce these costs by a factor of ten or more. If we can start putting a ton of stuff into space for less than $500,000, it will radically change what is possible -- the cost of doing something real goes from $200 trillion to maybe $10 trillion, something we could spend over 100 years. Think real space stations, and large space ships with landing vessels.
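The arithmetic behind those trillions is easy to check. A back-of-envelope sketch using only the rough figures quoted in this comment (the project mass is a hypothetical value chosen so today's per-ton price yields the $200 trillion figure):

```python
# Back-of-envelope launch economics; all figures are the comment's
# rough estimates, not real prices.
cost_per_ton_today = 10_000_000      # USD per ton to orbit today
cost_per_ton_cheap = 500_000         # USD per ton after a ~20x reduction

project_mass_tons = 20_000_000       # hypothetical large project

cost_today = cost_per_ton_today * project_mass_tons
cost_cheap = cost_per_ton_cheap * project_mass_tons
print(f"today: ${cost_today / 1e12:.0f} trillion")   # today: $200 trillion
print(f"cheap: ${cost_cheap / 1e12:.0f} trillion")   # cheap: $10 trillion
```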

Comment Depends on the situation (Score 2) 272

I have used Oracle, MySQL, and Mongo in production, and I have evaluated Cassandra for potential production use.

I can imagine situations where I could recommend any of the above. For example, if you are a large financial company with billions of rows, I would go with Oracle. If you have smarts but not money, and don't need somebody to sue if something goes wrong, then maybe Postgres would do. If I were building a simple web-based app with simple form submits, I would go with MySQL. If I had complex, unpredictable data blobs and unpredictable needs to run certain types of queries against them, I might recommend Mongo. If I had large amounts of data on which I wanted to do analytics, I would use Cassandra.

Cassandra wins when you have a lot of data and not a lot of complex real time queries against it. It is especially good at scaling up on cheap data storage (think 100s of terabytes). It also has an unreal "write" throughput (important for certain types of analytics which write out complex intermediate results) though that is not relevant for the case described.

The general problem with NoSQL solutions is that they increase the amount of storage needed for the equivalent amount of information: you are essentially redundantly storing the schema design with each "record" you store. This matters more than some might suspect, because when you can fit an entire collection into memory, read performance is much higher, and you usually need 1/5th to 1/10th as much RAM to do the job with a traditional relational database (especially since MySQL and its brethren handle moving data in and out of memory better than Mongo). This isn't so much the case for Cassandra because of its distributed storage design, but Cassandra really isn't usable for real-time transactions.
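To make the schema-per-record overhead concrete, here is a minimal sketch (the record layout and field names are invented for illustration): a JSON document repeats its field names in every record, while a relational row stores only the values, because the schema lives once in the table definition.

```python
import json
import struct

# One hypothetical record with three fields.
record = {"user_id": 123456, "score": 987, "active": True}

# Document store: field names travel with every record.
doc_bytes = len(json.dumps(record).encode("utf-8"))

# Relational row: values only (int32, int64, bool); schema stored once.
row_bytes = struct.calcsize("<iq?")

print(doc_bytes, row_bytes)  # the document is several times larger per record
```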

My recommendation: use a traditional database -- if you are in a Microsoft shop use SQL Server; otherwise I like Postgres or MySQL. If, however, you have complex data storage needs that a NoSQL solution is perfect for, then go with that. If you are into back-end analytics, copy the data as it comes in and put it into Cassandra (or one of its similar brethren) as well.

Comment Failure in obviousness testing (Score 2) 192

If I tried to submit an article titled "Algorithm for using instruments in surgery: nurse hands over knives handle first" to any medical journal with a reasonably good reputation, I would be rejected instantly. But the equivalent level of obviousness makes it through the patent office all the time. Software I have worked on has been patented more than once, and in every case I thought the patent obvious to the point of silliness.

When I was younger, I naively believed that patents demonstrated that the inventor was truly clever and original -- the lightbulb, the jet engine, the silicon chip, and so on. Now what I see is a world filled with patents that are a waste of everybody's time, and the few who actually invent something new no longer get the positive reputation that used to come with filing a patent.

The solution is simple. Make the patent filer pay a few thousand dollars, use that money to pay world-class experts in the field, and then ask the experts: is the invention truly original and of such significant value that keeping its details secret would actively harm mankind?

If the patent isn't worth paying a few thousand dollars to file, then why should we even be considering it?

Comment Bell Curve (Score 1) 312

I find this article quite confusing. Is the actual suggestion that we should be using the mean deviation as the way of capturing the general variance of our data sets? Or to put it another way, does he want "deviation" measures that do not give us a real sense of the larger deviations that can occur with some real probability? For example, with temperatures, the standard deviation is more likely than a simple "mean deviation" to warn us that periods of significantly higher or lower temperatures can occur.

Adding to my confusion, there is no reference to articles, books, or other material supporting the general thesis. If the "mean deviation" is better than the "standard deviation", give some concrete examples and supporting mathematics.

Also, there is no mention of "bell curve" distributions versus "non bell curve" distributions. Standard deviation computations are built around bell curve distributions, which is the source of their mathematical soundness. For example, if I were to take every number and raise it to the fourth power, the standard deviation would not work so well on the new set of numbers. Is the author suggesting that typical sampling distributions of sampled events tend not to be "bell curve" like?
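The difference at issue is easy to demonstrate. In this sketch (the data set is invented: small anomalies plus one large excursion), the standard deviation reacts more strongly to the outlier than the mean absolute deviation does, which is exactly why it gives a better sense of the larger swings that can occur:

```python
import statistics as st

# Invented data: small day-to-day swings plus one large excursion.
data = [1, -2, 0, 2, -1, 1, 0, -2, 15]

mean = st.fmean(data)
mean_dev = st.fmean(abs(x - mean) for x in data)  # mean absolute deviation
std_dev = st.pstdev(data)                          # population standard deviation

print(round(mean_dev, 2), round(std_dev, 2))       # ~3.09 vs ~4.92
```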

Standard deviation is taught in 7th grade in my local school. It shows up constantly in any standard K-12 curriculum. To challenge this, you really should bring a lot more substance to any argument that we should do things differently.

For example, I could argue that we should use 1:2 to represent 1/2 because the slash (/) should be used for logical dependency arguments instead. I could create lots of examples and go into a diatribe about how people constantly misuse fractions and ratios because they use a slash in their construction. But I would still be spouting nonsense.

Comment Fixing the patent system (Score 4, Insightful) 347

This is just another in a long series of Slashdot articles pointing out the broken nature of our patent system. What I have not seen is any serious proposal for fixing the issues beyond "throw it all out". I have to agree that making software (even software running on specific hardware configurations) unpatentable is superior to the current patenting regime. Something similar could be said about some of the pharmaceutical patenting going on as well (make the drug last "seven days" instead of "one" and get to extend the patent).

What if we made patents peer reviewed by a group of high-profile experts in the field in which the patent is filed? Notable software professionals would be consulted for software patents. This group would set a high bar on the "obviousness" and "prior art" tests, so that rewriting prior art in a different language with a slightly different spin would not make it past them. The group would be paid from the (likely substantial) fees charged to the person filing the patent. This is how research articles are handled at the best scientific journals. If a patent is laughably far from being publication-worthy for a reputable scientific journal, why are we letting it control millions (or billions) of dollars of commerce? Currently we force our higher courts to learn all types of arcana before they are able to kill a patent based on prior art and obviousness. Using a group of true experts (not the underpaid and overworked staff at the patent office) would do a lot to improve the situation. Patent lawyers are not a sufficient substitute.

Comment Tolkien fundamentally different (Score 1) 505

I am imagining a prize committee trying to decide whether to give an award to PostScript or HTML as the best page description language when HTML first came out. The criteria that seem to be used to choose "good literature" would pick PostScript every time. PostScript is far more sophisticated, allows far more options, and has a much richer vocabulary for describing positioning, graphing, fonts, scaling, and so on. By any judgment of functionality, PostScript would seem to destroy HTML.

Tolkien beats out a lot of other supposedly excellent authors the way HTML beats PostScript (or any other complicated SGML you might propose). There is something about it that is fundamentally different and better. HTML appears trivial when compared to PostScript, but that is its strength, not its weakness. Tolkien is, in many ways, the same.

Comment The imprecision of the real world (Score 1) 808

I have seen a few comments allude to this, but I thought I would focus on this particular issue. Most of the arguments about licensing assume that coding is an isolated act of creativity with no ambiguities creeping in, but people make mistakes. Let's say you are running a company with a software development group, and assume there are five errors per significant body of work (for those who want a precise stat: two hundred lines of code, five mistakes in logic or detail). In other words, assume your developers are human and make mistakes out of ignorance, out of losing track of details, or out of the general confusion of working on such a large, complex project. Many of the mistakes perceptible to end users are scrubbed out (for the most part) by the QA department. But that still means that mistakes of every other conceivable sort remain in the code base.

Now assume that at least some of the code in your company is under a proprietary license (maybe you bought a third-party library to incorporate into your deliverable, or you just don't want competitors popping up with a codebase cloned from yours). Could you contemplate even for a moment using GPL (or even LGPL) code in your suite of products? Even if a large part of your product base could ship under a GPL license with no significant impact to your bottom line, would you still do it given the fallibility of programmers? Given the errors programmers tend to make, is it not highly likely that GPL code would end up incorporated into software projects that were meant to be closed source? Isn't it true that, given the error-prone nature of humans, the GPL is truly a virus in its ability to replicate and introduce itself into foreign hosts? No wonder legal departments at companies view it with such hostility.

Submission + - The Uncertain Future of Mono (

snydeq writes: "Fatal Exception's Neil McAllister sees an uncertain future for Mono in the wake of recent Attachmate Mono layoffs, one that may hinge ironically on help from Microsoft itself. 'To lose all of the potential of these tools now would be a terrible shame. But it seems unlikely that Mono will be able to keep up with the pace of .Net without some sort of commercial backing,' McAllister writes. 'The most likely candidate might be the least-expected one. Microsoft has been working to revise its stance on open source for the last few years, softening its rhetoric and even sponsoring open source projects through the Outercurve Foundation (née CodePlex). Maybe it's high time Microsoft put its money where its mealy mouth is.'"

Submission + - Star Wars MMO: EA's Big Bet to Cost $100M ( 1

donniebaseball23 writes: EA's BioWare is developing its first-ever MMORPG in Star Wars: The Old Republic, and the publisher is betting big that the project will be a huge success. Wedbush analyst Michael Pachter says development alone cost an estimated $80 million, with marketing and distribution adding another $20 million. The good news is it shouldn't take much to break even. "We estimate that EA will cover its direct operating costs and break even at 500,000 subscribers (this is exceedingly conservative, and the actual figure is probably closer to 350,000), meaning that with 1.5 million paying subscribers, EA will have 1 million profitable subs," Pachter noted.
