Comment Precision, recall, adversarial threats? (Score 1) 138

My concern is how Google handles removing things accurately. This isn't the white pages; there isn't some person assembling these indices. They're generated by learning algorithms, and those algorithms themselves misclassify information. So how do you get all of your references removed without inflicting collateral damage? What about people with the same name? Furthermore, how does Google know that requests are legitimate? You can imagine political candidates requesting that Google remove their opponents.
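To put numbers on that worry: removal quality is really a precision/recall problem. Here's a minimal sketch with made-up counts (a hypothetical illustration, not anything we know about Google's system):

// Hypothetical outcome of one removal request:
// tp = the requester's pages correctly removed
// fp = someone else's pages removed (collateral damage)
// fn = the requester's pages that were missed
var tp = 90, fp = 10, fn = 30;

var precision = tp / (tp + fp); // 0.90: 10% of removals hit the wrong person
var recall    = tp / (tp + fn); // 0.75: 25% of the requester's pages remain indexed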

Whatever algorithm Google is using to do this, its details are in the public interest, and I'd like to see them published.

Comment Re:Calculation was flawed (Score 1) 127

Not to mention: many UVA grads likely stay in Virginia, and Stanford grads likely stick around in Silicon Valley (e.g., 100% of the Stanford grads that I know). The cost of living in Silicon Valley is dramatically higher than in Virginia. E.g., the cheapest condo in Palo Alto listed on Zillow is priced at $548,000 (which is > $300k above the already insane appraisal value) and for that, you get 679 square feet. When I worked in Mountain View, my housing was (fortunately!) covered by my employer because I was an intern, but my boss ended up taking a job elsewhere because he and his wife simply could not afford anything more spacious than an RV. If you don't adjust salaries for the cost of living, your study is fundamentally flawed.
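As a back-of-the-envelope sketch (the index values below are invented for illustration, not real data), adjusting for cost of living can flip which salary is actually bigger:

// Hypothetical cost-of-living indices, normalized so Virginia = 100.
var colIndex = { virginia: 100, siliconValley: 180 };

// Deflate a nominal salary by the region's cost-of-living index.
function realSalary(nominal, region) {
  return nominal * (100 / colIndex[region]);
}

realSalary(80000, "virginia");       // 80000
realSalary(120000, "siliconValley"); // ~66667: nominally higher, effectively lower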

Comment Re:Guarantee (Score 1) 716

We don't need to certify programmers; we need to certify programs. I'm not sure that certifying programmers would provide much benefit beyond serving as a prior on whether a given programmer can get the job done (and I'm not a Bayesian, so...).

On the other hand, many properties of programs themselves can and should be verified. A great deal of current programming language research is devoted both to improving the capabilities of automatic program verification and to designing languages more amenable to verification. Functional languages, for instance, rule out entire classes of bugs present in imperative languages. People complain that they're hard to understand. Maybe. I argue that they're only hard to understand if you're the kind of person who does not care whether your program is correct.
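To make that concrete, here is a tiny sketch in JavaScript (not a functional language, but it can mimic the discipline): with frozen data and pure functions, a whole class of shared-mutation bugs is ruled out by construction.

"use strict";

// Shared mutable state lets any caller silently change behavior for everyone.
// Freezing the object forbids that outright.
var config = Object.freeze({ retries: 3, timeout: 1000 });
// config.timeout = 5; // TypeError in strict mode: mutation is impossible

// A pure function returns a new value instead of mutating its argument,
// so its behavior depends only on its inputs.
function withTimeout(cfg, timeout) {
  return Object.freeze({ retries: cfg.retries, timeout: timeout });
}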

Comment Re:"So who needs native code now?" (Score 1) 289

Unless and until some unforeseen, miraculous breakthrough happens in language design, GCd languages will always be slower when it comes to memory management. And because memory management is so critical for complex applications, GCd languages will effectively always be slower, period.

This isn't true. Have a look at Quantifying the Performance of Garbage Collection vs. Explicit Memory Management. The take-away is that GC'd languages are only slower if you are unwilling to pay an extra memory cost: typically 3-4x the footprint of the equivalent explicitly-managed program. Given that GC gives you safety from dangling-pointer dereferences for free, I think that's a fair tradeoff for most applications (BTW, you can run the Boehm collector on explicitly-managed code to identify pointer safety issues).

Comment Re:Maximum precision? (Score 1) 289

I was being glib. Just nitpicking on the phrase "maximum precision". Sorry, it's a bad habit developed from working around a bunch of pedantic nerds all day.

Thanks for the pointer about native ints, although I can't seem to find any kind of authoritative reference about this. This guy claims that asm.js converts these to native ints (see Section 2.3: Value Types), but his link seems to be talking about the JavaScript runtime, not the asm.js compiler. If you have a reference, I'd appreciate it if you'd send it along.
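For what it's worth, the annotation mechanism in the asm.js draft spec is the |0 coercion; a minimal module looks like this (my own sketch, not code from the spec):

function MyModule(stdlib) {
  "use asm";
  function add(x, y) {
    // The |0 coercions are asm.js type annotations: they tell an
    // asm.js-aware compiler to treat x, y, and the result as native
    // 32-bit integers rather than doubles.
    x = x | 0;
    y = y | 0;
    return (x + y) | 0;
  }
  return { add: add };
}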

Comment Maximum precision? (Score 4, Informative) 289

Let's just open up my handy JavaScript console in Chrome...

(0.1 + 0.2) == 0.3
false

It doesn't matter how many bits you use in floating point; it is always an approximation. In base-2 floating point, 0.1 and 0.2 have infinitely repeating binary expansions, so the comparison above will never be true.

If they're saying that JavaScript is within 1.5x of native code, they're cherry-picking the results. There's a reason why languages built for people who care about numerics ship a rich set of numeric datatypes.
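If you're stuck with binary floating point, the usual workarounds look like this (a sketch; Number.EPSILON assumes an ES6-era engine):

// Compare with a tolerance instead of exact equality.
Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON
true

// Or dodge binary fractions entirely by computing in integer units (cents).
(10 + 20) === 30
true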

Comment Re:Numerical computation is pervasive (Score 5, Interesting) 154

Not to mention floating-point computation, numerical analysis, anytime algorithms, and classic randomized algorithms like Monte Carlo methods. Approximate computing has been around for ages. The typical scenario is trading accuracy to save computation, with the savings nowadays expressed in terms of asymptotic complexity ("Big O"). Sometimes (as is the case with floating point), the tradeoff is necessary to make the problem tractable (e.g., numerical integration is much cheaper than symbolic integration).
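For a concrete example of the precision-for-computation tradeoff, here is a minimal sketch of the classic Monte Carlo estimate of pi; fewer samples means less work and a coarser answer:

// Sample points uniformly in the unit square; the fraction landing
// inside the quarter circle approaches pi/4 as the sample count grows.
function estimatePi(samples) {
  var inside = 0;
  for (var i = 0; i < samples; i++) {
    var x = Math.random();
    var y = Math.random();
    if (x * x + y * y <= 1) inside++;
  }
  return 4 * inside / samples;
}

estimatePi(1000);    // cheap and rough: roughly pi +/- 0.05
estimatePi(1000000); // 1000x the work: typically within a few thousandths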

The only new idea here is applying approximate computing specifically to trade precision for lower power. The research has less to do with new algorithms and more to do with new applications of classic ones.

Comment Re:Cross language - what .Net gets right (Score 1) 286

Believe it or not, CIL (or MSIL in Microsoft-speak), the bytecode for .NET, is an ECMA standard, and implementations of both .NET JIT'ers and standard libraries exist for practically all modern platforms, thanks to Mono. So I'd say: "competition for portable applications". Really! Just take a look at Gtk#. As a result, there are numerous applications for Linux written in .NET languages (e.g., Banshee). Having written huge amounts of code in both JVM languages (Java, Scala, JRuby, and Clojure) and .NET languages (F# and C#), I would take .NET over the JVM for a new project any day.

Also, to pre-emptively swat down this counter-argument: while the Mono people and Microsoft may have had some animosity in the past, it is most definitely not the case any more. Most of the Mono people I have spoken to (yes, in person) say that their relationship with Microsoft is pretty good.

Build systems and dependency management for the JVM are their own mini-nightmare. .NET's approach isn't perfect but compared to [shudder] Ant, Maven, Buildr, SBT, and on and on and on... it largely just works.

Comment Re:Cross language - what .Net gets right (Score 3, Informative) 286

P/Invoke, the other interop mechanism alluded to by the poster, is substantially faster than COM interop. I spent a summer at Microsoft Research investigating ways to make interop for .NET faster. There are maybe 20 or so cycles of overhead for a P/Invoke call, which is practically free from a performance standpoint. COM, in addition to having its own [reference-counting] garbage collector, has extensive automatic-marshaling capabilities. These features make interop easy, but they add substantial overhead compared to P/Invoke. On the other hand, P/Invoke is somewhat painful to use, particularly if you want to avoid marshaling overheads and play nice with .NET's [tracing] garbage collector and managed type system. P/Invoke will often happily accept your ginned-up type signatures and then fail at runtime. Ouch.

Coming from the Java world, I was totally blown away by what .NET can do. I can't speak for Microsoft myself, but I would be very surprised if .NET were not going to stick around for a long time. With the possible exception of Haskell's, the .NET runtime is probably the most advanced managed runtime available to ordinary programmers (i.e., non-researchers). And, with some small exceptions (BigInteger performance... GRRR!), Mono is a close second. What the Scala compiler is capable of squeezing out of the poor, little JVM is astonishing, but Scala's performance is often bad in surprising ways, largely due to workarounds for shortcomings in the JVM's type system.
