Comment Re:"So who needs native code now?" (Score 1) 289

Unless and until some unforeseen, miraculous breakthrough happens in language design, GCd languages will always be slower when it comes to memory management. And because memory management is so critical for complex applications, GCd languages will effectively always be slower, period.

This isn't true. Have a look at "Quantifying the Performance of Garbage Collection vs. Explicit Memory Management". The take-away is that GC'd languages are only slower if you are unwilling to pay an extra memory cost, typically 3-4x the footprint of the explicitly managed program. Given that GC gives you safety from dangling-pointer dereferences for free, I think that's a fair tradeoff for most applications. (BTW, you can run the Boehm collector on explicitly managed code to identify pointer safety issues.)

Comment Re:Maximum precision? (Score 1) 289

I was being glib. Just nitpicking on the phrase "maximum precision". Sorry, it's a bad habit developed from working around a bunch of pedantic nerds all day.

Thanks for the pointer about native ints, although I can't seem to find any kind of authoritative reference about this. This guy claims that asm.js converts these to native ints (see Section 2.3: Value Types), but his link seems to be talking about the JavaScript runtime, not the asm.js compiler. If you have a reference, I'd appreciate it if you'd send it along.
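In the meantime, here's the kind of thing I mean, purely as a sketch (the module and function names are made up; the "|0" coercions are how asm.js annotates 32-bit ints per the spec's Value Types section, and whether the engine actually keeps them in machine registers is exactly the part I can't find a reference for):

function IntMath(stdlib) {
  "use asm";
  function add(x, y) {
    x = x | 0;            // parameter annotations: treat x and y as 32-bit ints
    y = y | 0;
    return (x + y) | 0;   // coerce the result back to a 32-bit int
  }
  return { add: add };
}

// Hypothetical usage: IntMath(window).add(2, 3) returns 5 whether or not
// the engine validates the module as asm.js.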

Comment Maximum precision? (Score 4, Informative) 289

Let's just open up my handy JavaScript console in Chrome...

(0.1 + 0.2) == 0.3
false

It doesn't matter how many bits you use: binary floating point can't represent 0.1, 0.2, or 0.3 exactly, so it's always an approximation. In IEEE doubles, which is what JavaScript numbers are, the rounding errors don't cancel, and the comparison above comes out false.
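If you're stuck with JS doubles, the usual workaround is to compare within a tolerance instead of testing exact equality. A quick sketch (the epsilon here is just a placeholder; pick one that makes sense for your data):

function approxEqual(a, b, eps) {
  eps = (eps === undefined) ? 1e-9 : eps;   // tolerance, chosen per application
  return Math.abs(a - b) < eps;
}

approxEqual(0.1 + 0.2, 0.3);   // true
(0.1 + 0.2) == 0.3;            // still false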

If they're saying that JavaScript is within 1.5x of native code, they're cherry-picking the results. There's a reason languages built for people who care about numerics offer a rich set of numeric datatypes.

Comment Re:Numerical computation is pervasive (Score 5, Interesting) 154

Not to mention floating-point computation, numerical analysis, anytime algorithms, and classic randomized algorithms like Monte Carlo methods. Approximate computing has been around for ages. The typical goal is to save computation, nowadays usually measured in terms of asymptotic complexity ("Big O"). Sometimes (as with floating point) the tradeoff is necessary just to make the problem tractable; numerical integration is much cheaper than symbolic integration, for example.
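For anyone who hasn't seen it, the textbook example of that dial is a Monte Carlo estimate of pi. It has nothing to do with this particular paper; it's just the classic precision-for-computation tradeoff:

// Sample random points in the unit square; the fraction landing inside the
// quarter circle approaches pi/4, so more samples buy more precision.
function estimatePi(samples) {
  var inside = 0;
  for (var i = 0; i < samples; i++) {
    var x = Math.random(), y = Math.random();
    if (x * x + y * y <= 1) inside++;
  }
  return 4 * inside / samples;
}

estimatePi(1000);      // rough
estimatePi(10000000);  // closer, at 10,000x the work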

The only new idea here is applying approximate computing specifically to trade precision for lower power. The research has less to do with new algorithms and more to do with new applications of classic ones.

Comment Re:Cross language - what .Net gets right (Score 1) 286

Believe it or not, CIL (or MSIL in Microsoft-speak), the bytecode for .NET, is an ECMA standard, and thanks to Mono, implementations of both the JIT and the standard libraries exist for practically all modern platforms. So I'd say: "competition for portable applications". Really! Just take a look at Gtk#. As a result, there are numerous applications for Linux written in .NET languages (e.g., Banshee). Having written huge amounts of code in both JVM languages (Java, Scala, JRuby, and Clojure) and .NET languages (F# and C#), I would take .NET over the JVM for a new project any day.

Also, to pre-emptively swat down this counter-argument: while the Mono people and Microsoft may have had some animosity in the past, that is most definitely not the case anymore. Most of the Mono people I have spoken to (yes, in person) say their relationship with Microsoft is pretty good.

Build systems and dependency management for the JVM are their own mini-nightmare. .NET's approach isn't perfect, but compared to [shudder] Ant, Maven, Buildr, SBT, and on and on and on... it largely just works.

Comment Re:Cross language - what .Net gets right (Score 3, Informative) 286

P/Invoke, the other interop mechanism alluded to by the poster, is substantially faster than COM interop. I spent a summer at Microsoft Research investigating ways to make interop for .NET faster. There are maybe 20 or so cycles of overhead for a P/Invoke call, which is practically free from a performance standpoint. COM, in addition to having its own [reference-counting] garbage collector, has extensive automatic-marshaling capabilities. These things make interop easy, but they add substantial overhead compared to P/Invoke. On the other hand, P/Invoke is somewhat painful to use, particularly if you want to avoid marshaling overheads and play nice with .NET's [tracing] garbage collector and managed type system. P/Invoke will often happily accept your ginned-up type signatures and then fail at runtime. Ouch.

Coming from the Java world, I was totally blown away by what .NET can do. I can't speak for Microsoft myself, but I would be very surprised if .NET were not going to stick around for a long time. With the possible exception of Haskell's, the .NET runtime is probably the most advanced managed runtime available to ordinary programmers (i.e., non-researchers). And, with some small exceptions (BigInteger performance... GRRR!), Mono is a close second. What the Scala compiler manages to squeeze out of the poor little JVM is astonishing, but Scala's performance is often bad in surprising ways, largely due to workarounds for shortcomings in the JVM's type system.

Comment Re:overly broad then overly specific definition (Score 1) 318

I think the key distinction is that a robot is autonomous to some degree. It needs to make use of techniques from AI; that is, it learns.

As someone who dabbles in AI techniques to solve problems in my own domain (programming language research), I've noticed that AI solutions tend to have the quality that the algorithms producing them are extremely general. For example, a robot that can manipulate objects may not even possess a subroutine telling it how to move its hands. Often, it learns these things by example instead. It "makes sense" of the importance of these actions through statistical calculations, or logical solvers, or both. Since information in this context is subject to many "interpretations", these algorithms often most closely resemble search algorithms! If the programmer provides anything, it's in the form of "hints" to the algorithm (i.e., heuristics). To an outsider, it's completely non-obvious how "search" and "object manipulation" are related, but when you frame the problem that way, you get weird and sometimes wonderful results. Most notably, autonomy. Sadly, you also sometimes get wrong answers ;)
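To make the search angle concrete, here's a toy sketch. It's obviously not a real robotics algorithm, and the "manipulation" problem is made up; the point is that the search code is completely generic, and the only domain knowledge the programmer supplies is the heuristic "hint":

// Generic greedy best-first search: domain knowledge enters only through
// successors() and heuristic().
function bestFirst(start, isGoal, successors, heuristic) {
  var frontier = [start], seen = {};
  while (frontier.length > 0) {
    frontier.sort(function (a, b) { return heuristic(a) - heuristic(b); });
    var state = frontier.shift();        // expand the most promising state
    if (isGoal(state)) return state;
    if (seen[state]) continue;
    seen[state] = true;
    successors(state).forEach(function (s) { frontier.push(s); });
  }
  return null;
}

// Toy "manipulation" problem: reach 43 from 1 using the moves *2 and +3.
bestFirst(
  1,
  function (n) { return n === 43; },
  function (n) { return [n * 2, n + 3]; },
  function (n) { return Math.abs(43 - n); }   // the programmer's only "hint"
);  // returns 43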

If your washing machine could go collect your dirty laundry and wash it without your help, I'd call it a laundry robot. Particularly if you could tell it something like "please do a better job cleaning stains", and it could figure out what you meant by that. Note that a Roomba doesn't know anything about your house until you turn it on the first time.

Comment Re:Fixed-point arithmetic (Score 4, Interesting) 226

Experiments can vary wildly with even small differences in floating-point precision. I recently had a bug in a machine learning algorithm that produced completely different results because I was off by one trillionth! I was being foolish, of course, because I hadn't used an epsilon for my floating-point comparisons, but you get the idea.

But even if you're a good engineer and you're careful with your floating-point numbers, the fact remains: floating point is approximate computation. And for many kinds of mathematical problems, like dynamical systems, that approximation changes the result. One of the founders of chaos theory, Edward Lorenz (of Lorenz attractor fame), discovered the problem when he truncated the precision of floating-point numbers from a printout while re-entering them into a simulation. The simulation behaved completely differently even though the differences were only out in the thousandths. That was a weather simulation. See where I'm going with this?
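You can see the same effect without a weather model. Here's a toy sketch using the logistic map (not Lorenz's actual equations, just the cheapest chaotic system I know of): nudge the starting point by one part in a trillion, and after a few dozen iterations the two trajectories have nothing in common.

function logistic(x) { return 3.9 * x * (1 - x); }   // r = 3.9 is in the chaotic regime

var a = 0.5;
var b = 0.5 + 1e-12;      // "truncation error": off by one trillionth
for (var i = 0; i < 60; i++) {
  a = logistic(a);
  b = logistic(b);
}
// After 60 iterations, a and b are completely different numbers.
console.log(a, b, Math.abs(a - b));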

Comment Re:logic (Score 1) 299

There's an interesting project out of Microsoft Research called TouchDevelop: a language/IDE designed to be easy to use from touch interfaces. I've spoken with one of the researchers at length, and his motivation boils down to the same argument you make. When he was a kid, programming was simple to get into: he used BASIC on his Apple II. It was a language that kids could teach other kids, because their parents largely didn't "get it". MSR has a pilot program in one of the local high schools where they teach an intro CS course using TouchDevelop. Anecdotally, kids seem to pick up the language very quickly, and the ease of writing games motivates a lot of kids who wouldn't otherwise be motivated.

That said, I think TouchDevelop's interface (like most of Metro) is a bit of a train wreck. I am a professional programmer, but I find myself floundering around. Part of the issue is Metro's complete annihilation of the distinction between text, links, and buttons. Unfortunately, iOS 7 has continued that trend. But I digress...

TouchDevelop is also not a graphical language like LabVIEW, which I think is a bit of a mistake. While I prefer a text-based language for real work, a visual interface would be entirely appropriate for a pedagogical language. Heck, LabVIEW is used daily by lots of real engineers who simply want some basic programmability for their tools without having to invest the [significant] time in learning a text-based language.
