Comment Maximum precision? (Score 4, Informative) 289

Let's just open up my handy JavaScript console in Chrome...

(0.1 + 0.2) == 0.3
false

It doesn't matter how many bits you use in floating point: it is always an approximation. Neither 0.1 nor 0.2 has an exact base-2 representation, so in binary floating point the comparison above will never be true.

If they're saying that JavaScript is within 1.5x of native code, they're cherry-picking the results. There's a reason why languages built for people who care about numerics offer a rich set of numeric datatypes.
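For illustration, here's a quick sketch of the usual workaround in that same console (my own snippet, not from the article): compare within a tolerance, and look at what the sum actually is.

// Compare doubles to within a tolerance instead of exactly.
// Number.EPSILON is the gap between 1 and the next representable double.
function approxEqual(a, b, eps = Number.EPSILON) {
  return Math.abs(a - b) < eps * Math.max(Math.abs(a), Math.abs(b), 1);
}

approxEqual(0.1 + 0.2, 0.3); // true
(0.1 + 0.2).toPrecision(17); // "0.30000000000000004"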

Comment Re:Numerical computation is pervasive (Score 5, Interesting) 154

Not to mention floating-point computation, numerical analysis, anytime algorithms, and classic randomized algorithms like Monte Carlo methods. Approximate computing has been around for ages. The typical goal is to trade accuracy for less computation, with the savings nowadays expressed in terms of asymptotic complexity ("Big O"). Sometimes (as with floating point) the tradeoff is necessary just to make the problem tractable (e.g., numerical integration is much cheaper than symbolic integration).

The only new idea here is applying approximate computing specifically to trade precision for lower power. The research has less to do with new algorithms and more to do with new applications of classic ones.
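As a generic illustration of that precision-for-work tradeoff (my own toy example, nothing to do with the research in question), here's the classic Monte Carlo estimate of pi: the sample count is exactly the knob you turn.

// Monte Carlo estimate of pi: more samples buy more precision,
// fewer samples buy less work. (Toy illustration only.)
function estimatePi(samples) {
  let inside = 0;
  for (let i = 0; i < samples; i++) {
    const x = Math.random(), y = Math.random();
    if (x * x + y * y <= 1) inside++;
  }
  return 4 * inside / samples;
}

estimatePi(1e3); // cheap and rough
estimatePi(1e7); // ~10,000x the work for roughly 100x less error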

Comment Re:Cross language - what .Net gets right (Score 1) 286

Believe it or not, CIL (or MSIL in Microsoft-speak), the bytecode for .NET, is an ECMA standard, and implementations of both .NET JIT'ers and standard libraries exist for practically all modern platforms, thanks to Mono. So I'd say: "competition for portable applications". Really! Just take a look at Gtk#. As a result, there are numerous applications for Linux written in .NET languages (e.g., Banshee). Having written huge amounts of code in both JVM languages (Java, Scala, JRuby, and Clojure) and .NET languages (F# and C#), I would take .NET over the JVM for a new project any day.

Also, to pre-emptively swat down this counter-argument: while the Mono people and Microsoft may have had some animosity in the past, it is most definitely not the case any more. Most of the Mono people I have spoken to (yes, in person) say that their relationship with Microsoft is pretty good.

Build systems and dependency management for the JVM are their own mini-nightmare. .NET's approach isn't perfect, but compared to [shudder] Ant, Maven, Buildr, SBT, and on and on and on... it largely just works.

Comment Re:Cross language - what .Net gets right (Score 3, Informative) 286

P/Invoke, the other interop mechanism alluded to by the poster, is substantially faster than COM interop. I spent a summer at Microsoft Research investigating ways to make interop for .NET faster. There are maybe 20 or so cycles of overhead for a P/Invoke call, which is practically free from a performance standpoint. In addition to having its own [reference-counting] garbage collector, COM has extensive automatic-marshaling capabilities. These things make interop easy, but they add substantial overhead compared to P/Invoke. On the other hand, P/Invoke is somewhat painful to use, particularly if you want to avoid marshaling overheads and play nice with .NET's [tracing] garbage collector and managed type system. P/Invoke will often happily accept your ginned-up type signatures and then fail at runtime. Ouch.

Coming from the Java world, I was totally blown away by what .NET can do. I can't speak for Microsoft myself, but I would be very surprised if .NET were not going to stick around for a long time. With the possible exception of Haskell's runtime, the .NET runtime is probably the most advanced managed runtime available to ordinary programmers (i.e., non-researchers). And, with some small exceptions (BigInteger performance... GRRR!), Mono is a close second. What the Scala compiler is capable of squeezing out of the poor, little JVM is astonishing, but Scala's performance is often bad in surprising ways, largely due to workarounds for shortcomings in the JVM's type system.

Comment Re:overly broad then overly specific definition (Score 1) 318

I think the key distinction is that a robot is autonomous to some degree: it needs to make use of techniques from AI, i.e., it learns.

As someone who dabbles in AI techniques to solve problems in my own domain (programming language research), I find that AI solutions tend to share one quality: the algorithms that produce them are extremely general. For example, a robot that can manipulate objects may not even possess a subroutine that tells it how to move its hands; often, it learns these things by example instead. It "makes sense" of the importance of these actions through statistical calculations, or logical solvers, or both. Since information in this context is subject to many "interpretations", these algorithms often most closely resemble search algorithms! If a programmer provides anything, it's in the form of "hints" to the algorithm (i.e., heuristics). To an outsider, it's completely non-obvious how "search" and "object manipulation" are related, but when you frame the problem that way, you get weird and sometimes wonderful results, most notably autonomy. Sadly, you also sometimes get wrong answers ;)
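To make the "hints, not instructions" point concrete, here's a toy greedy search (entirely my own illustration, nothing like a real robotics system): the programmer supplies only a goal test and a heuristic, not a step-by-step procedure.

// Toy greedy best-first search: the only domain knowledge is the heuristic.
function greedySearch(start, isGoal, neighbors, heuristic, maxSteps = 1000) {
  let current = start;
  for (let step = 0; step < maxSteps; step++) {
    if (isGoal(current)) return current;
    // Follow the heuristic's "hint"; greedy choices can also go wrong, of course.
    current = neighbors(current).reduce((a, b) =>
      heuristic(a) <= heuristic(b) ? a : b);
  }
  return null; // gave up
}

// Example: wander toward the origin on a number line.
greedySearch(7, (x) => x === 0, (x) => [x - 1, x + 1], (x) => Math.abs(x)); // 0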

If your washing machine could go collect your dirty laundry and wash it without your help, I'd call it a laundry robot. Particularly if you could tell it something like "please do a better job cleaning stains", and it could figure out what you meant by that. Note that a Roomba doesn't know anything about your house until you turn it on the first time.

Comment Re:Fixed-point arithmetic (Score 4, Interesting) 226

Experiments can vary wildly with even small differences in floating-point precision. I recently had a bug in a machine learning algorithm that produced completely different results because I was off by one trillionth! I was being foolish, of course, because I hadn't used an epsilon for my floating-point comparisons, but you get the idea.

But it turns out that even if you're a good engineer and you're careful with your floating-point numbers, the fact remains: floating point is approximate computation. And for many kinds of mathematical problems, like dynamical systems, this approximation changes the result. One of the founders of chaos theory, Edward Lorenz, of Lorenz attractor fame, discovered the problem when he re-entered numbers from a printout into a simulation; the printout had truncated the precision of the FP values. The simulation behaved completely differently even though the differences were only in the thousandths. That was a weather simulation. See where I'm going with this?
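If you want to see that effect for yourself, here's a crude sketch of the Lorenz system (my own toy integration, not Lorenz's actual model or code): perturb the initial condition in the sixth decimal place and the two runs end up in completely different places.

// Crude Euler integration of the Lorenz system with the classic parameters.
function lorenz(x, y, z, steps, dt = 0.01) {
  const sigma = 10, rho = 28, beta = 8 / 3;
  for (let i = 0; i < steps; i++) {
    const dx = sigma * (y - x);
    const dy = x * (rho - z) - y;
    const dz = x * y - beta * z;
    x += dx * dt; y += dy * dt; z += dz * dt;
  }
  return [x, y, z];
}

lorenz(1.000000, 1, 1, 5000); // this run and...
lorenz(1.000001, 1, 1, 5000); // ...this one diverge wildly despite a 1e-6 difference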

Comment Re:logic (Score 1) 299

There's an interesting project out of Microsoft Research called TouchDevelop: a language/IDE designed to be easy to use from touch interfaces. I've spoken with one of the researchers at length, and his motivation boils down to the same argument you make. When he was a kid, programming was simple to get into: he used BASIC on his Apple II. It was a language that kids could teach other kids, because their parents largely didn't "get it". MSR has a pilot program in one of the local high schools where they teach an intro CS course using TouchDevelop. Anecdotally, kids seem to pick up the language very quickly, and the ease of writing games motivates a lot of kids who wouldn't otherwise be motivated.

That said, I think TouchDevelop's interface (like most of Metro) is a bit of a train wreck. I'm a professional programmer, and I still find myself floundering around. Part of the issue is Metro's complete annihilation of the distinction between text, links, and buttons. Unfortunately, iOS 7 has continued this trend. But I digress...

TouchDevelop is also not a graphical language like LabVIEW, and I think that's a bit of a mistake. While I agree that I prefer a text-based language for real work, a visual interface would be entirely appropriate for a pedagogical language. Heck, LabVIEW is used daily by lots of real engineers who simply want some basic programmability for their tools without having to invest the [significant] time in learning a text-based language.

Comment Re:Clueless (Score 1) 125

Counting operations is not enough. Memory access time is nonuniform because of cache effects, architecture (NUMA, distributed memory), code layout (e.g., is your loop body one instruction too large for the L1 i-cache?), and so on. Machine instructions have different timings. CISC instructions may be slower than their serial RISC counterparts, or they may not be. SMT may make sequential code faster than parallel code by resolving dependencies faster. Branch predictors and speculation can precompute parts of your algorithm with otherwise-idle functional units. Better algorithms can do more work with fewer "flops". And on and on and on...

The best way to write fast code is to write it and run it (on representative inputs). Then write another version and run that. Treat it like an experiment, and do a hypothesis test to see which version has a statistically significant speedup. That's the only way to write fast code on modern machines; the idea that you can write fast code by intuition alone, without measuring, is largely a myth on modern architectures.
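Here's roughly what I mean, as a sketch you could paste into a browser console (my own example; a real comparison would use many runs and a proper statistical test rather than eyeballing medians):

// Time two implementations on the same representative input and compare.
function timeIt(fn, input, runs = 20) {
  const samples = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn(input);
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(runs / 2)]; // median is less noisy than the mean
}

const sumLoop   = (xs) => { let s = 0; for (const x of xs) s += x; return s; };
const sumReduce = (xs) => xs.reduce((s, x) => s + x, 0);

const data = Array.from({ length: 1e6 }, Math.random);
timeIt(sumLoop, data);   // compare these two numbers on your machine,
timeIt(sumReduce, data); // not in your head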

Comment Re:Clueless (Score 2) 125

As a computer scientist:

We rarely refer to the cost of an algorithm in terms of flops, since it is bound to change with 1) software implementation details, 2) hardware implementation details, and 3) input data dependencies (for algorithms whose behavior depends on the input). Instead, we describe algorithms in "Big O" notation, which is a convention for describing the theoretical worst-case performance of an algorithm in terms of n, the size of the input. Constant factors are ignored. This theoretical performance figure allows apples-to-apples comparisons between algorithms. Of course, in practice, constant factors need to be considered for many specific scenarios.
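A tiny illustration of the convention (my own example): two ways to check an array for duplicates, with the same answer but different asymptotics.

function hasDupQuadratic(xs) { // O(n^2): compare every pair
  for (let i = 0; i < xs.length; i++)
    for (let j = i + 1; j < xs.length; j++)
      if (xs[i] === xs[j]) return true;
  return false;
}

function hasDupLinear(xs) { // O(n): one pass with a Set
  const seen = new Set();
  for (const x of xs) {
    if (seen.has(x)) return true;
    seen.add(x);
  }
  return false;
}
// Big O hides the constant factors (hashing isn't free), which is why
// the quadratic version can still win on very small inputs.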

"flops" are more commonly used when talking about machine performance, and that's why they're expressed as a rate. You care about the rate of the machine, since that often directly translates into performance. Computer architects also measure integer operations per second, which is in many ways more important for general-purpose computing. Flops are really only of interest nowadays for people doing scientific computing now that graphics-related floating point things have been offloaded to GPUs.

If you want to be pedantic, computers are, of course, hardware implementations of a Turing machine. But it's silly to talk about them using Big O notation, since the "algorithm" for (sequential) machines is mostly the same regardless of what machine you're talking about. The constant factors here are the most important thing, since these things correspond to gate delay, propagation delay, clock speed, DRAM speed, etc.

Comment Re:1st step. (Score 1) 227

Microsoft doesn't use Perforce. They use an ancient, crappy fork of Perforce called Source Depot. I had to endure this piece of garbage while I interned at Microsoft. I can only guess that it's institutional momentum: SD is so bad that I ended up using Subversion locally and periodically syncing back to SD when I needed to share my code. Source Depot even lacks run-of-the-mill Subversion features like "svn status"; when I asked what the SD equivalent was, a long conversation ensued with other Microsoft developers that ended with somebody sending me a PowerShell script. Gah. Of course, the number one thing I missed there was the UNIX shell.

That said, I was pleasantly surprised by the rest of Microsoft's toolchain. Having come from Eclipse (awful), IntelliJ (OK), and Xcode (baffling), I found Visual Studio great. It's so much better than any other IDE I've ever used that I would actually pay full price for it (I get the "Microsoft alum" discount) if I didn't have to use it on Windows. And Microsoft was more than willing to let me write code in F#, which was fantastic. In general, Microsoft takes care of their devs. Having integrated git is just icing on the cake.
