
Comment Re: BI == Business Idiots (Score 1) 260

Nope (also [citation needed]). The go compiler is fast because it doesn't use modules/header files.

There are three compilers for Go: one based on the Plan 9 toolchain, one a GCC front end, and one an LLVM front end. True, none of them use header files, but headers stop being a compile-time problem for C-family languages once you use precompiled headers. The Plan 9 implementation is fast because it does a tiny subset of the optimisations that GCC or LLVM would do.

The GCC- and LLVM-based compilers have similar compile-time performance to C or Objective-C. They're only faster in comparison to C++ because they don't do any compile-time specialisation (which, by the way, something like the .NET CLR or a JVM will do in the JIT, but which Go never does). In C++, you pay a price at compile time for better run time[1] if you use templates, or pay it at run time if you use virtual functions; in Go you pay the price at run time and have no alternative. Unless you're the person implementing the generic Map type (though the Map can't be usefully parameterised, so you often end up paying it as a user of this type too).
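As a minimal sketch of that trade-off (the container type and names are just illustrative): a reusable container in Go has to traffic in interface{}, so every retrieval pays a dynamic type assertion at run time, where a C++ template would have been specialised at compile time.

    package main

    import "fmt"

    // A reusable container has to hold interface{} values, because the
    // language offers no compile-time parameterisation. Every retrieval
    // therefore pays for a dynamic type assertion at run time.
    type Stack struct {
        items []interface{}
    }

    func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

    func (s *Stack) Pop() interface{} {
        v := s.items[len(s.items)-1]
        s.items = s.items[:len(s.items)-1]
        return v
    }

    func main() {
        var s Stack
        s.Push(42)
        // The caller asserts the dynamic type back out; a mistake here is
        // only caught at run time, and the assertion itself has a cost.
        n := s.Pop().(int)
        fmt.Println(n + 1)
    }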

Go does a lot of nice things (channels, interfaces, and so on), but it is frustrating when a new language includes problems that other languages fixed decades ago. Sharing by communicating is a sensible pattern, but a new language for parallelism that doesn't make it trivial to enforce shared xor mutable is embarrassing. Erlang had this right from the start and Pony does it in a very nice way.
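A small sketch of what that means in practice (names here are just illustrative): a channel lets you hand a buffer to a worker and pretend you've transferred ownership, but nothing in the language stops the sender from mutating it afterwards, which is exactly the shared-and-mutable combination that Erlang and Pony rule out.

    package main

    import "fmt"

    // "Share memory by communicating": the worker is meant to own each
    // slice it receives, so no locking is needed, but the language does
    // not enforce that discipline.
    func worker(jobs <-chan []int, done chan<- int) {
        sum := 0
        for job := range jobs {
            for _, v := range job {
                sum += v
            }
        }
        done <- sum
    }

    func main() {
        jobs := make(chan []int)
        done := make(chan int)
        go worker(jobs, done)

        for i := 0; i < 3; i++ {
            buf := []int{i, i + 1, i + 2}
            jobs <- buf
            // buf is now conceptually owned by the worker; writing to it
            // here would be a data race that the compiler happily accepts.
        }
        close(jobs)
        fmt.Println(<-done)
    }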

[1] Unless you end up blowing away your i-cache. It's true that a lot of C++ programmers will overuse templates and end up sacrificing compile time for no measurable run-time benefit, but at least when you actually want to retain most of the source flexibility of dynamic dispatch without the run-time overhead then you can.

Comment Re:Dumb argument (Score 4, Insightful) 260

Add to that, Go and Swift are pretty small languages. Learning either is something that a moderately competent programmer ought to be able to do in a few weeks. Neither is sufficiently different to other languages that there's a big cognitive jump. The difficult thing is always learning new libraries and frameworks, not learning a new language (well, unless the new language is C++, where after a decade of daily use developers are still not surprised to come across a language feature that they've never seen before).

Comment Re:BI == Business Idiots (Score 5, Interesting) 260

Right. Apple created Swift because Objective-C was a nice language for the requirements of '90s computing, but is starting to be limited by its C heritage. They needed a more modern language that interoperates very well with Objective-C (because they have a lot of legacy Objective-C code that isn't going away any time soon) and this required making a new language because there weren't any good contenders available. MacRuby is the closest, but falls short in a number of areas.

Google didn't create Go as the result of some corporate masterplan; a small team at Google created it and managed to get buy-in from some other groups at Google. It's still far from the most widely used language for new projects inside Google, but it does have some advantages (though it is slightly let down by Rob Pike's refusal to accept that some people who are not Rob Pike have had good ideas in the last 30 years).

The recruiting thing can't really work. It would only make sense if people learned a cool language and then discovered that there were very few places where they could work and use it. This is sort-of true for something like Erlang or Smalltalk, but Swift is fairly widely used by people developing for iOS and OS X (and would probably not be worth Apple's effort in developing it if it weren't). If the language becomes successful enough that the number of people learning it noticeably affects the pool of potential applicants for a company the size of Apple or Google, then enough other companies are likely to be using it that it's no longer much of an advantage.

Comment Re:Academy of Country Music (Score 2) 43

I'm also not in America. The BCS is our local equivalent, but the ACM has a large international membership and either runs, or co-runs with the IEEE, almost all of the top-tier computer science conferences around the world. It also publishes most of the top-tier journals in computer science. What country are you from where no one has ever heard of the ACM? I'd have guessed Australia from your username, but that can't be it, as the ACM has a fairly significant presence there.

Comment Re:Why this presumption that you need 3D accelerat (Score 1) 193

A compositing display server saves a lot of CPU by doing the rendering and rasterisation of each window once and then alpha blending the resulting buffers. You don't need to redraw for expose events; you just composite the cached results. This saves even having to bring the background applications into the cache (or into RAM, if they're swapped out). Within an application you can get the same benefit, caching the rendered result of (for example) a complex data-driven view so that you don't have to do a load of queries against a model object just because an overlapping view needs redrawing.

The downside of this is that you end up doing a lot of compositing. Sure, you can do this on the CPU (a modern CPU can do it reasonably quickly on the vector unit), but that consumes a lot more power than doing it on the GPU.
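A very rough sketch, in Go with made-up types (not any real display server's API), of what "just composite the cached results" looks like: the compositor keeps each window's last rasterised buffer and re-blends those, so no application code has to run on an expose.

    package main

    import (
        "image"
        "image/draw"
    )

    // Window keeps the result of its last render pass.
    type Window struct {
        Bounds image.Rectangle
        Cache  *image.RGBA
    }

    type Compositor struct {
        windows []*Window // back-to-front order
        screen  *image.RGBA
    }

    // Composite re-blends the cached window buffers onto the screen.
    // The expensive per-window rendering already happened; applications
    // are not asked to repaint.
    func (c *Compositor) Composite() {
        for _, w := range c.windows {
            draw.Draw(c.screen, w.Bounds, w.Cache, image.Point{}, draw.Over)
        }
    }

    func main() {
        screen := image.NewRGBA(image.Rect(0, 0, 1920, 1080))
        win := &Window{
            Bounds: image.Rect(100, 100, 400, 300),
            Cache:  image.NewRGBA(image.Rect(0, 0, 300, 200)),
        }
        c := &Compositor{windows: []*Window{win}, screen: screen}
        c.Composite() // handling an "expose": no application redraw involved
    }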

Comment Re:I've personally fixed bugs (Score 1) 193

I've hacked on GPU drivers. These days the command-submission part is pretty simple; most of the driver is the compiler that generates optimised code for a processor that's highly tuned for a fairly narrow set of workloads. The firmware, on the other hand, is an entirely different thing. It's closer to CPU microcode than to a driver: it's typically written for a one-off architecture and is likely to be completely rewritten for the next revision.

Comment Re:Hummmm?? (Score 4, Insightful) 260

Steve Jobs made anti-DRM statements very early on. At the time, the music industry was insisting on DRM for everything. They eventually learned that it gave more power to the distributors than to them, and let Amazon sell DRM-free music (while denying Apple the same deal for a while, so that Amazon could become a viable competitor). For some reason, the movie studios are intent on making the same mistake, insisting that Amazon and Netflix take complete control of their supply chain, when the best thing for their business is a healthy, competitive ecosystem of distributors driving down each other's margins.

If they had any sense, the music and movie studios would insist that distributors sell without DRM.

Comment Re:Free Speech (Score 1) 180

If you run a messenger service, you aren't entitled to decide that select groups can't use your service. You can't decide that you will monitor the messages, and only deliver those messages that you approve of. You don't get to decide that you will deliver partisan messages that favor your position, and just lose messages that support the other side.

I'm fine with you doing all of these, as long as you're willing to take responsibility for every message sent on your service. Bomb threats, death threats, trade secrets, copyright infringement, all become your liability - if you're policing the content then you're responsible for it.

Comment Re: Can they compile from source? (Score 1) 143

The NSA or GCHQ (or any similar intelligence agency) almost certainly could insert a backdoor into MS software. Doing the same to any other piece of proprietary software developed by people they could easily blackmail would also be straightforward. There are a number of approaches that would work for open source too: there was a recent story about a lot of contributors to prominent projects hosted on GitHub having weak SSH keys, so compromising one of those belonging to someone who hasn't committed in a long time and putting in a bug fix along with an obfuscated backdoor would be easy.

The danger of doing this is that there's a lot of potential fallout if they're caught. This kind of active intervention raises the stakes and also weakens their defences (it's very hard to create a backdoor that isn't a security vulnerability). Given that almost no software is formally verified and most is very complex and not aggressively tested against hostile input, if you've got enough resources to throw at it then you can probably find an exploitable bug already and not have to bother. This is much more deniable, because no one can be completely sure that you were the ones exploiting the vulnerability.

Comment Re:bullshit (Score 1) 293

In Smalltalk, there are two integer types that can be used pretty much interchangeably. SmallInt objects are stored inside a pointer, so they are typically 31 or 61 bits wide, with the low bits used to distinguish pointers from small objects (on 64-bit platforms, some float values can be safely stored in a pointer too). If a SmallInt operation overflows, the result is a BigInt object. BigInt objects are real (immutable) objects that encapsulate an arbitrary-precision integer. They're much slower to use than SmallInts, but you generally don't think about which one you have; you just use integers and let the language and libraries handle making them efficient. Floating point values are handled similarly, though they don't grow precision (I don't remember what the Smalltalk-80 spec mandates, but some implementations provide high-precision floating point classes that you can use instead of the efficient hardware-sized ones).
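In Go terms, the behaviour is roughly the following (a sketch only; a Smalltalk VM does this transparently on tagged pointers, whereas here the promotion step is spelled out by hand):

    package main

    import (
        "fmt"
        "math"
        "math/big"
    )

    // add behaves like Smalltalk integer addition: stay on the fast
    // machine-word path while the result fits, and promote to an
    // arbitrary-precision value instead of silently wrapping on overflow.
    func add(a, b int64) interface{} {
        if (b > 0 && a > math.MaxInt64-b) || (b < 0 && a < math.MinInt64-b) {
            // Overflow: the analogue of a SmallInt becoming a BigInt.
            return new(big.Int).Add(big.NewInt(a), big.NewInt(b))
        }
        return a + b // the SmallInt fast path
    }

    func main() {
        fmt.Println(add(1, 2))             // 3, as a machine integer
        fmt.Println(add(math.MaxInt64, 1)) // 9223372036854775808, promoted
    }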

In contrast, JavaScript provides an IEEE double and nothing else. Implementations try very hard to avoid actually using doubles for values that are really integers, because floating-point operations are a lot slower than integer ones on modern CPUs. And because operators are special in JavaScript (anyone know what object + array is?), you can't provide any kind of number with increased precision yourself. Try dealing with any data that comes with 64-bit values (e.g. file offsets on a modern FS) in JavaScript and you'll see the pain.
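To see where the pain comes from, here's a quick Go demonstration of the precision cliff: an IEEE double stops being able to represent every integer exactly once values pass 2^53, so a 64-bit file offset stored in a JavaScript number can silently lose its low bits.

    package main

    import "fmt"

    func main() {
        // A 64-bit file offset just above 2^53, the point where an IEEE
        // double can no longer represent every integer exactly.
        var offset int64 = (1 << 53) + 1

        asDouble := float64(offset)  // what a JavaScript number would hold
        fmt.Println(offset)          // 9007199254740993
        fmt.Println(int64(asDouble)) // 9007199254740992: the low bit is gone
    }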

The JavaScript implementation from Cadence that's used for scripting various IC design systems avoids this by providing the full Smalltalk set of number types (it's actually implemented in NewSpeak, which is implemented in Smalltalk, for extra fun).
