
Comment Re:MS Office & Games. (Score 1) 281

Not sure. They are often bundled with additional software, but I did not need any of that just to use such keyboards on Windows. However, since Windows installs USB drivers automatically when you plug the device in (either from a device partition or by downloading from the net), I cannot tell with 100% certainty whether a special keyboard driver was needed or not. I guess I could check in Device Manager, but I'm too lazy to boot that machine right now :)

Comment Re:Bigger jump than GL 1 to 2? (Score 1) 281

Yes. Pretty much no existing API would be applicable, since the abstraction drops several levels down - like going from Java to C. You are now allocating and freeing memory buffers on the hardware yourself, filling them with the complete pipeline state, and passing them to the GPU to execute as-is. I guess one could bolt this onto the existing OpenGL API (NVidia tries to do that with GL_NV_command_list), but a) it is still higher-level and b) interaction with the rest of GL remains problematic implementation-wise. Naturally, a question arises: if that part of the API is not interoperable with the rest, why not just separate the two?

Comment Re:Optimization takes time, and time is money (Score 1) 281

That is very true. Software may be written against "standards", but it is tested and optimized against other software. Linux is not a binary-stable platform, and you can have two users who both claim to run Linux yet share almost nothing except the kernel and possibly libc. How can a big software house test and optimize a game? The test matrix would be unreasonable! To ship software for Linux, one has to either limit support to a particular distribution and driver set, or waive all guarantees.

Comment Re:hardly something to celebrate (Score 1) 281

He does forbid redistribution of said binary drivers together with the kernel, so you cannot have them "out of the box" - the user has to be the one making the "mix" of proprietary and non-proprietary code. And Stallman does put limitations on developers - his philosophy essentially fights the contractual freedom between them and their users. Happily, it is finally dying out: the most recent and influential projects (clang/LLVM, docker, nginx) shun the GPL.

Comment Re: hardly something to celebrate (Score 1) 281

You seem to contradict yourself. If you're fine with proprietary software (I am as well), then why are you against a "trusted path" in the kernel - in what way is a proprietary kernel different from a proprietary user application? Are you going to access their content on your terms rather than theirs? If not, then why does the existence of said path matter to you? Just install an untrusted Linux kernel and forfeit your ability to access paid content.

Comment Re:javas not dead! (Score 1) 577

No, I'm saying it performs as well as C++ in most cases. Virtual method handling is one example of why: the JIT has a better view, at execution time, of what can and can't be inlined, so it can inline much more than a statically compiled C++ program possibly can.

You realize that a JIT is inherently limited to the tiny portion of the program it is currently compiling? The JVM can spend neither the time nor the RAM to build a whole-program tree and make global optimizations the way ahead-of-time compilers can. And by the way, if you think compilers are limited to static analysis only - there's also profile-guided optimization.

Well, that's precisely the problem you face if you don't have an explosion of optimised binaries - unless you accept that the JVM is going to optimise more effectively. It's not just about compiling for different architectures: it's about the JIT automatically being able to take advantage of instruction-set extensions and other hardware that may be present. It can optimise depending on the amount of RAM, cache sizes, etc. - something that just isn't known when you compile a plain old generic C++ binary for, say, the generic x86 platform.

But there are a number of other things it can do better too - better loop vectorisation (as a result of better inlining of virtual functions) and more efficient heap allocations, for example.

In theory, it could do that. But if you do a reality check, you'll find that JVMs right now are pretty mediocre compilers that lack even basic optimizations. Again, everything the JVM does can be done by an ahead-of-time compiler, but not vice versa. Compilers have nearly unlimited time and can spend gobs of RAM analyzing the program. They can also use profile-guided optimization, letting you gather stats from a compiled program and then recompile it to better account for runtime behavior - if needed.

Oh, and while we're at it, fine-tuning assembly with a specific CPU in mind does not matter much these days, except for SIMD ops. Waiting on memory accesses dominates CPU time - and here Java is at an inherent disadvantage, because you cannot really control the memory layout of your data.

"Server software does not [need to] have single-thread performance because it's more often I/O bound - that means that CPU vendors can get away with CPUs like Bulldozer or SPARCs that suck at IPC (instruction per clock) performance."

This is nonsense. It depends entirely on the application. A heavily loaded web server, for example, may not really be I/O bound in the slightest, depending on its size and what it does. Bulldozer is designed to optimise performance per watt; you're again confusing cause and effect as to why some things are the way they are.

Before you call this nonsense, go read some analysis and check the benchmarks.

"That's not a problem of Java, though, but all managed languages - .NET also sucks."

Really, the problem is simply that you don't understand managed languages. Your understanding of the optimisations performed by JIT compilers is clearly too inadequate to be making this sort of comment. Your comments on server applications mostly don't even make sense, to the point that I'm not sure you have the slightest grasp of what sort of things servers commonly serve.

"Microsoft tried to build an OS which would be .NET based - they wasted like 6 years on that and ultimately had to abandon the idea. Now they are going native :)"

This is just further nonsense. There was a Microsoft Research project to try to build such a thing; they did, and open-sourced it. I don't know what you mean by "Now they are going native :)" - they've always been native with their operating systems. If you were expecting their managed OS to cause them to throw out three decades of legacy code, then you have a disturbing view of how software is developed.

It was a research project and nothing more, and even then it wasn't purely managed: they still had to bootstrap natively, because no one ever pretended that managed languages are designed for such low-level operations. You can find out more about it here:

It's worth noting, though, that some of the things learnt from this research project have already made their way into Windows - but that's kind of the point of research.

You probably don't remember that Vista (Longhorn back then) was supposed to be .NET-only, with the core OS API written in a managed language. Microsoft was not able to implement that efficiently and had to cut it - this is speculated to be one of the reasons for the Vista delay.

Nowadays they are phasing out .NET in general, promoting C++ as the primary language for the platform.

"Sure, when I'm forced to use Java (e.g. Android), I immediately use the JNI window to escape. I am not interested in "benefits" of Java, if it means that I need to waste even more time trying to profile the application."

Right, and most people won't be interested in the downsides of your approach either, because it simply means you're producing software much more slowly and with more scope for fatal bugs and security vulnerabilities.

You sound like one of those developers who has his little comfort zone and just can't deal with change. Everything should be written in assembly like it used to be! This is evident from your lack of knowledge about both JIT technology and server-side software and hardware.

Like it or not, times have changed; there are better ways of doing things now, such that the window of cases where C and C++ are the best tool for the job is rapidly shrinking. They're not worthless - they very much still have their place, low-level operating system development being one example and some embedded development another. But the fact is that these languages have no tangible benefits over their managed counterparts for most real-world scenarios today, whether that's building desktop applications, creating dynamic web pages, or building HPC trading systems. They do, however, have a number of downsides - a slower development cycle, and less secure, more error-prone development by default, being the obvious ones.

You either need to get over your fear of change and learn a bit more about these sorts of technologies - and understand why much of what you said is wrong - or just stick to what you know and shut up about things you don't. Either way, sticking to what you know while complaining about what you don't just results in you spouting nonsense, as you have thus far in this thread with your outright incorrect comments about JIT technologies and server applications and hardware.

One can argue, by that reasoning, that everything will be written in Javascript and/or HTML5. No, there are no major differences in productivity between Java coding and C++ coding. And C++ is not "a legacy language". In these days of multithreaded programming it turns out that we are again changing the paradigm: we are using Data-Oriented Design to gain efficiency, and Java - with its "everything is a non-trivial object" and its lack of POD types - fares very badly in that regard.

C++ hits the sweet spot of being pretty high-level (heard of metaprogramming?) while also letting you go all the way down to assembly when needed. With Java or any other managed language, you are inherently limited to what the JVM provides, and JVMs have more concerns than runtime efficiency alone, so they will always offer some kind of trade-off - that's why I called them "generic". While you can probably fine-tune them within reason, you cannot find a JVM that would, say, completely disable all runtime checks because your specific app does not need them.

If you ever need to code an application with tight performance requirements - like being required to draw a complicated scene in under 16 ms - you will understand better what I am talking about. So far you seem to be looking only at the boring side of programming :) Try doing realtime graphics! ;-)

Comment Re:Java is faster than C++ (Score 1) 577

You are very optimistic. Right now even C++ compilers (which, believe me, are very much performance-oriented and are rarely memory-constrained) have problems producing good vectorized code, but thank God we have assembly intrinsics - and we use them a lot. For a JVM that is even harder, for multiple reasons (the unfortunate - historical - design of Java bytecode being one of them). Sure, there's a broad class of software where performance does not matter, but as I said, that is boring software I don't want to work on. Writing such software is better outsourced to somewhere people crave money more than I do.

As for HFT, I don't think that using Java is a good decision. If you can optimize for certain (best in its class) hardware, why do you need to hop through all the extra abstraction layers of Java? Sure you probably can, but it's like artificially limiting yourself.

Comment Re:javas not dead! (Score 1) 577

A JIT is generic in the sense that each program (and even different parts of a single program) is different, and you cannot base them all on a common framework. E.g., in C++ you sometimes have to abandon the STL entirely because you cannot afford dynamic memory allocation (and the memory fragmentation it causes). I wouldn't say that "Java performs as well as C++" - unless you mean UI-heavy programs where the bottleneck is user input, or, alternatively, C++ programs written by people who don't know how CPUs implement a virtual method call and why it's slower than a non-virtual one.

Yeah, with native languages you are bound to a specific architecture (and even to variations of it, e.g. AVX, SSE) - but is it better to be a jack of all trades and master of none? Besides, there's no "explosion of binaries" anymore: unfortunately, the number of architectures in use keeps shrinking (which kind of undermines that design goal of Java). "Boring" native software that is not performance-tuned may very well ship a single generic binary targeting, say, all Pentium 4 and later CPUs.

Server software does not [need to] have good single-thread performance because it is more often I/O bound - which means CPU vendors can get away with CPUs like Bulldozer or SPARCs that suck at IPC (instructions per clock). That is also the reason Java can be used server-side without much trouble. However, once you bring it to the desktop, where performance matters, it starts to suck immediately. That's not a problem of Java specifically, but of all managed languages - .NET also sucks. Microsoft tried to build an OS that would be .NET-based - they wasted some six years on that and ultimately had to abandon the idea. Now they are going native :)

Sure, when I'm forced to use Java (e.g. on Android), I immediately use the JNI window to escape. I am not interested in the "benefits" of Java if it means I need to waste even more time trying to profile the application. Is there any low-level Java profiler, by the way, that would tell you where the CPU is burning cycles in your code - down to the level of assembly, i.e. something like perf annotate?
