No, I'm saying it performs as well as C++ in most cases. Virtual method handling is one example of why: the JIT has a better view at execution time of what can and can't be inlined, so it can inline much more than a statically compiled C++ program possibly can.
You realize that a JIT is inherently limited to the small piece of the program it compiles at a time? The JVM can spend neither the time nor the RAM to build a whole-program tree and make global optimizations the way ahead-of-time compilers can. And by the way, if you think compilers are limited to static analysis only - there's also profile-guided optimization.
Well, that's precisely the problem you face if you don't want an explosion of optimised binaries - unless you accept that the JVM is going to optimise more effectively at run time. It's not just about compiling for different architectures; it's about the JIT automatically optimising to take advantage of instruction-set extensions and whatever other hardware may be present. It can optimise depending on the amount of RAM, cache sizes, etc. - things that just aren't known when you compile a plain old generic C++ binary for, say, the generic x86 platform.
But there are a number of other things it can do better too - better loop vectorisation (as a result of better inlining of virtual functions) and more efficient heap allocation, for example.
In theory, it could do that. But if you do a reality check, you'll find that JVMs right now are pretty mediocre compilers that lack even basic optimizations. Again, everything the JVM does can be done by an ahead-of-time compiler, but not vice versa. Compilers have nearly unlimited time and can spend gobs of RAM analyzing the program. They can use profile-guided optimization, letting you gather stats from a compiled program and then recompile it to better account for runtime behavior - if needed.
Oh, and while we're at it, fine-tuning assembly for a specific CPU does not matter much these days except for SIMD ops. Waiting for memory accesses dominates CPU time - and here Java is at an inherent disadvantage, because you cannot really control the memory layout of your data.
"Server software does not [need to] have single-thread performance because it's more often I/O bound - that means that CPU vendors can get away with CPUs like Bulldozer or SPARCs that suck at IPC (instruction per clock) performance."
This is nonsense. It depends entirely on the application. A heavily loaded web server, for example, may not really be I/O bound in the slightest, depending on its size and what it does. Bulldozer is designed to optimise performance per watt; you're again confusing cause and effect as to why some things are the way they are.
Before you call this nonsense, go read some analysis and check benchmarks.
"That's not a problem of Java, though, but all managed languages -
Really, the problem is simply that you don't understand managed languages. Your understanding of the optimisations performed by JIT compilers is clearly too inadequate to be making this sort of comment. Your comments on server applications mostly don't even make sense, to the point that I'm not sure you have the slightest grasp of what sort of things servers commonly serve.
"Microsoft tried to build an OS which would be
This is just further nonsense. There was a Microsoft research project to try to build such a thing; they did, and they open-sourced it. I don't know what you mean by "Now they are going native".
It was a research project and nothing more, and even then it wasn't purely managed - they still had to bootstrap natively, because no one ever pretended that managed languages are designed for such low-level operations. You can find out more about it here:
It's worth noting though that some of the things learnt from this research project have already made their way into Windows, but that's kind of the point of research.
You probably don't remember that Vista (Longhorn back then) was supposed to be
Nowadays they are phasing away
"Sure, when I'm forced to use Java (e.g. Android), I immediately use the JNI window to escape. I am not interested in "benefits" of Java, if it means that I need to waste even more time trying to profile the application."
Right, and most people won't be interested in the downsides of your approach either, because it simply means you're producing software much more slowly and with more scope for fatal bugs and security vulnerabilities.
You sound like one of those developers who has his little comfort zone and just can't deal with change. Everything should be written in assembly like it used to be! This is evident from your lack of knowledge about both JIT technology and server-side software and hardware.
Like it or not, times have changed; there are better ways of doing things now, such that the window of cases where C and C++ are the best tool for the job is rapidly shrinking. They're not worthless - they still very much have their place, low-level operating system development being one example and some embedded development cases being another. But the fact is, these languages have no tangible benefits over their managed counterparts for most real-world scenarios today, whether that's building desktop applications, creating dynamic web pages, or building HPC trading systems. They do, however, have a number of downsides - a slower development cycle, and less secure and more error-prone development by default, being the obvious ones.
You either need to get over your fear of change and learn a bit more about these sorts of technologies to understand why much of what you said is wrong, or just stick to what you know and shut up about things you don't. Either way, sticking to what you know while complaining about what you don't just results in you spouting nonsense, as you have thus far in this thread with your simply outright incorrect comments about JIT technology, server applications, and hardware.
C++ hits the sweet spot of being pretty high level (heard of metaprogramming?) while also allowing you to go all the way down to assembly when needed. With Java or any other managed language, you are inherently limited by what the JVM provides, and JVMs have more concerns to care about than runtime efficiency, so they will always offer some kind of trade-off - that's why I called them "generic". While you can probably fine-tune them within reason, you cannot find a JVM that would, say, completely disable all runtime checks because your specific app does not need them.
If you ever need to code an application with tight performance requirements, like being required to draw a complicated scene in under 16 ms, you will understand what I am talking about better. So far you seem to be looking only at the boring side of programming.
As for HFT, I don't think using Java is a good decision. If you can optimize for specific (best-in-class) hardware, why would you hop through all the extra abstraction layers of Java? Sure, you probably can, but it's like artificially limiting yourself.
Yeah, with native languages you are bound to a specific architecture (and even variations of it, e.g. AVX, SSE), but is it better to be a jack of all trades and master of none? Besides, there's no "explosion of binaries" anymore - unfortunately, the number of available architectures is continuously shrinking (which kind of undermines that design goal of Java). "Boring" native software that is not performance-tuned can very well ship a single generic binary targeted at, say, all Pentium 4 and later CPUs.
Server software does not [need to] have single-thread performance because it's more often I/O bound - that means that CPU vendors can get away with CPUs like Bulldozer or SPARCs that suck at IPC (instruction per clock) performance. That is also the reason why Java can be used server-side without much problems, too. However, once you bring it to desktop, where performance matters, it starts to suck immediately. That's not a problem of Java, though, but all managed languages -
Sure, when I'm forced to use Java (e.g. Android), I immediately use the JNI window to escape. I am not interested in "benefits" of Java, if it means that I need to waste even more time trying to profile the application. Is there any low-level Java profiler, by the way, that would tell you where the CPU is burning cycles in your code? Down to the level of assembly - i.e., something like perf annotate.
As for the "better than a JIT" argument, I think you are putting too much trust in a rather generic approach. Even traditional (and performance-oriented) compilers don't handle all use cases well, and you can hit roadblocks there too - let alone all the fundamental problems with "managed" code (e.g. random memory-access patterns that hurt CPU caches, various safety checks, and a stack-based VM design that doesn't map well to register-based hardware; stack-based processors are at an inherent performance disadvantage, by the way, which is why Intel shunned the FPU in favor of register-based SSE). My claim about JIT weakness is supported by the well-known fact that server-side software rarely has great single-thread performance (and often doesn't need it, but that's another topic), so server CPUs tend to have gobs of cache in order to alleviate that.
Again... the language (syntax, etc.) doesn't matter much to me; I wouldn't mind Java if it let me get as close to the hardware as possible, even via non-portable extensions. It's the layers of code to profile and debug through, and the resulting feeling of not being in control, that I don't like.
Different people may see it differently, but for me, all of that means Java is suboptimal for games and other heavily performance-oriented software - and that is the only kind of software I enjoy programming. Making performance-insensitive backends full of "business logic" is for someone who is in the software industry for the money only...
It still drives down wages. [...] wages fall, maybe not to third-world levels, but below what the market would normally dictate.
Why do you limit the "market" to a single country? One day we will witness the formation of the United States of Earth, and the artificial obstacles to population movement will be quickly forgotten. The market is already global.
Also, Linux the kernel is one thing, and Linux the OS is another. The Linux graphics stack is certainly not "lightyears ahead" of Windows, where you can reset/reinstall graphics drivers as if they were userland programs - quite the opposite: we still have an X server that probes the PCI bus (try grep -i pci