The Rust language is intended to reach version 1.0 soon (either before the end of the year or early in 2015), which comes with a promise of backwards compatibility. However, the Rust standard library is still undergoing stabilization, and parts of it may change for a while yet. A lot of work is being done in that area right now to stabilize the most important pieces.
Mozilla is also working on Servo, a research project to develop a browser engine in Rust. The goal is to experiment with more parallelization in the browser, and Rust is supposed to help by making it easier to write correct multithreaded code. To that end, Rust has a strong focus on ownership of data.
Rust can run without a runtime, and the standard library is split into several layers (a split that is invisible to normal users of the standard library) which can be used separately when you choose to compile without the full standard library. The advantage is that when you target, say, a platform that does not support dynamic memory allocation, you can still use the parts that do not require allocation (libcore); or you can keep allocation (liballoc) but go without the libc bindings. So it is relatively easy to run Rust on bare metal. You could write an operating system in Rust if you wanted to (and I think some people are trying to do just that, but I haven't heard from them for a while).
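The layering is easy to see even from ordinary code. Here is a minimal sketch (assuming a current Rust toolchain): std acts as a facade that re-exports items actually defined in libcore and liballoc, so the same types are reachable through all three crates.

```rust
// std re-exports items that actually live in libcore and liballoc,
// so normal code never notices the split.
extern crate alloc; // the allocation layer below std
extern crate core;  // the allocation-free bottom layer

fn main() {
    // std::option::Option is the very same type as core::option::Option:
    let a: core::option::Option<u32> = Some(1);
    let b: std::option::Option<u32> = a; // no conversion needed
    assert_eq!(b, Some(1));

    // Vec really lives in liballoc; std merely re-exports it:
    let v: alloc::vec::Vec<i32> = alloc::vec::Vec::from([1, 2, 3]);
    assert_eq!(v, std::vec::Vec::from([1, 2, 3]));
}
```

On a platform without an allocator you would drop down to just libcore with `#![no_std]`, losing Vec and Box but keeping Option, slices, iterators, and the rest.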
C is great for low-level stuff since it is capable of generating machine code that has zero dependencies. K&R even explicitly mentions a "freestanding" (non-hosted) mode, with no libc and implementation-defined entry-point semantics. In fact, it is the only language in mainstream use today that has this feature (aside from assembly).
I very much doubt that is true. For instance, I think Rust can also make that claim.
Also, memory protection creates lazy programmers. If you have to reboot every time you screw up, you will quickly learn to screw up a lot less.
So what's it like programming on DOS in 2014?
Your statement is worthless as stated. "Some guy I knew a long time ago once used threads for some unspecified purpose, and when he got to thousands of threads it became very slow." Well, that is just great. What was he trying to do? How was he trying to do it? You act like your anecdote proves something, but without this information it contributes nothing to what could have been an interesting discussion.
Since the discussion was originally about COBOL, are you suggesting that language is more suited to massive multithreading than Java? If so, why? And if you truly need thousands of threads, perhaps you need Erlang?
Someone who deliberately cuts off their own legs with a chainsaw doesn't get sympathy. So why should addicts?
I imagine someone who would do that on purpose must be suffering from some serious mental problem, or must have been blackmailed or under some kind of duress. Certainly they do deserve sympathy and help.
What "undefined" means here for most compilers is that they will make the best attempt they can under the C rules, but the results may vary on different machines. I.e., the compiler will use the underlying machine instruction for adding two registers, which may wrap around or possibly saturate instead, and the machine may not even be using two's complement.
No, that would be implementation defined behavior.
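For comparison, Rust sidesteps this particular ambiguity by making overflow behavior an explicit choice at each call site rather than leaving it undefined or implementation defined. A minimal sketch, assuming a recent toolchain:

```rust
// Where C leaves signed overflow undefined, Rust makes the
// programmer pick a behavior per operation.
fn main() {
    let x: i32 = i32::MAX;

    // Two's-complement wrap-around, explicitly requested:
    assert_eq!(x.wrapping_add(1), i32::MIN);

    // Saturating arithmetic, if that is what you want:
    assert_eq!(x.saturating_add(1), i32::MAX);

    // Or detect the overflow and handle it yourself:
    assert_eq!(x.checked_add(1), None);

    // A plain `x + 1` here would panic in debug builds rather than
    // silently wrapping.
}
```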