Modula-3 and CEDAR/MESA come to mind, and entire kernels have even been written in managed languages.
And C is about as far from being "optimal" for writing a many million line kernel as any language can be.
That's completely and utterly unrealistic. If those techniques and languages produced workable products, some would exist.
The fact that you mention kernels as being millions of lines suggests that you've never seen a microkernel and have little comprehension of the NT architecture.
I hate to break it to you, tux, but academic kernels are actually behind commercial kernels in technology... and they most certainly don't beat them in practicality or performance. If they did, I'm sure someone would use them. Companies are very good at grabbing academic kernels and dropping them in; NeXT built XNU on top of Mach, for instance.
I'll just leave you to your little "modern kernel" fantasy.
That claim makes absolutely no sense at all, since the Java EE approach was developed on UNIX and Linux. And once you run your servers in managed virtual machines, you don't need all the elaborate kernel-based "multi-role" support anyway. That's another reason the "multi-role" support in NT is superfluous and obsolete.
I would say the Java platform is rather separate from its host architecture.
UNIX and Linux actually offer all the "multi-role" support you could possibly imagine and want: access control, isolation, namespace manipulations, and various forms of virtualization. Among all of those, people choose virtualization because it's the least hassle and the easiest to manage.
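Of the mechanisms listed above, plain POSIX access control is the simplest to demonstrate. A minimal sketch in Python using only the standard library (the temporary file here is purely illustrative, not anything from the discussion):

```python
import os
import stat
import tempfile

# Create a scratch file and restrict it to owner read/write only (mode 0600).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

# The kernel enforces these permission bits on every open() by other users;
# no virtual machine or hypervisor is involved.
mode = stat.S_IMODE(os.stat(path).st_mode)
readable = os.access(path, os.R_OK)  # True for the file's owner
print(oct(mode))  # 0o600

os.remove(path)
```

Namespace manipulation and virtualization build on the same principle, just at coarser granularity: the kernel mediates every access, so isolation does not require a separate machine image.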
Haha... and because the system is so limited that you literally have to customize and hack it to handle any server task properly, so one machine could never realistically serve more than one role in production. Oh sure, it can "multi-role", it just isn't feasible. Virtualization simply exposes it as a limited architecture.
There's nothing archaic about making a multi-role capable system; realistically, systems are getting larger and some people need to get more work out of a single machine. Virtualization is extremely inefficient by comparison, just more manageable for UNIX people.
Yeah, that's because you obviously don't know anything, and neither did the people at Microsoft. Microsoft's OS developers in the 1980s and 1990s were a bunch of PC hackers plus industry wash-ups who had no idea what the state of the art in computer science actually was, and they developed a third-rate OS that was obsolete from the start.
Where are all these brilliant "true" computer scientists who write "modern" kernels? Every now and then, some PhD's system gets swallowed into a commercial distribution of some sort. If this weren't just some utterly absurd academic-systems fanfic on your part, someone would have made something that somehow outperforms or outshines commercial systems.
Besides this, commercial systems are defined by their organization. You know nothing about the actual NT kernel architecture, and that's fine. It sounds like you have an absurdly ivory-tower perspective anyway.