I feel it necessary to point out, though, that OS X is not a microkernel system comparable to Minix.
While this is true, it's worth noting that a lot of the compartmentalisation and sandboxing ideas that most of the userland programs on OS X employ (either directly or via standard APIs) have roots in microkernel research. OS X is in the somewhat odd position of having a userland that looks a lot more like a multiserver microkernel system than its kernel does...
Translation is like predicting the weather. If you want to do an okay job of predicting the weather, predict either the same as this day last year or the same as yesterday. That will get you something like 60-70% success. Modelling local pressure systems will get you another 5-10% fairly easily. Getting from 80% correct to 90% is insanely hard.
For machine translation, building a database of 3-grams or 4-grams and just doing simple pattern matching (which is what Google Translate does) gets you 70% accuracy quite easily (between Romance languages, anyway; it really sucks for Japanese or Russian, for example). Extending the n-gram size, however, quickly hits diminishing returns. Your accuracy gains depend on the size of your corpus, and by the time the n-grams are long enough to be really accurate, you effectively need a human to have already translated each sentence.
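To make the n-gram idea concrete, here's a toy sketch of that kind of phrase-table lookup. The table, MAX_N, and the English-to-French pairs are made up purely for illustration; a real system learns a table of millions of scored phrase pairs from a huge aligned corpus rather than hand-writing it.

```python
# Minimal sketch of n-gram phrase-table translation: map source n-grams to
# target phrases and translate by greedy longest-match lookup.

# Hypothetical phrase table: source n-gram -> most likely target phrase.
PHRASE_TABLE = {
    ("the", "red", "car"): "la voiture rouge",
    ("the", "car"): "la voiture",
    ("is", "fast"): "est rapide",
    ("the",): "le",
    ("red",): "rouge",
    ("car",): "voiture",
    ("is",): "est",
    ("fast",): "rapide",
}

MAX_N = 4  # longest n-gram we bother looking up


def translate(sentence: str) -> str:
    words = sentence.lower().split()
    out = []
    i = 0
    while i < len(words):
        # Try the longest n-gram starting at position i first.
        for n in range(min(MAX_N, len(words) - i), 0, -1):
            ngram = tuple(words[i:i + n])
            if ngram in PHRASE_TABLE:
                out.append(PHRASE_TABLE[ngram])
                i += n
                break
        else:
            # Unknown word: pass it through untranslated.
            out.append(words[i])
            i += 1
    return " ".join(out)


print(translate("The red car is fast"))  # -> "la voiture rouge est rapide"
```

The longest-match-first loop is what makes longer n-grams pay off: matching "the red car" as one unit gets the adjective order right, where word-by-word lookup would not. The catch is exactly the diminishing returns described above: each extra word of context multiplies the number of phrases the corpus has to have seen already.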
Machine-aided translation can give huge increases in productivity. Fully computerised translation has already picked most of the low-hanging fruit and will have a very hard time reaching the level of a moderately competent bilingual human.