It's still not clear how an application rendering Japanese text could end up making the bad assumption. If it's using a Japanese font, why would it switch to another font when the character to be rendered exists in the current font? Does the problem only occur when the current font *doesn't* contain the character, so that the application goes hunting for it and ends up picking glyphs from several, potentially inconsistent, fonts? That looks like an application issue: it fails to retain a consistent font during this fallback process. It points again to the notion that we should not be doing that at all, but should instead require applications to use "Unicode fonts" if they want to support Unicode text properly.

This seems like a font issue more than a Unicode issue. Does Unicode have separate code points for italic and bold characters in other languages? Why should that information be part of the character rather than of the font?
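The inconsistent-fallback scenario can be sketched in a few lines. This is a hypothetical illustration, not any real rendering API: the font names and coverage sets are invented. It contrasts a renderer that searches the font list afresh for every missing character (adjacent characters can land in different fallback fonts) with one that remembers the last successful fallback font and prefers it, keeping a run of missing characters visually consistent.

```python
# Hypothetical fonts: a name mapped to the set of characters it covers.
FONTS = {
    "JapaneseFont": set("日本語"),   # current font; lacks the kanji below
    "FallbackA": set("漢"),
    "FallbackB": set("漢字"),
}

def naive_fallback(text, current="JapaneseFont"):
    """Search the font list afresh for each missing character.
    Adjacent characters may end up in different fallback fonts."""
    runs = []
    for ch in text:
        if ch in FONTS[current]:
            runs.append((ch, current))
        else:
            chosen = next((n for n, cov in FONTS.items() if ch in cov), None)
            runs.append((ch, chosen))
    return runs

def sticky_fallback(text, current="JapaneseFont"):
    """Same search, but remember the last fallback font and try it first,
    so a run of missing characters stays in one consistent font."""
    runs = []
    last = None
    for ch in text:
        if ch in FONTS[current]:
            runs.append((ch, current))
        elif last is not None and ch in FONTS[last]:
            runs.append((ch, last))
        else:
            last = next((n for n, cov in FONTS.items() if ch in cov), None)
            runs.append((ch, last))
    return runs
```

With the text "字漢", the naive search puts 字 in FallbackB but 漢 in FallbackA (the first font that covers it), mixing two fallback fonts in one kanji run; the sticky version keeps both characters in FallbackB. Real text stacks are far more elaborate, but this is the consistency failure being described.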