But I suspect that your browser declared ISO-8859-1 as the character encoding in its headers.
Drawing an inference from the fact that the top of the batting order in every Wikipedia FAQ does not include how to set your user agent to send the right encoding header, I'd suggest that Slashdot's long-disabled Unicode support fell far short of the mark in the first place. (2005 just called. It wants to dissolve its de facto clue-stick monopoly.)
I authored a CJK word processor that ran under MS-DOS in the 1980s and early 1990s. Two of our linguists did our own in-house unification that ended up not so different from Unicode, which came later.
At the time Unicode came out, our largest customer groups were embassies, diplomats (Snowden-style), and other academic linguists (with a strong representation from the Brigham Young young-adult diaspora). Maybe 40% of our new customers in the early 1990s were still running turbo XTs, 286s, and 386 castrati (the 16 MHz SX, with its 16-bit bus resurrected). It takes a long time for the wallet of a dusty academic sinologist to recover from doling out $5000 in 1985 (true story, many times over). 20-year-old Mormon missionaries were not especially flush, either.
Imagine this as your early-adopter power-user base for the newly ratified Unicode 1.0 Asian language support.
Many people running Windows 3.11 at the time were doing so in 4 MB of RAM. Multilingual software remained stuck in this grotesquely underpowered rut until the P54 Pentium was introduced in the mid-nineties.
It's not just the print and display fonts that were a burden to the software of the day, but the mere Unicode code point tables themselves. 256 KB of code-point mapping tables was the rough equivalent of Google grabbing another 256 MB to process-isolate another browser tab (4 MB then, 4 GB now).
Of course, one can code up a bespoke compression method and clever language subset overlays. I'm sure we invested more man-hours in bespoke compression methods and clever data overlays than Zuckerberg invested in coding up The Facebook, original edition.
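For the curious: a minimal sketch of the sort of trick involved (names and numbers are illustrative, not from any shipping product). The classic move for a sparse code-point table is a two-level lookup: split the code point into a high byte and a low byte, give every unused 256-code-point range an alias to one shared empty block, and pay full price only for the blocks your language subset actually touches.

```python
# Two-level code-point table: table[cp >> 8][cp & 0xFF] -> glyph index.
# Unused high-byte ranges all point at one shared empty block, so a
# language subset only pays for the blocks it actually populates.

EMPTY_BLOCK = [0] * 256  # shared by every untouched 256-code-point range

def build_table(mapping):
    """mapping: dict of code point -> nonzero glyph index (BMP only)."""
    blocks = {}  # high byte -> private 256-entry second-level block
    for cp, glyph in mapping.items():
        hi, lo = cp >> 8, cp & 0xFF
        blocks.setdefault(hi, EMPTY_BLOCK[:])[lo] = glyph
    # First level: 256 pointers, mostly aliases of the shared empty block.
    return [blocks.get(hi, EMPTY_BLOCK) for hi in range(256)]

def lookup(table, cp):
    return table[cp >> 8][cp & 0xFF]

# Toy subset: three CJK ideographs in the U+4E00 block.
table = build_table({0x4E00: 1, 0x4E8C: 2, 0x4E09: 3})
assert lookup(table, 0x4E00) == 1
assert lookup(table, 0x0041) == 0  # untouched ranges cost nothing extra
```

With a Chinese-only subset touching a few dozen blocks, the second level shrinks from 64K entries to a few dozen times 256, which is the difference between fitting in a 286's memory map and not. The real overlays of the era were far hairier than this, of course, which is rather the point.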
It's probably a good thing that Unicode was rushed to fruition, however broken it now appears to be twenty-five years later, before the first release of NCSA Mosaic. Otherwise, Unicode might have been cobbled together by Brendan Eich in a succession of 4 a.m. coding binges the week after he pounded out JavaScript.
It's funny that this bug involves typesetting mathematics. If any software was broken with respect to Asian character support, it was surely the original TeX—paragon of infinite breakage that we all now know it to be.
Back in the mid-to-late eighties, the very idea of sprinkling Asian fonts into math display mode would have been delegated to the savant sibling sequestered in Lamport's sound-proof attic.