As you point out, on a weak system anyone with read access to data can probably write it out somewhere, too. Not so with an MLS system supporting data integrity and/or secrecy protections, combined with adequate label integrity mechanisms for exported data (a form of strong DRM, if you like). Of course, protecting against undocumented features (Easter eggs) requires another level of assurance, and the ability to verify that there are no hidden trap doors available for (or planted by) determined adversaries.
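To make the "label integrity" point concrete, here is a toy sketch of one way an exporter might bind a sensitivity label to data so a downstream importer can detect tampering. This is purely illustrative - the record format, key handling, and function names are invented, and it is not how any evaluated product actually does it:

```python
# Toy illustration of label integrity for exported data (invented format).
# A trusted exporter binds a sensitivity label to the data with a MAC under
# a key held only by trusted components; an importer re-checks the MAC
# before trusting the label.
import hmac
import hashlib

EXPORT_KEY = b"key-held-only-by-trusted-components"  # hypothetical

def export_with_label(data: bytes, label: str) -> dict:
    """Attach a label and a MAC over (label || data) to an export record."""
    tag = hmac.new(EXPORT_KEY, label.encode() + b"\x00" + data, hashlib.sha256)
    return {"label": label, "data": data, "mac": tag.hexdigest()}

def import_and_verify(record: dict) -> tuple[str, bytes]:
    """Reject the record if either the label or the data has been altered."""
    expected = hmac.new(EXPORT_KEY,
                        record["label"].encode() + b"\x00" + record["data"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["mac"]):
        raise ValueError("label integrity check failed")
    return record["label"], record["data"]

rec = export_with_label(b"quarterly numbers", "SECRET")
print(import_and_verify(rec))  # ('SECRET', b'quarterly numbers')
```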
Good luck implementing a system like that on anything but dedicated proprietary hardware.
One, at least, has been developed, and certified multiple times, on commodity x86 PC hardware with a proprietary security controller (to provide isolated identity, secret key storage, and crypto support, much as a TPM chip would today).
Virtualization is enough to blow mandatory access controls out of the water without even talking about Van Eck phreaking, physical separation between networks of different privilege levels, etc.
Virtualization DOES require special attention, which is what is so disturbing about the half-baked virtualization support in current architectures - see Karger's papers on the requirements for MLS VMMs and the lessons learned working on the DEC Class A1 VMM effort.
In general, I would also disagree with your claim that a proper mandatory access control system allows users of different clearances to use the same device. Covert channels are too difficult to prevent without severely limiting the usefulness of the system. Basically if two users share the same CPU or hardware they can almost always find a way to use some resource conflict as a covert channel, whether it's just the average response time for a certain operation to complete or something like the key recovery attacks that use cache latency to actually steal keys from a non-cooperating process.
Covert channels are hard, but they're more manageable than the hopeless pursuit of zero-day 'sploits that characterizes modern commercial practice. The evaluated system I mentioned above included, in its configuration, options for the elimination of storage channels and controls to manage the bandwidth of timing channels.
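For anyone who hasn't seen one, here is a deliberately crude sketch of the kind of timing channel being discussed, plus the flavor of bandwidth-limiting countermeasure (quantizing observable time) a trusted system can impose. It runs in a single process as a stand-in for two subjects contending on shared hardware, and the loop counts, thresholds, and quantum are invented for illustration:

```python
# Crude illustration of a timing covert channel: a "sender" modulates how
# long a shared operation takes, and a "receiver" recovers bits by timing it.
# In reality sender and receiver would be separate subjects contending on a
# shared resource (cache, bus, disk queue); the numbers here are invented.
import math
import time

def shared_operation(contended: bool) -> None:
    # Stand-in for contention on a shared resource.
    n = 200_000 if contended else 10_000
    total = 0
    for i in range(n):
        total += i

def send_bits(bits):
    for b in bits:
        start = time.perf_counter()
        shared_operation(contended=(b == 1))
        yield time.perf_counter() - start  # what the receiver can observe

def receive_bits(observed_times):
    times = list(observed_times)
    threshold = (min(times) + max(times)) / 2
    return [1 if t > threshold else 0 for t in times]

message = [1, 0, 1, 1, 0, 0, 1, 0]
print(receive_bits(send_bits(message)))  # usually recovers the message

# One bandwidth-limiting countermeasure: round every observable completion
# time up to a fixed quantum, so fast and slow executions look alike and the
# channel's capacity shrinks accordingly.
def quantized(observable_seconds: float, quantum: float = 0.05) -> float:
    return math.ceil(observable_seconds / quantum) * quantum
```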
The disconnect is this: customers arguably have MLS applications (in the commercial sense) that they're running without the benefit of even the most rudimentary controls on information flow. The security industry is wrapped up in a hopeless test-and-patch cycle that is demonstrably flawed. The consumer has to conclude, in this market, that computer security is an oxymoron at its heart, and unattainable by any means.
That's too bad. You CAN control information flows according to a sound, reasonable, usable security policy (integrity, secrecy, whichever). You CANNOT do it using systems that were never designed or intended to do that. You CAN combine trusted systems with untrusted systems such that the trusted systems prevent unintended information flows, while the untrusted systems provide expansive functionality on the information you allow them to access. You CAN allow "read-down" by untrusted systems to lower-security data IF the sharing of that data between sensitivity levels is mediated by a trusted system (complete with controls on channels).

No, you shouldn't let high systems issue their own requests to low networks (i.e., browse the Internet) without trusted mediation - either by doing the browsing from a trusted (high assurance) client with trusted cut-n-paste (a thin client would do), or via some trusted server intermediary that caches and clears low data for high-side consumers.
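To make those flow rules concrete, here is a toy reference-monitor sketch of the "read down, never write down" discipline described above (Bell-LaPadula style secrecy levels). It illustrates the policy only - the level names and API are invented, and it is not the design of any evaluated product:

```python
# Toy reference monitor for the secrecy rules described above: subjects may
# read data at or below their level ("read down") and write only at or above
# it, so information never flows from high to low. Names are invented.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # "No read up": the subject's level must dominate the object's.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # "No write down": the object's level must dominate the subject's,
    # otherwise a high subject could leak into a low container.
    return LEVELS[object_level] >= LEVELS[subject_level]

# A SECRET user may read UNCLASSIFIED data fetched for them by a trusted
# intermediary, but may not write results back out to the low network.
assert can_read("SECRET", "UNCLASSIFIED")       # read-down: allowed
assert not can_write("SECRET", "UNCLASSIFIED")  # write-down: denied
assert not can_read("CONFIDENTIAL", "SECRET")   # read-up: denied
```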
But in today's world, everything, and I do mean everything, from routers to hubs and switches to firewalls, workstations, and servers, runs on multi-million-line monolithic kernel systems, whether Linux, Solaris, BSD, or Windows. None of those are evaluable above EAL4 (which is "low assurance" by definition).