Have you tried Oracle VM Server for SPARC and Oracle Solaris Zones for virtualization? Anyway, Oracle and CloudSigma both offer Solaris in the cloud. And of course there is nothing stopping you from upgrading him to a modern Solaris box.
And there are a bunch of similar applications for which you might want to be able to verify that the mail's only going where it should, and that it won't stick around as a legal record longer than you want it to.
Yes, because total control of the commons worked SOOO well for the Soviet Union and their satellites. Governments with total common ownership are responsible for the worst ecological catastrophes in history. Even today China, with total control of their energy industry, has the worst air pollution in the world.
Face it, the idea that command and control is the only way is a total lie. An open and free market for technological innovation will save the environment, not mimicking the failed God damned central planning of last century.
That was exclusively for workstations, but there definitely were multi-processor 486s sold. I had a buddy with a 4x486. SCO was the typical OS for these boxes; OS/2 and Linux were both working on SMP support and would eventually achieve it.
Also, with SCO the x86/i860 combo was popular (for an exotic workstation). The 486, while having good floating-point math, sucked at vector math. The i860, while good at vector math, was bad at multi-tasking. There were both motherboards and compilers to take advantage of this combo, and it was a winner. It allowed you to build a workstation for under $10k that was a bad version of a MIPS-style workstation.
No you couldn't. 16MB SIMMs were out there but were very expensive, and many motherboards wouldn't support more than 4MB SIMMs (1 and 2MB SIMMs were still the norm for PCs). Good motherboards (in full-tower cases) had at most 8 slots. So I'm going with 128MB as an upper limit.
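As a back-of-the-envelope check of that ceiling (the slot count and SIMM sizes are the figures from above, not the spec of any particular board):

```python
# Back-of-the-envelope RAM ceiling for a high-end early-90s 486 board.
# Figures from the post: 8 SIMM slots on a good full-tower board, with
# 16MB SIMMs available but very expensive, and 1-2MB SIMMs the PC norm.
slots = 8
largest_simm_mb = 16
typical_simm_mb = 2

print(slots * largest_simm_mb)  # 128 MB -- the practical upper limit
print(slots * typical_simm_mb)  # 16 MB -- with the SIMMs most PCs used
```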
Sun wasn't that. The 128MB of RAM wasn't cheap, but the 2GB HD meant they were skimping. I bet those systems were around $5-7k or so, well under double what an x86 workstation would cost.
As for getting professors to give up old equipment, start metering the electricity and billing the department.
I'd take that bet. Don't forget how much faster the ARM chips are getting. For example, the A7 is twice the speed of the A6, which is almost 3x the speed of the A5. Admittedly the A8 is only a 20% speed bump, but that's not bad relative to x86, especially for an off year. We'll find out over the next decade-plus: can you make ARM faster more easily than you can make x86 more efficient? But I'd bet on ARM.
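Chaining those generation-over-generation figures together (taking the post's multipliers at face value, not official benchmark numbers):

```python
# Cumulative speedup across Apple's ARM chips, using the multipliers
# quoted above: A6 ~= 3x the A5, A7 = 2x the A6, A8 = 1.2x the A7.
gen_speedups = {"A6": 3.0, "A7": 2.0, "A8": 1.2}

total = 1.0
for chip, multiplier in gen_speedups.items():
    total *= multiplier
    print(f"{chip}: {total:.1f}x the A5")
```

By these numbers the A8 lands at roughly 7x an A5 in three generations, which is the kind of curve x86 hasn't seen in years.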
So does this mean that I wouldn't be able to, say, remotely display a desktop environment which uses Qt, and within that click a shortcut to a GTK app, and expect it to open and be managed by that Qt desktop environment?
Remember, KDE and Gnome cooperate, and there is D-Bus. What would likely happen is something like this:
a) The KDE desktop is running and you click on a Gnome application.
b) KDE passes a message to the local Gnome side to handle the remoting.
c) The local Gnome establishes a session with the remote Gnome.
d) Those two communicate, making the application effectively local.
e) The Gnome application displays on the KDE desktop using the tools they use today for this sort of thing (D-Bus...).
So from an end-user standpoint, nothing changes.
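Purely as an illustration of that speculated flow (the class and method names here are hypothetical stand-ins; a real implementation would pass D-Bus messages between the actual desktops, not Python objects):

```python
# Illustrative simulation of the speculated KDE/Gnome remoting handoff
# (steps a-e above). All names are hypothetical, for illustration only.

class RemoteGnomeSession:
    """Stands in for step c: a session with the remote Gnome side."""
    def open_app(self, app):
        # Step d: the two Gnome ends communicate; the app becomes
        # effectively local from the desktop's point of view.
        return f"{app} (remoted, effectively local)"

class LocalGnome:
    """Stands in for step b: the local Gnome side handles remoting."""
    def handle_remote_launch(self, app):
        session = RemoteGnomeSession()  # step c
        return session.open_app(app)

class KdeDesktop:
    """Step a: the running KDE desktop the user clicks in."""
    def __init__(self):
        self.windows = []
    def click_shortcut(self, app):
        # Step b: KDE hands the request to the local Gnome side.
        window = LocalGnome().handle_remote_launch(app)
        # Step e: the Gnome app displays on the KDE desktop.
        self.windows.append(window)
        return window

desktop = KdeDesktop()
print(desktop.click_shortcut("gedit"))
```

The point of the sketch is the shape of the handoff: KDE never talks to the remote machine itself, it just delegates to the local Gnome libraries and manages the resulting window like any other.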
Once you have your desktop environment displaying remotely, everything you do looks and feels local. How can you have that when each app may have a different remote implementation?
You could get a local feel; it's up to the toolkit. You could even get something much better, since each application and each toolkit makes intelligent choices about how to handle latency issues. For example, one application might cache very aggressively if latencies are high, while another might be more worried about processing delays and thus keep intermediate buffers shallow to reduce the effective latency as much as possible, even though this means the buffer sometimes runs dry. Apple, incidentally, is currently doing some brilliant work on buffers, taking ideas that Linux invented and making them practical. With Wayland, Linux applications, and thus users, will be able to take advantage of these advances.
Yes it is. In my previous statement I chose Qt and GTK as examples because they are so common. A user could have any number of applications using any number of GUI toolkits. Assuming they will all bother to implement their own remote access would be way over-optimistic.
If an application is written using a toolkit that doesn't support remoting, then the application doesn't support remoting by design. The major Linux toolkits are already working with the Wayland team; they fully intend to support it. I'd assume that highly specialized toolkits which don't remote, don't remote because they can't tolerate latency. For example, a good touch toolkit might fail at 1ms of latency, and 1ms is too fast for almost all LANs to keep up with.
So let's use this example. Human brains aren't built to tolerate touch latency... so you are using a touch toolkit that would be unusable remotely anyway. What's the problem if it doesn't remote?
If I can watch a high definition video feed in real time over the internet then I should be able to remotely display a desktop or a user should be able to remotely display a game. The two should not be mutually exclusive. Surely it is possible to fix this in a way that pleases the gamer without screwing it up for the remote desktop user.
It isn't possible to do both. I'll just repeat what I wrote in the post directly above: "There are advantages to splitting application and video buffers for network transparency. There are advantages to unifying application and video buffers for performance." You have to pick. Either the person who wants performance loses or the person who wants network transparency loses. There are lots of either/or choices in life, and there are lots of either/or choices in designing a windowing system. You cannot build a system where everyone gets everything. And even if I were wrong, X11 most certainly is not such a system. In the world of 2015, X11 mostly sucks at everything and is painfully being kept alive via hacks. The low-level choices you keep dismissing fundamentally alter what the system is capable of doing. For you to get feature F, Mr. G has to not get feature H.
Wayland people are not taking away stuff to be mean or because they are lazy. They are taking away things because they are balancing the greatest good for the greatest number. Given the 2015 computing environment*, the feature set Wayland chose was arguably the best choice. There is no question that X11's feature set is a dreadful choice.
* Or, more accurately, given the 2008 computing environment; Wayland itself is falling behind.
I should be able to see my favorite desktop manager and click shortcuts within it without worrying about which toolkit each uses. It should just work, just like it does now.
And you will be able to do that. And most likely it will be far better than now.
They were, to be fair, rock solid. I was using a couple until the late 2000s as my DSL gateway and email servers, and it was largely the lack of support (from the rest of the world) for SCSI-2 that made me reluctantly shut them down for the last time.
I'm not sure I've heard anyone suggest ARM is superior. It happens to be fulfilling a good niche as an architecture that provides decent performance per watt. But you're not seeing anyone wanting to use it in areas where power isn't a concern.
I suspect ARM will eventually be the architecture that's supplanted, not ix86 or ix86-64. Intel's getting good at producing low-power ix86-family CPUs - I have one in my tablet - and the mobile space isn't really wedded to any architecture, but the desktop space is.
Gold "outperforming" (rising in price faster than) an index based on real world valuations would seem, to me, to be evidence gold is a poor (actually atrocious) substitute for a well managed fiat currency.
The recent OED has 171.5k words in it. Native speakers have a vocabulary of about 20k-35k words. Finally, at least now you want to use 4 words, not 3, and possibly one substitution trick.
lowest figure: 20k^3 = 8 trillion ~ 2^43 ~ 7-character random password
highest figure: 171.5k^4 = 8.65 x 10^20 > 2^69 ~ 11-character random password
Humans generally don't remember random passwords very well. This ain't bad.
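Sketching the arithmetic behind those two lines (95 is the printable-ASCII pool usually assumed for random passwords; the helper names are just for illustration):

```python
import math

# Entropy of a passphrase: words * log2(vocabulary size).
def passphrase_bits(vocab, words):
    return words * math.log2(vocab)

# How many random characters from a given pool carry the same entropy.
def equivalent_password_chars(bits, pool=95):
    return bits / math.log2(pool)

low = passphrase_bits(20_000, 3)     # ~42.9 bits
high = passphrase_bits(171_500, 4)   # ~69.6 bits

print(round(low, 1), "bits ~", round(equivalent_password_chars(low)), "chars")
print(round(high, 1), "bits ~", round(equivalent_password_chars(high)), "chars")
```

That reproduces the figures above: three common words land near a 7-character random password, four dictionary words near an 11-character one.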
You know, it kinda makes sense. But I've had months where I've been unable to play a specific game or two (without turning off various features, which severely degrades performance) because "the latest driver" from AMD/ATI has had one issue or another, with no bug fixes available short of running the unsupported beta version. Given that, the idea of being forced to upgrade a driver that is currently not causing any problems is a definite negative to me.
It'd be one thing if display card drivers were always being updated to fix bugs/security holes, but in practice 99% of the updates I see are actually to support new cards (which isn't something I need or want a software update for), or to fiddle with hardware optimizations to improve performance in theory (which might be useful, but there's no reason to force such an update on people).
Windows Update needs the ability to "pin" versions much as apt-get does. For security updates, fine, force them; but if an update is solely there to "improve performance" - or will have no effect whatsoever - it absolutely needs to be blockable.