To be fair, having a single snake-like cable connecting all the PCs was a pretty common idea. To us today it seems obviously easier to have individual connections to a central hub or switch, but at the time a common cable was accepted, perhaps to save on the total amount of wiring used. Token Ring also used a single wire, though joined into a loop, and I think there was also Token Bus.
You are right that "transmit, and retry on collision" can never scale. But it is simple and hard to get wrong, as long as everyone is polite enough to wait a little before retrying. I'd say the success of the original Ethernet is an example of "worse is better".
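That "politeness" is classic Ethernet truncated binary exponential backoff. A minimal sketch of the retry rule (the slot time and the 16-attempt limit are from 10 Mbit/s Ethernet; the function name is mine):

```python
import random

SLOT_TIME_US = 51.2  # one slot time on 10 Mbit/s Ethernet, in microseconds

def backoff_delay(collisions: int) -> float:
    """Wait time (microseconds) before the next retry, per classic
    Ethernet truncated binary exponential backoff: after the nth
    collision, pick k uniformly from 0 .. 2^min(n, 10) - 1 and wait
    k slot times. After 16 failed attempts the frame is dropped."""
    if collisions > 16:
        raise RuntimeError("too many collisions, frame dropped")
    k = random.randrange(2 ** min(collisions, 10))
    return k * SLOT_TIME_US
```

The randomness is the whole trick: two colliding stations probably pick different delays, and doubling the range on each collision adapts to however many stations are fighting for the wire.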
Digital security experts told me that bad guys can use software to easily translate your "at" and "dot" into a regular old email address.

Well, duh. It's trivial. But the question is not whether they can, it's whether they do. The evidence, as far as I can tell, is that generally they do not. It's hardly worth it for them: anyone with the moderate level of intelligence needed to obfuscate their email address is too intelligent to respond to spam. The exception would be if some large, popular website started displaying users' email addresses with an automatically applied obfuscation. But I think sites aren't that stupid - they usually have "click to reveal the address", which requires a new request to the server and rate-limits any harvesting.
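To show just how trivial it is, the "easy translation" amounts to a couple of regex substitutions (a hypothetical harvester sketch, not any real tool):

```python
import re

def deobfuscate(text: str) -> str:
    """Reverse the common "user at example dot com" obfuscation:
    replace a spelled-out 'at' or 'dot' (optionally wrapped in
    parentheses or brackets, with surrounding spaces) by '@' / '.'."""
    text = re.sub(r"\s*[\(\[]?\bat\b[\)\]]?\s*", "@", text, flags=re.I)
    text = re.sub(r"\s*[\(\[]?\bdot\b[\)\]]?\s*", ".", text, flags=re.I)
    return text
```

A few minutes of work, which is exactly why the interesting question is economic rather than technical.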
Windows 365 supports nested virtualization...
How does Microsoft manage to do it while Parallels cannot?
Microsoft does say that there are some limitations when it comes to virtualizing Windows 11 on top of macOS, pointing primarily to features that require nested virtualization to function as not being supported.
And that, I think, is the problem with operating systems like Qubes OS, or indeed Windows 11 itself with its Android and Linux subsystems. The PC cannot be fully virtualized, because the virtual environment cannot itself host VMs.
I have no experience with mainframes, but I believe that where virtualization is done properly it's not like that -- a virtual machine can itself host other VMs, down to an arbitrary depth. Anyone care to correct me?
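On Linux with KVM, at least, nested virtualization is a module parameter you can probe from userspace. A sketch, assuming a Linux host (the parameter paths are real; the helper name is mine):

```python
from pathlib import Path

def kvm_nested_enabled() -> bool:
    """Report whether the KVM 'nested' module parameter is enabled.
    Which path exists depends on whether the host CPU is Intel or AMD;
    on non-Linux or non-KVM hosts neither exists and we return False."""
    for p in ("/sys/module/kvm_intel/parameters/nested",
              "/sys/module/kvm_amd/parameters/nested"):
        f = Path(p)
        if f.exists():
            return f.read_text().strip() in ("Y", "1")
    return False
```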
That way we could finally get rid of CONFIG_MATH_EMULATION too.
I guess that's true: Intel never made a cut-price Pentium chip with no floating point unit, and I guess none of the competing vendors did. But isn't it possible we might get CPUs without x87 floating point in the near future? After all, if you have high-performance floating-point code you will be using SSE or later instructions, and crusty old stuff that calls x87 would still run acceptably fast under emulation.
Perhaps you will say that silicon is so cheap it costs basically nothing to include a small x87 floating point unit on the die?
This viewpoint motivated the Hutter Prize, offering a prize to the first to compress the one gigabyte corpus to less than 115 megabytes (including the size of the decompression code). "This compression contest is motivated by the fact that being able to compress well is closely related to acting intelligently, thus reducing the slippery concept of intelligence to hard file size numbers."
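To make the "hard file size numbers" concrete, here's a toy measurement against the prize's ~11.5% target ratio using a general-purpose compressor (the sample text and names are mine; real contest entries use far more specialized models, and on real Wikipedia text zlib falls well short of the target):

```python
import zlib

# Hutter Prize target: 1 GB of Wikipedia text down to under 115 MB,
# i.e. a compressed/original ratio below ~11.5%, counting the
# decompressor itself in the total.
TARGET_RATIO = 115 / 1000

# A deliberately repetitive sample, so a general-purpose compressor
# does well; real encyclopedic text is much harder.
sample = (b"Compression and prediction are two sides of the same coin: "
          b"a model that predicts the next symbol well yields short codes. ") * 200

compressed = zlib.compress(sample, 9)
ratio = len(compressed) / len(sample)
```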
Here it's a bit different since you have a lossy compression scheme - the exact input image is not returned. That means you couldn't really offer a contest with a cash prize, since the judges couldn't fairly decide whether the output was "close enough". Nonetheless you could say that the model has demonstrated some kind of intelligence in being able to recreate a close-enough image from a short description. As usual with machine learning, it's hard to get the model to explain how it produces the result or what key features it has discerned.
People who go to conferences are the ones who shouldn't.