We know that ChatGPT can make up stuff that vaguely matches its training set. And you could train it on a corpus of peer-reviewed academic papers and newspapers (not including Wikipedia or other websites in the corpus) and it might then be able to write encyclopaedia articles. But as Jimmy Wales noted, you'd have to take its output with a very large pinch of salt, and by that point you might as well write the article by hand.
A more fruitful approach, as well as one that's more of a challenge for current technology, would be to ask a machine learning system to find problems with Wikipedia articles. "The claim in the fifth paragraph cites the study [15] as evidence, but the conclusions of that study do not support the claim in the Wikipedia article. This is because the study says... while the article says... "
Once you've got a system that can nitpick articles like this, even if it's unreliable and sometimes gets false positives, it's still a good starting point for a human to go in and check. And if the model has misunderstood, hopefully it would be possible to use these mistakes to correct the model or as additional training data.
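The workflow being proposed is really just a filter in front of human reviewers. A minimal sketch, with the model call left as a hypothetical `check` function (nothing here is a real Wikipedia or ML API):

```python
def audit_claims(claims, check):
    """Run claim/source pairs through a checker and collect the ones
    flagged as unsupported, for a human to look at.

    claims -- iterable of (claim_text, source_text) pairs
    check  -- hypothetical model call returning (supported, explanation)
    """
    flagged = []
    for claim, source in claims:
        supported, why = check(claim, source)
        if not supported:
            # false positives are fine: a human verifies each flag,
            # and confirmed mistakes can go back in as training data
            flagged.append((claim, why))
    return flagged
```

The point of the design is that the model only has to be useful, not reliable: every flag is cheap to produce and gets a human check before anything is edited.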
To be fair, having a single snake-like cable connecting all the PCs was a pretty common idea. To us today it seems obviously easier to have individual connections to a central hub or switch, but at the time a common cable was accepted, perhaps to save on the total amount of wiring used. Token Ring also used a single wire, though joined into a loop, and I think there was also Token Bus.
You are right that "transmit, and retry on collision" can never scale. But it is simple and hard to get wrong, as long as everyone is polite enough to wait a little before retrying. I'd say the success of the original Ethernet is an example of "worse is better".
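The "politeness" here is classic Ethernet truncated binary exponential backoff: after the n-th collision you wait a random number of slot times in [0, 2^n - 1], with the exponent capped at 10 and the frame dropped after 16 attempts. A sketch of just that retry logic (the `transmit` callback stands in for the actual hardware):

```python
import random

def backoff_slots(attempt, max_exp=10):
    """Truncated binary exponential backoff: after the n-th collision,
    pick a wait of 0 .. 2^min(n, max_exp) - 1 slot times."""
    return random.randint(0, 2 ** min(attempt, max_exp) - 1)

def send(transmit, max_attempts=16):
    """transmit() returns True on success, False on collision."""
    for attempt in range(1, max_attempts + 1):
        if transmit():
            return attempt  # number of tries it took
        wait = backoff_slots(attempt)
        # real hardware now idles for `wait` slot times
        # (one slot is 512 bit times, 51.2 us on 10 Mb/s Ethernet)
    raise RuntimeError("excessive collisions, frame dropped")
```

The randomness is the whole trick: two stations that just collided almost certainly pick different waits, and the doubling window means the protocol adapts to load without any coordination.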
Digital security experts told me that bad guys can use software to easily translate your "at" and "dot" into a regular old email address.
Well, duh. It's trivial. But the question is not whether they can, it's whether they do. The evidence, as far as I can tell, is that generally they do not. It's hardly worth it for them, as anyone with the moderate level of intelligence needed to obfuscate their email address is too intelligent to respond to spam. The exception would be if some large, popular website started displaying users' email addresses with an automatically applied obfuscation. But I think sites aren't that stupid - they usually have "click to reveal the address", which requires a new request to the server and rate-limits any harvesting.
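Just to show how trivial the translation is, a few lines of Python handle the common spellings. The two patterns below are my guess at typical obfuscations, not an exhaustive harvester:

```python
import re

# "at"/"dot" either surrounded by whitespace or wrapped in ()/[]
AT = re.compile(r"\s+at\s+|\s*[\[(]\s*at\s*[\])]\s*", re.IGNORECASE)
DOT = re.compile(r"\s+dot\s+|\s*[\[(]\s*dot\s*[\])]\s*", re.IGNORECASE)

def deobfuscate(text):
    """Turn 'user at host dot com' spellings back into addresses."""
    return DOT.sub(".", AT.sub("@", text))
```

Requiring whitespace or brackets around the keywords keeps words like "chat" or "dotted" intact, which is about the only subtlety involved.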
Windows 365 supports nested virtualization...
How does Microsoft manage to do it while Parallels cannot?
Microsoft does say that there are some limitations when it comes to virtualizing Windows 11 on top of macOS, pointing primarily to features that require nested virtualization to function as not being supported.
And that, I think, is the problem with operating systems like Qubes OS, or indeed Windows 11 itself with its Android and Linux subsystems. The PC cannot be fully virtualized, because the virtual environment cannot itself host VMs.
I have no experience with mainframes, but I believe that where virtualization is done properly, it's not like that -- a virtual machine can itself host other VMs, down to an arbitrary depth. Anyone care to correct me?
That way we could finally get rid of CONFIG_MATH_EMULATION too.
I guess that's true: Intel never made a cut-price Pentium chip with no floating-point unit, and I guess none of the competing vendors did either. But isn't it possible we might see CPUs without x87 floating point in the near future? After all, if you have high-performance floating-point code you will be using SSE or later instructions, and crusty old stuff calling x87 would still run acceptably fast under emulation.
Perhaps you will say that silicon is so cheap it costs basically nothing to include a small x87 floating point unit on the die?
We gave you an atomic bomb, what do you want, mermaids? -- I. I. Rabi to the Atomic Energy Commission