The only reason Linux is perceived as more secure than other operating systems is that most hackers don't care enough to spend time working to crack it, so there are fewer attempts.
Linux is a major server OS (arguably the largest), very big in embedded systems, and completely dominant on smartphones. Hackers spend very significant time working to find exploits in it.
I don't see the problem. "There is bugs. And here is elmer. And over there is daffy." Seems grammatically fine to me.
I'm using LTS for all my work machines. During the last cycle I rarely felt I missed out on anything compared to my up-to-date machine at home. I think it's perfectly reasonable to stay on LTS if you want. You can still update to newer versions of, say, LibreOffice and similar applications using snaps if you need to.
The first-generation Xperia phones actually did something similar. They kept the battery above 90% by charging up to 100%, then letting it fall back to 90% before recharging again. Much better battery lifetime than keeping it at 100% constantly.
But lots of people complained that Sony had a lousy charging system that couldn't even keep the battery topped up. So to avoid the bad press they changed it and kept the battery at 100% all the time, like the rest of the manufacturers.
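That charging policy is just simple hysteresis, by the way. A minimal sketch of the idea (the threshold names and values are my own, not Sony's actual firmware logic):

    # Hysteresis charging: stop at full, don't resume until the level
    # has drifted below a lower threshold. Purely illustrative.
    RESUME_BELOW = 0.90   # start charging again once under 90%
    STOP_AT      = 1.00   # stop once full

    def charger_on(level: float, currently_charging: bool) -> bool:
        if currently_charging:
            return level < STOP_AT     # keep charging until full
        return level < RESUME_BELOW    # otherwise wait for the dip below 90%

The point of the gap between the two thresholds is that the charger isn't constantly cycling on and off near 100%, which is what ages the cells.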
hostage situation funded by RNC
Citations, please. It's not in the Wikipedia article.
Ars Technica allows 30 minutes, I believe, and it doesn't seem to be abused. People who reply quote the bit they're replying to, so it's clear what they refer to anyway.
So how about a 30-minute editing window, plus a quick one-button way to quote the parent post? Just to encourage people to include the original bits in their replies.
For added protection you could colour the edited text dark purple, say, just to make it clear to people what has been edited?
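The window check itself is trivial. A sketch, with the 30 minutes hardcoded to match the Ars example (function name is mine, not any forum software's actual API):

    from datetime import datetime, timedelta, timezone

    EDIT_WINDOW = timedelta(minutes=30)

    def may_edit(posted_at: datetime) -> bool:
        """True while the post is still inside its editing grace period."""
        return datetime.now(timezone.utc) - posted_at <= EDIT_WINDOW

All the real work would be in the UI side: the quote button and the highlighting of edited text.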
Well, yes and no. You're limited to 100 Mbit/s, which is of course a lot slower than gigabit Ethernet. But normally a scientific cluster (which is what I'm interested in) isn't really limited by bandwidth so much as by latency. Going through the USB subsystem for every packet is going to give you worse latency than dedicated hardware. Then again, I also use a cheap switch that's probably no speed demon at retransmitting packets either.
And the thing is, the Pi is a fairly slow computer. I suspect that, as a ratio of computing speed to transmission delay, the Pi communicates about as effectively as a "real" cluster of servers connected with high-end hardware. The CPU is even slower than the network, if you will.
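For what it's worth, the round-trip latency is easy to measure yourself. A rough probe I'd use (assumes a trivial one-byte echo server listening on the other node; the hostname and port are placeholders for my setup):

    import socket, time

    def rtt_us(host: str, port: int = 5001, n: int = 200) -> float:
        """Average round-trip time in microseconds for a 1-byte echo."""
        with socket.create_connection((host, port)) as s:
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no Nagle batching
            t0 = time.perf_counter()
            for _ in range(n):
                s.sendall(b"x")
                s.recv(1)
            return (time.perf_counter() - t0) / n * 1e6

    print(rtt_us("pi-node1"))   # placeholder hostname

Compare that number against how long a unit of useful computation takes on the node, and you get the compute-to-communication ratio I'm talking about.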
Any particular reason not to just do it in software, e.g. XenServer or VirtualBox? Virtual networking is kind of messy, but it leaves fewer cables around.
VMs would work well, I agree. But this way I also get real(ish) network latency and delays, the same way a full-size system does. And an actual tiny cluster on my desk is a lot more fun.