It spreads out the load at the peak - we call it a thrust ring in modern building practice.
The article is likely wrong. There are no high tensile forces in the Pantheon, including the dome, at least not what we would consider "high" today. The structure is a (mostly) compression-only building. The oculus is a compression ring, and the dome shape is close enough to a paraboloid that any tension forces are negated and the thrust at the base minimized.
Concrete has tensile strength all by itself. If I gave you a rod of concrete just an inch thick, you wouldn't be able to pull it apart. Even tension from bending is allowed in the design of modern structures with everyday concrete. There are several modern admixtures that even allow cracks to self-heal in the presence of moisture.
To see real math applied to all-compression spanning structures, look at hyperbolic paraboloid (saddle-shaped) or inverted-catenary dome roofs. In some cases (usually flat-ish roofs) the shape is architectural and rebar or prestressing steel is required, but for pure utility you can define a curve that keeps the surface in compression, and then the only steel added is typically for shrinkage and thermal-cycling crack control (which is cheaper than using shrinkage-compensated concrete mixtures). They're rare because they tend to be very labor-intensive to form and cover.
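A minimal sketch of the curve in question, with made-up dimensions: an inverted catenary (the shape a hanging chain settles into, flipped upside down) carries its own weight in pure compression, which is why such a roof needs steel only for crack control.

```python
import math

def inverted_catenary(x, a, apex):
    """Height of an inverted catenary arch at horizontal offset x from the apex.

    A hanging chain settles into y = a*cosh(x/a); flipping that curve over
    gives a profile in pure compression under self-weight.
    """
    return apex - a * (math.cosh(x / a) - 1.0)

# Profile of a 20 m span with flatness parameter a = 10 m and a 10 m apex.
span, a, apex = 20.0, 10.0, 10.0
profile = [inverted_catenary(x - span / 2, a, apex) for x in range(int(span) + 1)]
```

The span, apex, and `a` here are arbitrary illustration values; a real roof would be sized from the actual loads and geometry.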
And then he still has to hope that the bond strength works out.
As for I/O, you can pass PCI devices through into the guest for pretty much native networking performance.
Of course, that comes with its own headaches and negates some of the benefits of a VM architecture. Paravirtualized networking is, however, adequate for most workloads.
It's not like you have to do VM *or* baremetal across the board anyway. Use what makes sense for the circumstance.
In my experience, KSM hasn't helped as much as it promised. It depends heavily upon the workloads. It also impacts memory performance. If things are such that KSM can be highly effective, then a container solution would probably be more prudent.
CPU throughput impact is nearly undetectable nowadays. Memory *capacity* can suffer (the hypervisor itself has a memory footprint), though memory *performance* can also be pretty much on par with bare metal.
On hard disks and networking, things get a bit more complicated. In the most naive setup, what you describe is true: emulated devices are a huge loss. However, paravirtualized network and disk drivers are pretty common, which brings performance into the same ballpark as not being in a VM. But that ballpark is relatively large; you still suffer significantly in the I/O department in x86 virtualization, despite a lot of work to make that less the case.
Of course, a VM doesn't always make sense. I have seen people build a hypervisor that ran a single VM requiring pretty much all the resources of the host, so no other VM could run. It was architected such that live migration was impossible. That sort of stupidity makes no sense: pissing away efficiency for no gain.
Smartphones have actual microphones. Why use the gyro as a crude microphone when you have a perfectly functioning microphone built into the device already?
So to the extent this conversation does make sense (it is pretty nonsensical in a lot of areas), it refers to a phenomenon I find annoying as hell: application vendors bundle all their OS bits.
Before, if you wanted to run vendor X's software stack, you might have had to mate it with a supported OS, but at least vendor X was *only* responsible for the code they produced. Now, increasingly, vendor X releases *only* an 'appliance' and is in practice responsible for the full OS stack despite having no competency to be in that position. Let's look at the anatomy of a recent critical update: OpenSSL.
For the systems where the OS has applications installed on top, patches were ready to deploy pretty much immediately, within days of the problem. It was a relatively no-muss affair. Certificate regeneration was an unfortunate hoop to go through, but it's about as painless as it could have been given the circumstances.
For the 'appliances', some *still* do not even have an update for *Heartbleed* (and many more didn't bother with the other OpenSSL updates). Some have updates, but only in versions that also carry unwanted functional changes in the application, and the vendor refuses to backport the relatively simple library change. In many cases, applying an 'update' actually resembles a reinstall: downloading a full copy of the new image and doing some 'migration' work to preserve data continuity.
Vendors have traded a generally small amount of effort at initial deployment for an unmaintainable mess when it comes to updates.
According to the gov, 33% total efficiency for coal.
Of course if you take into account the energy expenditure it will take to pull the excess CO2 and other chemicals back out of the atmosphere, that number goes down a bit.
(Impractical to do, and therefore will never be done, you say? Okay, take into account the costs of living with a permanently impacted atmosphere, instead)
Maybe instead of automating the grunt work, we now need to automate the automating itself.
Why instead of? We can (and should, and will) do both.
Laying wire/fiber is expensive, but in the end the people in the area can and do eventually pay for it regardless of who did it. Whether Verizon, Comcast, Joe's Fiber Company, or the city of Whatever laid the wire, the final cost of that wiring project would be the same. The problem with the franchise agreements is that the people paid, but they paid a single company that won't share. For the same money, people could have paid a third party or the local government to run those lines, had an "open" line, and then picked a carrier for their service on that line. The Verizons and Comcasts could still negotiate and run their own lines in the same area instead of providing service on the existing "public" lines, but they won't. Why? Because people would then have competition and choice, and the incumbents don't see money in that.
The issue isn't that you disagreed with his assertion, he was obviously incorrect. The issue is that you were an obnoxious twat about it.
When he said "people like you", I read it as "obnoxious twats".
No. Fusion power is roughly $80bn of research away. The problem is that funding has been so meagre that we will never actually reach the goal at current rates. If $80bn sounds like a lot, it's not - it's only 0.11 Iraq Wars. We saw fit to spend around $750bn (a highly conservative estimate - the US DOD's own) on bombing Iraq, but we don't see fit to spend just over a tenth of that amount on freeing ourselves from dependence on that entire region forever.
$22bn is only 0.03 Iraq Wars, so it's really not that much money in the grand scheme of things.
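The "Iraq Wars" unit above is just a ratio; a quick sanity check of the arithmetic, using the figures quoted in the comment:

```python
# Figures (USD) as quoted above; the $750bn is the conservative DOD estimate.
iraq_war = 750e9
fusion_research_needed = 80e9
fusion_already_spent = 22e9

print(round(fusion_research_needed / iraq_war, 2))  # 0.11
print(round(fusion_already_spent / iraq_war, 2))    # 0.03
```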
Probably not. Most normal cars don't accelerate that quickly, but they have extremely effective brakes. If you're driving properly and take at least a quick glance at crossing traffic when approaching a traffic light, you'll probably see the red-light jumper before he's jumped the light. At that point you can slow down *far more rapidly* than you could speed up.
About the only vehicles where acceleration may change the outcome are supercars and motorcycles.
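Back-of-the-envelope kinematics make the braking point concrete. The figures here are ballpark assumptions for a typical car (roughly 0.9 g braking, 0.3 g full-throttle acceleration), not measurements:

```python
# Compare speed shed by braking vs. speed gained by accelerating in the
# same time window. Assumed: ~0.9 g braking, ~0.3 g acceleration
# (typical-car ballpark figures, not measurements).
G = 9.81
v0 = 50 / 3.6                        # approaching the light at 50 km/h, in m/s

brake = 0.9 * G
accel = 0.3 * G

t = v0 / brake                       # time to stop completely
stop_distance = v0**2 / (2 * brake)  # from v^2 = 2*a*d
speed_gained = accel * t             # speed added if you floored it instead

print(round(t, 2), round(stop_distance, 1), round(speed_gained * 3.6, 1))
```

With these assumptions, in the ~1.6 s it takes to stop from 50 km/h, flooring it would add only about 17 km/h; the ratio of speed shed to speed gained is simply brake/accel, i.e. 3:1.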