I have been maintaining infrastructure for over a decade. I know what's involved.
When you mention flexibility, you're getting somewhere. If I needed a temporary capacity bump, the cloud would make a great deal of sense. It's not a bad DR plan either. But for the everyday capacity (the base load if you will), ownership is cheaper and offers better control.
Bank the money saved by owning the server vs cloud. By the time the server fails you'll have enough to buy 2 or 3.
OTOH, definitely consider the cloud as a DR measure.
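To put rough numbers on the "bank the savings" point, the break-even math is trivial. Every dollar figure below is a made-up placeholder, not a real quote; plug in your own numbers:

```c
/* Back-of-envelope own-vs-cloud math. All dollar amounts are
 * hypothetical placeholders -- substitute your actual quotes. */

/* Months until the banked monthly savings cover the purchase price. */
double breakeven_months(double server_cost, double cloud_monthly,
                        double owned_monthly) {
    return server_cost / (cloud_monthly - owned_monthly);
}

/* Cash banked after `months` of ownership, net of the purchase price. */
double banked_after(double server_cost, double cloud_monthly,
                    double owned_monthly, int months) {
    return (cloud_monthly - owned_monthly) * months - server_cost;
}
```

With, say, $6000 up front against a $400/month cloud instance and $150/month of ownership overhead (power, space, a share of admin time), that's break-even at 24 months and $9000 banked over a five-year service life.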
He'll probably end up in an alley shooting up and explaining why, unlike all the other junkies, it really and truly isn't his fault.
In some cases, they HAD to start taking drugs to control agonizing pain.
Later, they get cut off when they heal (or the doctor, threatened by the DEA, dares not continue prescribing) and find they are addicted. Then stupid laws made by the small-minded turn these ordinary citizens with a medical problem into criminals.
Yes, I do assembly programming when necessary. Of course, this is a compiler, not an assembler.
But since I am speaking of a specific case of a compiler's behavior, I wouldn't actually have to be skilled in assembly code to evaluate if it did or did not run correctly with the offending function overridden in the object code.
The optimization flags have nothing to do with CPU errata. You should know that.
Compile with -On where n > 3 and it may not behave correctly on GenuineIntel, or on AMD with the crippler defanged. Oddly, it might work on AMD with the crippler intact in that case (or it might not). Most of that is due to the compiler taking a few liberties with floating-point correctness that may or may not work out OK.
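The floating-point point is easy to demonstrate without any compiler flags at all: FP addition simply isn't associative, so any flag that licenses the compiler to reassociate expressions (gcc's -ffast-math, which -Ofast implies) can legitimately change a program's results. A minimal check:

```c
/* Floating-point addition is not associative. This is exactly why a
 * compiler that reassociates expressions under fast-math style flags
 * can change results: either grouping is a "correct" sum, but they
 * are not the same double. */
int fp_add_associative(double a, double b, double c) {
    return (a + b) + c == a + (b + c);
}
```

Try it with a = 0.1, b = 0.2, c = 0.3: the two groupings differ in the last bit, while nice exact values like 1.0, 2.0, 3.0 happen to associate fine. Whether the reassociated answer is acceptable depends entirely on your program.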
Posting anon as I'm a unix sysadmin in an oracle shop.
We'll try not to hold that against you.
For any case I have ever heard of..
Note that some programs won't run correctly with some optimizations even on GenuineIntel.
So, as of last year, icc still had not been available for the majority of Linux's lifetime. And keep in mind, it would be foolish to jump to a compiler with little track record no matter how good it looks: it could vanish tomorrow, or quality could fall fast once it gets a foothold if it is a for-profit venture. Given that, it wasn't a viable choice on principle for the first few years (and after that because it proved to be problematic).
It says something about longevity and continued availability that a simple Google search for gcc's history turns up a detailed page giving the first release, a description of the development, and a roadmap for the future, while the Intel compiler's history disappears beyond 10 years back.
It has only been recently that there were GOOD options that weren't gcc. The Linux kernel has several tricky bits in it where the tiny details of the compiler matter, including how ambiguous bits of the spec are interpreted and how the compiler handles the bits the spec leaves to its option. So given a choice between two credible free compilers (clang and gcc) and a very much not-free compiler with a history of cheating (the AMD debacle is not the only instance), it's not a very hard decision to make.
Meanwhile, these days optimized compilation of the kernel isn't nearly as important in the embedded space as it once was. CPU performance there is growing by leaps and bounds, such that if a general-purpose kernel is appropriate at all, there is probably an embarrassment of CPU cycles available, at least to the point that the differences between gcc and icc won't be a deal breaker. The target app might or might not be another story; usually not.
That isn't to say that optimization is at all unwelcome, just that it takes 2nd priority to the compiler being stable and readily available.
Icc (and more often, ifort) are more popular in the HPC area for applications. Usually nobody worries too much about the kernel in that space either, since the big gains there are made in efficient coding at the source level rather than in compiler optimization, AND most of the calls into the kernel will be for hardware-bound I/O. Optimization matters more in the application, where the CPU will spend the vast majority of its time crunching data with (hopefully) good cache utilization.
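As an illustration of the kind of source-level win that dwarfs anything a compiler flag buys you, here is the classic loop-order example (a sketch, not a benchmark). Both functions compute the same sum; only one walks memory in cache-friendly order:

```c
#define N 512

/* Row-major traversal: the inner loop walks contiguous memory, so
 * every byte of each fetched cache line gets used before eviction. */
double sum_row_major(const double (*m)[N]) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += m[i][j];
    return s;
}

/* Same array, column-major: the inner loop strides N*sizeof(double)
 * bytes per access, so nearly every access touches a fresh cache
 * line. Same answer, dramatically worse memory behavior at scale. */
double sum_col_major(const double (*m)[N]) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += m[i][j];
    return s;
}

/* Quick self-check with a matrix of ones: both orders give exactly
 * N*N, since small-integer sums are exact in double. */
double demo_sum(int row_major) {
    static double m[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            m[i][j] = 1.0;
    return row_major ? sum_row_major(m) : sum_col_major(m);
}
```

No optimizer can be relied on to rescue the second version for you; getting the traversal order right in the source is where the real win lives.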
Did you note that by the time there was an icc, Linux was quite far along? Add to that: a Free OS depending on a very much not-free compiler, when a perfectly good Free compiler exists. Add in the limited targets for the Intel compiler and it's no deal.
It's amusing how you are lambasting me here when you are clearly hanging on a literal definition of a word to re-interpret the clear meaning of OP in this thread.
Suffice to say, when decisions were being made as to what compiler should be the gold standard for the Linux kernel, icc was nowhere to be seen and wouldn't have been up to the task anyway. A good thing too given the problems icc has had with non-Intel CPUs and the limited targets it supports compared to gcc.
And yet, when the CPU detection routines are patched out, the program runs at full speed and with no errors. Sounds like FUD to me.
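The pattern being complained about (a vendor-string check gating the fast path, instead of a capability check) boils down to something like this. To be clear, this is a simplified hypothetical sketch of the dispatch pattern, not Intel's actual runtime code:

```c
#include <string.h>

/* Hypothetical illustration of the two dispatch styles. A vendor-string
 * check routes non-Intel CPUs to the slow path regardless of what they
 * can do; a feature-flag check picks the path any CPU can execute. */
typedef enum { PATH_FAST_SIMD, PATH_BASELINE } code_path;

/* The objectionable style: the capability is ignored, the vendor
 * string decides. Patch this check out and full speed returns. */
code_path dispatch_by_vendor(const char *vendor, int has_sse2) {
    (void)has_sse2;
    if (strcmp(vendor, "GenuineIntel") == 0)
        return PATH_FAST_SIMD;
    return PATH_BASELINE;
}

/* The fair style: the vendor is ignored, the advertised CPU feature
 * flag decides which code path to take. */
code_path dispatch_by_feature(const char *vendor, int has_sse2) {
    (void)vendor;
    return has_sse2 ? PATH_FAST_SIMD : PATH_BASELINE;
}
```

If patching out the vendor check yields full speed with no errors, the hardware was capable all along; the first function was doing the gating, not any real limitation.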
The savings remain clear and obvious; they just can't take advantage of them. It does give them a real measure of the cost of sticking with those apps.
I believe the Cr is for chromium. Thus the correct translation is "bite my shiny metal ass".
Nah, they'll just demand that you do something anyway. They'll go ballistic when the cloud service has a glitch and be absolutely certain you could fix it if you just tried hard enough.