Bill Maher once described gerrymandering as "when politicians choose the voters."
Probably not original to Maher, but bullseye nevertheless.
The burden of proof for alleged discrimination is to stand up and say "I allege discrimination". Discrimination alleged. QED. You have it in writing, on the court transcript.
If Damore is clever enough to work at Google, he's probably clever enough to figure this out, so I predict 100% chance of success in this legal endeavour.
Unfortunately for Damore, the judge will probably award damages on the scale of one piping-hot mocha cappuccino (to be delivered upright, in a protective cup, with a spill-proof lid) and then assign the entirety of Google's legal costs to the plaintiff (Damore begins to faint), up to—but not exceeding—two hours of a discount public defender, one H-1B dry-cleaning bill (it's just a second day job to pay the bills for an underfunded moonlight startup), and two cross-town Ubers (Damore perks up again like he just received a Mia Wallace special spiked with Adderall).
Google, with a driven corporate culture of work to completion, immediately delivers the requisite coffee to Damore, who expresses no surprise. (The inventor of Google Glass cut his teeth packing a fully articulated pop-up espresso machine into a svelte, feminine attache case—on his seventh of fifteen interviews.)
Judge raps gavel.
Damore approaches bench, profusely thanks his lordship, and pretends to forget his fancy coffee on the judge's bench.
Judge: Well, well, look what the bailiff brought me
Judge glowers dramatically at Google's counsel (all twelve), Morse-coding "should I drink this?" with his bushy eyebrows.
One of the sharper members of Google's half-and-half gender-balanced legal dream team (who happens to be a man) recognizes the Morse code gesture (it's a man thing), and Morses back with a precisely calibrated locker-room shrug "go ahead, it's just coffee, we're not quite that petty, you old toad".
That was extremely respectful, all things considered. The legal costs award will barely pay for his morning aftershave, which he seems to consume at twice his previous rate now that Google hires twice as many women, not that it improves his win rate, on either score, not one little bit.
Mind you, Intel will presumably claim that KPTI and its equivalents on Windows and macOS fix the security problem and that any change in performance doesn't violate any sort of contractual agreement.
You are mentally modelling KPTI as an annoying one-time fix (with not much further burden), and not as a brittle work-around that requires permanent vigilance to pervasively deploy and enforce (Google's retpoline certainly falls into the class of permanent vigilance burdens).
Not to mention that your carefully performance-balanced server loads need to be re-engineered around this unpredictable new performance sink-hole. KPTI is not as simple as a CPU microcode update (which also has performance implications, though not usually broad and severe).
This whole mindset of "a patch, is a patch, is a patch" coincides with faulty logic in OS hardening. You see this opinion all the time that an attack mitigation that can be easily worked around isn't worth implementing.
That's the non-metabolic view.
The metabolic view is that every extra difficulty adds complexity to carrying out a successful attack. When we are trying to build something constructive, difficulty scales exponentially in the size of the specification, as we all know. When we are trying to build something destructive (a rootkit), this same scaling term is now magically rendered sub-linear. (There might actually be a grain of truth here, due to the inherent asymmetry of success and failure, but that should hardly be taken for granted.)
We try to stack up ASLR and sundry related obstacles to tilt the balance in our favour, then along comes Intel, who dumps a permanent burden on the white hat side of the fence.
How many hundreds of millions of devices out there are never going to receive a patched OS that's compatible with the existing, validated device configuration?
Well, of course, the newest version of the OS is a strict superset of the old OS (though macOS is not included on this list by virtue of superior Design), and you should stay up to date anyway.
I've been in this industry since the 8080. Enough already with the Stockholm syndrome.
Just imagine Boeing retrofitting your turbofan with 20% less maximal thrust due to faulty wing design. So the airlines need to stop flying Boeing into Denver on hot days? (Surely your air service scheduling system can manage one more tiny constraint.) What's the big deal? Problem solved.
I won't argue he's wrong, but I think as fast as CPUs change you'd have to have across-the-board reductions in workload capacity by a significant number (i.e., the 30% touted initially) to be able to claim harm and justify a recall.
Man, have you ever drunk the software caveat-emptor Kool-Aid whole cloth.
The specification for hardware and software is not that it will work as specified.
The specification is that, with sufficient user cleverness (and sweat streaming off a bulging forehead, and unbroken vigilance) it is possible to almost get the hardware or software to attain its putative performance ratings without catching one or more incurable black hat diseases along the way.
This ethos dates from the 1980s and 1990s when the upgrade cycle (both hardware and software) was three years at most, and the whole drama could barely play out in that time frame.
I bought my Sandy Bridge Xeon in January 2013 and I'm not sure I'd take a free swap to Intel's newest equivalent, because the hassle of swapping over probably exceeds the expected gain (my average uptime, when the power company isn't replacing faulty-from-birth concrete power poles, has averaged well over a year, and there hasn't been a single hardware or OS glitch that I know about since I eliminated one suspect hardware card, back in the first few months).
I make fairly heavy use of BSD jails, and I've always regarded process isolation as more important than raw performance.
Punting isolation from hardware to software is a Sisyphean burden. One important binary on your system that wasn't compiled with the right compiler (on the right day) and the game is lost. Worse, if there's a 'retguard' gadget variant of the attack, you also have to use the right linker (on the right day), because security is (potentially) no longer link-order independent. Lovely, lovely, isn't that lovely.
This giant, amorphous, hard-to-discern attack surface is now my problem because "durf durf, complicated, too many transistors" and "we can't read widely circulated papers pointing out that one of our optimizations is full of shit" and "caveat-emptor grandfather clause—so suck it!".
If there was a paper from long ago pointing this out (as I've seen stated in this thread, but haven't checked out myself), then there's going to be an MFT of legal discovery pointed at Intel to find out what they knew and when they knew it.
The GHz wars were a great thing, because it rapidly erased your worst corporate blunders.
The beleaguered end-user responded to constant threat by refusing to buy computer systems with CPUs and memory soldered to the mainboard (back then it was a bug in an ASUS northbridge—newly introduced when they bumped the much-loved board from an A to B revision—that caused the disk controller to scribble randomly over my hard drive, taking out three different OS boot partitions in a single bound).
Once vendors start soldering CPUs and memory to main boards, it's like, so long puberty, you're apparently an adult now.
Adults in every other industry recall or replace products that don't work at least marginally to original specifications (without forcing the end-user to jump through a thousand flaming hoops).
It's high time to set aside childish ideas that this technology (alone) can demand unlimited user backflips before any restitution is warranted or justified.
I don't have the slightest problem with Theo standing up for his principles, but to do so without expecting there to be some rather obvious blowback should there be a similar situation in the future is rather naive, to say the least.
What evidence do you have that Theo didn't expect this to play out exactly as it has?
Is he obligated to say "I saw this coming" if he saw this coming?
You seem to think that Job #1 in these situations is maintaining an unbroken signalling posture that you're truly inside the loop.
There's a name for that.
I got to thinking about Google's clever Retpoline from the other day.
Google Says CPU Patches Cause 'Negligible Impact On Performance' With New 'Retpoline' Technique
The problem is, this is not invariant under peephole optimization. These instruction sequences need to be handled by the compiler through a very literal-minded end-game code generation pass.
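For reference, the sequence in question looks roughly like this (a sketch from memory of Google's write-up and the Linux kernel's __x86_indirect_thunk_* helpers, written as GNU C top-level asm; the symbol and label names are mine, not theirs):

    /* Sketch of a retpoline-style thunk for "jmp *%r11"; illustrative only.
     * The whole point is that this exact shape has to survive into the final
     * binary, so no later peephole pass may "simplify" the call/ret pair away. */
    __asm__(
        ".text\n"
        ".globl retpoline_r11_sketch\n"
        "retpoline_r11_sketch:\n"
        "   call 1f\n"             /* pushes the address of the capture loop */
        "2: pause\n"               /* speculation following the return predictor lands here... */
        "   lfence\n"
        "   jmp 2b\n"              /* ...and spins harmlessly until squashed */
        "1: mov %r11, (%rsp)\n"    /* overwrite the return address with the real target */
        "   ret\n"                 /* the architectural path ends up at *%r11 */
    );

The speculative path parks in the pause/lfence loop; only the retired path reaches the real target, which is why the whole trick evaporates if a later optimizer collapses the sequence back into a plain indirect jump.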
Which got me to thinking about RETGUARD gadgets.
I know, both of those sites are horrible, but Google fails me here.
Are speculative gadgets a problem here? If so, Google's clever patch is going to need a sump pump bolted on the side.
And then you get into the whole problem of deterministic compilation in order to be certain that the executable you build contains the necessary mitigations (or some tricky post-compile analysis I sure don't wish to develop myself).
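To give a feel for how crude that post-compile analysis gets, here is the sort of thing I mean, under the simplest possible assumption (a raw byte scan, no disassembly): count 0xFF /2 and 0xFF /4 opcode pairs, i.e. indirect call/jmp instructions, which a fully retpolined build should mostly no longer contain. Purely illustrative; x86 is variable-length, so data bytes will produce false positives.

    /* Crude post-build heuristic: scan a binary for FF /2 (call r/m) and
     * FF /4 (jmp r/m) byte pairs, i.e. indirect branches that a retpolined
     * build should (mostly) lack. Not a disassembler; it only illustrates
     * the shape of the verification problem. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <binary>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) {
            perror("fopen");
            return 1;
        }
        long suspects = 0;
        int prev = 0, c;
        while ((c = fgetc(f)) != EOF) {
            int reg = (c >> 3) & 7;                /* ModRM "/digit" field */
            if (prev == 0xFF && (reg == 2 || reg == 4))
                suspects++;                        /* indirect call or jmp */
            prev = c;
        }
        fclose(f);
        printf("possible indirect branches: %ld\n", suspects);
        return 0;
    }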
What a giant mess.
So you decide to speculate a future instruction.
It happens to be a load.
The address is [ebp+eax]. A recent instruction had the same address field, so you speculate that it remained the same.
Now you need to translate the address. The translation might be in the TLB, but you check, and for some reason it isn't.
So you decide to speculatively trigger a TLB load.
Finally, you get a physical address back. A previous write instruction is not yet translated, but it seems unlikely it will translate to the same address, so you decide to speculate the load and you make a cache line request from L1.
It might be in L1, but it isn't. So you decide to speculate again, and request it from L2. Not in L2, either, nor in L3, so finally you speculate the load all the way to external memory. When the cache line returns, you speculatively cache this at all levels. Then you speculatively store the value into the target register. The final step was the least dangerous, because you can dump this later, no harm to the abstract state. But the concrete side effects on the TLB and the three layers of cache are not so easily reversed. In theory, the concrete state doesn't leak into the abstract state. Because we simply don't like to think about time (time, above all things, being never simple; hint: functional programming has no time, only progress).
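And the time domain is exactly where those irreversible side effects get read back out. A toy probe, assuming an x86-64 GCC/Clang with the usual intrinsics available (the cycle threshold is machine-dependent and made up here):

    /* Toy flush-and-time probe: the only point is that "concrete" cache state
     * is observable through load latency, which is the entire leak. */
    #include <stdint.h>
    #include <x86intrin.h>          /* _mm_clflush, __rdtscp (GCC/Clang, x86-64) */

    static void flush_line(const void *p)
    {
        _mm_clflush(p);             /* evict the probe line before the suspect code runs */
    }

    static uint64_t timed_load(const volatile uint8_t *p)
    {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*p;                   /* the load whose latency we measure */
        return __rdtscp(&aux) - t0; /* small delta: line was cached, i.e. somebody touched it */
    }

    static int probably_touched(const volatile uint8_t *p)
    {
        return timed_load(p) < 80;  /* threshold in cycles; machine-dependent, illustrative */
    }

Nothing in the abstract state changed; the wall clock did.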
Not all speculative architectures are created equal. There are many opportunities for an architecture to Just Say No.
With cache coherence, you have the MESI protocol (and its bewildering shoe full of second cousins).
One could apply the same concept of "exclusive" to the page tables, an exclusively mapped page being one mapped only into the current process and security context. If TLB speculation hits a different kind of beast, abandon speculation. Same thing with cache fill. Concrete side effects thereby only accrue from speculation to exclusive resources. Share-nothing usually solves most problems in computer science (except performance, which is mainly defined in the time domain).
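Back of the envelope, in code form; every type and name below is invented (no real microarchitecture exposes anything like this interface), but it pins down what "abandon speculation at the first non-exclusive resource" would mean:

    /* Hypothetical gate for the "speculate only into exclusive state" rule.
     * All names and states are illustrative inventions. */
    #include <stdbool.h>

    typedef enum { LINE_MODIFIED, LINE_EXCLUSIVE, LINE_SHARED, LINE_INVALID } mesi_state;

    typedef struct {
        bool tlb_entry_present;   /* translation already resident: no speculative TLB fill needed */
        bool page_exclusive;      /* page mapped only into the current process and security context */
        mesi_state line_state;    /* MESI state of the line the load would touch */
    } spec_load_ctx;

    /* Allow the speculative load only when its concrete side effects are
     * confined to resources no other security context can observe or time. */
    static bool may_speculate_load(const spec_load_ctx *ctx)
    {
        if (!ctx->tlb_entry_present && !ctx->page_exclusive)
            return false;   /* would plant a translation visible across contexts */
        if (ctx->line_state == LINE_INVALID)
            return false;   /* a fill would evict and populate lines another context can time */
        if (ctx->line_state == LINE_SHARED)
            return false;   /* present, but not exclusively ours: share nothing, stay conservative */
        return true;        /* exclusive or modified, and the translation is safe: proceed */
    }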
I'm going to abandon the back of my envelope here. One has to think really damn hard to take this to the next logical level, and frankly, I don't have a damn to spare right this very minute.
But please, advance the conversation beyond:
[_] has speculation
[_] does not have speculation
Because that is Intel's diabolical trap, for as long as their PR department can continue to get away with tugging their wool in broad daylight.
Intel seems to be the one cutting corners - for decades. You do remember the FDIV and F00F bugs in early Pentiums?
I recall the FDIV bug quite well, and it had nothing to do with cutting corners. The design of the circuit was correct. In the transfer to manufacturing, some relatively insignificant bits in a hardware lookup table were truncated erroneously. The rarity of the failures allowed the mishap to escape detection in the validation phase.
Intel's test probably should have been stronger in this area, but that's an awfully easy thing to say in hindsight concerning the validation of extraordinarily complex designs.
Nostradamus: "There's a horrible bug in this design, and if you double your test coverage from stem to stern, you'll probably find it."
Intel: "Gee, thanks, Nostradamus. Invest another $10 million and wind up a year late. I think we'll pass on the engineering, and expand our PR team by one full-time professional bullshitter."
Nostradamus: "So be it. For what it's worth, I also wrote this nice quatrain on the horrors of speculation."
Intel: "We'll pass."
Nostradamus: "No, you won't."
Intel has been many things over the years (with a weird, clockwork heel-turn), but skimping on validation is pretty much the last thing on my list of Intel malfeasance.
general crisis-management ethos
I recall that as a great read. From my own notes:
Big numeric coprocessor redesign as part of the Pentium. This led to the world-famous Pentium FDIV bug. He claims that transcendentals were easy to test against existing software, but most software took extraordinary efforts to avoid division, so division coverage was extremely thin at that testing layer by comparison.
I think that discussion also covers the i860, a litany of terror.
The Intel i860 (also known as 80860) was a RISC microprocessor design introduced by Intel in 1989.
It was one of Intel's first attempts at an entirely new, high-end instruction set architecture since the failed Intel iAPX 432 from the early 1980s. It was released with considerable fanfare, slightly obscuring the earlier Intel i960, which was successful in some niches of embedded systems, and which many considered to be a better design. The i860 never achieved commercial success and the project was terminated in the mid-1990s....
On paper, performance was impressive for a single-chip solution; however, real-world performance was anything but.
One problem, perhaps unrecognized at the time, was that runtime code paths are difficult to predict, meaning that it becomes exceedingly difficult to order instructions properly at compile time. For instance, an instruction to add two numbers will take considerably longer if the data are not in the cache, yet there is no way for the programmer to know if they are or not. If an incorrect guess is made, the entire pipeline will stall, waiting for the data.
The entire i860 design was based on the compiler efficiently handling this task, which proved almost impossible in practice. While theoretically capable of peaking at about 60-80 MFLOPS for both single and double precision in the XP versions, hand-coded assembly managed to get only about 40 MFLOPS, and most compilers had difficulty getting even 10 MFLOPS.
The later Itanium architecture, also a VLIW design, suffered again from the problem of compilers incapable of delivering optimized (enough) code.
Another serious problem was the lack of any solution to handle context switching quickly. The i860 had several pipelines (for the ALU and FPU parts) and an interrupt could spill them and require them all to be re-loaded. This took 62 cycles in the best case, and almost 2000 cycles in the worst. The latter is 1/20000th of a second at 40 MHz (50 microseconds), an eternity for a CPU. This largely eliminated the i860 as a general purpose CPU.
Just think about that for a moment. The same company designed the i860 and the Itanium.
Intel ISA designs are traditionally a seething cauldron of shallow expediency, hubris, paranoia, and greed. And they've never been able to decide whether the software layer is a curse or a blessing. Why else do you think they in-sourced MINIX at the chip level, and told no-one?
But they've always been competent enough at process and validation to survive their worst impulses everywhere else in the value chain.
Recently, I got modded off-topic for failure to spell out the obvious about the ongoing rat race between sugar and alcohol as the worst metabolic offender. Probably because I confused a moderator by including long quotations supporting my position, as I've done here.
On-topic: When someone else ties a short bow around your thinking for you.
Off-topic: TMI / TMW (too many words).
Those workloads with significant performance losses are more or less completely artificial, e.g., average users don't create hundreds of thousands of files day in and day out, and even in this case only SSD disks are affected. Considering that SSD disk operations are sometimes several orders of magnitude faster than those for spinning disks, this performance loss is still nothing to worry about.
So it won't affect my Poudriere build server at all. Or my Jenkins build server. Or my Zabbix network monitor.
Nice to know.
"Average users" are generally mentioned right before the poster goes ape-shit on pancake empathy. I mean, the average user doesn't hit a data center more than two or three times per day?
Does anyone know if Google's public cloud is CPU-partitioned from their private cloud?
Google won't be impressed if they lose a full datacenter's worth of peak compute due to less work generated while operating in proximity to the thermal wall.
Alcohol below the 'hangover' level is about as bad for you as sugar.
Sugar: The Bitter Truth — 2009, 7.5 million views
If you look up Robert Lustig on Wikipedia, nearly two-thirds of the studies cited there to repudiate Lustig's views were funded by Coca-Cola.
Many serious people now believe that excess fructose (which is metabolized in the liver through much the same pathway as ethanol) is the largest single causal component to the metabolic syndrome epidemic, which is itself one of the largest single causes of runaway healthcare costs in the United States.
He interviewed hundreds of current and former food industry insiders — chemists, nutrition scientists, behavioural biologists, food technologists, marketing executives, package designers, chief executives and lobbyists.
What he uncovered is chilling: a hard-working industry composed of well-paid, smart, personable professionals, all keenly focused on keeping us hooked on ever more ingenious junk foods; an industry that thinks of us not as customers, or even consumers, but as potential "heavy users".
As head of the Food and Drug Administration, Dr. David A. Kessler served two presidents and battled Congress and Big Tobacco. But the Harvard-educated pediatrician discovered he was helpless against the forces of a chocolate chip cookie.
Foods rich in sugar and fat are relatively recent arrivals on the food landscape, Dr. Kessler noted. But today, foods are more than just a combination of ingredients. They are highly complex creations, loaded up with layer upon layer of stimulating tastes that result in a multisensory experience for the brain. Food companies "design food for irresistibility," Dr. Kessler noted. "It's been part of their business plans."
Sugar is the tongs and the hammer.
As Lustig once said (from memory): given the choice between sugar and alcohol, I'll take alcohol, because you can only drink yourself under the table once a day.
At most, it is a recognition that since one doesn't have a degree in some science that deals with climate, that punting to experts who do is a rational choice.
And all the massed & collective & thoroughly reiterated human experience of boy scouts & girl scouts, experts & executives, naked gentry & landed gentry claiming slightly more than they can reasonably chew to make a name for themselves and get ahead in life
Expertise is just ambition with a higher thread count.
Supposing the thread is visible, rather than risible (for which purpose one should always keep a child on hand whose brain & eyesight is not yet damaged by that mercury stuff).
There are three cups, upside down on the table, each one labelled on the inside.
The first cup is labelled "avoidance", the second cup is labelled "evasion", and the third cup is labelled "shell game".
And they all wing around the table so rapidly only a meth-addicted Rain Man could tell them apart from any intermediate camera angle.
I find it hard to believe that a virtual memory change will result in a 5-30% slowdown for Intel processors. Maybe for a few extremely specific (likely edge-case) tasks, but if there was a legitimate 5-30% performance decrease, you can bet there would be a far different solution in the works that would suitably fix the problem.
Of course, if microcode update fails, there's always the hail Mary unicorn ass-pull.
I assure you, every Intel employee is kneeling on the carpet this very instant, facing the most auspicious astrological direction like tens of thousands of well-aligned human magnetic domains, while praying in unison like telepathic Tourettic cuckoo clocks to any qualified god who will take their call.
If you've been harbouring a time machine, looking for optimal market conditions, I suggest you pull it out of your
What a fucking nightmare.
Being a fractal, it's also the monthly, weekly, daily, hourly, and minutely coin flip
Gerontologists call this the paradox of old age
The anti-paradox of the funding cycle is to refer to anything that puzzles a Mayfly as a paradox.
Young people are stressed because Darwin cares.
After Darwin ceases to care, there's little remaining reason to filter the world through the mindset of an eww, gross Valley Girl (old people must hate life because creepy).
Unless you want to fund a silly grant application.
Then you haul out the word "paradox" to show that 20 years of formal education can't fix stupid.
You do not have mail.