Over the last decade flu deaths per year have ranged between 12k and 61k... COVID-19 has killed people at 8 times the rate of our very worst yearly flu season.
It's even worse than that. Those numbers for the flu are actually model estimates of total deaths, correcting for the observation rate. On the other hand, for COVID-19 we have a count of confirmed patients who died in hospitals. The apples-to-apples comparison is to actual confirmed flu deaths - just 3,448 to 15,620 per year in the last 6 seasons.
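To make the gap concrete, here is a rough sketch (in Java) of the correction factor implied by the figures above. Pairing the low season with the low estimate and high with high is an assumption for illustration only:

    public class FluCorrectionFactor {
        public static void main(String[] args) {
            // Confirmed flu deaths vs. model estimates, as quoted above.
            double[] confirmed = {3_448, 15_620};   // confirmed deaths (low, high season)
            double[] estimated = {12_000, 61_000};  // model-estimated deaths (low, high season)
            for (int i = 0; i < confirmed.length; i++) {
                System.out.printf("model correction factor: %.1fx%n",
                                  estimated[i] / confirmed[i]);
            }
            // Prints roughly 3.5x and 3.9x: the estimates run several times
            // the confirmed counts.
        }
    }

Whatever the exact season-to-season pairing, the flu estimates are several times the confirmed counts, so comparing them against confirmed COVID-19 deaths understates the true ratio.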
Accepting Google's invitation to upend that system by eliminating copyright protection for creative and original computer software code
Oracle and their friends continue to try to gaslight the court with this strawman. No one - least of all Google - is trying to eliminate copyright protection for software. Google is arguing for the status quo: interface definitions, like forms in general, are not copyrightable.
Oracle's position is that they alone can give the name "max" to the function that takes two integers and returns the larger of them, because "int max(int a, int b)" is in the Java API.
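A minimal sketch of the distinction at issue (MathUtil is a hypothetical class name; in the real Java API this declaration lives on java.lang.Math):

    public final class MathUtil {
        // The declaration - name, parameter types, return type - is the
        // "interface definition": the part Google argues is uncopyrightable,
        // like a blank form.
        public static int max(int a, int b) {
            // The body is the implementing code, which nobody disputes is
            // copyrightable. Any correct body would do, e.g. an if/else.
            return (a > b) ? a : b;
        }
    }

The declaration is the only way to express "a function named max taking two ints and returning an int"; the body below it is where the creative expression lives.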
Drivers also idle at random spots in the road, generating more congestion, and they tend to be unfamiliar with local areas, which leads them to drive at inconsistent speeds. I also suspect that in line/shared/carpool mode the app directs drivers out of their way through busier areas in order to fish for another passenger.
I've also talked to quite a few drivers who commute into San Francisco from Sacramento, Stockton, and other relatively distant areas; these jobs generate both local and commute traffic.
Results matter more than a peer's opinion of the results.
You misunderstand the process. Peer opinions are based on the results. They are also based on years of study that lead to an appreciation of which results are actually 1) interesting and 2) useful. Those are crude words for the distinction, but to illustrate: if AlphaFold were to work perfectly, it would only be useful. It wouldn't improve understanding, and so wouldn't advance science beyond making some specific current task potentially easier. (Even if it might be really great for engineering.)
If the training set contains all the magic rules
There's good reason to think this training set doesn't contain all the magic rules. What the AlphaFold team should do is use structures solved before 2005 to train their model, and structures with novel folds solved after 2005 to test. If they can achieve very high absolute performance in that context, all critics will be silenced.
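A sketch of what that split might look like (the Structure record and its field names are hypothetical, not anyone's actual data model; only the 2005 cutoff comes from the proposal above):

    import java.util.List;

    // Hypothetical representation of a solved structure.
    record Structure(String pdbId, int yearSolved, boolean hasNovelFold) {}

    class HistoricalSplit {
        // Train only on structures solved before the cutoff...
        static List<Structure> trainingSet(List<Structure> all) {
            return all.stream().filter(s -> s.yearSolved() < 2005).toList();
        }

        // ...and test only on novel folds solved after it.
        static List<Structure> testSet(List<Structure> all) {
            return all.stream()
                      .filter(s -> s.yearSolved() >= 2005 && s.hasNovelFold())
                      .toList();
        }
    }

A model that scores well on that test set can't be leaning on memorized fold families, because none of the novel folds existed at training time.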
I wonder how much the network has been trained to recognize existing (evolutionarily related) protein families and their patterns, versus folding a genuinely new, random sequence.
That's why they should use the historical validation approach! Train on structures solved before 2005, then predict only novel folds solved after 2005. Perform well in that context and I'll be impressed.
The former may be just as useful in practice but may teach us a bit less about the mechanics of folding.
Can the DeepMind approach, unlike the physics-based and statistical-potential methods, ever contribute to understanding how proteins fold? IMHO that's an open question, and one that's critical to their presumably forthcoming publication. For example, do their feature weights say something interesting about cation-pi interactions? Rosetta infamously ignores cation-pi because of overfitting concerns (even though cation-pi can be very structurally important).
Google's team of 10 people produced a better result with 2 years of work than the entire academic field has been able to produce in the last 30
That's not a correct reading of the results. First, previous efforts are based on putative understanding of how proteins fold. Obviously that understanding is incomplete, or the physics-based methods would perform better. (Even statistical potentials like Rosetta's are physics-based in important ways.) Second, DeepMind isn't even on the radar in the server component of CASP. The server competition is intrinsically more difficult because it requires robust software that isn't highly dependent on user parameters. Rosetta, for example, is ~20th in the general competition but 4th in the server competition.
Finally, DeepMind has not demonstrated the historical performance of their approach. They should test how well novel protein folds solved after, e.g., 2005 are predicted when training only on structures solved before 2005. To the extent that Rosetta works, it works in exactly that setting. In fact, one of its first results was a novel fold (Top7).