Comment Re:Agreed: Reddit is badly designed. (Score 1) 108
They've just (as in the last few days) implemented a "Compact" mode. I get 17 headlines vs 9 in "Classic" mode and 2 in "Card" mode.
It gets better. Tesla thinks that they're responding too quickly to be paid the real price of electricity.
What about rating of experiments for peer review, revisions and refinement, requirement lists, step-by-step instructions for repeatability, ease of access, and simple language for people who don't find academia accessible? Does something like this exist already?
For methods and protocols, there's protocols.io.
I don't think these researchers dug deep enough into the history of this. For those who are interested, here is the reddit post:
https://www.reddit.com/r/MapPorn/comments/15mwai/the_longest_straight_line_you_can_sail_almost/
Here's another reddit thread, from five years ago, that he cross-posted to:
Apparently he learnt it from a Wikipedia article, where it is also reported (without citation) that the longest distance only on land is 13,573 km (8,434 mi).
The edit was added with this revision by Wikipedia user Muh1974 (who doesn't have a Wikipedia user page). The Talk page around that time has unreferenced "I remember reading somewhere" speculation about the longest great circle. My guess is that Muh1974 checked (somehow) that this path was valid, and had a distance at least comparable to the other ones mentioned in the Wikipedia article, but that's where the trail goes cold for me.
the bug "resulted in our secure internal logs recording plaintext user passwords when users initiated a password reset."
"We have corrected this, but you'll need to reset your password to regain access to your account."
Er... are you really sure that this has been corrected?
Why do you want to make X do Y? Well, because I want that. What does it matter to you?
When people ask for help on a specific task, it's possible that the thing they actually want to do is different from the thing they have asked for help on. Providing context for why they want to do that makes it easier to judge if this is happening, and can potentially save a lot of time and frustration in the future.
It isn't. Ignoring the fact that you're a chimera and a mosaic (which means a single body can carry multiple combinations of those markers), we know from genetic genealogy that 111 markers are sufficient to uniquely identify the group comprising every relative up to three steps away (so third cousins, great-grandparents, etc.).
Fine, if you don't like my 50 common variants number, then I'll suggest 120 variants: 111 [oddly-specific] to get down to familial group, and another 10 or so to identify a single person within that group. Whether it's 50 or 500, that's still well within the realm of cheap targeted SNPchip technology.
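As a back-of-envelope sanity check on those numbers, here's a sketch of the discriminating power of k biallelic variants, under the idealised assumptions that loci are independent and in Hardy-Weinberg equilibrium (real linked variants carry less information, which is why rarer or linked markers push the count up):

```python
import math

def match_prob_per_locus(maf=0.5):
    # Hardy-Weinberg genotype frequencies for one biallelic SNP
    p, q = maf, 1 - maf
    genotypes = [p * p, 2 * p * q, q * q]
    # probability that two random people share the genotype here
    return sum(g * g for g in genotypes)

def loci_for_uniqueness(population=8e9, maf=0.5):
    # smallest k such that the expected number of matching *pairs*
    # in the population (birthday bound) drops below one
    m = match_prob_per_locus(maf)
    pairs = population * (population - 1) / 2
    return math.ceil(math.log(1 / pairs) / math.log(m))

print(loci_for_uniqueness())         # → 46 for ideal 50/50 variants
print(loci_for_uniqueness(maf=0.1))  # rarer variants carry less information
```

With ideal common variants the answer lands in the ~50 range claimed above, and with rarer (10% frequency) variants it lands near the 120 figure; either way, well within SNPchip territory.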
You'd need far, far more markers to uniquely identify you.... You'd need full genome sequences from multiple collection spots across the body, plus sequencing of the sample, for that.
There's a big difference between uniquely identifying someone, and fully describing their genome. I agree that a full description of a person's genome would require extensive whole-genome sequencing, but that's not necessary for forensic purposes. For monozygotic twins it gets a bit trickier, but for any other comparisons uniqueness is less than half an hour of nanopore sequencing away:
Rapid re-identification of human samples using portable DNA sequencing
At roughly $10,000 a pop, plus borrowing a computer powerful enough to determine the point of intercept from the nearest sample, you're looking at more than most police departments have in budget even for coffee and doughnuts.
Moving away from SNPchips, 40X genome coverage can be done for less than $1000 now.
If individual-level genetic data is available (as is the case for at least 23andme), then it can be de-anonymised.
Dr. Erlich also identified a new genetic privacy loophole that allows inferring surname of individuals from simple Internet searches using genetic data.
http://datascience.columbia.ed...
If you have individual level genetic data, fewer than 50 common variants should be enough to uniquely identify a person.
Every one of your citations is based on speculation, not real historical data.
*cough*
Those predictions are based on the extrapolation of past historical trends. In particular, Tony Seba has been tracking solar since 1976:
https://www.youtube.com/watch?...
Just to remind you, all of the things that I've talked about - batteries, EVs, self-driving, and solar - are technologies. The adoption curve for technologies is never linear. When you read the reports from the IEA, from the OECD and so on, they will tell you, "One percent EV penetration, 2%, 3%, 4%, 5%, right? And maybe at some point in 2040 it'll get to 10%, or whatever; same thing for solar." But whether they do it on purpose, or they don't understand technology, I don't know.
Ramez Naam compares forecasts that the IEA has made, and points out that the IEA is linearly projecting the future of solar, whereas solar is clearly progressing exponentially, and has been for a long time:
Ooh, youtube citations! I can do that too:
Solar is becoming cheaper than all other alternative energy sources:
https://www.youtube.com/watch?...
He specifically talks about nuclear here:
https://www.youtube.com/watch?...
Tony Seba suggests that personal rooftop solar will eventually be cheaper than any grid supply, even a fantastical free energy supply, because its cost will drop below the cost of transmission:
https://www.youtube.com/watch?...
OTOH, it's useful to those who want to do unrecorded transactions.
Bitcoin is the opposite of unrecorded. The history of bitcoin transactions is stored, immutably, as an unencrypted public record on the computer of everyone who runs a full Bitcoin node. As soon as a Bitcoin transaction is deanonymised by linking it with any other dataset, the entire bitcoin transaction history of both parties can also be linked to that dataset.
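The linking step is just graph traversal over the public ledger. A minimal sketch, with an entirely made-up toy ledger (real chain analysis works on actual transaction inputs/outputs and is far more sophisticated about change addresses and mixing):

```python
from collections import deque

# Hypothetical toy ledger: each transaction lists its input
# and output addresses, as the public blockchain does.
ledger = [
    {"txid": "t1", "inputs": ["A"], "outputs": ["B", "C"]},
    {"txid": "t2", "inputs": ["B"], "outputs": ["D"]},
    {"txid": "t3", "inputs": ["E"], "outputs": ["F"]},
]

def linked_addresses(seed):
    # BFS from one deanonymised address across every transaction
    # it touches, collecting all addresses reachable through them
    seen, queue = {seed}, deque([seed])
    while queue:
        addr = queue.popleft()
        for tx in ledger:
            if addr in tx["inputs"] or addr in tx["outputs"]:
                for a in tx["inputs"] + tx["outputs"]:
                    if a not in seen:
                        seen.add(a)
                        queue.append(a)
    return seen

print(sorted(linked_addresses("C")))  # → ['A', 'B', 'C', 'D']
```

Identify the owner of "C" once, and every address in that connected component is linked to the same dataset.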
The MinION is really good at finding structural variants (i.e. large-scale changes in DNA sequence), but not so good for single point variants (accuracy for single base-called sequences is 85-95%, getting to about 99% in consensus; accuracy is much higher at the signal level, but there are no well-developed programs that do variant matching/detection at the signal level).
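The jump from 85-95% single-read accuracy to ~99% consensus accuracy is roughly what a majority vote over independent reads predicts. A simple binomial sketch (optimistic, since real nanopore errors are partly systematic rather than independent):

```python
from math import comb

def consensus_accuracy(p, n):
    # probability that a simple majority vote over n independent
    # reads calls a base correctly, given per-read accuracy p
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(round(consensus_accuracy(0.90, 1), 3))  # → 0.9, single read
print(round(consensus_accuracy(0.90, 5), 3))  # → 0.991, 5x coverage
```

Even modest coverage pushes a 90%-accurate read past 99% consensus, consistent with the numbers above; the systematic error component is what keeps real consensus from improving indefinitely with depth.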
I try to encourage people to use the first $1000 for a pilot run, just to see if the MinION is suitable for what they want.
Finding causal genetic variants is tricky, particularly when multiple places in the genome can influence whether or not a disease appears, and people can look normal while still carrying those variants. A common approach is to compare the patient's genome with known unaffected genomes; finding a few other people with the same condition makes the search a lot easier, because then a large amount of genetic variation can be excluded using the "cases" as well as the control genomes.
It can be done, and the resources required to enable anyone to do it are publicly and freely available (most importantly is probably R/Bioconductor and the 1000 Genome project); just don't expect it to be easy.
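The core filtering idea reduces to set operations on variant calls. A toy sketch with hypothetical variants (real pipelines work on VCFs and handle genotypes, quality, and inheritance models):

```python
# Hypothetical variant calls per individual ("chrom:pos:alt" strings)
cases = [
    {"1:1000:A", "2:500:T", "7:42:G"},
    {"1:1000:A", "3:777:C", "7:42:G"},
]
controls = [
    {"2:500:T", "3:777:C"},
    {"1:1000:A"},  # an unaffected carrier
]

# Naive filter: variants shared by every case, absent from all controls.
candidates = set.intersection(*cases) - set.union(*controls)
print(candidates)  # → {'7:42:G'}
```

Note how the unaffected carrier illustrates the trap mentioned above: "1:1000:A" is shared by both cases but gets excluded because a control carries it, which is exactly why strict case/control filtering can miss causal variants in recessive or incompletely penetrant conditions.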
Not necessarily. If the unit length of the repeat is greater than the fragment length (I've seen tandem repeats with unit lengths of 40 kb), then the region will not be detected as repetitive.
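A toy check of that condition, under the simplifying assumption that a repeat only looks repetitive to an assembler when one fragment can span at least two copies of the unit (real detection also uses coverage depth and read pairing):

```python
def repeat_detectable(unit_len, fragment_len):
    # a single fragment must contain two full copies of the
    # repeat unit for the repetition itself to be visible
    return fragment_len >= 2 * unit_len

print(repeat_detectable(40_000, 300))      # → False: short reads, 40 kb unit
print(repeat_detectable(40_000, 100_000))  # → True: ultra-long read
```

With a 40 kb unit, even a 10 kb long read never sees the second copy, so the region assembles as if it were unique sequence.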
Or just Cassava-killing viruses:
How a TED Fellow is working to save African cassava from whiteflies
The benefit of creating scaffolds first from long reads is that it's a lot easier to capture regions where there is a Very-long Complex Tandem Repeat (VeCTR). These regions are collapsed in scaffolds assembled from short reads.
What good is a ticket to the good life, if you can't find the entrance?