Who became the lead developer after Bond killed him? And wasn't Zorin more of a hardware guy?
Ok, this is AC so no one will read this (thanks DICE for auto modding AC to 0, sigh)
Wasn't AC started at 0 before DICE?
There is nothing before the what. Or is that what you meant?
But are the boring facts of my daily existence worth posting online in the first place?
I believe the GP was asking: What value is Google providing to the user?
Wouldn't it be a whitelist instead of a blacklist?
I was aware that tex4ht could produce HTML output. I wasn't aware (nor does the man page mention) that it can produce doc output.
I'm quite serious. Converting a LaTeX file to PDF is just typing pdflatex foo.tex. If you use pstricks, do latex foo.tex; dvips foo.dvi -o; ps2pdf foo.ps. The -o option tells dvips to write to a file rather than print; the default output name is obtained by replacing the .dvi extension with .ps.
I'm sure I've rolled my face around on my keyboard and produced a Perl script
Why are we even holding onto PDFs, anyways?
Can you even generate Word docs from LaTeX files?
What we were talking about is whether r1, r2, and r3 can all be normally distributed. The reason being that people investigating the size, weight, and surface area of berries may *assume* (appealing to the Central Limit Theorem) that the quantity they're investigating can be modeled adequately by a normal distribution.
But the Central Limit Theorem is a claim about the distribution of sample means as the sample size gets larger; it says nothing about the distribution of the underlying quantity itself.
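For what it's worth, that distinction is easy to see in a quick simulation. This is only a sketch with distributions and sample sizes of my own choosing: individual draws from a skewed (exponential) distribution stay skewed, while means of repeated samples from it cluster symmetrically around the true mean.

```python
import random
import statistics

random.seed(1)

# Individual values from a heavily skewed (exponential) distribution.
values = [random.expovariate(1.0) for _ in range(10_000)]

# Means of many samples of size 50 from the same distribution --
# this is what the CLT actually makes a claim about.
sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(50))
    for _ in range(2_000)
]

# Raw values are skewed: mean well above median.
print(statistics.fmean(values), statistics.median(values))

# Sample means cluster tightly and symmetrically around the true mean, 1.0.
print(statistics.fmean(sample_means), statistics.median(sample_means))
```

So "the sample mean of berry weight is approximately normal" doesn't license "berry weight is normal".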
Oops, I missed the second image. But the correlation coefficients are there. The data sets that closely approximate a line have values close to 1 or -1; the ones that don't have values close to 0.
Also, you may want to account for the difference between a point's x coordinate and the mean of the xs: a point whose x coordinate is far from the mean has more leverage, i.e. more pull on where the regression line ends up.
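The standard way to quantify that x-distance effect is leverage, h_i = 1/n + (x_i - x̄)² / Σ(x_j - x̄)². A stdlib sketch with numbers I made up, one point far from the mean of the xs:

```python
import statistics

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 20.0]  # last point far from the mean
mean_x = statistics.fmean(xs)
ssx = sum((x - mean_x) ** 2 for x in xs)

# Leverage of each point: how strongly its x position
# lets it pull the fitted line toward itself.
leverages = [1 / len(xs) + (x - mean_x) ** 2 / ssx for x in xs]
print(leverages)
```

The far-out point ends up with leverage near 1, meaning the line must pass close to it almost regardless of its y value.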
There's no algorithm that will identify the outliers in this example [dropbox.com].
So there's no algorithm for comparing observed values to modeled (predicted) values? The absolute value of the difference between the two can't be calculated? Hmm...
What value of correlation coefficient distinguishes pattern data from random data in this image [wikimedia.org]?
Are the data in that image random? Also, the data without the four points at the bottom would have a higher correlation coefficient.
Outliers are often so extreme and rare that despite being statistically unbiased, they nevertheless severely skew statistics which aren't robust to them.
If outliers are unbiased, they can affect the results, but how can they skew the results? Also, if they're rare, how much effect can they have?
A set of random data with a significant correlation coefficient is indistinguishable from a genuine correlation.
Not on a scatterplot. It's pretty clear how close the data are to the line. Also, how probable is it that random data would have a statistically significant correlation coefficient?
1) Outliers will skew the values, and there is no computable way to detect or deal with outliers (source [wikipedia.org])
Do outliers skew the results? If the outliers are biased, then that may tell us something about the underlying population. If they aren't biased, then their effects cancel.
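For the "how much effect" question, even one rare extreme value can drag a non-robust statistic a long way in a particular sample, whatever it does in expectation. A tiny illustration with invented numbers:

```python
import statistics

clean = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
with_outlier = clean + [100.0]  # one rare extreme value

# The mean is dragged far from the bulk of the data...
print(statistics.fmean(clean), statistics.fmean(with_outlier))

# ...while the median (a robust statistic) barely moves.
print(statistics.median(clean), statistics.median(with_outlier))
```

The mean jumps from about 10 to about 20 on the strength of a single point; the median stays at 10. That's the sense in which rare outliers "skew" non-robust statistics, even if over many repeated samples their effects would cancel.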
4) There is no way to measure the predictive value of the results. Linear regression will always return the best line to fit the data, even when the data are random.
But random data would generate statistically insignificant correlation coefficients. Also, the 95% confidence intervals used to predict values are wider for random data.