It's pretty easy to run water through a gas chromatograph/mass spectrometer and see whether it has anything other than water in it, and how much of that stuff it has. It's a bit harder to figure out exactly what a pollutant is, but if you have a sample of the fracking water, you can just look at the peaks the fracking water produces and see whether they show up in the drinking water, even if you never identify the chemicals behind them.
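A minimal sketch of that peak-matching idea. The retention times and tolerance below are made-up illustrative numbers, not real GC/MS data:

```python
def matching_peaks(sample, reference, tol=0.05):
    """Return reference peaks (retention times, in minutes) that also
    show up in the sample within a small tolerance."""
    return [r for r in reference if any(abs(r - s) <= tol for s in sample)]

frack_peaks = [2.31, 4.87, 6.02, 9.44]   # hypothetical fracking-fluid peaks
tap_peaks   = [1.10, 2.33, 6.01, 7.75]   # hypothetical drinking-water peaks

hits = matching_peaks(tap_peaks, frack_peaks)
# Overlapping peaks flag contamination even with the chemicals unidentified.
```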
I think it's E-Mag = "Electricity and Magnetism", Re-Mag = "take E-Mag again", Three-mag = "take it a third time", Management = give up and take business courses.
Higher density at constant speed means higher signalling rates now than before. We're already reading more off the disk per second at 7200 rpm than we were at 7200 rpm back when 200GB was big. Power requirements have become a bigger concern, and at higher densities the mechanical tolerances have to be tighter, even more so at higher speeds. Dropping the spindle speed lets you get better results without tightening tolerances as much.
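The density/speed relationship is just arithmetic; here's a toy calculation with made-up linear densities (not real drive specs):

```python
def transfer_rate_mb_s(bits_per_inch, track_circumference_in, rpm):
    """Sustained rate off one track: bits per revolution times revs per second,
    converted to megabytes per second."""
    bits_per_rev = bits_per_inch * track_circumference_in
    return bits_per_rev * (rpm / 60.0) / 8 / 1e6

old = transfer_rate_mb_s(500_000, 7.0, 7200)    # "200GB was big" era, assumed
new = transfer_rate_mb_s(1_500_000, 7.0, 7200)  # triple the linear density
# Same 7200 rpm, 3x the linear density: the electronics must signal 3x as fast.
```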
Newton's law of gravitation, F = G * m * m_e / r^2, where m is your mass, m_e is the mass of the earth, and r is the distance between you and the earth's center, treats the earth as a point mass. (By the shell theorem that's exact for a spherically symmetric earth; for the real earth, with its lumpy mass distribution, you have to integrate over the earth's volume all the contributions from point-like sub-regions.) Either way you can think of W = m * g, where g is the acceleration at the surface of the earth, and g depends on the mass of the earth and its distribution in space as a big sum, g = G * sum of m_i / r_i^2 for i from 1 to a really large number, where each chunk of the earth gets its own index i and its own distance r_i from you (strictly, the contributions add as vectors).
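For reference, plugging standard values into the point-mass form recovers the familiar g (the full volume integral gives the same answer for a spherically symmetric earth):

```python
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_e = 5.972e24    # mass of the earth, kg
R_e = 6.371e6     # mean radius of the earth, m

g = G * M_e / R_e**2   # acceleration at the surface, m/s^2 (~9.8)
weight = 70 * g        # W = m*g for a 70 kg person, in newtons
```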
Look, if the options are 24 minutes of random error or, say, 24 seconds of consistently biased error across all the devices in the hospital, I'll take the consistent bias any day. The point of all of this is so that a nurse walking into the room and seeing a blue-lipped coma patient can determine things like how long it has been since the monitor whose leads fell off last recorded an accurate O2 saturation.
find . -newer last_backup_timestamp | cpio -o > snapshot$(date +%Y%m%d).cpio && touch last_backup_timestamp
It may be almost 5.5 times the population density of California as a whole state, but consider the following: NJ has 8.8 million people, while the actually populated portions of CA look like this:
Los Angeles County: 9.8 M people, 2400 per square mile
Orange County: 3 M people, 3800 per square mile
San Francisco County: 0.8 M people, 17200 per square mile!
Alameda County: 1.5 M people, 2000 per square mile.
Santa Clara County: 1.8 M people, 1400 per square mile
Total population of those counties: > 16M people
and that doesn't even consider the portions of those counties that are parks etc (especially significant for Alameda I think)
So a huge share of Californians live in counties denser than NJ as a whole, and those counties alone hold close to double the entire population of NJ.
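Tallying the county figures above (populations and densities as quoted; NJ's land area is roughly 7,400 square miles):

```python
counties = {  # county: (population in millions, people per square mile)
    "Los Angeles":   (9.8, 2400),
    "Orange":        (3.0, 3800),
    "San Francisco": (0.8, 17200),
    "Alameda":       (1.5, 2000),
    "Santa Clara":   (1.8, 1400),
}

nj_density = 8.8e6 / 7400          # ~1,190 people per square mile
total_pop  = sum(p for p, _ in counties.values())   # ~16.9 million
denser_than_nj = all(d > nj_density for _, d in counties.values())
```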
See Andrew Gelman's article in American Scientist debunking the statistics behind at least the "having more daughters" data. The largest credible effect on sex ratio is around a 3% difference between boys and girls, among people in famine conditions... and that effect is due primarily to nonsurvival of boy fetuses in famine conditions. The more-daughters-from-beautiful-parents effect has been claimed at 15 to 30% differences, absolutely absurd if you even stop to think about it. The original studies don't have the statistical power to distinguish a real effect from random fluctuation, so any "significant" effect they report is necessarily on the order of the standard error rather than the true effect size.
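The power problem is easy to simulate: assume a tiny true effect and a noisy study, keep only the "statistically significant" estimates, and watch how badly they exaggerate. All numbers below are illustrative assumptions:

```python
import random
import statistics

random.seed(42)
true_effect = 0.01   # a real but tiny shift in sex ratio
se = 0.05            # standard error of a small, underpowered study

# Many replications of the same underpowered study
estimates = [random.gauss(true_effect, se) for _ in range(100_000)]
# Keep only the ones that clear the usual significance threshold
significant = [e for e in estimates if abs(e) > 1.96 * se]

# Every significant estimate necessarily exceeds 1.96*se = 0.098,
# which is already ~10x the true effect.
exaggeration = statistics.mean(abs(e) for e in significant) / true_effect
```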
I can certainly believe that beautiful women have more children on average though....
Scientists show that even scientists rarely really understand statistics...
It wouldn't be hard to make this double blind, you'd grind up chocolate and put it in capsules, and then grind up something inert, dye it brown, and put that in capsules. Don't tell the dispenser or the taker which group they're in. Of course the takers could open the capsules and try to guess which group they were in, but yeah, it's not impossible to do a good job double-blinding this, it's just not as interesting for the taker if they don't get to enjoy the chocolate.
Hardware random number generator using a couple of resistors, a potentiometer, and a Zener diode. For additional points, use a comparator to amplify the noise. You can then talk about the physics of electron transport across the diode junction and thermal agitation to explain why the noise occurs.
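Raw diode noise will be biased, so you'd typically post-process the sampled bits; a classic trick is von Neumann debiasing. The sketch below feeds it simulated biased bits, since the real source is hardware:

```python
import random

def von_neumann(bits):
    """Read bits in pairs; emit the first bit when the pair differs,
    drop the pair when it matches.  Removes bias from independent bits."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

random.seed(1)
biased = [1 if random.random() < 0.8 else 0 for _ in range(20_000)]  # 80% ones
fair = von_neumann(biased)
balance = sum(fair) / len(fair)   # should land near 0.5
```

The price of the debiasing is throughput: a heavily biased source discards most pairs.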
Another interesting project is a feedback controller that levitates a ball hanging below an electromagnet. You use an LED and a phototransistor to set up a circuit that tries to keep the reflected light intensity constant, which makes the steel ball hang a small, fixed distance below the magnet.
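The control loop is the interesting part; here's a toy 1-D simulation with a proportional-derivative law standing in for the analog light-feedback circuit. Gains, geometry, and the per-unit-mass force model are all made-up assumptions:

```python
dt, g = 0.001, 9.81        # timestep (s), gravity (m/s^2)
target = 0.010             # hold the ball 10 mm below the magnet
pos, vel = 0.012, 0.0      # start 12 mm down, at rest (pos measured downward)
kp, kd = 1e5, 100.0        # illustrative controller gains

for _ in range(5000):      # simulate 5 seconds
    err = pos - target
    # Commanded upward pull per unit mass; the magnet can only attract,
    # so the command is clamped at zero.
    pull = max(0.0, kp * err + kd * vel)
    vel += (g - pull) * dt   # semi-implicit Euler keeps this stable
    pos += vel * dt
# pos settles just below the setpoint: a proportional term alone
# leaves a small steady-state offset of g/kp.
```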
Neither of these is too terribly expensive, and both have physics content, but neither is what I'd call "modern". Almost all of modern electronics involves digital integrated circuits.
Caltech (not Cal Tech) is a private university, though it receives significant public funding like any research university. However, I don't believe the development of these lectures was publicly funded.
The real reason to do this can NOT be to get better quality random numbers, since you'd be better off just hooking up a webcam with a piece of tape over the lens and hashing the resulting diode noise.
The best reason to do this is because you want to play mechanical engineer in your spare time.
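The hashing step mentioned above is just entropy conditioning. A sketch, with the camera frame faked as a byte buffer since real capture needs the hardware (and something like a video-capture library):

```python
import hashlib

def condition(frame_bytes, n_bytes=32):
    """Condense a buffer of noisy sensor bytes into n_bytes of output.
    In real use you'd feed dark frames straight from the taped-over camera."""
    return hashlib.sha256(frame_bytes).digest()[:n_bytes]

fake_frame = bytes((i * 131 + 7) % 256 for i in range(640 * 480))  # stand-in data
seed = condition(fake_frame)   # 32 bytes suitable for seeding a CSPRNG
```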
As much as Lisp people want to say that Lisp lost because of the price of Lisp machines and Lisp compilers, it actually lost because it isn't a particularly practical language; that's why it hasn't had a resurgence while all these people move to Haskell, Erlang, Clojure, et cetera.
Lisp is a beautiful language. So is Smalltalk. Neither one of them was ever ready to compete with practical languages.
The idea that LISP hasn't had a resurgence is wrong. Take a look at books published on Common Lisp recently: you'll see several from about 2004 to 2009. The SBCL project revived the CMUCL compiler in a cross-platform, easier-to-improve form, which led to a large number of improvements. And places like common-lisp.net, clocc.sourceforge.net and cliki.net are the repositories for shared code in the free software community.
There are several webservers written in Common Lisp; this is not the first by a long shot. And in case you didn't know, the technology inside Orbitz is written in Common Lisp.
The reason Common Lisp is not dominating the world is mainly that it takes a fair amount of sophistication to "get" the LISP way of doing things, plus the huge availability of C-based libraries everywhere else.
The popularity of Python is essentially about having a LISP that has a more familiar syntax and interfaces well with C programs. Python isn't LISP but it's not very far off.