I really wonder how many shows would be produced if people could pay for individual series on the equivalent of Pay Per View, but at a more reasonable price.
Well, HBO hasn't gotten any money from me, but AMC has. I willingly forked over $2/episode for Breaking Bad and Walking Dead, which I can watch again and I don't have to sit through the FBI warning. So call it $40 for the 2nd season of Walking Dead, and they had an average audience of ~5 million. If all those people were like me and didn't have cable, that's potentially ~$200 million in revenue. Let's be conservative and halve it. The total budget for that season was something like $60 million. So that's a $40 million profit, which I wouldn't scoff at. In fact, that's all of AMC's quarterly profit. Seems like a viable model in a world where people aren't shelling out $100/month for a bunch of crap they don't want to watch.
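The back-of-envelope math above can be sketched explicitly. All figures are the commenter's assumptions (season-pass price, audience size, buy rate, budget), not actual AMC financials:

```python
# Rough season-pass revenue estimate, using the figures from the comment above.
price_per_season = 40          # ~$2/episode across the season
audience = 5_000_000           # average audience of Walking Dead season 2
buy_rate = 0.5                 # conservative: assume only half would pay
season_budget = 60_000_000     # approximate production budget

revenue = int(price_per_season * audience * buy_rate)
profit = revenue - season_budget
print(f"revenue ${revenue:,}, profit ${profit:,}")
# revenue $100,000,000, profit $40,000,000
```

The whole argument hinges on the buy rate, which is pure guesswork here; halving it again would wipe out the profit entirely.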
If you wish to create an ethical construct like "You should be monogamous with a member of the opposite sex and remain faithful for your entire life," then you should have evidence that following that rule maximizes happiness/success/productivity/etc.
The trouble there is in deciding what to maximize/minimize. I suspect it would be fairly easy to show heterosexual monogamy minimizes inheritance battles (assuming patriarchal inheritance) while maximizing baby production and genetic diversity. And I think a few people still genuinely believe these are the sole/primary goals of marriage, but most don't.
Both the logical and philosophical fields also require empirical data to form their assumptions.
I don't think that's how assumptions work.
Empiricism is great, and there are some values everyone's pretty much agreed on (e.g. stealing is a dick move if you have other means of providing for yourself, and don't kill anyone unless you have a really good reason), and I believe in an objective reality. But values can't be empirically determined; empiricism only shows which behaviors maximize values determined by some other means.
Apparently it's blamed for the tsunami.
No it's not. I'm no fan of the Mail, but the headline "Did tonight's super moon cause Japan's tsunami?" leads to "And yet there is not a shred of evidence to support this."
I'm not a geologist, so I'm very confused: if something is 'storing up energy', how does moving around equate to that? I mean, if the violent shaking of the ground is the release of that 'stored energy', then how are small movements an indication that it's storing energy up? I would assume the worst earthquake areas are those where there's a lot of movement going on deep underground but nothing on the surface releasing that energy, until one very devastating movement.
Full disclosure: I am a geologist, but I don't study intra-plate earthquakes and I'm not familiar with Stein's work. You're correct that the faults themselves are locked, so there's no relative motion immediately to either side of the fault. But the strain accumulation is caused by the relative motions of much larger crustal blocks, and this movement can be seen using GPS. Saying "the faults are moving at x mm/yr" is incorrect shorthand for saying "the relative motion of the crust far away on either side of the fault is x mm/yr". That said, I'm used to dealing with GPS for faults that have surface expressions. I'm not sure how the thick layers of sediment above the New Madrid fault zone would affect the signal, but my guess would be that the GPS signal would be more diffuse and noisier. Yes, they are ground stations.
What about the northern earthquakes? Do GPS stations up there report tiny movements in the crust leading up to those earthquakes? I'm just curious whether it's possible that you're dealing with different kinds of faults when comparing the San Andreas fault versus the Ramapo fault versus the New Madrid fault zone.
The San Andreas is a strike-slip fault, so GPS will show motions of the North American and Pacific plates roughly parallel to the fault line. The Ramapo fault is a normal fault, so far-field motions will be perpendicular to the fault line and directed away. The New Madrid fault zone actually consists of two strike-slip faults and a thrust fault, so the GPS signal will be more complex. I don't know if there are GPS data for the Ramapo area, but I don't see any reason why it couldn't be used to detect motion in Canada, other than the fact that it would mean working in Canada.
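The distinction above is just vector geometry: for a strike-slip fault the far-field GPS velocities are mostly parallel to the fault strike, while for a normal fault they are mostly perpendicular. A minimal sketch of that decomposition (the station velocity and fault azimuth are hypothetical numbers, not real survey data):

```python
import math

def decompose_velocity(vel_en, fault_azimuth_deg):
    """Split a GPS velocity (east, north components, mm/yr) into parts
    parallel and perpendicular to a fault whose strike azimuth is given
    in degrees clockwise from north."""
    az = math.radians(fault_azimuth_deg)
    # Unit vector along the fault strike, in (east, north) coordinates.
    strike_e, strike_n = math.sin(az), math.cos(az)
    e, n = vel_en
    parallel = e * strike_e + n * strike_n        # dot product with strike
    perpendicular = e * strike_n - n * strike_e   # component across the fault
    return parallel, perpendicular

# Hypothetical station moving 30 mm/yr due north, near a fault striking N45W.
par, perp = decompose_velocity((0.0, 30.0), 315.0)
print(f"fault-parallel {par:.1f} mm/yr, fault-perpendicular {perp:.1f} mm/yr")
```

For a pure strike-slip fault you'd expect the perpendicular component to be near zero far from the fault; a large perpendicular component suggests extension or shortening across it instead.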
This sounds, at best, questionable, or highly fitted to the very recent events we've had the privilege to watch. It's difficult to look over long swaths of time historically when our precision measuring instruments are a very recent thing compared to the age of the crust. I'm not arguing for spending billions in the Midwest, but I'm not sold on a single expert's opinion. Is this the consensus in the geological community?
I don't know. Obviously they're not using GPS data to constrain when faults were active thousands of years ago, but other kinds of geologic evidence could be useful in that regard. It may be the consensus of those who study intra-plate earthquakes, which from my perspective are just something that happen sometimes.
I have a theory that it's impossible to prove anything, but I can't prove it.