No, again, not unfalsifiable. If you apply pressure and they don't adapt, then you falsified evolution.
No, that's absolutely incorrect. Evolution, as commonly understood, is the result of a RANDOM genetic process. Whether an appropriate adaptation will arise given selection pressure is always a matter of chance, and any given instance of a species NOT adapting isn't much evidence of anything really. By that logic, we could argue that all of the species currently going extinct due to manmade changes in environment etc. have somehow disproved evolution... which is obviously not true.
The only way to falsify it would be to do a systematic study comparing pressure placed on a variety of different organisms in a variety of different circumstances -- and if you saw no adaptations (or the ones you could see could be explained by other mechanisms), THEN you might potentially falsify evolution.
Is that unreasonable? If there were evolutionary pressure (ie, short people kept being killed before reproducing), and tall people got multiple mates, I could see this change happening within twenty generations.
Interestingly, we have had a MUCH faster increase in height in the past couple centuries, probably mostly due to improvements in living conditions, food supply and nutrition, and medical advances.
According to this recent study, for example, European men have gained approximately 4 inches in height in 100 years, i.e., about 4 or 5 generations.
So, it probably doesn't even require significant genetic changes to produce such a shift. I once read somewhere that in the early 1800s, the average height differential between upper-class and lower-class Englishmen was something like 7 or 8 inches (i.e., rich men were something like 8 inches taller than poor men).
No, you apparently don't. In every state I'm aware of yellow means "stop if you can". It's effectively red, but lenient enough that if you're close to it and can't stop when it appears, then you're still fine.
Interesting -- it appears you're only aware of a minority of states.
37 of the 50 states conform to the Uniform Vehicle Code, whose default policy is often known as "permissive yellow." In that case, the yellow is absolutely NOT "effectively red."
"Permissive yellow" basically means that yellow lights signal that a red light is coming, and perhaps "caution." But they are explicitly distinct from red lights -- yellow lights mean that you are allowed to enter an intersection; red lights mean you are not. It's that simple.
The remaining 13 states have various more restrictive policies that usually state something like "stop at a yellow light unless it is unsafe to do so." There are usually a few conditions under which you can still enter the intersection on yellow, and generally you can also be ticketed if you're still in the intersection when the light is red.
Whereas in "permissive yellow" states, you are still free to enter the intersection on a yellow light, regardless of conditions -- and you are generally free to clear the intersection if you are still in it when the light turns red (i.e., it's not an automatic ticketable offense).
It's perfectly reasonable for the Supreme Court to have the power to review laws and strike them down as unconstitutional.
But that's the only thing the phrase judicial review generally refers to in the U.S. The Supreme Court has no power to declare any action constitutional -- statutes are by default assumed to be constitutional because Congress is sworn to uphold the Constitution. Judicial review is specifically the power to overrule that default assumption and declare a law unconstitutional. The failure to overturn a law is merely the standard state of the judiciary, whose general purpose is to resolve conflicts within existing law. That's not "judicial review" as the term is commonly understood.
The very people who they are supposed to limit are the ones in charge of interpreting said limits. As soon as the supreme court gave itself the power of judicial review (despite no such power existing in the constitution), it was over.
I don't understand the logic here. The Framers intended a set of checks and balances. If the Supreme Court lacks judicial review, doesn't that mean Congress has basically unchecked power to pass unconstitutional laws? How could a set of rights last very long when the people could just elect a bunch of representatives who might ignore those rights, e.g. in a time of crisis?
SCOTUS may be far from perfect, but they did in fact hold some lines checking federal power until roughly 1936-38, after which they basically let the federal government do what it wants (except in truly egregious cases... Most of which, guess what -- involve the Bill of Rights).
His use was correct. Liberals are the first to demand everyone else walk on egg shells when their feelings get hurt.
Libertarians will be the ones trying to remove such laws.
liberal, a. and n. A. adj.: 5. Of political opinions: Favourable to constitutional changes and legal or administrative reforms tending in the direction of freedom or democracy.
Meh. This definition could basically describe just about any political party in the U.S. -- it just depends on how you define "freedom" and "democracy."
And yes, they are often at odds with each other. Majority rule ("democracy") often votes to take away or restrict freedom, especially from minority viewpoints.
If your definition of "freedom" includes things like abortion access and gay marriage (as the Democratic Party), you get to override democratic votes (even ballot initiatives voted on directly by voters) to ensure those freedoms. If your definition of "freedom" involves lots of guns (as the Republican Party), you similarly get to override legislation passed by democratic representatives to protect that freedom.
Often, the idea of "rights" is invoked to justify overriding democracy, but often (though not always) a "right" involves preserving someone's freedom at the expense of restricting someone else's. Obviously this is often necessary -- for example, my freedom to go around committing murder is generally restricted outside special circumstances, like times of war, because otherwise it would violate the freedom and rights of others to live.
To the issue at hand: "liberal" in the U.S. is mostly associated with Democrats, who are relatively far from the classical liberalism that your definition is mostly associated with. Classical liberalism was associated with figures from John Locke to Adam Smith, and its closest analogue in the U.S. today is something akin to libertarianism (as GP argued), though libertarianism isn't quite like classical liberalism in some ways. (That distinction is for another post.)
Democrats, on the other hand, are social liberals, who generally seem to believe that minority viewpoints and people need to be protected from the tyranny of the majority in a democracy. Rather than adopting classical liberalism's philosophy that the free market and free association will just make things work out, they moved away from classical liberalism to argue for child labor laws, minimum wage laws, etc., which go against traditional free association ideals.
In recent years, as GP argues, U.S. "liberals" (i.e., mostly Democrats) have gone further in these protections for those with less power. Thus, they tend to be the most vocal proponents of affirmative action, anti-hate speech laws, harassment laws, etc.
Thus, GP was essentially correct for the U.S. at least: Liberals (in the U.S.) ARE the first to demand that everyone else "walk on egg shells when their feelings get hurt." Democrats believe they have good reasons to invoke these protections, since restricting speech and actions in these ways leads -- to them -- to a more fair and equitable society which protects those who are oppressed.
But all of this is quite far from the basic "freedom and democracy" as defined by classical liberalism in the 18th century. I'm not saying it's bad, or even arguing for any particular party or perspective. I'm just saying that this basic definition of "liberalism" is so vague that almost anyone co-opts it these days in the U.S.
Full disclosure: I just read the full study I linked to in my first post. At the conclusion of the article, the first author does declare that his research was in part funded by lobbyists. I didn't read this full study until now, which I only found this evening when writing my first post -- but it came up in the top hits in a search for "HFCS vs. sugar" and its abstract agreed well with what I've researched myself over the years.
So, I don't know what to say about that -- but once I noticed that, I'm coming clean and noting there was a conflict of interest with one of the two authors.
On the other hand, I've spent a lot of time in the past trying to sort through these issues, and I've come to similar conclusions as those expressed in this article. So, it sort of pains me to be somewhat in agreement with research funded by corn growers. But, once again, let me reiterate my feeling that HFCS is way overused, the excess sugar/HFCS thrown into all sorts of processed foods is a bad thing, and I wish the U.S. government would stop using subsidies to manipulate agriculture in bad ways (like supporting the corn lobby).
But none of this means that HFCS is so much more evil than table sugar. It's just overproduced and overused, as most sugars are these days. Obesity has risen as more "hidden sugar" has been put into more products, and HFCS has partly enabled that... that's the evil (if there is one), not some sort of weird metabolic effects so much different from sucrose or whatever.
Anyhow, downmod me and my posts if you feel it's necessary. I really was just looking for a recent study on the topic, and despite the conflict of interest, I think the article is mostly a pretty accurate assessment of the literature. (And there are other studies, some of them cited in the article, which don't have conflicts of interest and come to supporting conclusions.)
I normally wouldn't even bother to respond to this, but I just want to be sure no mods are confused.
And here's another study that's not from Yale and doesn't use a red herring to confuse people.
And yet that's precisely what you are doing: introducing a red herring, actually the specific one I addressed in my post, namely:
HFCS != pure fructose
Your study is about consumption of pure fructose. Metabolism of fructose by itself has been shown in numerous studies to be very different from how human metabolism deals with a mixture of sugars, particularly the 50/50 mixture of fructose and glucose found in honey, HFCS, and sucrose (the latter after the one main bond in sucrose is broken up very early in digestion).
And yes, eating a lot of fructose by itself seems to do weird things to metabolism. But, ya know, mixtures make a difference.
What gives you two away as shills is that you use strong, unscientific words.
Yes, "shill" isn't a strong word or anything. Look -- you have one study that's not even on the substance in question. I referred to a metastudy which considered a multitude of research on the actual topic and talked about the current scientific consensus.
I think HFCS is bad, but mainly because its use is propped up by crappy agricultural policy that supports growing too much corn for no apparent reason other than stupid lobbying. I also think HFCS consumed in excess is bad, just like consuming too much sucrose or honey or whatever.
If you know how to use PubMed, then you can't play up ignorance as an excuse.
Funny, given that your ignorance of the actual substance to be studied seems to have determined your choice of citations.
Go tell your bosses at Coca-Cola or wherever that we're not buying it.
Yeah, the overall message of my previous post was -- excess sugar consumption in general is bad for you, i.e., even the Coke with cane sugar is crap, even if it doesn't have HFCS. I obviously must be a shill for an industry trying to sell sugary products with my whole "we need to consume less sugar" posts....
I have never seen any study suggesting that, except the single widely ridiculed Yale study. Not surprising given how nearly identical sucrose and HFCS are in the gut.
Yeah, most of the HFCS criticism is built on "natural foods" lore and wacko hysteria about chemicals. It *could* be that HFCS is worse than some other sugars, but the vast majority of studies have shown no significant difference in response to HFCS vs. sucrose.
Just to be clear what we're talking about here, HFCS is not the same as pure fructose, and a lot of the lore about HFCS compares studies on fructose with sucrose or other things, rather than HFCS. Commercial HFCS is generally either 42% or 55% fructose, and almost all glucose otherwise. Sucrose, on the other hand, is a molecule that breaks down in the first stages of digestion to 50% fructose and 50% glucose -- so, as the parent said, they are basically identical in most of digestion. (It's called "high fructose" corn syrup, by the way, because it's much higher than normal corn syrup, which has very little fructose. But acting like pure fructose and HFCS are the same thing in studies is highly misleading.)
Also, for the natural foods buffs, please note that honey is mostly fructose and glucose in almost the same concentration as HFCS, so if HFCS is bad for you, "natural" honey is probably not a solution to this problem.
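To make the comparison above concrete, here's a quick back-of-the-envelope sketch of the fructose/glucose split for each sweetener (the HFCS figures are the two standard commercial formulations mentioned above; the honey figure is an approximate typical value, since honey varies by source, and the non-fructose remainder is treated as glucose per the simplification in the parent discussion):

```python
# Approximate fructose share of total sugars, by sweetener.
# HFCS-42 and HFCS-55 are the standard commercial grades;
# sucrose splits 50/50 early in digestion; honey varies by source.
sweeteners = {
    "HFCS-42": 0.42,
    "HFCS-55": 0.55,
    "sucrose (after digestion splits it)": 0.50,
    "honey (typical, varies)": 0.50,
}

for name, fructose_frac in sweeteners.items():
    # Treat the non-fructose remainder as glucose, per the simplification above.
    print(f"{name}: {fructose_frac:.0%} fructose / {1 - fructose_frac:.0%} glucose")
```

The point of the sketch: HFCS-55 differs from digested sucrose by only about 5 percentage points of fructose, which is why the metastudy quoted below treats them as essentially equivalent, while pure fructose (100%) is a very different substance.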
For further details, here's a link to a recent (2013) metastudy that summarizes what is known. From the abstract:
[A] broad scientific consensus has emerged that there are no metabolic or endocrine response differences between HFCS and sucrose related to obesity or any other adverse health outcome. This equivalence is not surprising given that both of these sugars contain approximately equal amounts of fructose and glucose, contain the same number of calories, possess the same level of sweetness, and are absorbed identically through the gastrointestinal tract. Research comparing pure fructose with pure glucose, although interesting from a scientific point of view, has limited application to human nutrition given that neither is consumed to an appreciable degree in isolation in the human diet. Whether there is a link between fructose, HFCS, or sucrose and increased risk of heart disease, metabolic syndrome, or fatty infiltration of the liver or muscle remains in dispute with different studies using different methodologies arriving at different conclusions.
In general, our dietary issues are probably a result of excess sugar consumption in general. Switching from HFCS to cane sugar is probably not a significant improvement unless you simultaneously decrease overall sugar consumption.
My bullshit meter always starts kicking into life when the hyperbole starts flowing, like the reading comprehension or random amount of payment received having a causative effect on the function of an organic process.
Well, the other things that are mentioned here were age and race, which could conceivably have biological differences that could have an effect.
I suspect that income and education level could be relevant here as a proxy for other dietary trends. People with higher incomes tend to eat better quality food overall than poor people. People with higher education levels also tend to make different dietary choices (and are probably more likely to seek out more "natural" foods or whatever the current research is pointing toward).
So, it's not so much that these aspects are causative as that they are indicative of perhaps a wider variety of potential dietary choices. This study seems to be based on general survey data, so it's not clear that they could rule out various confounding factors, though I'd have to read the study to know for certain.
Showing the trend is consistent is at least a step toward confronting a rather obvious objection that could come up if they only looked at poor folks whose diet is already likely to have a bunch of bad junk in it (and who probably tend to consume the most soda). If they see the same effect in rich, educated folks who drink soda, then it may not be a general "poor disease" issue. (Medical studies have often been plagued by these problems if they only have subjects who are not representative of the general population.)
I'm just guessing here, but that's one reason I could imagine for mentioning this.
That's also why people play Powerball, they only hear stories about the people who hit the jackpot, never stories about not hitting it.
Yes, there's something I find distasteful about states running lotteries for this reason. It's basically a tax on the stupid. Sure, some people play for entertainment. But I personally have known a few lottery addicts who were poor or senior citizens, and they'd shell out literally thousands of dollars each year on lottery tickets. (If only they would invest that money instead....)
And, as I always tell people: I never buy lottery tickets, but I have only a VERY slightly lower chance of winning than the addicts. In fact, anecdotally this proved true for me in the past few years -- some members of my family have bought lottery scratch tickets as stocking stuffers. I've received fewer than 10 of these over the past few years, but I've won on 4 of them... totaling $175. The last year this happened, I had a $100 winner (more than anyone else in the family ever got, including one person who buys tickets regularly), and someone gave me another cast-off that day, and I won $20 more.
And yet, I have absolutely no desire to buy more tickets...I took the money and enjoyed it. Same thing one of the few times I was in a casino (and the only time I gambled)... My father gave me $25 to play some slot machines with, so after spending about $7, I hit $50. I gave my dad back his $25, took the $40+ profit, and I've never played again.
Thus, if you're going to gamble, I highly recommend using someone else's money. It's proven lucky for me.
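To put "VERY slightly lower chance" in numbers, here's a quick sketch of jackpot odds under the current Powerball format (assuming the standard 5-of-69 white balls plus 1-of-26 red ball draw; check the lottery's own rules for the format in force):

```python
import math

# Powerball-style jackpot: match 5 of 69 white balls AND 1 of 26 red balls.
white_combos = math.comb(69, 5)     # ways the 5 white balls can come up
combinations = white_combos * 26    # times the 26 red-ball choices
print(f"1 in {combinations:,}")     # 1 in 292,201,338

# Chance per ticket vs. buying none at all:
p_one_ticket = 1 / combinations
print(f"p(win) with 1 ticket:  {p_one_ticket:.2e}")
print(f"p(win) with 0 tickets: 0 -- the 'VERY slight' difference")
```

One ticket moves your jackpot probability from exactly zero to about 3.4 in a billion -- which is the whole joke.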
Sure, sure, 300 years of technology have it all wrong and a few "recent studies"
Did you even look at the link? The guy looked at something like **50 studies** from the past century or so. And there have been at least a dozen more I've seen dealing with readability in a variety of fonts since that article was published in 2008.
show one more way for hipsters to be "smarter" than everyone else.
What do "hipsters" have to do with this?
And by the way, frankly, I prefer serif fonts too for reading -- I think sans serif fonts look stupid. (Actually I kinda dislike them in general and have been known to change my browser defaults to remedy this situation -- but my personal preference is different from what actual studies show about legibility/readability.)
To mock your most absurd claim further (your last one): you can make sans-serif letterforms distinguishable, barely, with 5x3 pixels to work with.
Yes, to mock you back: this is of course the most common use case these days with high-res screens.
Look, the question is about LEGIBILITY, not ability to render. At small enough sizes, serifs can't even be placed on fonts -- you're correct. But this has nothing to do with whether people prefer to read 8-point or 10-point text in serif versus sans. And there are some studies I've seen recently which show that people prefer serif fonts for reading at smaller font sizes and sans only at larger sizes.
But it's a small effect, and I don't know if it's actually significant -- point is, if you're dealing with enough pixels to actually display serifs, there doesn't seem to be a strong preference one way or the other. And if you have fewer pixels, serif fonts will essentially look like sans anyway, so again they're about equivalent.
(By the way, I know it's "common knowledge" that sans serif fonts must be used on things like road signs, because they are so ubiquitous. But most of the studies on such fonts only tend to take into account point size or capital height as the standard for comparison. Factor in x-height (which in many serif fonts tends to be smaller), use at least a semibold or medium weight, and serif fonts can do just as well as sans on signs. Mostly, I think sans serif was adopted for things like signs because such things tended to be hand-lettered rather than typeset in the past, and it's easier to do sans than serifed fonts when doing hand-lettering. For headlines, sans probably was adopted because it stood out: when most printed text was serifed, a sans headline differentiated it.)
Serif fonts are readable: great for reducing strain from hours of reading under good conditions. That's why they're used for books (except some crazy tech books that get it wrong), newspaper text, magazine text, and so on.
Sans-serif fonts are good for remaining legible under highly difficult conditions. That's why they're often the choice for billboards, for headlines (designed to attract you close enough to read the text), for advertising text
Nope, nope, and nope.
Basically, serif fonts are used where they are because readers are familiar with seeing serifs in those contexts.
Sans serifs, likewise, are used where they are because readers are used to seeing them there.
Numerous studies have come up with inconsistent results (for a good summary of what dozens of them on the subject say, see here).
The takeaway message is readers find familiar design choices to be easier to deal with. Most books and long texts tend to be set with serifs, so we've come to expect that -- but well-designed studies have shown little difference (or inconsistent results). Web fonts tend to be sans serif, so we expect that. And I have absolutely no idea what you're talking about when you say that sans serif will remain legible under difficult conditions -- if anything, studies tend to show that serif fonts have a small advantage (probably not significant) there. After all, serifs were inherited from Roman techniques for carving letters into giant stones, not in writing: I doubt Roman sculptors would have added things that seemed to decrease legibility to monuments. (The one "difficult condition" where sans serifs have a claimed advantage is in low resolution electronic situations, but recent studies have shown this advantage to be small or non-existent.)