
Comment Re:Overly broad? (Score 3, Insightful) 422

Full disclosure: I just read the full study I linked to in my first post. At the conclusion of the article, the first author does declare that his research was in part funded by lobbyists. I hadn't read the full study until now -- I only found it this evening while writing my first post -- but it came up in the top hits of a search for "HFCS vs. sugar," and its abstract agreed well with what I've researched myself over the years.

So, I don't know what to say about that -- but now that I've noticed it, I'm coming clean and noting that there was a conflict of interest for one of the two authors.

On the other hand, I've spent a lot of time in the past trying to sort through these issues, and I've come to similar conclusions as those expressed in this article. So, it sort of pains me to be somewhat in agreement with research funded by corn growers. But, once again, let me reiterate my feeling that HFCS is way overused, the excess sugar/HFCS thrown into all sorts of processed foods is a bad thing, and I wish the U.S. government would stop using subsidies to manipulate agriculture in bad ways (like supporting the corn lobby).

But none of this means that HFCS is so much more evil than table sugar. It's just overproduced and overused, as most sugars are these days. Obesity has risen as more "hidden sugar" has been put into more products, and HFCS has partly enabled that... that's the evil (if there is one), not some weird metabolic effect that's vastly different from sucrose or whatever.

Anyhow, downmod me and my posts if you feel it's necessary. I really was just looking for a recent study on the topic, and despite the conflict of interest, I think the article is mostly a pretty accurate assessment of the literature. (And there are other studies, some of them cited in the article, which don't have conflicts of interest and come to supporting conclusions.)

Comment Re:Overly broad? (Score 5, Insightful) 422

Troll much?

I normally wouldn't even bother to respond to this, but I just want to be sure no mods are confused.

And here's another study that's not from Yale and doesn't use a red herring to confuse people.

And yet that's precisely what you are doing: introducing a red herring, actually the specific one I addressed in my post, namely:

HFCS != pure fructose

Your study is about consumption of pure fructose. Metabolism of fructose by itself has been shown in numerous studies to be very different from how human metabolism deals with a mixture of sugars, particularly the 50/50 mixture of fructose and glucose found in honey, HFCS, and sucrose (the latter after the one main bond in sucrose is broken up very early in digestion).

And yes, eating a lot of fructose by itself seems to do weird things to metabolism. But, ya know, mixtures make a difference.

What gives you two away as shills is that you use strong, unscientific words.

Yes, "shill" isn't a strong word or anything. Look -- you have one study that's not even on the substance in question. I referred to a metastudy which considered a multitude of research on the actual topic and talked about the current scientific consensus.

I think HFCS is bad, but mainly because its use is propped up by crappy agricultural policy that supports growing too much corn for no apparent reason other than stupid lobbying. I also think HFCS consumed in excess is bad, just like consuming too much sucrose or honey or whatever.

If you know how to use PubMed, then you can't play up ignorance as an excuse.

Funny, given that your ignorance of the actual substance in question seems to have determined your choice of citations.

Go tell your bosses at Coca-Cola or wherever that we're not buying it.

Yeah, the overall message of my previous post was -- excess sugar consumption in general is bad for you, i.e., even the Coke with cane sugar is crap, even if it doesn't have HFCS. I obviously must be a shill for an industry trying to sell sugary products with my whole "we need to consume less sugar" posts....

Cheers!

Comment Re:Overly broad? (Score 5, Informative) 422

I have never seen any study suggesting that, except the single widely ridiculed Yale study. Not surprising given how nearly identical sucrose and HFCS are in the gut.

Yeah, most of the HFCS criticism is built on "natural foods" lore and wacko hysteria about chemicals. It *could* be that HFCS is worse than some other sugars, but the vast majority of studies have shown no significant difference in response to HFCS vs. sucrose.

Just to be clear what we're talking about here, HFCS is not the same as pure fructose, and a lot of the lore about HFCS compares studies on fructose with sucrose or other things, rather than HFCS. Commercial HFCS is generally either 42% or 55% fructose, and almost all glucose otherwise. Sucrose, on the other hand, is a molecule that breaks down in the first stages of digestion to 50% fructose and 50% glucose -- so, as the parent said, they are basically identical in most of digestion. (It's called "high fructose" corn syrup, by the way, because it's much higher than normal corn syrup, which has very little fructose. But acting like pure fructose and HFCS are the same thing in studies is highly misleading.)

Also, for the natural foods buffs, please note that honey is mostly fructose and glucose in almost the same concentration as HFCS, so if HFCS is bad for you, "natural" honey is probably not a solution to this problem.

For further details, here's a link to a recent (2013) metastudy that summarizes what is known. From the abstract:

[A] broad scientific consensus has emerged that there are no metabolic or endocrine response differences between HFCS and sucrose related to obesity or any other adverse health outcome. This equivalence is not surprising given that both of these sugars contain approximately equal amounts of fructose and glucose, contain the same number of calories, possess the same level of sweetness, and are absorbed identically through the gastrointestinal tract. Research comparing pure fructose with pure glucose, although interesting from a scientific point of view, has limited application to human nutrition given that neither is consumed to an appreciable degree in isolation in the human diet. Whether there is a link between fructose, HFCS, or sucrose and increased risk of heart disease, metabolic syndrome, or fatty infiltration of the liver or muscle remains in dispute with different studies using different methodologies arriving at different conclusions.

In general, our dietary issues are probably a result of excess sugar consumption in general. Switching from HFCS to cane sugar is probably not a significant improvement unless you simultaneously decrease overall sugar consumption.

Comment Re:'Regardless of... income and education level' ? (Score 4, Insightful) 422

My bullshit meter always starts kicking into life when the hyperbole starts flowing, like the reading comprehension or random amount of payment received having a causative effect on the function of an organic process.

Well, the other things that are mentioned here were age and race, which could conceivably have biological differences that could have an effect.

I suspect that income and education level could be relevant here as a proxy for other dietary trends. People with higher incomes tend to eat better quality food overall than poor people. People with higher education levels also tend to make different dietary choices (and are probably more likely to seek out more "natural" foods or whatever the current research is pointing toward).

So, it's not so much that these aspects are causative as that they are indicative of perhaps a wider variety of potential dietary choices. This study seems to be based on general survey data, so it's not clear that they could rule out various confounding factors, though I'd have to read the study to know for certain.

Showing the trend is consistent is at least a step toward confronting a rather obvious objection that could come up if they only looked at poor folks whose diet is already likely to have a bunch of bad junk in it (and who probably tend to consume the most soda). If they see the same effect in rich, educated folks who drink soda, then it may not be a general "poor disease" issue. (Medical studies have often been plagued by these problems if they only have subjects who are not representative of the general population.)

I'm just guessing here, but that's one reason I could imagine for mentioning this.

Comment Re:Or gamblers are masochists. (Score 4, Insightful) 59

That's also why people play Powerball, they only hear stories about the people who hit the jackpot, never stories about not hitting it.

Yes, there's something I find distasteful about states running lotteries for this reason. It's basically a tax on the stupid. Sure, some people play for entertainment. But I personally have known a few lottery addicts who were poor or senior citizens, and they'd shell out literally thousands of dollars each year on lottery tickets. (If only they would invest that money instead....)

And, as I always tell people: I never buy lottery tickets, but my chance of winning is only VERY slightly lower than the addicts'. In fact, anecdotally this has proved true for me over the past few years -- some members of my family have bought lottery scratch tickets as stocking stuffers. I've received fewer than 10 of these over the past few years, but I've won on 4 of them, totaling $175. The last year this happened, I had a $100 ticket (more than anyone else in the family ever won, including one person who buys tickets regularly), and someone gave me another cast-off that day, which won me $20 more.

And yet, I have absolutely no desire to buy more tickets...I took the money and enjoyed it. Same thing one of the few times I was in a casino (and the only time I gambled)... My father gave me $25 to play some slot machines with, so after spending about $7, I hit $50. I gave my dad back his $25, took the $40+ profit, and I've never played again.

Thus, if you're going to gamble, I highly recommend using someone else's money. It's proven lucky for me. :)

Comment Re: I don't follow (Score 4, Informative) 370

Sure, sure, 300 years of technology have it all wrong and a few "recent studies"

Did you even look at the link? The guy looked at something like **50 studies** from the past century or so. And there have been at least a dozen more I've seen dealing with readability in a variety of fonts since that article was published in 2008.

show one more way for hipsters to be "smarter" than everyone else.

What do "hipsters" have to do with this?

And by the way, frankly, I prefer serif fonts for reading too -- I think sans serif fonts look stupid. (Actually, I kinda dislike them in general and have been known to change my browser defaults to remedy the situation -- but my personal preference is different from what actual studies show about legibility/readability.)

To mock your most absurd claim further (your last one): you can make sans-serif letterforms distinguishable, barely, with 5x3 pixels to work with.

Yes, to mock you back: this is of course the most common usage case these days with high-res screens. :)

Look, the question is about LEGIBILITY, not ability to render. At small enough sizes, serifs can't even be placed on fonts -- you're correct. But this has nothing to do with whether people prefer to read 8-point or 10-point text in serif versus sans. And there are some studies I've seen recently which show people prefer serif fonts for reading at smaller font sizes and sans only at larger sizes.

But it's a small effect, and I don't know if it's actually significant -- point is, if you're dealing with enough pixels to actually display serifs, there doesn't seem to be a strong preference one way or the other. And if you have fewer pixels, serif fonts will essentially look like sans anyway, so again they're about equivalent.

Comment Re: I don't follow (Score 2) 370

(By the way, I know it's "common knowledge" that sans serif fonts must be used on things like road signs, since they're so ubiquitous there. But most studies of such fonts only use point size or capital height as the standard of comparison. Factor in x-height (which in many serif fonts tends to be smaller), use at least a semibold or medium weight, and serif fonts can do just as well as sans on signs. Mostly, I think sans serif was adopted for signs because they tended to be hand-lettered rather than typeset in the past, and it's easier to hand-letter sans than serifed forms. For headlines, sans was probably adopted because it stood out: when most printed text was serifed, a sans headline differentiated it.)

Comment Re: I don't follow (Score 4, Interesting) 370

Serif fonts are readable: great for reducing strain from hours of reading under good conditions. That's why they're used for books (except some crazy tech books that get it wrong), newspaper text, magazine text, and so on.

[snip]

Sans-serif fonts are good for remaining legible under highly difficult conditions. That's why they're often the choice for billboards, for headlines (designed to attract you close enough to read the text), for advertising text

Nope, nope, and nope.

Basically, serif fonts are used where serif fonts are used because they're more familiar where serif fonts are used.

Sans serif are used where they are used because they tend to be used in those cases. Readers are used to seeing them there.

Numerous studies have come up with inconsistent results (for a good summary of what dozens of them on the subject say, see here).

The takeaway message is that readers find familiar design choices easier to deal with. Most books and long texts tend to be set with serifs, so we've come to expect that -- but well-designed studies have shown little difference (or inconsistent results). Web fonts tend to be sans serif, so we expect that. And I have absolutely no idea what you're talking about when you say that sans serif remains legible under difficult conditions -- if anything, studies tend to show that serif fonts have a small (probably not significant) advantage there. After all, serifs were inherited from Roman techniques for carving letters into giant stones, not from writing: I doubt Roman sculptors would have added features to monuments that decreased legibility. (The one "difficult condition" where sans serifs have a claimed advantage is low-resolution electronic displays, but recent studies have shown this advantage to be small or non-existent.)

Comment Re: I don't follow (Score 2, Insightful) 370

One quick clarification to my second point:

(2) bigger fonts make reading easier

Bigger fonts are more legible, but they generally make reading slower because it takes more time for our eyes to move across them. So, text that's too big can be annoying for reading, but it's easier to recognize and distinguish the characters.

Once again, even in this case, overall design and use is more important than simply choosing the font or the size.

Comment Re: I don't follow (Score 5, Informative) 370

What makes you think professionals are even qualified to make the call? Presumably you're talking typographers, graphic designers, etc - artist types who couldn't construct a proper double-blind study to save their souls.

While that may be true, there have been a LOT of studies on typeface readability and legibility over the past 150 years or so, of varying quality.

After having spent some time reading these studies, I've come to the conclusion that we've learned little beyond three basic facts:

(1) readers don't do well with "weird" typefaces except in ornamental or occasional use -- use something close to what readers are used to encountering whenever they will read more than a few words in the font
(2) bigger fonts make reading easier
(3) unless the font has really unusual features (e.g., some characters that don't look like "standard" letterforms), overall design can usually fix most problems -- i.e., doing things like tweaking size, space between lines, space between words, etc.

The last point is really important. Most discussions of typeface legibility have to do with things like serifs, x-height, size of holes in characters like 'o' or 'p', etc. But as long as the letters actually still have standard shapes, you can usually tweak the size or spacing to make it just as legible.

Beyond that, it's basically personal preference and what people are used to. There are studies that seem to show small effects for everything -- serif fonts are better, except when they're not. Justified text is better than ragged right, or the reverse. (Hyphens are bad, or they aren't.) Double-spacing is necessary, or it's not. Larger spaces after periods or punctuation help readability to a small extent, or they don't make a difference.

Frankly, having read a lot in the literature of typography, I think the problem with most of these studies is that overall design matters most, and I'm not talking about the design of the study (though that's important), but rather the typographic design and use case.

Some typefaces will perform better when spacing is tight, others seem better if more space is available. Some typefaces are good for people with various disabilities or vision problems, but readability may be different for those with "normal" vision. Some typefaces look better than others when a smaller size is used, but people express a different preference when text is larger... or when resolution is varied... or....

Typeface is just one of many elements of proper design. And usually reactions like "Oooh, you simply CAN'T use that font on a screen!" or "No, no, no! That works well for newsprint and headlines, but no one would ever like long text blocks with that!" are just based on what people are used to, not what would actually be more legible or readable.

For example, there are situations where people have come to expect serif or sans serif fonts. Expect people to complain if you don't use the standard choice in those situations. That's not to say one is ALWAYS better than the other -- it's just a combination of what's expected and the other design choices.

Comment Re:"The Right Choice"? (Score 4, Interesting) 370

The opinion of whether or not it was the right choice is severely clouded by the fact that in the Apple environment, there is No Choice. The user Has To go along with what Apple decides is The Future.

Precisely. Even when Apple's decisions are good, they generally end up inconveniencing a bunch of users for quite some time. I built my current desktop last year, and it's the first machine I built with no floppy drive. But I darn well still have a CD/DVD reader/writer, which is useful periodically. Do I use it everyday? No. Could I get along without it? Yeah. But once every few months I have a task where it's still a useful thing to have around.

The main reason I refuse to buy Apple computers is because of lack of choice. I understand that by locking their users into a smaller set of choices, they make it easier to support. But I often want better options for my particular uses. So even if Apple offered a machine that is exactly what I want (probably at a price premium), I still wouldn't buy it -- because I don't want to support that kind of fascist approach to hardware, software, apps, etc. (And yeah, that's a strong word, but I truly believe it's a potentially dangerous development for free use of computers if everyone were to adopt it.)

Apple has built the walls so high around its empire, that few dare leave. Therefore, they must rationalize that whatever Apple decides for the future is The Right Choice.

Yes, all this justifying of "they ultimately saw what was best for the future" sounds like so many big companies' rhetoric. Google is notorious for this too in recent years, breaking their search for power users so it's only useful to people who can't spell or don't actually know the right word for what they're looking for. (Yes, Google -- I did actually ONLY want results with those particular obscure words in them.)

That's not so much about "the future," I suppose, but the infamous Gmail redesigns are. I don't know and don't really care whether Google employees only use emails as equivalents to IM chats or whatever. I need to send emails every day that require me to do things like alter recipients, change subject lines, cc or bcc people, etc. -- and now I'm forced to do 2-4 extra clicks just to get what was there before. As someone who joined the Gmail beta via invitation very early, I almost abandoned Gmail completely last year -- until the unauthorized browser plugins came that basically allowed a reversion.

Maybe email will become obsolete or turn into text messages in a decade or so. But that isn't true in most places now, and I don't appreciate the current giant corporation attitude of "we're going to make random changes to 'simplify' our user experience. But if it makes your life a lot harder, too bad." It's not just an Apple thing.

All of that said, I don't really get what the big deal about a TYPEFACE change is. Resolutions are good enough now that legibility will be fine for just about any decent typeface. There's nothing "futuristic" about Helvetica or any other. Frankly, it's just some random change to UI that makes something look "new and improved" to differentiate from old, even if the actual changes "under the hood" are less pronounced.

Comment Re:Things once thought impossible... (Score 1) 350

Sure, but that's no reason to believe obvious fraudsters like the cold fusion guys, or supernatural hucksters,

Absolutely. I wasn't in any way implying that we should believe wackos. I'm just pointing out that a proper SCIENTIFIC attitude is to always be open to novel experiments and novel data -- but if it seems "too good to be true," it likely is.

Aristotle didn't do science, he just wrote down some stuff people said - no testing

Umm, no. Sorry. Aristotle and Aristotelean scientists did a LOT of testing and experiments. This is the grand mythology of the Scientific Revolution: that for millennia everybody just sat around making crap up. No -- they observed things in nature, and they theorized about them. They may not have had the formal scientific method we use today, but they most certainly did experiments and learned from them.

For example, Aristotle's physics works pretty well as an approximation, just as you claim about Newton. Normal things on Earth actually DO come to rest. They do NOT continue in motion forever. Most of his other observations also agree pretty well with normal everyday experience, which is why the rediscovery of Aristotle in Europe from Arab sources in the Middle Ages was so significant -- after some centuries of little scientific progress, Aristotle's treatises had a lot of useful information.

We know there are no unknown forces that have any effect at human scale, because very accurate experiments have been done to look for them. New physics can only happen at very large scale, or at very small scale, meaning very high energy, or involve new forms of matter that doesn't interact with normal matter.

Yes, that may be true. But again, that doesn't mean cold fusion is impossible. I will also point out that our current model of the universe includes a lot of "dark" stuff (the majority of the universe, apparently), which we only have theories about. I'm not trying to turn this into some sort of wacko mystical "anything can happen because of 'dark energy'" nonsense -- of course, that's unlikely to be so. All I'm saying is that we currently KNOW there's a lot of stuff out there which we haven't quite sorted out in detail yet. And maybe some sort of exotic thing can be used to start a fusion reaction under some weird circumstances... I have no idea. I realize it seems very UNLIKELY according to modern scientific understanding.

Comment Re:Article or link (Score 3, Insightful) 113

Google is blatantly trying to manipulate public opinion through journalists. They are deliberately misinterpreting the law to create an impression of draconian consequences.

Could be, I suppose.

Or this could just be a result of the massive number of requests they are dealing with. Earlier this month, they mentioned they had received about 150,000 requests in the past 5 months, dealing with roughly 500,000 links. That's roughly 1,000 requests and over 3,300 links to evaluate PER DAY.

Even if they have legal experts reviewing every case, there are bound to be a few questionable calls with such volume.
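For what it's worth, those per-day figures are just back-of-envelope division; here's a quick sketch of the arithmetic, assuming ~30-day months (so roughly 150 days in the five-month window):

```python
# Back-of-envelope check of the removal-request workload, using the
# figures quoted above: 150,000 requests covering 500,000 links
# over 5 months, assuming ~30-day months.
requests = 150_000
links = 500_000
days = 5 * 30  # ~150 days

print(requests / days)  # about 1,000 requests per day
print(links / days)     # about 3,300 links per day
```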

Comment Re:Why Cold Fusion (or something like it) Is Real (Score 1, Insightful) 350

I cannot think of any other phenomenon that eventually proved to exist that shares anything like this history of failure.

That's because such a history of failure is often written out of the history of science -- because those failures aren't generally relevant to our narrative of discoveries in science. Or the "failures" are reinterpreted within a new framework so that they are no longer viewed as "failures" but rather as experiments that demonstrated something else, or which didn't work as expected because they were measuring the wrong thing or weren't conducted under the right conditions, etc.

Just because you don't know of them doesn't mean they don't exist... and aren't actually somewhat common in the history of science.

Take one of the standard elements of the Scientific Revolution, for example: the idea that the Earth is in motion. This required a new model of physics, since Aristotelean physics taught that normal matter came to a state of rest (as observed with all terrestrial matter). The Earth could not possibly be in motion, because what would be driving its motion?

But some astronomers and physicists became convinced that the Earth must be in motion, since the arrangement of the solar system would be much simpler in that case. So they set about trying to prove it. They started searching for stellar parallax. They started doing detailed observations of the stars. They looked for abnormalities in projectile motion (i.e., Coriolis effects). These searches began in the late 1500s and 1600s.

And they didn't find any. FOR CENTURIES.

It turned out that the "fixed stars" were farther away than anyone had imagined, so parallax was a lot smaller than expected. It turned out that Coriolis effects were hard to observe given the accuracy and range of projectiles in the 17th century.

And things that were actually observed seemed to argue AGAINST the Earth being in motion, like the fact that the stars didn't get larger and smaller as the Earth revolved around the sun. Again, we now know this is because the stars are so far away, but at the time it was yet another strong argument against the Earth's motion.

Despite all these objections, a heliocentric theory became dominant by the early 18th century because the math simply was easier -- it wasn't until a century later that most of these anomalies were eventually explained (parallax and Coriolis effects actually observed, etc.).

This is one major example in the history of science, but we tend not to be taught about it this way. It ruins our story of Galileo as a lone scientist raging against idiots in the Church who failed to respond to what they saw. Except the reality is that the Church had scientists too, and they had a LOT of scientific observations that contradicted the heliocentric model (or at least couldn't differentiate between it and the geocentric or Tychonic ones). And not just the Church -- keen non-religious scientific observers often weren't sure about the matter until Newton eventually came along and put the model on a solid mathematical footing. (By the way, I'm NOT at all defending the Church's trial of Galileo here -- but the argument here is about suppression of free speech, not what Galileo could actually prove according to the science of the time.)

Anyhow, off the top of my head, I can think of at least a half dozen other major episodes in the history of science where a new idea that contradicted the current understanding of fundamental physical laws took a long time to actually be proven. But again, we tend not to talk about such episodes. We generally focus on the people who finally made the experiments that proved something, rather than the multitude of failed experiments that seemed to preserve the status quo because of various flaws in their construction.

To the topic at hand: I have no idea whether cold fusion will ever be possible. If our current understanding of physics is correct, I agree with you that it seems unlikely. But humanity also has a poor track record of thinking that we know exactly how nature works only to have our models disproven or shown to be very incomplete.

Comment Re:Things once thought impossible... (Score 3, Insightful) 350

guess this needs to be said again

"Apparently, you don't fully understand the difference between physics and engineering. Technological barriers can often be overcome with advances in materials and design. Declaring them to be insurmountable has been shown to be foolish, many times. Barriers imposed by the properties of matter, on the other hand, are much more durable. Declaring them to be insurmountable is rarely a mistake"

Hmm, yes, let's see. Nowhere in the history of science has any fundamental "property of matter" been found to be completely in error. Nope. Never.

Oh wait...

-- According to Aristotelean physics, each "element" has its fundamental natural place of rest. So the idea that matter would continue in motion forever was impossible. The idea that the Earth could possibly be in motion was ridiculous, since "earth" (the element) was heavy and came to a state of rest. Well, until Newton and Galileo and those folks came up with the idea that inertia allows things to keep moving forever, and that the entire Earth (and all matter on it) was actually in motion.
-- Phlogiston was a fundamental component of matter that made combustion possible. It was ascribed increasingly bizarre properties (including negative mass) until it was shown to be a myth.
-- Waves can't propagate without a medium -- that's a fundamental property of matter. Light therefore required luminiferous ether to travel through space... until Einstein showed it didn't.
-- Atoms are fundamental, indivisible parts of elements, an idea that had been around since the Greeks. Until the electron was discovered. But even then, electrons and the other parts of atoms were thought to be mixed together in a kind of "plum pudding" forming solids... until Rutherford showed atoms were mostly empty space, with a concentrated positive nucleus. But atoms were immutable, until things like nuclear fission showed they could be changed. And we could go on with the various problems with all the atomic models that were assumed to capture the fundamental structure of matter, but which were wrong.
-- Matter is made up of particles which are definite things which are in a particular place... until theories of uncertainty and wavelike characteristics showed that things were a lot more complicated and sometimes apparently indeterminate.
-- Etc., etc.

I could go on, but hopefully you get the point. Throughout the history of science, there have been multitudes of assumptions about the fundamental, essential, and immutable "properties of matter" which must be the case. And these theories have often been shown to be incomplete misunderstandings or sometimes utter falsehoods.

I have no idea whether cold fusion will ever be possible. I have no idea what holes or misunderstandings may still be present in our current understanding of the "properties of matter."

But I'm not so stupid as to ignore history and declare that our current understanding of the laws of physics and fundamental "properties of matter" is so utterly complete that we could declare such a thing impossible for all time.

Given our track record for thinking we've come to a final complete understanding of nature, only to realize we were completely wrong, I'd say it's a pretty egotistical perspective to say that we actually know exactly the "barriers imposed by properties of matter" to a high degree of certainty FOR ALL TIME.
