Submission + - A Taxonomy of Visualization Techniques (acm.org)

CowboyRobot writes: "The ACM's Queue magazine has a new, comprehensive taxonomy of visualization techniques, drawing on the theories of Edward Tufte and citing examples from academia, government, and the excellent NYT visualization team. The list contains 12 operations for turning data into a compelling visualization: Visualize, Filter, Sort, Derive, Select, Navigate, Coordinate, Organize, Record, Annotate, Share, and Guide. 'For developers, the taxonomy can function as a checklist of elements to consider when creating new analysis tools.' The citations alone make this an article worth bookmarking."
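
The summary suggests the taxonomy can work as a developer checklist; here's a rough sketch of that idea in Python. The twelve operation names come from the summary above, while the audit helper and the example feature set are hypothetical.

```python
# Hypothetical sketch: treating the article's 12 operations as a feature
# checklist for a new analysis tool. The operation names come from the
# summary above; the example tool and its feature set are made up.
TAXONOMY = [
    "Visualize", "Filter", "Sort", "Derive", "Select", "Navigate",
    "Coordinate", "Organize", "Record", "Annotate", "Share", "Guide",
]

def missing_operations(tool_features):
    """Return the taxonomy operations a tool does not yet support."""
    return [op for op in TAXONOMY if op not in tool_features]

# Example: a bare-bones chart widget that only plots and filters.
print(missing_operations({"Visualize", "Filter"}))
```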

Comment Re:Some questions about gene expression (Score 1) 34

I find it slightly ominous that you call data-driven research a thing of the past; I just got through a course that hailed it as a Big Deal, though the class's attitude was that it was a process for finding a hypothesis, not really testing one. Given that a full human gene-expression microarray is quite excessive for pointing fingers at only a handful of genes (those could be checked with much cheaper RT-PCR, after all), my instinct is still to suspect that they're not as organized as you or I might like them to be. Or, at least, they're prepared to fall back, and since they're grinding up such a valuable resource already, they decided to go for broke with the gene chips to make sure any negative confirmations they generate are as useful as possible.

Don't get me wrong, I think genechips are awesome tools. I've seen some really nice work on elucidating what changes at different points in the cell cycle, for example, work that really couldn't have been done with anything else. But just because it's a high-throughput tool that could be used to brute-force things doesn't mean we don't have to pay scientists to think anymore. I think genomics planted the idea in people's heads that if you collect the data first, other people will be able to make it useful later, but there you have the luxury of a definite consensus sequence for an organism. With genechips, the results you get can depend a lot on where you decide to focus your attention; I would argue that if you don't know what to look for, you just won't find it.

If these big, multi-center consortium projects had some sort of arrangement where, after they identify "interesting" things, they could go in with a more detailed study and pick the interesting things apart, that would be something. But they don't, because the people who are good at pushing high-throughput technological capabilities are usually not the people who know what a biologically interesting result might look like or where to look for one. That can be fatal if the community these projects are supposed to serve has no way to tell the ship it's headed in the wrong direction.

There's a great editorial from a personal hero of mine, Sean Eddy (of Pfam fame; we overlapped a bit when he was in St. Louis). He makes a great point about how science is evolving to completely divorce the people who have the technical knowledge of how to do an experiment from the people who care about the end results, and why this might not be such a good idea.

http://selab.janelia.org/publications/Eddy05b/Eddy05b-reprint.pdf

I was at a conference recently where basically all the bigwigs were predicting the death of data-driven research, simply due to a lack of bang for the buck in an age of very tight pursestrings and technology that outdates itself in a matter of months. And there was a consortium guy there whose passionate defense of it was, "well, it kept the lights on in my lab so I could do the other cool stuff I wanted to do. Oh, and we standardized on data formats and have a central repository; that's good, right?"

Comment Re:Some questions about gene expression (Score 1) 34

With big disorders like autism and schizophrenia, where the underlying causes are so complex that we haven't yet found them, the story generally seems to be that we throw microarrays at them just in case, not because we have substantive reason to believe a hypothesis might be sound. If these studies fail to yield anything, then that's all well and good: we know the problem is either an aberration in the networks too complex and subtle for us to detect, or we've narrowed it down to one of the other major things that can go wrong, like epigenetics, a good old wholesome mutation (which could be picked up in Mendelian fashion with linkage analysis), environmental exposure, or prions. The language you used was very certain (e.g. "prove what they want to show"), and I just wanted to emphasize that, while such thinking may be the unfortunate reality of grant-writing, determining that schizophrenia can't be detected on an mRNA-based microarray is almost as significant. (As an undergrad I have the luxury of not thinking about the miserable reality of how competitive research can be. Out of the mouths of babes, if you will.)

Hmmm, I have to disagree here, and not because of cynical grant realities per se.

Consider if I were a grant reviewer, and you're proposing to grind up valuable donated brain tissue from deceased patients (which is pretty much irreplaceable and in high demand, since not a lot of people donate their bodies to science anymore). You are proposing an experiment where each sample requires a separate $500-1,000 genechip to measure the mRNA levels (they are not generally reusable). You need samples from multiple patients to establish a profile that is representative of the disease state (and not specific to one individual). You probably have multiple tissues to choose from within the brain. You need controls to establish a profile for the healthy state. You need multiple replicates for statistical significance. That's easily in the mid five-figure range for the consumables alone for a single set of experiments, to say nothing of the cost of acquiring and processing the tissues. I'm not going to fund you if you have no hypothesis beyond "we might find something interesting with genechips"; that's what we call a very expensive fishing trip. Even if there's lots of grant money around, I will go to the next application in the pile and give the slot to anyone who can convince me they have a hypothesis worth testing (any hypothesis; it doesn't have to be exciting, it doesn't even have to be right, but it should be well thought out).
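
To make that arithmetic concrete, here's a back-of-the-envelope sketch of the consumables bill. The per-chip price is the figure above; every study-design number (patients, tissues, replicates) is an assumption for illustration.

```python
# Back-of-the-envelope consumables cost for the hypothetical study above.
# Chip price is from the comment; the study design numbers are assumptions.
chip_cost = 750          # dollars per genechip (midpoint of $500-1,000)
patients = 10            # disease cases
controls = 10            # healthy controls
tissues = 2              # brain regions sampled per subject
replicates = 2           # technical replicates per sample

chips = (patients + controls) * tissues * replicates
print(f"{chips} chips x ${chip_cost} = ${chips * chip_cost:,}")
# -> 80 chips x $750 = $60,000  (mid five figures, before tissue costs)
```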

The drug industry used to do a lot of this, what we call "hypothesis-free" research (I think they called it "data-driven" research, as in: we'll collect the data first and come up with a hypothesis later). I would argue this doesn't follow the scientific method, which lets us refine a hypothesis against a set of observations until it ends up revealing something new. A well-designed experiment will let you learn something even if the hypothesis turns out to be false, because it's systematic and tests something from multiple angles, whereas a poorly designed genechip experiment can end up telling us nothing in particular, because there's often more noise than signal.
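
One concrete way the noise swamps the signal: with tens of thousands of probes per chip, a naive per-gene significance cutoff guarantees a pile of false positives even when nothing is really there. A minimal sketch (probe count and cutoff are illustrative, not from any particular platform):

```python
# Expected false positives from naive per-gene testing on a whole-genome chip.
probes = 20_000          # genes/probes on the array (illustrative)
alpha = 0.05             # naive per-test significance cutoff

expected_false_positives = probes * alpha
print(f"Expected false positives under the null: {expected_false_positives:.0f}")
# -> 1000 "significant" genes even if nothing is differentially expressed
```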

Anyway, that's my 2 cents. You will certainly see a lot of fishing-trip research out there in the literature, especially from the period when genechips were brand new and you could get anything published just by using one. And there are still plenty of projects like that trying to recruit bioinformaticists to come in and miraculously save the day by finding something interesting in very expensive datasets that don't show anything (I was at a hospital where they spent a few million dollars running genechips on all surgery patients pre- and post-op, for example, with no particular hypothesis in mind). I suggest avoiding those projects with a 12-foot pole.

Comment Re:Some questions about gene expression (Score 1) 34

Ack! I got carried away. My original intent in replying to this thread was to plug a good book I've been reading that summarizes how all the classic experiments from the '30s through the '60s established what we know today about how mRNA works and how it is regulated. For me at least, it's much more interesting to read than a dry textbook that just presents the end results as facts handed down from on high (such textbooks also tend to go out of date rather quickly, whereas the logic behind the classic experiments will never be outdated).

http://www.amazon.com/RNA-Indispensable-Molecule-James-Darnell/dp/1936113198

Check your university's library; you ought to be able to get it through interlibrary loan at least.

Comment Re:Some questions about gene expression (Score 1) 34

Heyo -- thanks for the heads-up on Twitter. I'm the sysadmin at a small university department, and I work with scientists studying gene expression. They're good and patient people, but sometimes I feel a bit like I'm questioning the foundations of their work... which feels either rude or ignorant.

First off, I'd always been under the impression that DNA was only/mainly used during reproduction -- a cell divides under DNA direction, some bit of the cell is the machinery that makes whatever protein is needed during its life, and DNA isn't involved much after that. However, I'm starting to understand (I think...) that I've got it all wrong. My understanding now is that gene expression can basically turn on a dime, and that *this* is the usual way a cell makes a protein: something happens to a cell, it says "Whoa, I need protein X", and it starts transcribing the DNA so it can manufacture it (modulo things like gene regulation). This process can take very little time (hours or less). Have I got that right?

Second: one of the things they study is datasets of gene expression in post-mortem brains. (Well, technically I guess I've got that wrong, since genes aren't expressed post-mortem... :-) As I understand it, someone dies -- say, someone with schizophrenia -- their brain is donated to science, and at some point someone runs microarray profiling on blendered neurons. This is compared to the brains of control subjects, gene X is found to be over- or under-expressed in schizophrenic brains, and so gene X is involved somehow in schizophrenia. (This is a gross simplification, especially in the case of schizophrenia; my understanding is that these signatures cover many, many genes, they're subtle at best, and there's nothing like "a gene for schizophrenia".)

What I don't understand:

a) Since time passes between death and profiling, how much fidelity does/can this have to what was going on at the point of death?

b) Even if it is a good indication of what was going on at death, how does that relate to a long-term illness like schizophrenia when (assuming I've got this bit right) gene expression can turn on and off in a very short time? I realize there are (ahem) ethical problems with doing brain biopsies on living subjects, and that post-mortem is the best that can be done -- but how good can it be?

Many, many thanks for your time. Any questions about system administration, let me know. :-)

Hope you don't mind me hijacking this thread; I think it's a great service Samantha is providing here. I just wanted to add a few comments as someone who has sadly seen a lot of sloppy gene-chip experiments going on (but also some very nice ones).

It's really encouraging to hear that you are taking an active interest in what your scientific collaborators are trying to show. You'll be that much better equipped to help them prove what they want to show if you are roughly on the same page as them -- something a lot of scientists overlook when they delegate the technical stuff they don't know how to do themselves. You might find that the group you are working with has some graduate students or postdocs (i.e. probably whoever you have day-to-day contact with, who actually do the experiments and hand you the datasets) who would be much more available to answer your questions than the big bosses, who have to consult a calendar to even see if they have time to meet with you.

As a biophysicist (but, importantly, not a neuroscientist), I can still say that I am not aware of any consensus on what causes schizophrenia, other than that it must have both a genetic and an environmental component (i.e. having relatives with it greatly increases your risk, as do certain types of substance abuse). It is therefore absolutely a central assumption of your collaborators' research that some key component is due to long-term up- or down-regulation of expressed mRNAs. That's certainly not an established fact anywhere in the literature, although there is speculation and circumstantial evidence, and it might be a favorite hypothesis in the field. I am 100% certain that a large portion of the grant proposal that funded this research was devoted to justifying this assumption, and that it will be the first question out of the mouths of the peer reviewers of any papers that come out of it, so I can only assume that very persuasive arguments were made, since microarrays are expensive.

I can give an example of an over-simplified hypothesis that would nonetheless be a home run for your collaborators if they could prove it. Certain drugs that interfere with neurotransmitter receptors (ketamine?) can induce schizophrenia-like symptoms, so maybe one component of schizophrenia is something that systematically lowers mRNA levels of the receptors such drugs interact with, so there are fewer of them in the brain. It would have to be chronic and long-term to make a big difference to cognition (surface receptor concentrations take a long time to build up, as they are expensive to make), so such a scenario would in fact show up in the mRNA levels. Or maybe something upstream that helps promote production of a neurotransmitter (like an activator) gets down-regulated and has the same end effect. But I can also make up hypotheses that don't involve mRNAs at all. What if reduced neurotransmitter function is caused by some sort of protein misfolding due to a genetic mutation, so the receptors end up being recycled instead of sitting on the cell surface? Or what if it's an adverse reaction to something in the environment that interferes with receptor function, as opposed to merely diminishing receptor numbers? As Samantha pointed out, people measure mRNA levels mainly because the technology exists, whereas it simply doesn't for a lot of other important processes.

Off the top of my head, there are plenty of other assumptions that would have to be worked out as well, even assuming there is no significant degradation of the mRNA after death (which, as Samantha points out, requires freezing to be absolutely sure, but AFAIK you don't freeze donated organs, since mammalian cells don't handle freezing and thawing very well). Will mRNA expression levels at death have more to do with the dying process than with the underlying long-term neurological condition? Would it affect all neurons, or just a certain type from a certain part of the brain? What is the false-positive rate on the gene chips themselves? (I've read that a lot of commercial ones have errors on them.) These are all questions that a cell biologist/neuroscientist would be able to answer far better than I (i.e. the lab you are working with). But I would characterize your concerns about "questioning the foundations of their work" as totally legitimate questions, since they are relying on you to help them sort the signal from the noise -- a task that is much easier if you know what assumptions are being tested by the controls and where to look first when you get a new dataset.
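
For what it's worth, the usual first pass at sorting that signal from noise in a case/control expression dataset is a per-gene test followed by a false-discovery-rate correction. Here's a minimal sketch on synthetic data; the group sizes, gene count, effect size, and FDR threshold are all assumptions for illustration, not anything from a real study.

```python
# Minimal case/control differential-expression sketch on synthetic data,
# with Benjamini-Hochberg FDR correction. All sizes/thresholds are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genes, cases, controls = 5000, 10, 10

# Synthetic log-expression: the first 50 genes are truly up-regulated in cases.
case_data = rng.normal(0, 1, (genes, cases))
ctrl_data = rng.normal(0, 1, (genes, controls))
case_data[:50] += 1.5

_, pvals = stats.ttest_ind(case_data, ctrl_data, axis=1)

# Benjamini-Hochberg: largest k with p(k) <= (k/m) * q, at q = 0.05
order = np.argsort(pvals)
ranked = pvals[order]
threshold = 0.05 * np.arange(1, genes + 1) / genes
passing = np.nonzero(ranked <= threshold)[0]
n_hits = passing[-1] + 1 if passing.size else 0
print(f"{n_hits} genes called significant at 5% FDR")
```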

Good luck with your work!

Comment Re:Blatant agenda? (Score 1) 218

His definition requires replication with variation. So if someone found a way to suppress genetic mutation in humans, we would not be alive, right? An artificial creation also cannot be alive unless it can reproduce? Does factory production count? It seems we can shorten his definition even more if we embrace his bias:

Life is: from evolution.

I don't object to evolution, but I don't think it's correct to define life by this existing process. Or am I missing something?

There IS a blatant agenda here, and it has nothing to do with defining life.

This type of paper is what I would call "borderline scholarship". It was done by a real scientist, passed "real" peer review, and even ended up in a "real" journal (more on that in a bit). But I would estimate this sort of work took maybe one weekend in a library and 20 minutes in Excel. "Top science" this is not. It was picked up by JBSD, a washed-up journal that used to publish edgy stuff a few decades ago and has lately decided that the way to regain relevance in an age of science-by-press-release is to publish edgy-sounding papers (no matter the quality of their content), invite two dozen "expert commentaries" from actual experts in the field, issue a press release, hope it gets picked up by popular media (such as Slashdot), and then watch their citation index go up the wazoo.

It doesn't matter that most of the two dozen expert comments are basically rehashes of "why is this being published, again, and why was I asked to comment on it?" Hey, it's such an edgy paper that 20 experts "couldn't wait" to submit their comments, and if they are not familiar with the publication, they don't realize their commentary just got counted as an actual citation for the original paper and upped the citation index of the journal itself.

Oh, and make it "open access" to sound like they are so generous as to let the public in on this amazing breakthrough (actually, this is a journal that long ago stopped being able to charge anyone to subscribe to it).

As is typical of such self-serving PR exercises, I actually learned more from the criticisms than from the paper itself.

Here's what Eugene Koonin says, and he has spent a career studying how life evolved across its three deep branches (bacteria, archaea, and eukaryotes). So he's thought about this a lot more deeply than TFA.

http://www.jbsdonline.com/mc_images/category/4317/4-koonin-jbsd_29_4_2012.pdf

Yet, all its simplicity and appeal notwithstanding, the minimalist definition appears to be neither necessary nor sufficient, not even internally consistent. A simple implication of information theory (and more fundamentally, thermodynamics) is that error-free replication (more precisely, any information transmission process) is impossible (5). Hence the phrase self-reproduction with variation is actually redundant because any replication process will be characterized by some intrinsic error rate. The problem is exactly the opposite: it has been shown by Eigen and others that for stable information transfer (inheritance) down the chain of generations to be sustained, the error rate must not exceed a certain critical value known as error catastrophe or mutational meltdown threshold (6, 7). Thus, a necessary condition for life to evolve is not simply replication and not 'replication with variation' (a tautology) but replication with an error rate below the sustainability threshold.
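
For reference, the Eigen error threshold Koonin invokes can be stated compactly. This is the standard quasispecies form in my own notation, not a formula quoted from the commentary: a master sequence of length L, copied with per-site error rate (1 - q) and selective superiority sigma > 1, is stably inherited only while the error load stays below the threshold.

```latex
% Eigen's error threshold (standard quasispecies form; notation mine):
\[
  L\,(1 - q) \;<\; \ln\sigma
  \qquad\Longrightarrow\qquad
  L_{\max} \;\approx\; \frac{\ln\sigma}{1 - q}.
\]
```

In words: copying fidelity caps the sustainable genome length, which is the "sustainability threshold" in the quote above.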

And here's a snippet from the response by evolutionary biologist Richard Egel, author of "Origins of Life: The Primal Self-Organization", so yeah, he's thought a lot about this question too:
http://www.jbsdonline.com/mc_images/category/4317/8-egel-jbsd_29_4_2012.pdf

In summary, the statistical vocabulary approach of Trifonov (1) to extract a simple defining formula for the intrinsic complexity of life amounts to an enchanting exercise on the border between basic science and aphoristic poetry. I was somewhat reminded of my first visit to the United States in the mid-sixties, when a frenzy flourished among high school kids to come up with the most fanciful variation on “Happiness is ...”.

Comment Re:MD degree is to long and the school mindset may (Score 1) 238

How is it, then, that, say, in Poland you can do medical school as a 6-year integrated program, starting straight out of high school, while in the U.S. you need an undergrad degree followed by what, 5 more years? I don't think that the Polish model produces any worse doctors...

I agree completely that it could be done in 6 years, in terms of the curriculum itself, once you've isolated the right student pool (med school here is 4 years, BTW, not 5, and there is not much to do in your 4th year except apply and interview for residencies). But given a variety of competing interests in the US, it is much harder to imagine a universal shift to 6-year integrated programs succeeding. Given that even the doctors who graduate last in their class here still have an automatic ticket to earning potential in the top 1% of society, pre-med students will continue to bend to whatever admissions criteria are thrown their way.

I think the real question that differentiates the two models is whether it's easier to judge on paper if an 18-year-old high school graduate, versus a 22-year-old who has attended college, is going to make the final cut as a successful doctor.

Consider that in the US, less than 50% of medical school applicants (i.e. premeds) are accepted to any medical school at all. Combine that with the fact that somewhere between 60 and 80% of college freshmen declaring pre-med intentions change their minds before even getting to the point of applying (either due to a change of heart or to being "weeded out" by the pre-med curriculum). Medical schools have little incentive to increase their student capacity (due to vested interests such as maintaining their "elite" rankings and limiting the overall number of licensed physicians competing for jobs), so a universal shift to 6-year integrated programs would also mean having to sift through an order of magnitude more applicants with much less data to compare them by (high schools in the US being notoriously uneven in quality and, on average, well below European standards in terms of college preparedness).
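
Putting those funnel numbers together (all rates here are the rough figures quoted in this comment, not official statistics):

```python
# Rough premed attrition funnel using the figures quoted above.
freshmen = 100          # students declaring premed intentions
attrition = 0.70        # midpoint of the 60-80% who never apply
acceptance = 0.50       # "less than 50%" of applicants accepted anywhere

applicants = freshmen * (1 - attrition)
accepted = applicants * acceptance
print(f"Of {freshmen} premed freshmen: ~{applicants:.0f} apply, ~{accepted:.0f} matriculate")
# -> Of 100 premed freshmen: ~30 apply, ~15 matriculate
```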

In the US, dropping out of med school is NOT an option, due to the obscene amount of loans one needs. Conversely, medical schools here covet high graduation rates to improve their standings, so they try to do all their weeding out in the application process and then try their hardest to make sure everyone who is accepted makes it through. Within that framework, I think it's easier to judge candidates on a proven university academic track record than on just a high school diploma.

Maybe it works in Poland because the secondary education system is more uniform? I would still expect a model like the Polish one to have to compensate for students who just don't prove to be up to the task by failing a substantial portion of them out over that 6-year period. I don't think that's necessarily better or worse than the US system, but given how the system is set up here (where rankings mean everything and students often go a quarter million dollars into debt to finance their M.D.), there's little incentive for medical schools to change their requirements until society collectively decides that we need more doctors who are paid less, rather than a restricted number of super-specialists who earn stratospheric sums.

Comment Re:MD degree is to long and the school mindset may (Score 3, Interesting) 238

The MD degree is too long, and the school mindset may be too much drilled into people. Going to med school, do they really need a full 4-year BA with all the filler classes first? Why not 2-3 years and then med school? Now, I can see what sitting in a classroom for years, with lots of tests and some stuff that you will never use, can do to your mindset. Tests become more about cramming for the test than studying the full topic. Now, some of this comes from poor tests, and the other part comes from the teach-the-test idea.

Well, it wasn't always this way. Used to be, you didn't need a B.A. to enter medical school. Heck, you didn't even need to have any contact with real patients before you set up your own practice (i.e. no residency or clerkships). Medical schools used to be giant diploma mills that would take any paying student. Accreditation and board certification were a complete joke.

Then the Civil War came along, many of those doctors were drafted to help the army, and to the horror of wounded soldiers everywhere, it soon became clear that your chances of survival were often *better* if you were not treated at all than if you were operated on by one of these diploma-mill graduates with no real qualifications.

Since then, all medical schools have required a bachelor's degree.

I entirely agree that one could theoretically teach all the relevant pre-med material in 2-3 years; nothing is stopping anyone from simply finishing a B.A. a year early if they want. Most pre-meds I knew could have, too; they just chose not to because they wanted to live a little before going to med school, or to buff their resumes and get into a really good one.

And sure, you can always argue that pre-meds are being weeded out with only marginally relevant material (yes, orgo II, I'm looking at you). But, you know what? I aced that class without really understanding it, and all it took was applying a few key chemical concepts and a fair bit of rote memorization. If you can't hack that, I don't want you interpreting my MRI scan or prescribing me an immunomodulator that might or might not interact with my heart medication.

Comment Re:MD degree is to long and the school mindset may (Score 4, Interesting) 238

As a molecular biologist, I have to ask: how would that matter? The MDs who see patients don't really need to be thinking about ATPases or the Michaelis-Menten equation. The MDs who are taking basic research and putting it into the field seem to be getting PhDs, which can't be easily faked. And the regular PhDs are, in theory, doing the really basic research that requires knowledge of molecular bio; we don't go to med school or see patients.

Having gotten my Ph.D. in the basic-research wing of a major medical school, I can concur that M.D.s typically have only a vague understanding of mechanistic biochemistry, and that the Ph.D.s designing future treatments have only a vague understanding of human physiology. Exactly how is this a satisfactory state of affairs?

If you were ill with some condition that presented in an unusual way (say, a borderline metabolic deficiency), would you prefer your M.D. to be able to figure out on their own what's wrong with you, or to blindly follow diagnostic recipes memorized from the New England Journal of Medicine?

The only reason I can see for wanting a premed student to take molecular biology is to add another level of selection to deter the weakest students from becoming doctors.
 

You are aware that intro molecular biology is now taught in the second year of any standard biology major, or sometimes combined with biochemistry in the third year? My wife is an ecologist, and she took it. Pre-vets take it. Nurses take it in nursing school. Heck, my dentist took advanced biochemistry as well. So why are you against pre-meds taking it? Do you think a doctor doesn't need to be as capable as a nurse, vet, or dentist? It's not exactly quantum physics, and it's extremely useful, since you may only get the abbreviated "molecular medicine" crash course in med school, where they assume you already took the real thing as a premed.

Interestingly, I've heard that the major that scores highest on average on the MCAT is not premed, biology, or chemistry: philosophy majors do the best. Granted, there's a lot of self-selection going on there (they probably make up at most 1% of MCAT takers), and the MCAT is not necessarily an indicator of who will be a good doctor.

You can see a list of the topics covered on the MCAT below, which includes (surprise!) molecular/cell biology and biochemistry. Unless the philosophy majors are cheating, they must have at least self-studied the material to score so highly, though more likely than not they took a course or two. I'm really puzzled about what you are trying to prove here.

https://www.aamc.org/students/download/85566/data/bstopics.pdf

Comment Re:So, treating 4000 people (Score 1) 264

You know what's hilarious? We spend $700 billion/year on the "defense" budget vs. $30 billion/year on the NIH. I find that hilarious. All these stupid diseases could be cured in 15 years if we reversed those numbers.

Agreed. If it's any consolation, the NIH itself is the only government agency I can think of that is uniformly filled with the most frikkin' brilliant researchers in the field and then some, better than you'll find in even the most highly compensated strata of the private sector. NASA is just a shell of its former self, the DOE is a cold-war dinosaur, and the great industrial blue-sky labs are all gone or completely unrecognizable (Bell Labs, Kodak, GE, etc.). Meanwhile, public universities are cutting "frills" like entire humanities departments due to budget cuts, while private universities are slowly morphing into elite boarding schools catering exclusively to the ultra-wealthy. Sure, they do a lot of good research too, but only if someone else pays the bills, like the NIH or NSF or a frikkin' charity -- yet they still want YOU to donate to them because they argue they are almost like a charity. I mean, how are the 1%'s progeny supposed to *study* if they don't have 24-hour gourmet cooking and multimillion-dollar fitness centers like they did on the Upper East Side?

The pharma industry has all but admitted that its entire economic model is broken: there will be no more blockbusters to make up for their research misses, and they aren't agile enough to do the risky legwork to find the new drug candidates that require yet-undeveloped technology even to identify. Meanwhile, small startups basically stand no chance against the big guys unless they plan on being acquired first, at which point whatever risk-taking culture they had cultivated becomes superfluous. So if you hear one day that the NIH has become a dismal, depressing place to work, full of do-nothings waiting to collect their federal pensions, then we're all pretty screwed; it's the only part of the biomedical R&D ecosystem that is working the way it should.

Comment Re:So, treating 4000 people (Score 3, Informative) 264

We spend ~$30 billion a year on research in the U.S. through the NIH, so a partial solution is already in place.

The other thing to keep in mind is that this drug is only highly priced for the next 20 years. After that, generic versions will be cheap, so future patients will benefit hugely. That's the beauty of the patent system: it hasn't been outrageously extended to hell like the copyright system has.

It's worth pointing out that part of the calculus that goes into pricing a drug is that drugs rarely enjoy all 20 years of patent protection, because the invention of the drug usually occurs in the R&D phase, which predates the clinical trials, approvals, and manufacturing scale-up. The average effective patent life (i.e. the period during which a drug is actually for sale) is 7 to 12 years, so prices are tweaked to compensate. The flip side is that this really discourages treatments for diseases that affect very small portions of the population, since you cannot count on recouping costs over long periods to compensate for the small patient pool. This is partially addressed by the Orphan Drug Act, but more often than not this is where charities funding disease-specific research play a crucial role.
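
A toy illustration of why the effective window drives the price, especially for small patient pools. The 7-to-12-year range is from this comment; the development cost and patient-pool sizes are invented for illustration.

```python
# Toy model: annual per-patient revenue needed to recoup a fixed R&D cost
# before generics arrive. Effective-patent-life range is from the comment;
# the cost and patient numbers are made-up illustrations.
rd_cost = 1_000_000_000        # assumed sunk development cost, dollars

for effective_years in (7, 12):
    for patients_per_year in (1_000_000, 4_000):   # common vs. rare disease
        per_patient = rd_cost / (effective_years * patients_per_year)
        print(f"{effective_years}y window, {patients_per_year:>9,} patients/yr "
              f"-> ${per_patient:,.0f} per patient per year")
# A 4,000-patient pool pushes the break-even price into five figures per
# patient per year -- the orphan-disease problem described above.
```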

Comment Re:Cell MEMBRANE (Score 5, Funny) 264

Good lord, we are animals, not plants. There is no such thing as a "cell wall" in our cells! Call it what it is: the cell membrane.

Pedantic? Yes, but the definitions are precise and are intended to be used precisely. Journalism like this makes me want to gouge my eyes out; a single high-school biology class teaches cell wall vs. cell membrane!

I'm a scientist. I'll handle this!

By the power vested in me by science, I hereby retroactively flunk the original submitter's high school biology grades, and also raise the grade of the bookish, socially awkward lab partner they conned into doing all their work. The sentence is to correct 10 obnoxiously, factually incorrect Slashdot comments without invoking any of the following: Godwin's Law, correlation vs. causation, Ron Paul, or conspiracy theories of any kind. Oh, and just for good measure, Rule 34.

Until then, your Slashdot submitting privileges are subject to double-secret probation.

Comment Re:Where Does the Money Actually Go Though? (Score 3, Informative) 214

Well, I have an issue with this. From the article:

While that will give an immediate boost, more is needed from governments, which have provided the bulk of the $22.6 billion that has been raised by the Geneva-based organization to date for its work in 150 countries.

The commitment of governments was shaken last year when the fund reported "grave misuse of funds" in four recipient nations, prompting some donors such as Germany and Sweden to freeze their donations.

Why do countries pay into this foundation that invests primarily in American funds and stocks? Why do they not set up their own charities that invest in their own stocks or -- better yet -- give the money directly to institutions of medical research?

This perplexes me to no end. This foundation is at the mercy of the stock market and relies on money managers to post returns every year so that it can give those returns to the targeted countries and research -- right up until a crisis causes those funds to greatly shrink.

I have complained about this before and been called "full of bullshit", and I guess this is just one thing on which my opinion and concern diverge from the rest of the readers here. This is charity in the form of keeping the capital inside America's borders and shaving off returns. The money stays at work in America, and no such stock or company or infrastructure is built up in the countries that could truly use it and truly need it.

When you're talking billions of dollars, you're talking enough money to start internal institutions and programs that could create jobs or better education as well as do medical research. Instead, this money stays in the coffers of rich Western companies, and even after the returns are "given" to the countries, they are given in the form of purchased medicines, often made by American companies. And that strategy of deciding where your donation gets spent doesn't always work out like you would expect.

It's great that he donates all that money, but that method is never going to change anything. The real winners here are the companies that get huge cash infusions from the foundation in the form of investment (like Monsanto) and Big Pharma, which gets the revenue from all the AIDS medicine that is bought and shipped. Exactly why are foreign governments investing in the Bill and Melinda Gates Foundation instead of finding a better solution?

Bring on the "don't look a gift horse in the mouth" posts. They may be right, but there has to be a better way to use this money to accomplish these goals. It's almost designed to be a perpetual medicine-exporting machine.

You are mixing up two things here. There's the Bill and Melinda Gates Foundation, and there's the Global AIDS Fund.

Bill Gates just donated money to the latter, which depends on donations from individual countries, is run out of Geneva (not by the Gates Foundation), and has been criticized for being poorly managed.

The Gates Foundation invested in Monsanto, which is what the link you provided is about, not the Global AIDS Fund. I'm not aware of foreign countries investing in the Gates Foundation.

As unsavory as it might be for charities to invest donated money, the purpose here is long-term viability. The purpose of the Gates Foundation is to fund things that might not show tangible results for decades, which traditional, government-directed research and public health funds cannot address. That kind of planning is pointless if you can't guarantee the fund will be able to sustain grants on a decade timescale, which is simply not possible without some sort of long-term financial investing. It would be nice if the investing were also done in a way that benefits the third world, but keep in mind: do we really want a charity trying to meet financial profit targets (set by its grant commitments) through investments in the poorest countries in the world? Probably better to keep development projects and financial investments as separate as possible, to avoid any potential conflict of interest.

Analogously, if you wanted to set up a scholarship at your alma mater to defray some deserving student's tuition by $10k, you could donate $100k and have no more scholarships after 10 years. What the school would most likely do instead is invest that $100k in a separate part of its endowment, buy Monsanto stock (or Google or Walmart or whatever gets a good rate of return), and then, assuming a 10% return on investment, give out a $10k scholarship in perpetuity.
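
The endowment arithmetic in miniature (the 10% return is the optimistic assumption above):

```python
# Spend-down vs. endowment for a $100k scholarship gift, per the example above.
principal, rate, payout = 100_000, 0.10, 10_000

# Option 1: spend the principal directly -> exactly 10 scholarships.
print(f"Spend-down: {principal // payout} years of scholarships")

# Option 2: endow it. The payout equals the annual return, so the principal
# is never touched and the scholarship runs in perpetuity.
annual_return = principal * rate
print(f"Endowment: ${annual_return:,.0f}/yr forever (payout == return)")
```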

Now, the subtlety here is that it's easy for charities, especially long after the death of their original benefactor, to become more obsessed with retaining and growing wealth than with spending it on their supposed beneficiaries. This is why charities often have to be regulated to force them to spend a certain percentage of their proceeds every year, instead of just re-investing everything, to keep their charity status.

Finally, a word on why we need the Gates Foundation. I have colleagues (in the US) who research things like malaria and dengue fever, who say it was nearly impossible to get funding because these are not pressing health issues in the developed world, whereas the countries where they are pressing issues are too poor to fund the research themselves. The foundation has been a total game changer, greatly increasing the number of top researchers who can pursue the health concerns of the third world, through grants given directly to medical research institutions. I can't remember the last time I saw an HIV/AIDS/malaria/TB talk that didn't acknowledge partial funding of at least one of the authors by the Gates Foundation. So, from where I'm sitting as a biomedical researcher, it IS working -- because governments will only allocate research funds to things that affect their own populations, and in this case the affected countries have no research funds to speak of.

Comment Re:Yes - sounds like "grant time" (Score 1) 285

Is there some environment where sinkers get more nutrients and floaters get eaten or killed?

This is Saccharomyces cerevisiae, the yeast used to make beer. Brewers have been selecting for flocculent yeast since long before scientists started playing with them. The fact that this isn't mentioned once in the article invalidates the entire thing for me. This is not wild yeast learning a new trait; it's a well-known trait being selected for. When I was brewing, I spent many hours watching yeast colonies, which vary wildly from strain to strain. Personally, I prefer the clearer taste that comes from flocculent yeast.

You, sir, are hereby promoted to "King of the Lab". I had this nagging feeling it would be something like this, but I missed the connection completely!
