
Comment: Re:no, condoms are for people who don't want to ge (Score 1) 330

by Joe Torres (#44539519) Attached to: The Science of 12-Step Programs
I forgot the reference (https://en.wikipedia.org/wiki/Comparison_of_birth_control_methods#Comparison_table), made a bad joke (sorry about that - I forget that my jokes are not as funny as I would like to think they are), and apparently didn't get my point across clearly.

This topic is pretty dead, but I'll try to clarify some last points:

The birth control example was meant to show that scientists and doctors need to consider the actual statistical outcomes of different treatments and not the ideal outcomes. The treatment option in that set with the best "real" outcome also had the worst ideal outcome, but for modern medicine the "real" results are what matter.

You never provided a reference for the study you first brought up, so arguing about it isn't really going to be productive. My guess was that the study you were mentioning would have included control groups where they informed similar sets of people about AA, a control non-AA program, an unrelated control program (e.g. a writing club), or no program at all. If this were the case, then any significant differences should be due to the program they were informed about (the groups should all have similar rates of people not showing up at all - the 30% you mentioned). The study you mentioned could have been a flawed study that overstated its conclusions, or it could have been a perfect study, but you seem to be the only person in this discussion who has knowledge of it.

Without a well controlled (demographic or experimental) scientific study, it should not be assumed that AA is the best course of treatment for a patient.

Modern medicine should not give up when a patient is non-compliant just because it is unethical to force them into treatment. Improved treatments, and better ways of informing patients about them (in order to get their consent or compliance), are needed when problems such as this arise.

The AC probably meant that if you only count the successes, then your success rate will be 100%. If you exclude a set of data from one condition, then you have to exclude it from the controls as well (so you can properly compare them). If all data is included when comparing different conditions, then other variables that are independent of the conditions should fall to background noise (e.g. patients dying in a plane crash or not showing up to any meeting should be the same for both conditions and should not affect the end conclusion).
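As a toy illustration of why exclusions have to be applied symmetrically, here is a minimal sketch. All the numbers are made up purely for illustration: 30% of each arm never shows up, attendees in the treatment arm succeed 50% of the time, and everyone else succeeds at a 40% baseline rate.

```python
# Hypothetical rates, chosen only to illustrate the point.
show_up = 0.7            # fraction of each arm that actually attends
p_treated_attendee = 0.5 # success rate for attendees in the treatment arm
p_baseline = 0.4         # success rate for everyone else

# Intention-to-treat: keep everyone who was assigned to each arm,
# including the no-shows.
itt_treatment = show_up * p_treated_attendee + (1 - show_up) * p_baseline
itt_control = p_baseline

# Asymmetric exclusion: drop the no-shows from the treatment arm
# only, while the control arm keeps its no-shows.
biased_treatment = p_treated_attendee
biased_control = p_baseline

print(itt_treatment - itt_control)        # fair estimate of the gap
print(biased_treatment - biased_control)  # inflated gap
```

The no-shows are the same in both arms, so dropping them from only one arm makes the treatment look better than it is; dropping them from both (or from neither) leaves the comparison fair.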

If you are unable to force a patient to go to a program, then the beginning of the treatment starts when you inform them and encourage them to go. A large problem with treatments starting this way is that many patients will not listen. In this case the treatment failed for the patients that did not listen.

Comment: Re:huh? nothing works then. cars don't work, TV do (Score 1) 330

by Joe Torres (#44538871) Attached to: The Science of 12-Step Programs

Condoms don't work then, people who don't use them get pregnant.

I'll use this example, since it is easier. Methods of birth control with failure rates for typical use or perfect use: Condoms (typical - 15%; perfect - 2%), "Pulling out" (typical - 18%; perfect - 4%), Plan B - levonorgestrel (typical - 12.5%; perfect - 12.5%)

Ignore anything you think you may know about birth control and any other confounding variables for the sake of argument: If you could only inform a patient about one of these methods, which would it be?

.

.

.

Answer: More people become doctors after attending medical school when compared to those who attend law school. Also acceptable: Plan B - levonorgestrel
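The same point can be made with a throwaway sketch over the rates quoted above (these are the numbers from this thread, not authoritative figures):

```python
# Failure rates (percent per year) as quoted in this thread.
# "typical" is what actually happens; "perfect" is the idealized best case.
methods = {
    "condoms": {"typical": 15.0, "perfect": 2.0},
    "withdrawal": {"typical": 18.0, "perfect": 4.0},
    "levonorgestrel (Plan B)": {"typical": 12.5, "perfect": 12.5},
}

best_real = min(methods, key=lambda m: methods[m]["typical"])
best_ideal = min(methods, key=lambda m: methods[m]["perfect"])

print(best_real)   # the best "real" (typical-use) outcome
print(best_ideal)  # the best ideal (perfect-use) outcome
```

With these numbers, the method with the best real-world outcome is the one with the worst ideal outcome, which is exactly why treatment recommendations should be based on typical-use statistics.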

Comment: Re:a million studies say it does (Score 1) 330

by Joe Torres (#44538655) Attached to: The Science of 12-Step Programs

FYI wiki is not a source, it's some random person's posting on the internet, just like this is.

There is a reference in the Wikipedia article, but I did not use that reference because I did not read that paper (behind a pay-wall) and I did not want to pretend like I did.

I myself used to live under tarp behind the Target store.

I am sincerely glad that you got through that and I am glad you are helping others (you should give yourself more credit for this - it is a great thing to help others).

Each of our case studies shows that doing the AA system changes lives in a radical, amazing way.

Anecdotal evidence is not enough for modern medicine.

30% of those people didn't even go into the meeting to find out about AA ... The reasonable conclusion is "telling someone to find out about AA doesn't work. Actually doing the AA program does work."

Patient compliance is a problem with many treatments, and if it significantly decreases the impact of one treatment versus another, then that has to be included (improperly excluding data points introduces bias into the analysis).

Side Note: There are research groups working on using nanoparticles (that can be ingested) to deliver vaccines to the colon, instead of direct intracolorectal (up the ass) delivery, because of worries about patient compliance (I read this one: http://www.ncbi.nlm.nih.gov/pubmed/22797811).

Comment: Re:Gotta have a plan (Score 2) 330

by Joe Torres (#44537813) Attached to: The Science of 12-Step Programs

and for some things, its very hard to set up an ethical and moral controlled scientific study ... actually achieving a clean methodology and such to study things that screw with people's lives is quite difficult.

There are well controlled studies for various diseases that are much more fatal than alcoholism. Yes, they are difficult and require hard work, but modern medicine would never have gotten this far without studies comparing different treatments (either in addition to or in place of existing ones).

In a case like this the best you can do is try to study people who have already elected for various treatments. And the 'anonymous' part of AA (and various other programs) just complicates it all.

I don't know if it is the best, but it is a great place to start. You would need a pretty large sample size to help minimize other variables, but large differences in outcomes should be apparent. Collecting simple outcome data (any demographics would be a plus) with random ID numbers should be possible.
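For a sense of what "pretty large sample size" means, here is a back-of-the-envelope power calculation using the usual normal approximation for comparing two proportions. The success rates are hypothetical, picked only to show how the required sample size blows up as the difference between outcomes shrinks:

```python
from math import ceil

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm to detect p1 vs. p2
    (two-sided alpha = 0.05, power = 0.80, normal approximation)."""
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2)

print(n_per_group(0.30, 0.40))  # large difference -> a few hundred per arm
print(n_per_group(0.30, 0.32))  # small difference -> thousands per arm
```

Large differences in outcomes show up with a few hundred people per group; subtle ones need orders of magnitude more, which is why only big effects would be apparent in a modestly sized observational study.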

"Unequivocally demonstrated" is a difficult bar to meet when its not actually legal to set up a properly controlled experiment.

Clearly demonstrating that one condition is equal to or significantly different from another may be difficult, but it should be expected when the results will directly impact the treatment of future patients. I do not know if it would be legal to have a "no treatment" group, but that is not the only kind of control group that can be used to try to answer the question.

Thanks for the reply.

I do completely agree that science is not easy (especially for scientists doing psychology research).

Comment: Re:Gotta have a plan (Score 2) 330

by Joe Torres (#44535257) Attached to: The Science of 12-Step Programs
People suspect that many things work and sometimes they are wrong.

"No experimental studies unequivocally demonstrated the effectiveness of AA" in treating alcoholism. (https://en.wikipedia.org/wiki/Effectiveness_of_Alcoholics_Anonymous#Clinical_studies)

Well controlled scientific studies are great at answering these questions.

Comment: Not Antibodies (Score 5, Informative) 149

by Joe Torres (#42437105) Attached to: Panda Blood May Hold Potent Assailant Against Superbugs

Cathelicidin-AM is an antimicrobial peptide, not an antibody.

I just skimmed the paper (abstract: http://www.ncbi.nlm.nih.gov/pubmed/22101189), but it seems that the group was the first to find out that pandas produce this type of antimicrobial peptide (they are produced by other mammals and it seems that the sequence is similar to that of dogs). The peptide seems to be effective against multiple types of bacteria (Gram positive and Gram negative) and a couple strains of fungi. The researchers only tested the peptide in vitro, so it probably isn't known if purified peptide will be effective in vivo (they reported that it showed little lysis of human red blood cells though).

TL;DR: Don't pressure your doctor into giving you panda blood when you get sick.

Comment: Re:Any immunologists about? (Score 3, Informative) 50

by Joe Torres (#42029669) Attached to: Nanoparticles Stop Multiple Sclerosis In Mice

I only glanced through the paper and I have a fellowship application to finish, so I'll be quick with this response.

The process the researchers are trying to take advantage of is immune tolerance (https://en.wikipedia.org/wiki/Immune_tolerance). The authors state that the decrease in symptoms is partially due to the activity of regulatory T cells (https://en.wikipedia.org/wiki/Regulatory_T_cell). Regulatory T cells are a type of T cell that inhibits the immune response to certain types of antigens (foreign things that aren't harmful, or parts of yourself that your immune system shouldn't have responded to in the first place).

Viruses and bacteria (as well as cancer) can and do take advantage of immune tolerance (I'm not sure about this specific mechanism) in an attempt to avoid immune destruction, and this is thought to be a possible mechanism for the induction of autoimmune disease.

Comment: Surprising (Score 2) 73

by Joe Torres (#41637567) Attached to: Rejected Papers Get More Citations When Eventually Published
Something that surprised me was that "75% of all published papers appear in the journal to which they are first submitted."

I would be very interested in seeing how this rate differs between junior faculty and senior faculty. With my limited sample size (and the personal bias that comes with it), it has seemed that this number would be much lower for junior faculty. Possibly, junior faculty are too eager to swing for the fences (Science and Nature) and miss (going down the ranks to PLOS ONE), while senior faculty already have favorite field-specific journals (where they may know the editors) in which their submissions will likely be accepted with revisions.

Comment: Re:ReadCube Cost (Score 2) 74

by Joe Torres (#41591591) Attached to: Start-Up Wants To Open Up Science Journals and Eliminate Paywalls
Easier said than done. Keep in mind that research articles do not only have one author (at least I haven't seen any recent ones in my field). Assistant professors, graduate students, post-docs, and even tenured professors (with the funding situation these days) do not always have the luxury (guaranteed funding and job opportunities/security) to choose to publish in a lower impact open-access journal even if they preferred to.

Personally, I try to encourage others to favor open-access journals, and I sometimes make articles available to others who don't have access (other scientists and even non-scientists who are simply interested in primary research). That being said, I think going RMS is a little too extreme at the moment. Thankfully, the quality of open-access journals is improving, power is slipping from the non-free publishers, and this is something they can't stop.

Comment: ReadCube Cost (Score 1) 74

by Joe Torres (#41591081) Attached to: Start-Up Wants To Open Up Science Journals and Eliminate Paywalls
"The library is charged under $6 for articles researchers decide to rent for a limited time and $11 or less (depending on the publication) for articles they buy. Researchers cannot yet print out the articles, and much like with iTunes, they cannot share the content with colleagues."

It is sad that renting articles and not being able to share them with colleagues/students almost seems like a deal compared to the current system. It is sickening to me that the publishing system gets in the way of scientific progress and selectively holds back faculty and students from smaller universities that can't afford access to high-impact journals.

Comment: Re:The numbers (Score 2) 123

by Joe Torres (#41529537) Attached to: Misconduct, Not Error, Is the Main Cause of Scientific Retractions
The first figure of the PNAS paper shows that less than 0.01% (maybe 0.008%) of all published papers are retracted for fraud or suspected fraud, a rate that has been increasing since 1975 (when it was maybe around 0.001%). The authors state that the current number probably under-reports the problem, since not all fraud is detected and retracted. It is possible that the 1975 numbers are less representative, since fraud might have been harder to detect back then (at least for duplicate publication and plagiarism).

Comment: From the Study's Abstract (Score 3, Interesting) 114

by Joe Torres (#41322639) Attached to: Scientists Themselves Play Large Role In Bad Reporting
They define spin as "specific reporting strategies, intentional or unintentional, emphasizing the beneficial effect of the experimental treatment." They also mention: "We considered 'spin' as being a focus on statistically significant results ... an interpretation of statistically nonsignificant results for the primary outcomes as showing treatment equivalence or comparable effectiveness; or any inadequate claim of safety or emphasis of the beneficial effect of the treatment." (emphasis added)

I understand the last two, but the first point doesn't make any sense at all. You can't really draw conclusions (you can, but scientists will not believe them) from statistically insignificant results.

"Spin" can be good in some cases (maybe not at all in clinical research): a research group that studies DNA repair might state, "Our findings on the function of the yeast homolog of SLHDT in dsDNA break recognition may represent a novel target for cancer therapeutics." In this case, the research group doesn't study cancer at all and has no business (based on its results) mentioning it, but the statement might convince a cancer researcher to read the paper and possibly look into doing a quick/cheap experiment targeting SLHDT to test the claim.

Comment: Re:Ratios (Score 1) 74

by Joe Torres (#41001775) Attached to: Independent Labs To Verify High-Profile Research Papers
First, I'd like to clarify what I meant to say when I said risky. I think "unprecedented" would have been a better word.

Peer review does a pretty good job (depending on the journal) of making sure a paper is internally consistent and, as long as the data isn't faked, valid enough to base future hypotheses on. That being said, many papers overstate their findings and make conclusions in their discussion section (where it is perfectly fine to put this stuff) that aren't entirely supported by their data. Scientists are expected to evaluate results critically and often don't agree with the conclusions of papers, but the results (limited to the experimental system) are reliable for the most part. I would assume that most of the "landmark" papers fit this description (results are reliable, but the conclusions could be crap).

As for the slowing down scientific progress part: I think that if the standard of what is acceptable to publish (for disease-focused research) is that it has to work in human patients, then progress will be slowed down. I could be wrong, but here is how I read the study's conclusion: A "landmark" paper is published that identifies Compound X, which inhibits a certain signalling pathway in a particular type of tumor (derived from a human cancer cell line) and prevents an inbred strain of mice from dying (within a certain time frame) after a certain amount of the cancer cells is injected in a particular place. The authors then conclude that the compound cures cancer. Compound X is then used in a clinical trial involving multiple human patients with tumors made up of a heterogeneous cell population (with a unique tumor micro-environment) and is found to not significantly alter the disease outcome (which could be tumor size rather than survival). Compound X is considered a failure and the "landmark" paper is considered crap.

Comment: Re:Statistical confirmation (Score 1) 74

by Joe Torres (#40999895) Attached to: Independent Labs To Verify High-Profile Research Papers
Wakefield's study wouldn't have been fixed with independent statistical analysis because the results were faked. I do agree that many scientists could use some help with statistics and it would probably be a good idea if certain journals had a statistician on staff that could re-analyze raw data as a part of the review process.
