
Comment Re:"Just" 40 lightyears away? (Score 1) 69

For all intensive porpoises, 40 light years or 40,000,000 light years, it's all the same. It is unreachable.

40 light years is perfectly "reachable" by a civilization that wants to get there. You just have to give up on the idea that it's reachable by you or me personally.

At a speed of .01c (difficult but probably achievable), it's a mere 4000 years away. Earth has had life for 3.5 billion years, and has had some version of "homo sapiens" for 300,000 years. With luck, the Earth may be able to support life for another 500 million years, maybe longer. There's plenty of time to putter back and forth to Trappist-1 multiple times.
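To make the arithmetic concrete, here's a quick back-of-the-envelope sketch in Python (same numbers as above; it ignores acceleration time and relativistic effects, which are negligible at .01c anyway):

```python
# Rough travel time at a constant cruise speed, given as a fraction of c.
def travel_time_years(distance_ly, speed_c):
    """Years to cover distance_ly light-years at speed_c (fraction of c)."""
    return distance_ly / speed_c

print(travel_time_years(40, 0.01))          # Trappist-1: 4000 years
print(travel_time_years(40_000_000, 0.01))  # 4 billion years -- that one really is out of reach
```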

Comment Re:No, we aren't (was Re:we're already doing this) (Score 1) 90

the very reason L-glucose passes through us untouched is the same reason a mirror bacterium would likely starve in a right-handed biosphere. Leave the biology to the scientists, and the click-bait distortions to the mainstream press, okay?

As multiple posters have pointed out... there are many varieties of bacteria that can grow using exclusively non-chiral molecules (and/or photosynthesis) as a food source. They don't need to eat amino acids; they can synthesize their own.

You're correct, of course, in saying that there is a big difference between making L-glucose and making an entire mirror organism. But there would be a real, end-of-the-world danger here if they were to succeed.

Comment Re:Not Needed: Good Journals Known (Score 1) 74

Impact factor generally is a credibility factor, or at least I do not know of any low-credibility journal in my field with a high impact factor, although there are some specialist journals - e.g. instrumentation - which are highly credible but have a low impact factor. Generally speaking, though, anyone in the field worth their salt will know which the good journals are, and where a paper is published generally does have a large impact on how we regard its quality.

Impact factor seems like more of a measure of "this is important and consequential", rather than "this is free of fraud". Anyway, there have been multiple instances of fraudulent papers coming out in high-prestige journals with extremely high impact factors (Nature, for one).

I do not see a good way for a "credibility factor" to be calculated in an objective manner that would not have significant negative repercussions, e.g. counting the number of retractions would be bad since it would encourage journals never to retract papers.

Right, that's why retractions shouldn't count against you. If anything they should boost your score (if done in a timely/responsible fashion).

I don't know exactly how to "measure" fraudulent research or what criteria should be used-- but obviously the PNAS authors figured out a way, or they wouldn't have been able to do a "statistical analysis" of the problem.

Similarly, even the best institutes can hire rogue researchers - or more commonly have bad grad students or postdocs - and encouraging journals to accept anything from any researcher at a "respected" institute to boost their credibility would be bad too. Also, papers in many fields cannot and do not have a single "primary" author.

The credibility score of the institute would be calculated separately from the credibility score of the journal and of the researcher. That's why I suggested multiple scores. In other words, it wouldn't automatically boost the journal's score to publish results from a high-credibility institution-- except indirectly, by reducing the probability that they are publishing a fraudulent paper.

Look, there are all sorts of fine points to debate when it comes to exactly how to calculate scores. But that doesn't mean it's a bad idea.

Comment Re:Time to close the doors? (Score 3, Insightful) 74

No. The *correct* way to fix this is to resolve the root cause: How funding is awarded.

This is a big problem and it's going to take a multi-pronged approach to fix it. Your suggestion is good, but the OP's suggestion of an "accrediting agency" is also quite good (and somewhat easier to implement than yours).

Journals are already "ranked" according to their "impact factor", which is a number calculated based on how often their articles are cited by other articles; it would make sense to also calculate a "credibility factor", based on the number of (known or suspected) instances of fraud. Ideally, you would want to calculate three different credibility factors for each article: one based on the journal, one based on the institution they're from, and one based on the primary author. (Maybe add a fourth based on the secondary authors.)
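As a rough sketch of what I mean-- the numbers, weights, and the fraud-counting rule here are placeholders I made up, not a worked-out proposal:

```python
# Toy credibility score: fraud hurts, but timely/responsible retractions
# don't count against you (if anything, they resolve the fraud).
# All figures below are hypothetical.

def credibility(papers_published, confirmed_fraud, timely_retractions):
    """Crude score in [0, 1] based on unresolved instances of fraud."""
    if papers_published == 0:
        return 1.0
    unresolved = max(confirmed_fraud - timely_retractions, 0)
    return 1.0 - unresolved / papers_published

# Computed separately, as suggested above-- never collapsed into one number.
journal_score     = credibility(papers_published=5000, confirmed_fraud=12, timely_retractions=10)
institution_score = credibility(papers_published=800,  confirmed_fraud=1,  timely_retractions=1)
author_score      = credibility(papers_published=40,   confirmed_fraud=0,  timely_retractions=0)

print(journal_score, institution_score, author_score)
```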

The beauty of that suggestion is that it wouldn't cost a fortune to implement-- you could set up a nonprofit agency to do it with only modest funding. Scientists would be falling all over themselves to work for that agency. Some of them would probably volunteer their time for free.

Of course, it would be nice to fix scientific funding as well, but that's going to take much more money and time (it ain't happening under #47).

A *third* potential strategy would be to start imposing criminal penalties-- both on the individual scientists and on their institutions-- for instances of outright fraud.

Comment "Real and imminent threat" (Score 1) 186

In my opinion, the researchers are preoccupied with the wrong question. "Awareness" of climate change is, at the end of the day, very nearly useless as a tool for actually preventing climate change. "Awareness" is the left-wing equivalent of "thoughts and prayers".

There's a whole list of reasons why human beings allow climate change to proceed despite the fact that we ought to know better. One of them is the fact that the threat is not, in fact, "imminent"; it's insidious and long-term. (Maybe you can name one way in which climate change has materially affected your quality of life, or will affect it in the next five years-- but in many cases, it's quite likely that you can't). Humans don't do well at assessing risks that are insidious and long-term. A second reason is that most people are very poorly educated about what steps they can take that will significantly affect climate change. (For example, the impact of their diet on climate change rarely occurs to most people). A third reason is simple selfishness and the primacy of self-interest in making economic decisions-- the "tragedy of the commons". I could go on.

Comment Re:Is Anyone Deluded About This? (Score 2) 36

Pubmed (not "pub.med") doesn't require a subscription. You can access it free from any web browser.

The problem is that Pubmed only gives you the abstract, not the actual paper. It'll provide a *link* to the paper, but most of the papers are paywalled, and accessing the paper is stupidly expensive-- like $35-$45 per article, or thousands of dollars per year if you want a subscription to the journal itself.
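You can see the abstract-only part for yourself via NCBI's public E-utilities interface. A minimal sketch (the PMID here is an arbitrary placeholder, and I'm assuming the standard efetch parameters):

```python
# Fetch a PubMed abstract for free via NCBI E-utilities. Note that this
# gets you the abstract only; the full text is a link to the publisher,
# which is usually paywalled.
import urllib.request

PMID = "12345678"  # placeholder ID
url = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
    f"?db=pubmed&id={PMID}&rettype=abstract&retmode=text"
)
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8"))
```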

Also, addressing the OP's point... I have no idea why in particular #47's administration is advancing this policy (it wouldn't surprise me if they had some unsavory reason), but "getting back at NIH" would not be a motive. Do you seriously think the NIH makes money off journal subscriptions? The only ones profiting are a handful of giant, useless scientific publishing companies.

Scientists (and those interested in science) have been complaining for years about the state of scientific publishing and asking for free access to taxpayer-funded research-- there have been multiple Slashdot articles on this topic.

Curiously, though, only 10% of the peer-reviewed articles on Pubmed seem to be funded by NIH. I just pulled that result from Google so maybe it's wrong (I'm too tired at the moment to attempt fact-checking).

Comment Re:Here in Illinois... (Score 1) 66

Pretty sure that's just for dispensing medicine or doing actual surgery.

Yes, admittedly, that's true... the legal penalties I was talking about are the ones that would apply to an unlicensed MD, DO or NP.

Practicing therapy without a license is just... making conversation.

But that part isn't true, at least in Illinois. If you call yourself a "psychotherapist", you need to be licensed by the Illinois Department of Financial and Professional Regulation. And you can be charged with a crime if you disregard that (although the penalties might be less severe; I don't know what they are specifically).

There are workarounds to that rule, if you're very clear about the fact that the service you offer is not "psychotherapy" and clear about the fact that you're not a licensed therapist. You can say that you're a psychic who offers spiritual guidance, and I think you're allowed to call yourself a "life coach". But you can't offer "therapy" services or "counseling" services without a license.

I'm also convinced that a bad/incompetent therapist can be just as harmful to your mental health as a bad/incompetent psychiatrist. The idea of using AI for therapy is genuinely dangerous (I've posted on this topic before). The problem is that an AI "therapist" lacks the capacity for reality testing, and is likely to support and validate the patient's worldview, even if that worldview is bizarre, distorted or delusional.

Comment Here in Illinois... (Score 1) 66

...practicing medicine without a license is a Class 4 felony for a first offense, and a Class 3 felony for repeated offenders. We're talking multiple years of jail time. Violating HIPAA can also be a criminal offense in some circumstances.

That's on top of all the civil lawsuits.

Comment Re:Sounds mostly like a good idea ... (Score 1) 66

I hear you, but I'm scratching my head here, trying to figure out whether there is any way this makes sense. The basic function of a seatbelt is that it's a strap to restrain your forward momentum during quick decelerations, so that your head doesn't hit the windshield. What difference does it make what direction the crash is in? What difference does it make what your weight is, or what the road conditions are? Maybe there's a good answer to these questions, maybe not.
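For a sense of scale on that basic function, here's a rough average-force estimate (F = m * delta_v / delta_t; the crash numbers are illustrative, not from any crash-test standard):

```python
# Average force needed to decelerate a passenger in a crash.
# Illustrative numbers only.

def restraint_force_newtons(mass_kg, delta_v_ms, stop_time_s):
    return mass_kg * delta_v_ms / stop_time_s

# 75 kg passenger, 50 km/h (~13.9 m/s) to zero over 0.1 s:
print(restraint_force_newtons(75, 13.9, 0.1))  # ~10,400 N, roughly 14x body weight
```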

It also seems like you're replacing one point of failure (the mechanical mechanism that makes the strap "catch" when it's pulled quickly) with about 10,000 different points of failure.

Comment Re:Solution - delayed key publishing (Score 4, Funny) 74

While my address is public information I don't need the police advertising things like I'm not home, the power is out, and a fallen tree branch busted open the back door. That's making my house a prime target for thieves, vandals, and squatters.

I'm hearing Morgan Freeman in my head... "Let me get this straight. You're a criminal, and you hear cops talking to each other about how they need to go check on a house. And your plan is to *rob* that house?"

Comment Re:Sounds mostly like a good idea ... (Score 2) 66

I can't imagine why you would need a motherf**cking seatbelt to receive "updates" at all, signed or not, opt-in or not. The seatbelt adjusts its settings based on a very modest amount of data (passenger weight, and apparently also road conditions-- although I'm not sure how the latter would be useful in adjusting a "seatbelt setting"). How complicated can that be? Are they expecting some major advances in seatbelt-setting algorithms to emerge in the next decade?
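To put it another way, the entire "algorithm" could plausibly fit in a few lines-- this is entirely hypothetical, obviously, since I have no idea what logic the vendor actually ships:

```python
# Hypothetical seatbelt pretensioner setting from a very modest amount of
# data-- the kind of logic that arguably doesn't need a decade of updates.

def belt_tension_setting(passenger_weight_kg, road_condition):
    if passenger_weight_kg > 90:
        base = "high"
    elif passenger_weight_kg > 50:
        base = "medium"
    else:
        base = "low"
    # As noted above, it's unclear how road condition should factor in at all.
    return (base, road_condition)

print(belt_tension_setting(75, "wet"))  # ('medium', 'wet')
```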

Also, since the sensors are right there in the car and so is the seatbelt, why does the seatbelt need to be part of the IoT?

Also, what happens in 20 years when the "Internet-connected" part of the seatbelt becomes hopelessly outdated and unusable?

Comment Re:The problem is obvious (Score 1) 49

Yes, I thought of Eliza when I made my comment. LLMs in their current state are far more convincing than Eliza, but they still have all the limitations that I described. I have no idea whether LLMs can advance to the point where they don't have those limitations anymore-- that's a whole other discussion. My point was to discuss the limitations they have now.

"The therapist's role is to create a safe space for the client to explore their thoughts and feelings, fostering self-awareness and personal growth"... OK. That's an accurate statement, as far as it goes. But the therapist often has to do *more* than just "create a safe space for the client to explore thoughts and feelings". The therapist has to identify when these thoughts are the product of a cognitive bias, when they're maladaptive, and even (in some cases) to identify when the thoughts are delusional. They also have to do a whole, long list of other stuff, as I'm sure you know.

Your description of Rogerian therapy (I won't call it a *definition* of Rogerian therapy, since you didn't claim that it was a definition) sounds to me like a description of "supportive" therapy. You listen, you make sure the patient feels listened-to, you make sure they feel "safe" and that they don't feel judged, you offer validation as appropriate and mirror the patient's affect when appropriate. But this is a description of *supportive* therapy only, and supportive therapy is a very, very limited and unambitious type of therapy. It's also a potentially *harmful* form of therapy if it is applied indiscriminately to all patients and all situations. "Your supervisor is mean to you, and your last supervisor was very mean to you and the one before that, too. I'm really sorry to hear that. It sounds like you've had a rough time with supervisors". (Or worse, "Of course, you feel frustrated. I think anyone would feel frustrated if the Pope was harassing them on Twitter".)

Comment Re:The problem is obvious (Score 1) 49

It could, in principle, be an excellent use. None of the AI engines are yet up to that, possibly because they haven't been properly trained. It certainly has the capability to be a good Rogerian therapist, though, again, it would need to be differently trained. (That's not one of the more effective approaches, but it could be done cheaply, which would allow widespread use. But it would need to be trained not to encourage harming either oneself or others...which isn't done by scraping the web.)

It has the capacity to deliver a good *parody* of a Rogerian therapist. In other words, it can be taught to use the technique of "reflection"-- repeating what the patient has said back to them, using different words.
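To see how mechanical that is, here's a minimal Eliza-style sketch of "reflection"-- pronoun-swapping and echoing, nothing more:

```python
# Mechanical "reflection": swap pronouns and echo the statement back.
# There's no understanding and no purpose here-- just the surface form.

SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(statement):
    words = statement.lower().rstrip(".!?").split()
    return " ".join(SWAPS.get(w, w) for w in words).capitalize() + "?"

print(reflect("I am angry at my supervisor."))
# -> "You are angry at your supervisor?"
```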

But the thing is, a real therapist will use reflection with a specific purpose in mind. (Sometimes the purpose is to clarify what the patient has said, and make sure you understood it correctly; sometimes the purpose is to summarize a long statement into a short one; sometimes the purpose is to simply let both the therapist and patient stop and think about how strange the patient's statement was). The LLM doesn't have a "purpose". It's just blindly emulating a technique. It's like a carpenter who has learned to use a hammer and can hammer nails really well, but doesn't know that you are trying to build a set of bookshelves (or even understand what a bookshelf is).

Also, not to beat a dead horse, but LLMs notoriously lack either a "bullshit detector" or a "reality detector"-- both of which are essential equipment for a therapist. The LLM will "reflect" your statement, but it won't notice if the statement is implausible, if it's inconsistent with other statements you've made, or if your comment reflects some type of cognitive bias or problematic core belief. Hell, the LLM can't even tell whether your statement is delusional or not. If you say that the Pope is sending you secret messages on Twitter, it will probably take that message at face value.
