
Comment Re:Going for gold (Score 3, Interesting) 259

How about we put this much hyped AI to good use by employing it to automatically shut the door in such cases?

While we're at it, I could use an AI robot dishwasher that can actually clean pans, deciding on its own whether scouring powder is needed, and positioning the dishes itself--no more need for the user to carefully arrange everything, just dump the dirty dishes in the machine.

Too bad all the hoopla around AI was hype, and AI still can't do such simple things.

Comment Re:Holy shit, the logic fail here. (Score 1) 38

What you describe is essentially a form of bootstrapping, which is a legitimate statistical method. However, there are important limitations that cannot be overlooked.
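To make the comparison concrete, here is a minimal sketch of the bootstrap idea (an illustrative example of my own, using numpy, not anything from the study): resampling with replacement from the *observed* data approximates the sampling distribution of an estimator without ever inventing a value that wasn't actually measured.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=2.0, size=200)  # stand-in for real observations

# Bootstrap: resample WITH replacement from the observed data itself.
# Every resampled value is a real observation; nothing is fabricated.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(2000)
])

ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean={data.mean():.3f}, 95% CI=({ci_low:.3f}, {ci_high:.3f})")
```

That "nothing is fabricated" property is exactly what LLM-generated "synthetic patients" give up.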

First, the constructed data are still being created from real data. Ethics is not just about preserving patient privacy, although that is a very important aspect. It's also about taking into consideration how the data will be used. Does the patient consent to this use, and if they are unable to consent, how should that be taken into account? Medical science has not had a stellar track record with respect to ethical human experimentation (e.g., Henrietta Lacks, the Tuskegee syphilis study, MKUltra--and that's just in recent US history). There is a documented history of patient-collected data being used in ways that those patients never conceived of, let alone anticipated or consented to. Caution must be exercised whenever any such data is used, even indirectly.

Second, this kind of simulated data is problematic to analyze from a statistical perspective, and any biostatistician should be aware of this: there is no such thing as a free lunch. The problem of missing data--in actual patients!--is itself difficult to address, since methods to deal with missingness invariably rely on various strong assumptions about the nature of that missingness. So to make inferences on data that is entirely simulated is, at the very least, as problematic as analyzing partially missing data.
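A toy illustration of why those missingness assumptions matter (my own example, with invented numbers, not from any real trial): if the probability a value goes missing depends on the value itself (MNAR, "missing not at random"), the observed data are biased, and no analysis of the observed data alone can detect or fix that without outside assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_values = rng.normal(loc=100.0, scale=15.0, size=10_000)  # hypothetical lab measurement

# MNAR: the chance a value is missing rises with the value itself
# (e.g., sicker patients with high readings drop out more often).
p_missing = 1.0 / (1.0 + np.exp(-(true_values - 100.0) / 5.0))
observed = true_values[rng.random(10_000) > p_missing]

print(f"true mean:     {true_values.mean():.1f}")
print(f"observed mean: {observed.mean():.1f}")  # biased low
```

The observed mean comes out well below the true mean, and nothing in the observed data alone tells you so. A model trained only on the observed values inherits the bias.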

Third, the current state of LLMs, and their demonstrated tendency to distort or invent features from noise (which is arguably the primary mechanism by which they operate), is such that any inferences from LLM-generated data would be questionable and should not be considered statistically meaningful. It could be used for hypothesis generation, but it would not satisfy any kind of statistical review.

It all comes back to what I said in another comment: you can't have it both ways. If you can draw some statistically meaningful conclusion from the data, then that data came from real-world patients and must pass ethical review. If you don't need ethical review because the data didn't come from any real patient, then any inferences are dubious at best, and are most likely just fabrications that cannot pass confirmatory analysis.

Comment Re:Holy shit, the logic fail here. (Score 4, Insightful) 38

The claim being made is that "because the AI-generated data do not include data from actual humans, they do not need ethics review to use."

But if the data only represent actual patients in a "statistical" sense (whatever that means), how can the researchers be CERTAIN that they have captured the relevant signals or effects observed in such data? And I say this as a statistician with over a decade of experience in the statistical analysis of clinical trials.

There is a fundamental principle at work here, and researchers cannot take the better part of both sides of the argument: any meaningful inference must be drawn from real-world data, and if such data is taken from humans, it must pass an ethics board review. If one argues that AI-generated data doesn't need the latter because it is a fabrication, then it doesn't meet the standard for meaningful inference. If one argues that it does meet the standard, then no matter how the data was transformed from real-world patient sources, it requires ethics board review.

In biostatistics, we use models to analyze data to detect potential effects, draw hypotheses or make predictions, and test those hypotheses to make probabilistic statements--i.e., statistical inferences--about the validity of those hypotheses. This is done within a framework that obeys mathematical truth, so that as long as certain assumptions about the data are met, the results are meaningful.

But what "statistically naive" people consistently fail to appreciate, especially in their frenzy to "leverage" AI everywhere, is that those assumptions are PRETTY FUCKING IMPORTANT, and using an LLM to generate "new" data from existing, real-world data is like making repeated photocopies of an original--placing one model on top of another model. LLMs will invent signals where none originally existed. LLMs will fail to capture signals that actually existed.
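The photocopy-of-a-photocopy problem can be sketched in a few lines (a deliberately oversimplified stand-in for an LLM, my own toy example): fit a model to data, replace the data with samples from the fitted model, refit, and repeat. Estimation error compounds at every generation, so the fitted parameters drift away from the truth.

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(loc=0.0, scale=1.0, size=100)  # generation 0: "real" data

stds = [data.std()]
for _ in range(50):
    # Fit a model (here just a mean and std), then throw away the data and
    # keep only samples from the fitted model -- a photocopy of a photocopy.
    data = rng.normal(loc=data.mean(), scale=data.std(), size=100)
    stds.append(data.std())

# Each generation inherits the previous generation's estimation error,
# so the fitted scale wanders away from the true value of 1.0.
print(f"gen 0 std={stds[0]:.3f}, gen 50 std={stds[-1]:.3f}")
```

A real LLM is vastly more complex than a two-parameter Gaussian, but the mechanism is the same: every generation is an estimate of an estimate, never a new measurement.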

Comment Check out the cognitive dissonance on this guy... (Score 2) 70

Feige "doesn't buy" that "superhero fatigue" is "real." However, he does admit that 102 hours of Marvel content in six years was "too much" and that "the expansion [of Marvel content]...is certainly what devalued" the content. So which is it? Methinks he fears the "superhero fatigue" meme has become too sticky for his liking (and for his checkbook), so he's going to handwave it away even though he can't escape the facts.

Comment Re:But not in the US (Score 4, Insightful) 228

In fact, it is UNETHICAL to use a placebo control in any clinical trial of an investigational product for which the existing standard of care already includes a product on the market.

In plain English, it is entirely unethical to give participants a placebo to test the efficacy of a new flu vaccine when we already have existing vaccines on the market. Doing so denies study participants access to effective treatment. If you have to test against a placebo, it will be impossible to recruit participants, because nobody will risk receiving a placebo when they could just go to the pharmacy and get vaccinated.

There are only two possible explanations for such a position: either gross ignorance of basic scientific and ethical principles for conducting medical research in humans, or deliberate malicious intent to stop all research of investigational drugs. It doesn't actually matter which one is the reason. Both are entirely unacceptable.

The fact that a huge segment of the American population does not understand even the most basic scientific principles is the reason why many people will die needlessly.

Comment Re:Off Insulin onto immunosuppressants for life... (Score 4, Informative) 65

I agree that this therapy is not without significant risks, so it's not to be taken lightly.

That said, the long-term health outcomes of T1DM are also significant. So the way I see this development is that it is one more step on the path toward finding a durable, safe, and effective cure. And if approved, it may offer some patients another choice, one that of course should involve an informed discussion with competent healthcare providers.

It's important to keep in mind that healthcare is not a "one size fits all" thing. Two patients with the same condition can respond very differently to the same therapy. Before the discovery of insulin, diabetics literally just...died. So on the path to understanding this relationship between the individual patient and the selected therapy, medical science can only offer a range of treatment options. At one time, humans believed in bloodletting, lobotomies, and arsenic to treat various illnesses. We built leper colonies. And in some places in the world, menstruation is still considered "dirty." We have made many advances, but there are many more still to be made.

Comment No QC not that surprising (Score 5, Insightful) 167

I once warned a manager at a smallish company that their fantasy of doing manufacturing would never happen, partly because of a lack of QC and the lack of anybody with the authority to shut a project down if it was not meeting spec. "We need that guy!" the manager said, and I came back with, "If you had that guy, you'd fire him the first time he told you something you didn't want to hear." Musk is in a pickle right now, and I'm sure he really doesn't want to hear that a tank failed an X-ray inspection and the whole craft needs to be taken apart to make sure it's OK, thereby missing the launch window. And I'm sure the QC guy knows that. So, just as I warned that manager who once asked for my advice: this is what you get.

Comment Yet another Register hit piece (Score 4, Interesting) 240

I'd rather use a slower browser that honors the user's choice of extensions--in particular those that block malicious content and privacy-violating advertising trackers--than an ostensibly faster browser that is created by a company whose entire business model is to gather as much tracking data about you in order to sell it to advertisers.

There are alternatives to both Firefox and Chrome. But choosing to use Chrome because Firefox isn't perfect is either the height of idiocy or evidence of being paid to promote Google products.

Comment My wife insisted she could not learn to touch type (Score 1) 191

She hunted and pecked through the beginnings of a reasonably successful career as a magazine copywriter back in the day. I tried to tell her it would be worth her while to spend a few hours with Mavis Beacon, but she insisted she had her way of doing things and that was that: two index fingers, staring at the keyboard instead of the screen. Meanwhile, I was younger than she was but did learn touch typing on a manual typewriter in high school. Anyway, a year or so after I gave up trying to convince her to spend some time learning to type properly, I walked in on her as she was working: she was holding her hands in the home position, index fingers hovering above F and J, eyes on the screen, doing a good 80 wpm as she pounded out copy. When I pointed this out she looked at her hands and said, "I don't know about any of that, I just adjusted what I was doing to get a little faster." Well, that's why they teach it that way, but some people gotta ice skate uphill, y'know.

Comment Some did (Score 2) 65

Jobs and Wozniak got rich off Apple, Gates and Ballmer off Microsoft. Sinclair was already rich. Tandy, Commodore, Atari, and IBM had hugely popular machines but no "rock stars" single-handedly responsible for their development, and bad business decisions ultimately killed them. Similarly Coleco, which had a great chance to undercut the PC with the Adam and its cheap letter-quality printer, but it was too ambitious, and by the time it worked out its manufacturing problems the PC had taken root. But the PC killed the rest of the industry by killing itself: it made the first clones possible, machines that could run object code generated for other manufacturers' hardware--Microsoft's second stage to orbit after providing Level II BASIC for the TRS-80. It wasn't Microsoft's intent, but imagine what today's computer ecosystem would look like if all software were still architecture-specific and there were a dozen or more popular models to choose from.
--
Apple and the rest had room to grow because the big names like DEC, Data General, and even IBM were focused on business and saw them as toys. They bought and ate anything that looked like it might compete with them, such as the CP/M office systems which might be a credible threat to minicomputers like the DEC PDP series. That was another gap IBM threaded by being IBM.
