Really? (Score 2, Funny)
Posted by BeauHD on Wednesday November 26, 2025 @11:40AM from the language-doesn't-equal-intelligence dept
Don't you mean "from the well duh! dept"?
I never said Folding@home was exactly like modern AI algorithms that accomplish the same thing. I used it as an example of datasets being farmed out to many nodes for processing using specific algorithms.
Modern AIs like Alphafold3 are trained using these large datasets, possibly even including results from Folding@home, and applying modern LLM AI algorithms to find results.
The point I was making is that these AIs are trained on a specific set of datasets and targeted towards a specific purpose, unlike general-purpose AIs, which are trained on the garbage heap of the Internet and only occasionally find a nugget of gold.
Again tldr; AI trained on specific datasets: Useful. AI trained on the Internet: Crap
But what I am really impressed about is the fact that AI has made huge scientific discoveries.
Even if you don't use AI at all, are you not impressed that AI solved protein folding (a Nobel Prize problem)? Are you not impressed that AI is next being used to discover new drugs that target the problem exactly, and that because we know all human proteins, we can also let the AI test for side effects? And once the system is up and running, we can just let it solve all known diseases.
I am impressed that algorithms trained on specific datasets have improved to the point where they can find solutions to protein folding, or discover new drugs. The principle is not new, however; it has just vastly improved in speed and efficiency.
You may or may not recall Folding@home or SETI@home, which used a similar principle, but spread the work out amongst many nodes.
These are instances of using large datasets from within a specific field where the algorithms of AI are useful and certainly impressive.
What is not impressive, however, are the LLMs trained on datasets taken from every corner of the Internet that then pretend to have all the answers. There is a large amount of slop/crap/bs on the Internet, and these general-purpose AIs are not trained to tell the difference between that and useful/factual data.
Sure, if you want it to write a piece of fiction (though the output may not actually be worth reading), or to create a piece of artwork, it will likely perform well. But if you're using it for serious work/research, then much of the time you will get slop. Garbage In/Garbage Out.
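The "farm work units out to many nodes" model that Folding@home and SETI@home popularized can be sketched in a few lines. This is only an illustrative toy, assuming local worker processes standing in for volunteer machines and a made-up `score_unit` function standing in for the real per-unit science:

```python
# Minimal sketch of the @home distributed-computing model: split one
# large dataset into independent work units, hand them out to workers,
# and merge the results. Here the "nodes" are local processes; the real
# projects ship units to volunteer machines over the network.
from multiprocessing import Pool

def score_unit(unit):
    # Hypothetical stand-in for the real per-unit computation
    # (e.g. simulating one segment of a protein trajectory).
    return sum(x * x for x in unit)

if __name__ == "__main__":
    # Split one large dataset into independent work units of 10 items.
    dataset = list(range(100))
    units = [dataset[i:i + 10] for i in range(0, len(dataset), 10)]

    # Because each unit is independent, any number of workers can
    # process them in any order, and the results merge at the end.
    with Pool(processes=4) as pool:
        results = pool.map(score_unit, units)

    print(sum(results))  # same answer as processing the dataset serially
```

The key property, which the comment above relies on, is that the units are independent: no worker needs another worker's output, so the coordinator only has to distribute units and aggregate results.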
tldr; AI trained on specific datasets: Useful. AI trained on the Internet: Crap
What the hell is a Jiggawatt?!
97.94% of the energy required to power a Flux Capacitor fitted to a DeLorean going at 88 miles per hour in order for it to leave 1955 and instantly arrive in 1985.
The Alexa division has never been profitable.
https://www.wsj.com/tech/amazo...
https://www.mobileworldlive.co...
You can bet that part of the Alexa+ strategy is to turn that around. You are going to see tons of ads; those AI CPU cycles are super expensive.
War isn't hell
War is war and Hell is hell, and of the two, war is a lot worse.
How do you figure that, Hawkeye?
Easy, Father. Tell me, who goes to Hell?
Sinners, I believe.
Exactly. There are no innocent bystanders in Hell.
But war is chock full of them. Little kids, cripples, old ladies.
In fact, except for a few of the brass, almost everybody involved is an innocent bystander.
Next up: An emacs clone
They already have an OS - what they need is an editor
...an article worth considering from Princeton University's Zeynep Tufekci:
We Were Badly Misled About the Event That Changed Our Lives
Since scientists began playing around with dangerous pathogens in laboratories, the world has experienced four or five pandemics, depending on how you count. One of them, the 1977 Russian flu, was almost certainly sparked by a research mishap. Some Western scientists quickly suspected the odd virus had resided in a lab freezer for a couple of decades, but they kept mostly quiet for fear of ruffling feathers.
Yet in 2020, when people started speculating that a laboratory accident might have been the spark that started the Covid-19 pandemic, they were treated like kooks and cranks. Many public health officials and prominent scientists dismissed the idea as a conspiracy theory, insisting that the virus had emerged from animals in a seafood market in Wuhan, China. And when a nonprofit called EcoHealth Alliance lost a grant because it was planning to conduct risky research into bat viruses with the Wuhan Institute of Virology (research that, if conducted with lax safety standards, could have resulted in a dangerous pathogen leaking out into the world), no fewer than 77 Nobel laureates and 31 scientific societies lined up to defend the organization.
So the Wuhan research was totally safe, and the pandemic was definitely caused by natural transmission; it certainly seemed like consensus.
We have since learned, however, that to promote the appearance of consensus, some officials and scientists hid or understated crucial facts, misled at least one reporter, orchestrated campaigns of supposedly independent voices and even compared notes about how to hide their communications in order to keep the public from hearing the whole story. And as for that Wuhan laboratory's research, the details that have since emerged show that safety precautions might have been terrifyingly lax.
Reader NZheretic points out that less than a year ago, Jim Allchin swore under oath that disclosing the Windows operating system source code could damage national security.
Rep. Curt Weldon : Thank you. Let me see if I can liven things up here in the last couple of minutes of the luncheon. First of all, I apologize for being late. And I thank Bob and the members of the caucus for inviting me here.
...
But the point is that when John Hamre briefed me, and gave me the three key points of this change, there are a lot of unanswered questions. He assured me that in discussions that he had had with people like Bill Gates and Gerstner from IBM that there would be, kind of a, I don't know whether it's a, unstated ability to get access to systems if we needed it. Now, I want to know if that is part of the policy, or is that just something that we are being assured of, that needs to be spoken. Because, if there is some kind of a tacit understanding, I would like to know what it is.
Because that is going to be subjected to future administrations, if it is not written down in a clear policy way. I want to know more about this end use certificate. In fact, sitting on the Cox Committee as I did, I saw the fallacy of the end use certificates that we were supposedly getting for HPCs going into China, which didn't work. So, I would like to know what the policies are. So, I guess what I would say is, I am happy that there seems to be a coming together. In fact, when I first got involved, I asked NSA and DOD and CIS, why can't you sit down with industry and work this out. In fact, I called Gerstner, and I said, can't you IBM people, and can't you software people get together and find the middle ground, instead of us having to do legislation.
And the likeliest explanation involves the GDPR "right to be forgotten":
Make it myself? But I'm a physical organic chemist!