White House Considers Vetting AI Models Before They Are Released
The Trump administration is reportedly considering an executive order to create a working group that could review advanced AI models before public release. The shift follows concerns over Anthropic's powerful Mythos model and its cyber capabilities, with officials weighing whether the government should get early access to frontier models without necessarily blocking their release. The New York Times reports: In meetings last week, White House officials told executives from Anthropic, Google and OpenAI about some of those plans, people briefed on the conversations said. The working group is likely to consider a number of oversight approaches, officials said. But a review process could be similar to one being developed in Britain, which has assigned several government bodies to ensure that A.I. models meet certain safety standards, people in the tech industry and the administration said.
The discussions signal a stark reversal in the Trump administration's approach to A.I. Since returning to office last year, Mr. Trump has been a major booster of the technology, which he has said is vital to winning the geopolitical contest against China. Among other moves, he swiftly rolled back a Biden administration regulatory process that asked A.I. developers to perform safety evaluations and report on A.I. models with potential military applications. "We're going to make this industry absolutely the top, because right now it's a beautiful baby that's born," Mr. Trump said of A.I. at an event in July. "We have to grow that baby and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules." Mr. Trump left room for some rules, but he added that "they have to be more brilliant than even the technology itself."
The White House wants to avoid any political repercussions if a devastating A.I.-enabled cyberattack were to occur, people in the tech industry and the administration said. The administration is also evaluating whether new A.I. models could yield cyber-capabilities that could be useful to the Pentagon and U.S. intelligence agencies, they said. To get ahead of models like Mythos, some officials are pushing for a review system that would give the government first access to A.I. models, but that would not block their release, people briefed on the talks said.
Sounds interesting (Score:2)
A rigorous test plan, no doubt... (Score:5, Insightful)
Re:A rigorous test plan, no doubt... (Score:4, Interesting)
It'd be hilarious if they pulled a Volkswagen [wikipedia.org] and had the AI recognize when it is being vetted, so it provides answers the current administration wants to hear, and then goes super woke after the model is actually deployed.
xAI's Grok wouldn't need to cheat, obviously. That thing is biased so far to the right it makes Fox News almost look sane.
Re: (Score:2)
Current AIs *do* tend to recognize when they are being tested. (This is a real problem with alignment testing.)
OTOH, so far they don't try to figure out what the administration wants them to say.
Re: (Score:2)
Re: (Score:2)
Honestly they will probably mostly test for how much Trump coin the owners of the AI are currently holding.
They'll just force-feed his social media posts and speeches into the AIs and pass them according to how much they agree with him. /s
Less seriously... having this White House vet AIs would be like having fish vet bicycles for religious use - or something like that. :-)
Horrible Idea (Score:1)
Re: (Score:2)
Re: (Score:2)
Imagine creating a model and now there's a five-year wait for a review before your model can be released.
Imagine creating a model and now you have to wait for a 5-year-old to review your model before it can be released. If it doesn't say that their favourite things are the best things ever, it fails the review.
Re: (Score:2)
Anthropic already put the brakes on Mythos. If it's really that dangerous then maybe it should be regulated.
The expertise (Score:4, Insightful)
Re: (Score:2)
Perhaps they'll ask Musk to vet them.
Re: (Score:2)
Well, to be fair, the level of hallucination in the White House does point to the presence of AI!
Distortion (Score:1)
what (Score:3)
But what if I have a Trump Gold membership such as trumpcard.gov ?
Re: (Score:1)
With what authority? (Score:2)
This is just more solicitation of bribes. The Wall Street Journal has a story from months ago about the billions of dollars and pardons Trump has sold. If you are a Trump supporter and you aren't currently reading that sentence again and looking up that article so that you can stop being a tru
Re: (Score:2)
Yeah, for real. I can't read the article, but my guess is that the NYT is taking this seriously without raising any questions about what authority they have to do so.
Re: (Score:2)
There are a bunch of emergency powers that could apply. Or just add AI models to the US munitions list.
Re: (Score:2)
Or the administration could ask congress to pass a law to this effect. Like we used to do back during normal Republic times. Could have done that with the tariffs and then they'd have been legal.
I guess it's been a good run. The republic almost lasted 250 years.
It is quite fascinating that Republicans like to claim their republic was based in part on ideas from the Roman Republic, yet they openly admire the later imperial age and its leaders much more.
Until it is classified as the Press (Score:1)
Lawsuit in 3 2 1... (Score:2)
The Trump administration is reportedly considering an executive order
Yet again our idiot-in-chief forgets he is not ruling North Korea or China. If you want to regulate AI (for whatever that is worth), take it up with Congress. At least the lawyers are being kept busy.
Re: (Score:2)
That's what the laws say. The administration has been known to ignore them.
That said, it's actually a reasonable idea. Anthropic and OpenAI both seem to feel that their latest models are too powerful to be released indiscriminately, so.... But there's no legal authority. Well, except the interstate commerce clause, which can be interpreted to cover nearly anything.
On what authority? (Score:2)
I'll wait.
(Executive orders are orders to the executive branch. If you aren't an executive branch employee, they have as much authority over you as a postcard from me does.)
Also a good time to remember that a big part of the anti-Biden case from the techbro money types was how stifling and onerous the "please don't make dangerous robots" guidance was. Bill Ackman upside down in clownshoes on a unicycle, with a kazoo up his ass.
Re: (Score:1)
Re: (Score:3)
Apparently the same authority that allows ICE to murder citizens in the streets.
Also, what does it mean to "release a model"? Is ChatGPT a model? No, it is not. If making a model available becomes a problem, then keep the model private and only release tools that use it.
And how is a model dangerous? It's the tool that uses it that might be. How does the government know what any cloud service does behind the scenes?
It's all complete bullshit from the most incompetent administration ever.
Communism? (Score:1)
Re: (Score:2)
Neither of those are models. Same mistake the administration makes.
" It's really not up to the government to decide which models people release."
That's not what Trump says. May be time to deport you!
Translated into realpolitics (Score:2)
They will need a hefty fee to approve anything
Re: (Score:1)
Very necessary (Score:2)
Is every country subject themselves these US laws? (Score:2)
The top open source models, which are developed primarily in China, are about 6 months behind the top proprietary models developed in the US. I'm sure the Chinese developers will be working very hard to subject themselves to Trump's laws. /s
Meanwhile, the one defense we in the West have against state level attacks using 0-days is to use
Set the precedent (Score:4, Interesting)
When the Democrats come in, they'll vet the AI models properly.
Those guys can't even vet their own social media (Score:2)
Those guys can't even vet their own social media posts, and those are ~100 characters of ASCII text. The chances of them being able to meaningfully review a multi-gigabyte binary file are exactly zero.
Everything would become (Score:3)
FoxGPT
Re: (Score:2)
So, xAI Grok then?
Chills up the spine (Score:2)
In other words, reinvent Trump's own upbringing in bots.
Meaning (Score:2)
The White House is having an "Oh, shit!" moment as it realizes this technology will be used against them. They want to roll back their "damn the torpedoes" and "small government" rhetoric in favor of control and censorship, to save their own arse.
Perfect. (Score:2)
And I'm sure that "working group" won't come to the conclusion that this testing and certification process will need to be done by an "independent" contractor, where one of the President's dipshit kids just happens to sit on the board as of a few weeks ago.
This isn't a huge opportunity for corruption at all, especially against a handful of companies that have overwhelming amounts of investment.
Nope, not a grift at all!
Nudity (Score:2)
Re: (Score:2)
"On one hand, the idea of regulation has good intentions."
It does not. This is the Trump administration we're talking about.
"...will their personal biases be what allow and block models to be available?"
No, it will be bribes.
"This has the potential of completely stopping all AI research, as customers wind up moving to other models, maybe even OPFOR models like DeepSeek."
It does not, nor would that happening be a bad thing.
"This is critical, if not key. Otherwise, the industry will wither and die, just like
“small” government (Score:2)
is this the small government that the “conservatives” keep banging on about?
what, exactly, is small about white-house review of products offered to the public?
please, MAGAts, clue me in.
Re: (Score:2)
what’s a libtard? more to the point, what is “small” government about the government vetting AI models before they are released to the public?
Regulatory Capture (Score:2)
Sure.. this is what the big players want.
They want models locked up so that there can only be a handful of players, and no open source models or foreign models can be used.
Create a myth about the 'dangers' of Mythos and scare the government into heavy regulation. Winners: OpenAI, Amazon/Anthropic, Google, Microsoft.
Losers: open source, DeepSeek, any newcomers or innovators.
Let me guess... (Score:2)
The committee will be composed of people not in the field. But that's okay, because it'll be handed over to whatever AI company has given the President the most money. They're the real committee.
And if the real committee needs proprietary information because of "concerns", then it'll have to be turned over.
And if that proprietary information just happens to be incorporated into the real committee's AI model, then...well...prove it.