AI

Mira Murati's Stealth AI Lab Launches Its First Product (wired.com) 33

An anonymous reader quotes a report from Wired: Thinking Machines Lab, a heavily funded startup cofounded by prominent researchers from OpenAI, has revealed its first product -- a tool called Tinker that automates the creation of custom frontier AI models. "We believe [Tinker] will help empower researchers and developers to experiment with models and will make frontier capabilities much more accessible to all people," said Mira Murati, cofounder and CEO of Thinking Machines, in an interview with WIRED ahead of the announcement.

Big companies and academic labs already fine-tune open source AI models to create new variants that are optimized for specific tasks, like solving math problems, drafting legal agreements, or answering medical questions. Typically, this work involves acquiring and managing clusters of GPUs and using various software tools to ensure that large-scale training runs are stable and efficient. Tinker promises to allow more businesses, researchers, and even hobbyists to fine-tune their own AI models by automating much of this work.

Essentially, the team is betting that helping people fine-tune frontier models will be the next big thing in AI. And there's reason to believe they might be right. Thinking Machines Lab is helmed by researchers who played a core role in the creation of ChatGPT. And, compared to similar tools on the market, Tinker is more powerful and user friendly, according to beta testers I spoke with. Murati says that Thinking Machines Lab hopes to demystify the work involved in tuning the world's most powerful AI models and make it possible for more people to explore the outer limits of AI. "We're making what is otherwise a frontier capability accessible to all, and that is completely game-changing," she says. "There are a ton of smart people out there, and we need as many smart people as possible to do frontier AI research."
"There's a bunch of secret magic, but we give people full control over the training loop," OpenAI veteran John Schulman says. "We abstract away the distributed training details, but we still give people full control over the data and the algorithms."
News

VP.NET Publishes SGX Enclave Code: Zero-Trust Privacy You Can Actually Verify 12

VP.NET has released the source code for its Intel SGX enclave on GitHub, allowing anyone to build the enclave and verify its mrenclave hash matches what's running on the servers. This takes "don't trust, verify" from marketing to reality, making privacy claims testable all the way down to hardware-enforced execution.

A move like this could set a new benchmark for transparency in privacy tech.
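The verification story above reduces to: reproduce the enclave build from the published source, read out its MRENCLAVE measurement with the SGX tooling, and compare it against the value the running server attests to. The sketch below shows only that final comparison step; the hex values and their provenance are illustrative assumptions, not real VP.NET measurements.

```python
import hmac


def measurements_match(local_mrenclave: str, attested_mrenclave: str) -> bool:
    """Compare a locally reproduced MRENCLAVE against the value reported
    in a server's SGX attestation quote. A constant-time comparison
    avoids leaking where the strings first diverge."""
    return hmac.compare_digest(local_mrenclave.lower(),
                               attested_mrenclave.lower())


# Hypothetical values: in practice the local measurement comes from
# building the published source and reading it with the SGX signing
# tools, and the remote one from a verified attestation quote.
local = "a3f1" * 16   # MRENCLAVE is a 256-bit (64-hex-char) measurement
remote = "A3F1" * 16
print(measurements_match(local, remote))
```

If the build is reproducible, the two measurements agree and the "don't trust, verify" claim holds; any divergence means the server is not running the published code.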

Comment Re:That is Abuse of the Computer Fraud and Abuse A (Score 1) 86

I don't think there needs to be any legally binding contract?

This is basically the "just because the door is unlocked doesn't mean you can help yourself to my toolshed" argument that they hit people who access systems with all the time.

Comment Re:"is to empower" (Score 1) 13

Pretty sure they just talked to an LLM to shit out that description and didn't read the output (who wants to?).

Given my experience in the industry in 2025, they probably would have been reprimanded if someone in management had caught wind that they were making blog entries without an LLM writing them.

Comment Re:Profit (Score 1) 42

It's going to be really funny when all of these companies who fired half their staff get the pot turned up to "we need to be profitable" pricing.

Yes, it will probably still be cheaper than a human being, especially a Western one. But OpenAI would be stupid not to charge five figures to replace a six-figure office worker.

AI

McDonald's AI Hiring Bot Exposed Millions of Applicants' Data To Hackers 25

An anonymous reader quotes a report from Wired: If you want a job at McDonald's today, there's a good chance you'll have to talk to Olivia. Olivia is not, in fact, a human being, but instead an AI chatbot that screens applicants, asks for their contact information and resume, directs them to a personality test, and occasionally makes them "go insane" by repeatedly misunderstanding their most basic questions. Until last week, the platform that runs the Olivia chatbot, built by artificial intelligence software firm Paradox.ai, also suffered from absurdly basic security flaws. As a result, virtually any hacker could have accessed the records of every chat Olivia had ever had with McDonald's applicants -- including all the personal information they shared in those conversations -- with tricks as straightforward as guessing the username and password "123456."

On Wednesday, security researchers Ian Carroll and Sam Curry revealed that they found simple methods to hack into the backend of the AI chatbot platform on McHire.com, McDonald's website that many of its franchisees use to handle job applications. Carroll and Curry, hackers with a long track record of independent security testing, discovered that simple web-based vulnerabilities -- including guessing one laughably weak password -- allowed them to access a Paradox.ai account and query the company's databases that held every McHire user's chats with Olivia. The data appears to include as many as 64 million records, including applicants' names, email addresses, and phone numbers.

Carroll says he only discovered that appalling lack of security around applicants' information because he was intrigued by McDonald's decision to subject potential new hires to an AI chatbot screener and personality test. "I just thought it was pretty uniquely dystopian compared to a normal hiring process, right? And that's what made me want to look into it more," says Carroll. "So I started applying for a job, and then after 30 minutes, we had full access to virtually every application that's ever been made to McDonald's going back years."
Paradox.ai confirmed the security findings, acknowledging that only a small portion of the accessed records contained personal data. The company stated that the weak-password account ("123456") was only accessed by the researchers and no one else. To prevent future issues, Paradox is launching a bug bounty program. "We do not take this matter lightly, even though it was resolved swiftly and effectively," Paradox.ai's chief legal officer, Stephanie King, told WIRED in an interview. "We own this."

In a statement to WIRED, McDonald's agreed that Paradox.ai was to blame. "We're disappointed by this unacceptable vulnerability from a third-party provider, Paradox.ai. As soon as we learned of the issue, we mandated Paradox.ai to remediate the issue immediately, and it was resolved on the same day it was reported to us," the statement reads. "We take our commitment to cyber security seriously and will continue to hold our third-party providers accountable to meeting our standards of data protection."
News

VP.net Promises "Cryptographically Verifiable Privacy" (torrentfreak.com) 36

TorrentFreak spotlights VP.net, a brand-new service from Private Internet Access founder Andrew Lee (the guy who gifted Linux Journal to Slashdot) that eliminates the classic "just trust your VPN" problem by locking identity-mapping and traffic-handling inside Intel SGX enclaves. The company promises 'cryptographically verifiable privacy' by using special hardware 'safes' (Intel SGX), so even the provider can't track what its users are up to.

The design goal is that no one, not even the VPN company, can link "User X" to "Website Y."

Lee frames it as enabling agency over one's privacy:

"Our zero trust solution does not require you to trust us - and that's how it should be. Your privacy should be up to your choice - not up to some random VPN provider in some random foreign country."

The team behind VP.net includes CEO Matt Kim as well as early Bitcoin veterans Roger Ver and Mark Karpeles.

Ask Slashdot: Now that there's a VPN where you don't have to "just trust the provider" - arguably the first real zero-trust VPN - are trust-based VPNs obsolete?

Comment Re:They're plenty motivated (Score 5, Informative) 132

Exactly. I've been paying MLB like $130 a year to get (most of) the baseball games, and it feels fine. It only works because the team I mostly follow is out of market where I live, so its games are never blacked out for that reason.

It loses value slowly as they try to double dip by selling games I've already paid for to other networks and services exclusively -- Apple TV here, Roku there, a Fox or ESPN exclusive national game. It's infuriating to pay for all the games and not get all the games.
