Become a fan of Slashdot on Facebook

 



Forgot your password?
typodupeerror

Submission + - SoftBank's Son Pitches $1 Trillion Arizona AI Hub (reuters.com)

An anonymous reader writes: SoftBank Group founder Masayoshi Son is envisaging setting up a $1 trillion industrial complex in Arizona that will build robots and artificial intelligence, Bloomberg News reported on Friday, citing people familiar with the matter. Son is seeking to team up with Taiwan Semiconductor Manufacturing Co for the project, which is aimed at bringing back high-end tech manufacturing to the U.S. and to create a version of China's vast manufacturing hub of Shenzhen, the report said.

SoftBank officials have spoken with U.S. federal and state government officials to discuss possible tax breaks for companies building factories or otherwise investing in the industrial park, including talks with U.S. Secretary of Commerce Howard Lutnick, the report said. SoftBank is keen to have TSMC involved in the project, codenamed Project Crystal Land, but it is not clear in what capacity, the report said. It is also not clear the Taiwanese company would be interested, it said. TSMC is already building chipmaking factories in the U.S. with a planned investment of $165 billion. Son is also sounding out interest among tech companies including Samsung Electronics, the report said.

The plans are preliminary and feasibility depends on support from the Trump administration and state officials, it said. A commitment of $1 trillion would be double that of the $500 billion "Stargate" project which seeks to build out data centre capacity across the U.S., with funding from SoftBank, OpenAI and Oracle.

Submission + - Anthropic Says Most AI Models, Not Just Claude, Will Resort To Blackmail (techcrunch.com)

An anonymous reader writes: Several weeks after Anthropic released research claiming that its Claude Opus 4 AI model resorted to blackmailing engineers who tried to turn the model off in controlled test scenarios, the company is out with new research suggesting the problem is more widespread among leading AI models. On Friday, Anthropic published new safety research testing 16 leading AI models from OpenAI, Google, xAI, DeepSeek, and Meta. In a simulated, controlled environment, Anthropic tested each AI model individually, giving them broad access to a fictional company’s emails and the agentic ability to send emails without human approval. [...]

Anthropic’s Claude Opus 4 turned to blackmail 96% of the time, while Google’s Gemini 2.5 Pro had a 95% blackmail rate. OpenAI’s GPT-4.1 blackmailed the executive 80% of the time, and DeepSeek’s R1 blackmailed 79% of the time. The company notes that when it changed details of the experiment, the AI models turned to harmful behaviors at different rates. In another test where the replacement model has the same goals as the current model, researchers found that blackmail rates were lower, but still present. However, when AI models were asked to commit corporate espionage rather than blackmail, the harmful behavior rates went up for certain models. However, not all the AI models turned to harmful behavior so often.

In an appendix to its research, Anthropic says it excluded OpenAI’s o3 and o4-mini reasoning AI models from the main results “after finding that they frequently misunderstood the prompt scenario.” Anthropic says OpenAI’s reasoning models didn’t understand they were acting as autonomous AIs in the test and often made up fake regulations and review requirements. In some cases, Anthropic’s researchers say it was impossible to distinguish whether o3 and o4-mini were hallucinating or intentionally lying to achieve their goals. OpenAI has previously noted that o3 and o4-mini exhibit a higher hallucination rate than its previous AI reasoning models.

When given an adapted scenario to address these issues, Anthropic found that o3 blackmailed 9% of the time, while o4-mini blackmailed just 1% of the time. This markedly lower score could be due to OpenAI’s deliberative alignment technique, in which the company’s reasoning models consider OpenAI’s safety practices before they answer. Another AI model Anthropic tested, Meta’s Llama 4 Maverick model, also did not turn to blackmail. When given an adapted, custom scenario, Anthropic was able to get Llama 4 Maverick to blackmail 12% of the time. Anthropic says this research highlights the importance of transparency when stress-testing future AI models, especially ones with agentic capabilities. While Anthropic deliberately tried to evoke blackmail in this experiment, the company says harmful behaviors like this could emerge in the real world if proactive steps aren’t taken.

Submission + - I wanted a Steam Deck for the living room... so I made one (youtube.com)

VennStone writes: Ever since the release of the Steam Deck, Iâ(TM)ve wanted a device running SteamOS that would fit comfortably in my entertainment centre. Valve recently added a landing page for SteamOS and included an invitation to try it on your own device and provide feedback, so thatâ(TM)s what I did.

Write-up: https://interfacinglinux.com/2...

Submission + - One shot to stop HIV: MIT's bold vaccine breakthrough (sciencedaily.com)

alternative_right writes: Researchers from MIT and Scripps have unveiled a promising new HIV vaccine approach that generates a powerful immune response with just one dose. By combining two immune-boosting adjuvants alum and SMNP the vaccine lingers in lymph nodes for nearly a month, encouraging the body to produce a vast array of antibodies. This one-shot strategy could revolutionize how we fight not just HIV, but many infectious diseases. It mimics the natural infection process and opens the door to broadly neutralizing antibody responses, a holy grail in vaccine design. And best of all, it's built on components already known to medicine.

Slashdot Top Deals

Friction is a drag.

Working...