Communications

Alphabet Spins Off Laser-Based Internet Project Taara From 'Moonshot' Unit (ft.com) 22

Alphabet is spinning out Taara, a laser-based internet company from its X "moonshot" incubator, securing backing from Series X Capital while retaining a minority stake.

Taara's technology transmits data at 20 gigabits per second over 20km by firing pencil-width light beams between traffic light-sized terminals, extending traditional fiber-optic networks with minimal construction costs.

Based in Sunnyvale, California, the company operates in 12 countries, including India and parts of Africa, where it created a 5km laser link over the Congo River between Brazzaville and Kinshasa. The two-dozen-strong team partners with telecommunications firms like Bharti Airtel and T-Mobile to extend core fiber-optic networks to remote locations or dense urban areas.

Taara grew out of Project Loon, the balloon-based internet effort Alphabet wound down in 2021 after it failed to find a path to commercial viability. The company is developing silicon photonic chips to replace mirrors and lenses in its terminals and potentially enable multiple connections from a single transmitter.
EU

European Tech Firms Push EU for 'Buy European' Tech Mandate (techcrunch.com) 66

More than 80 signatories representing about 100 European tech organizations have urged EU leaders to take "radical action" to reduce reliance on foreign digital infrastructure, according to a letter sent to European Commission President Ursula von der Leyen.

The coalition, including Airbus, Proton, and OVHCloud, warns Europe "will lose out on digital innovation" and become almost completely dependent on non-European technologies "in less than three years at current rates."

The group calls for public procurement requirements mandating European-made tech solutions, development of common standards, and creation of a "Sovereign Infrastructure Fund" for capital-intensive areas like chips and quantum computing. "Our reliance on non-European technologies will become almost complete in less than three years at current rates," the letter states, citing concerns over U.S. technological dominance following recent comments from Vice President JD Vance criticizing European regulations.
Intel

Intel's Stock Jumps 18.8% - But What's In Its Future? (msn.com) 47

Intel's stock jumped nearly 19% this week. "However, in the past year through Wednesday's close, Intel stock had fallen 53%," notes Investor's Business Daily: The appointment of Lip-Bu Tan as CEO is a "good start" but Intel has significant challenges, Morgan Stanley analyst Joseph Moore said in a client note. Those challenges include delays in its server chip product line, a very competitive PC chip market, lack of a compelling AI chip offering, and more than $10 billion in losses in its foundry business over the past 12 months. There is "no quick fix" for those issues, he said.
"There are things you can do," a Columbia business school associate professor tells the Wall Street Journal in a video interview, "but it's going to be incremental, and it's going to be extremely risky... They will try to be competitive in the foundry manufacturing space," but "It takes very aggressive investments."

Meanwhile, TSMC is exploring a joint venture in which it would operate Intel's factories, and has pitched the idea to AMD, Nvidia, Broadcom, and Qualcomm, according to Reuters. (Reuters adds that Intel "reported a 2024 net loss of $18.8 billion, its first since 1986," and spoke to multiple sources familiar with talks about Intel's future.) Multiple companies have expressed interest in buying parts of Intel, but two of the four sources said the U.S. company has rejected discussions about selling its chip design house separately from the foundry division. Qualcomm has exited earlier discussions to buy all or part of Intel, according to those people and a separate source. Intel board members have backed a deal and held negotiations with TSMC, while some executives are firmly opposed, according to two sources.
"They say Lip-Bu Tan is the best hope to fix Intel — if Intel can be fixed at all," writes the Wall Street Journal: He brings two decades of semiconductor industry experience, relationships across the sector, a startup mindset and an obsession with AI...and basketball. He also comes with tricky China business relationships, underscoring Silicon Valley's inability to sever itself from one of America's top adversaries... [Intel's] stock has lost two-thirds of its value in four short years as Intel sat out the AI boom...

Manufacturing chips is an enormous expense that Intel can't currently sustain, say industry leaders and analysts. Former board members have called for a split-up. But a deal to sell all or part of Intel to competitors seems to be off the table for the immediate future, according to bankers. A variety of early-stage discussions with Broadcom, Qualcomm, GlobalFoundries and TSMC in recent months have failed to go anywhere, and so far seem unlikely to progress. The company has already hinted at a more likely outcome: bringing in outside financial backers, including customers who want a stake in the manufacturing business...

Tan likely has no more than a year to turn the company around, said people close to the company. His decades of investing in startups and running companies — he founded a multinational venture firm and was CEO of chip design company Cadence Design Systems for 13 years — provide indications of how Tan will tackle this task in the early days: by cutting expenses, moving quickly and trying to turn Intel back into an engineering-first company. "In areas where we are behind the competition, we need to take calculated risks to disrupt and leapfrog," Tan said in a note to Intel employees on Wednesday. "And in areas where our progress has been slower than expected, we need to find new ways to pick up the pace...."

Many take this culture reset to also mean significant cuts at Intel, which already shed about 15,000 jobs last year. "He is brave enough to adjust the workforce to the size needed for the business today," said Reed Hundt, a former Intel board member who has known Tan since the 1990s.

AI

Google Claims Gemma 3 Reaches 98% of DeepSeek's Accuracy Using Only One GPU 58

Google says its new open-source AI model, Gemma 3, achieves nearly the same performance as DeepSeek AI's R1 while using just one Nvidia H100 GPU, compared to an estimated 32 for R1. ZDNet reports: Using "Elo" scores, a common measurement system used to rank chess players and athletes, Google claims Gemma 3 comes within 98% of the score of DeepSeek's R1: 1338 versus 1363. That means R1 is superior to Gemma 3. However, based on Google's estimate, the search giant claims that it would take 32 of Nvidia's mainstream "H100" GPU chips to achieve R1's score, whereas Gemma 3 uses only one H100 GPU.

Google's balance of compute and Elo score is a "sweet spot," the company claims. In a blog post, Google bills the new program as "the most capable model you can run on a single GPU or TPU," referring to the company's custom AI chip, the "tensor processing unit." "Gemma 3 delivers state-of-the-art performance for its size, outperforming Llama-405B, DeepSeek-V3, and o3-mini in preliminary human preference evaluations on LMArena's leaderboard," the blog post relates, referring to the Elo scores. "This helps you to create engaging user experiences that can fit on a single GPU or TPU host."

Google's model also tops the Elo score of Meta's Llama 3, which Google estimates would require 16 GPUs. (Note that the H100 chip counts for the competition are Google's estimates; DeepSeek AI has only disclosed an example of using 1,814 of Nvidia's less-powerful H800 GPUs to serve answers with R1.) More detailed information is provided in a developer blog post on HuggingFace, where the Gemma 3 repository is offered.
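Google's "98%" figure is a ratio of the two ratings rather than a win probability; a quick sketch using the standard Elo expected-score formula shows the difference between the two readings:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo formula: probability that A is preferred over B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

gemma3, r1 = 1338, 1363

# The "98%" claim is simply the ratio of the two ratings.
print(f"rating ratio: {gemma3 / r1:.1%}")                             # ~98.2%

# A 25-point Elo gap implies a near-even head-to-head matchup.
print(f"P(Gemma 3 preferred): {elo_expected_score(gemma3, r1):.1%}")  # ~46.4%
```

In other words, a 25-point gap means Gemma 3 would be the preferred response roughly 46% of the time in pairwise comparisons: close, but with R1 ahead.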
Businesses

70% of Large VMware Customers Bought Broadcom's Biggest Bundle (theregister.com) 30

Broadcom's VMware acquisition has significantly boosted revenue, largely driven by high-priced VMware Cloud Foundation bundles adopted by the majority of its top customers. The Register reports: Broadcom's acquisition of VMware appears to be a big success, on the balance sheet at least, after the company announced a big majority of its top 10,000 customers have decided to acquire its Cloud Foundation stack and posted strong growth. The chips-and-code company today announced its results for the quarter ended February 2nd, its first for FY 2025. Revenue of $14.92 billion represented 25 percent year-on-year growth. Net income of $5.5 billion was a 315 percent increase on the result from Q1 2024.

Broadcom no longer breaks out VMware revenue: sales of Virtzilla's wares are all now lumped into its infrastructure software business unit, which posted $6.7 billion revenue for Q1, up from $4.55 billion for the same quarter last year. Direct comparisons of those numbers are not wise as Broadcom owned VMware for four fifths of Q1 2024. Consider, instead, the $1.97 billion Q4 2023 and $7.6 billion FY 2023 software revenue that Broadcom recorded before it acquired VMware.

Know, also, that Broadcom's software sales grew by just three percent in FY 2023 and four percent in FY 2022. That slow growth means the jump from $1.97 billion software revenue in Q4 2023 to $6.7 billion in Q1 2025 is likely due to VMware, which in its last quarter as an independent company reported $3.4 billion revenue. It therefore looks a lot like Broadcom has added around $1 billion to quarterly VMware revenue in a little over a year.
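The Register's "around $1 billion" estimate can be roughly reproduced from the figures above; this is a back-of-the-envelope sketch that treats Broadcom's slow-growing legacy software revenue as flat:

```python
# Quarterly revenue figures from the article, in billions of USD.
broadcom_sw_q4_2023 = 1.97     # Broadcom software, before the VMware deal
combined_q1_2025 = 6.70        # infrastructure software, VMware included
vmware_last_standalone = 3.40  # VMware's final independent quarter

# Assume legacy software stayed roughly flat (it grew only 3-4% a year).
implied_vmware_now = combined_q1_2025 - broadcom_sw_q4_2023
added = implied_vmware_now - vmware_last_standalone
print(f"implied VMware quarterly revenue: ${implied_vmware_now:.2f}B")
print(f"growth vs. standalone VMware: ~${added:.2f}B")
```

The sketch lands nearer $1.3 billion; allowing for some organic growth in the legacy software business brings it down toward the article's "around $1 billion."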

Open Source

China To Publish Policy To Boost RISC-V Chip Use Nationwide (reuters.com) 24

AmiMoJo writes: China plans to issue guidance to encourage the use of open-source RISC-V chips nationwide for the first time, Reuters reports, citing two sources briefed on the matter, as Beijing accelerates efforts to curb the country's dependence on Western-owned technology.

The policy guidance on boosting the use of RISC-V chips could be released as soon as this month, although the final date could change, the sources said. It is being drafted jointly by eight government bodies, including the Cyberspace Administration of China, China's Ministry of Industry and Information Technology, the Ministry of Science and Technology, and the China National Intellectual Property Administration, they added.

Medicine

World's First 'Synthetic Biological Intelligence' Runs On Living Human Cells 49

Australian company Cortical Labs has launched the CL1, the world's first commercial "biological computer" that merges human brain cells with silicon hardware to form adaptable, energy-efficient neural networks. New Atlas reports: Known as a Synthetic Biological Intelligence (SBI), Cortical's CL1 system was officially launched in Barcelona on March 2, 2025, and is expected to be a game-changer for science and medical research. The human-cell neural networks that form on the silicon "chip" are essentially an ever-evolving organic computer, and the engineers behind it say it learns so quickly and flexibly that it completely outpaces the silicon-based AI chips used to train existing large language models (LLMs) like ChatGPT.

"Today is the culmination of a vision that has powered Cortical Labs for almost six years," said Cortical founder and CEO Dr Hon Weng Chong. "We've enjoyed a series of critical breakthroughs in recent years, most notably our research in the journal Neuron, through which cultures were embedded in a simulated game-world, and were provided with electrophysiological stimulation and recording to mimic the arcade game Pong. However, our long-term mission has been to democratize this technology, making it accessible to researchers without specialized hardware and software. The CL1 is the realization of that mission." He added that while this is a groundbreaking step forward, the full extent of the SBI system won't be seen until it's in users' hands.

"We're offering 'Wetware-as-a-Service' (WaaS)," he added -- customers will be able to buy the CL1 biocomputer outright, or simply buy time on the chips, accessing them remotely to work with the cultured cell technology via the cloud. "This platform will enable the millions of researchers, innovators and big-thinkers around the world to turn the CL1's potential into tangible, real-world impact. We'll provide the platform and support for them to invest in R&D and drive new breakthroughs and research." These remarkable brain-cell biocomputers could revolutionize everything from drug discovery and clinical testing to how robotic "intelligence" is built, allowing unlimited personalization depending on need. The CL1, which will be widely available in the second half of 2025, is an enormous achievement for Cortical -- and as New Atlas saw recently with a visit to the company's Melbourne headquarters -- the potential here is much more far-reaching than Pong. [...]
AI

TSMC Pledges To Spend $100 Billion On US Chip Facilities (techcrunch.com) 67

An anonymous reader quotes a report from TechCrunch: Chipmaker TSMC said that it aims to invest "at least" $100 billion in chip manufacturing plants in the U.S. over the next four years as part of an effort to expand the company's network of semiconductor factories. President Donald Trump announced the news during a press conference Monday. TSMC's cash infusion will fund the construction of several new facilities in Arizona, C. C. Wei, chairman and CEO of TSMC, said during the briefing. "We are going to produce many AI chips to support AI progress," Wei said.

TSMC previously pledged to pour $65 billion into U.S.-based fabrication plants and has received up to $6.6 billion in grants from the CHIPS Act, a major Biden administration-era law that sought to boost domestic semiconductor production. The new investment brings TSMC's total investments in the U.S. chip industry to around $165 billion, Trump said in prepared remarks. [...] TSMC, the world's largest contract chip maker, already has several facilities in the U.S., including a factory in Arizona that began mass production late last year. But the company currently reserves its most sophisticated facilities for its home country of Taiwan.

Intel

Former Intel CEO Barrett Calls for Board Dismissal and Gelsinger's Return (fortune.com) 23

Former Intel CEO Craig Barrett urged the rehiring of Pat Gelsinger, who was abruptly fired two months ago, arguing he should "finish the job he has aptly handled over the past few years."

"Pat Gelsinger did a great job resuscitating the technology development team," Barrett wrote, criticizing the company's current leadership under "a CFO and a product manager." He suggested firing the Intel board rather than splitting the company.

Barrett's comments come in response to proposals from four former board members advocating for Intel's separation into design and manufacturing businesses. Barrett dismissed these board members as "two academics and two former government bureaucrats" lacking semiconductor industry expertise.

The former CEO praised Intel's technological resurgence under Gelsinger, noting its capabilities now match industry leader TSMC's 2nm technology, with additional advances in imaging technology and backside power delivery to complex chips. "Intel is back -- from a technology point of view," Barrett wrote, arguing the best path forward is building on current momentum rather than organizational restructuring that would disrupt the company's 100,000-plus employees across multiple continents.
Intel

Nvidia and Broadcom Testing Chips on Intel Manufacturing Process (reuters.com) 14

Nvidia and Broadcom are conducting manufacturing tests using Intel's advanced 18A chip production process, according to Reuters, signaling potential confidence in the struggling chipmaker's contract manufacturing ambitions. The previously unreported tests could lead to significant manufacturing contracts for Intel, whose foundry business has suffered delays and lacks major chip designer customers.

AMD is also evaluating Intel's 18A technology, which competes with Taiwan's dominant TSMC, according to the report. The current tests focus on determining capabilities of Intel's process rather than running complete chip designs. Intel faces additional setbacks, with qualification of critical intellectual property for 18A taking longer than expected, potentially delaying some customer chip production until mid-2026.
Microsoft

Microsoft Urges Trump To Overhaul Curbs on AI Chip Exports (wsj.com) 30

Microsoft is pushing the Trump administration to loosen and simplify a new system that would restrict the sales of cutting-edge U.S. artificial-intelligence chips to much of the world. From a report: In a blog post that is scheduled to be released Thursday, Microsoft will call for Trump's team to ease the limits on chips that can be used in data centers for training AI models so they no longer apply to a group of U.S. allies including India, Switzerland and Israel, company officials said. Those countries are in the second tier of a three-tier system that underpins the export controls.

Microsoft says the unintended consequence of that proposed system would be that allies facing limited U.S. chip supply would turn to China to get the tech infrastructure they need. China is using the proposed rule to argue to other countries that it would be a better long-term partner for AI infrastructure than the U.S., Microsoft President Brad Smith said in an interview. "Their message is these countries can't rely on the U.S., but China is willing to provide what they need," he said. "That is not good for American business or American foreign policy."

AI

Jensen Huang: AI Has To Do '100 Times More' Computation Now Than When ChatGPT Was Released 32

In an interview with CNBC's Jon Fortt on Wednesday, Nvidia CEO Jensen Huang said next-gen AI will need 100 times more compute than older models as a result of new reasoning approaches that think "about how best to answer" questions step by step. From a report: "The amount of computation necessary to do that reasoning process is 100 times more than what we used to do," Huang told CNBC's Jon Fortt in an interview on Wednesday following the chipmaker's fourth-quarter earnings report. He cited models including DeepSeek's R1, OpenAI's GPT-4 and xAI's Grok 3 as models that use a reasoning process.

Huang pushed back on the idea that leaner models like DeepSeek's R1 will reduce demand for computing, saying DeepSeek popularized reasoning models that will need more chips. "DeepSeek was fantastic," Huang said. "It was fantastic because it open sourced a reasoning model that's absolutely world class." Huang said that Nvidia's percentage of revenue in China has fallen by about half due to the export restrictions, adding that there are other competitive pressures in the country, including from Huawei.

Developers will likely search for ways around export controls through software, whether it be for a supercomputer, a personal computer, a phone or a game console, Huang said. "Ultimately, software finds a way," he said. "You ultimately make that software work on whatever system that you're targeting, and you create great software." Huang said that Nvidia's GB200, which is sold in the United States, can generate AI content 60 times faster than the versions of the company's chips that it sells to China under export controls.
AI

Inception Emerges From Stealth With a New Type of AI Model 16

Inception, a Palo Alto-based AI company founded by Stanford professor Stefano Ermon, claims to have developed a novel diffusion-based large language model (DLM) that significantly outperforms traditional LLMs in speed and efficiency. "Inception's model offers the capabilities of traditional LLMs, including code generation and question-answering, but with significantly faster performance and reduced computing costs, according to the company," reports TechCrunch. From the report: Ermon hypothesized generating and modifying large blocks of text in parallel was possible with diffusion models. After years of trying, Ermon and a student of his achieved a major breakthrough, which they detailed in a research paper published last year. Recognizing the advancement's potential, Ermon founded Inception last summer, tapping two former students, UCLA professor Aditya Grover and Cornell professor Volodymyr Kuleshov, to co-lead the company. [...]

"What we found is that our models can leverage the GPUs much more efficiently," Ermon said, referring to the computer chips commonly used to run models in production. "I think this is a big deal. This is going to change the way people build language models." Inception offers an API as well as on-premises and edge device deployment options, support for model fine-tuning, and a suite of out-of-the-box DLMs for various use cases. The company claims its DLMs can run up to 10x faster than traditional LLMs while costing 10x less. "Our 'small' coding model is as good as [OpenAI's] GPT-4o mini while more than 10 times as fast," a company spokesperson told TechCrunch. "Our 'mini' model outperforms small open-source models like [Meta's] Llama 3.1 8B and achieves more than 1,000 tokens per second."
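Inception has not published implementation details, but the general idea behind masked-diffusion text generation can be sketched in toy form (everything below -- the denoiser, vocabulary, and schedule -- is invented for illustration):

```python
import random

MASK = "<mask>"
VOCAB = ["the", "cat", "sat", "on", "a", "mat"]

def toy_denoiser(tokens):
    """Stand-in for a trained network: propose a (token, confidence)
    pair for every masked position. Here the guesses are random."""
    return {i: (random.choice(VOCAB), random.random())
            for i, t in enumerate(tokens) if t == MASK}

def diffusion_generate(length=8, steps=4, seed=0):
    """Start from an all-mask sequence ('pure noise') and commit the
    most confident positions in parallel each step -- unlike an
    autoregressive LLM, which emits tokens strictly one at a time."""
    random.seed(seed)
    tokens = [MASK] * length
    per_step = length // steps
    for _ in range(steps):
        guesses = toy_denoiser(tokens)
        best = sorted(guesses, key=lambda i: guesses[i][1], reverse=True)
        for i in best[:per_step]:
            tokens[i] = guesses[i][0]
    return tokens

out = diffusion_generate()
assert MASK not in out  # fully denoised in `steps` parallel passes
```

The speed claim rests on exactly this parallelism: a handful of denoising passes over the whole block instead of one forward pass per generated token.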
Hardware

Framework Moves Into Desktops, 2-In-1 Laptops (tomshardware.com) 57

At its "Second Gen" event today, Framework detailed three new computers: an updated Framework Laptop 13 with AMD Ryzen AI 300, a 4.5-liter Mini-ITX desktop powered by Ryzen AI Max, and a colorful, convertible Framework Laptop 12 designed with students in mind. Framework calls the latter a "defining product." Tom's Hardware reports: Framework Desktop: The Framework Desktop is a 4.5L Mini-ITX machine using AMD's Ryzen AI Max "Strix Halo" chips with Radeon 8060S graphics. While this is a mobile chip, Framework says putting it in a desktop chassis gets it to 120W sustained power and 140W boost "while staying quiet and cool." Framework says this should allow 1440p gaming on intense titles, as well as workstation-class projects and local AI. [...] The base model, with a Ryzen AI Max 385 and 32GB of RAM, starts at $1,099, while the top-end machine with a Ryzen AI Max+ 395 with 128GB of RAM begins at $1,999. Framework is only doing "DIY" editions here, so you'll have to get your own storage drive and bring your own operating system (the company is calling it "the easiest PC you'll ever build"). The mainboard on its own will be available from $799. Pre-orders are open now, and Framework expects to ship sometime in Q3.

Framework Laptop 12: The Laptop 12 is designed to bring the flexibility of the Framework Laptop 13 to a smaller, cheaper machine in more colors (with an optional stylus to match). These machines are made of ABS plastic molded in thermoplastic polyurethane, all around a metal frame. Framework says that it's "our easiest product ever to repair," but more information on that will come closer to its launch in mid-2025. I'm really looking forward to this repair guide. It comes in five colorways: lavender, sage, gray, black, and bubblegum. The laptop will come with 13th Gen Intel Core i3 and i5 processors, which aren't the latest, but better than entry-level junk. You'll get up to 48GB of RAM, 2TB of storage, and Wi-Fi 6E. It has a 1920 x 1200 touch screen that the company claims will surpass 400 nits of brightness. There's no pricing information yet, and Framework says there's more to share on pricing and specs later in the year. Pre-orders will open in April ahead of the mid-year launch.

Framework Laptop 13: The Framework Laptop 13 is getting a significant refresh with AMD Ryzen AI 300 Series. It doesn't look all that different on the outside, with a 13.5-inch design that largely resembles the one from way back in 2021. But there are new features. Beyond the processors, the Framework Laptop 13 is getting bumped up to Wi-Fi 7 and is getting a new thermal system, a "next-generation" keyboard, and new colorways for the Expansion Cards and bezels (though I still don't know why you would want a bezel in anything other than black). [...] The new Framework Laptop 13 with AMD Ryzen AI 300 starts at $899 for a DIY Edition without storage or an OS, and $1,099 for a pre-built model. If you're buying the mainboard to put in an old system, that's $449. (Framework is keeping the Ryzen 7040 systems around starting at $749). No word for now on any new Intel models.

Android

Google, Qualcomm Will Support 8 Years of Android Updates (9to5google.com) 19

An anonymous reader quotes a report from 9to5Google: Starting with the Snapdragon 8 Elite, Qualcomm will offer device manufacturers (OEMs) the "ability to provide support for up to eight consecutive years of Android software and security updates." Qualcomm today announced a "program" in partnership with Google: "What this means is that support for platform software included in this program will be made available to OEMs for eight consecutive years, including both Android OS and kernel upgrades, without requiring significant changes or upgrades to the platform and OEM code on the device (a separation commonly referred to as 'Project Treble' or the 'vendor implementation'). While kernel changes will require updating kernel mode drivers, the vendor code can remain unchanged while the software support is being provided."

This program specifically includes "two upgrades to the mobile platform's Android Common Kernel (ACK) to support the eight-year window." It's ultimately up to manufacturers to update their devices, but the bottleneck going forward won't be the chip. Qualcomm today notes how the extended software support it's providing can "lower costs for OEMs interested in supporting their devices longer." The first devices to benefit are Snapdragon 8 Elite-powered smartphones launching with Android 15. Notably, the program runs for the "next five generations" of SoCs, including Snapdragon 8 and 7-series chips launching "later this year." Older chipsets will not benefit from this program.

AI

DeepSeek Accelerates AI Model Timeline as Market Reacts To Low-Cost Breakthrough (reuters.com) 25

Chinese AI startup DeepSeek is speeding up the release of its R2 model following the success of January's R1, which outperformed many US competitors at a fraction of the cost and triggered a $1 trillion-plus market selloff. The Hangzhou-based firm had planned a May release but now wants R2 out "as early as possible," Reuters reported Tuesday.

The upcoming model promises improved coding capabilities and reasoning in multiple languages beyond English. DeepSeek's competitive advantage stems from its parent company High-Flyer's early investment in computing power, including two supercomputing clusters acquired before U.S. export bans on advanced Nvidia chips. The second cluster, Fire-Flyer II, comprised approximately 10,000 Nvidia A100 chips. DeepSeek's cost-efficiency comes from innovative architecture choices like Mixture-of-Experts (MoE) and multi-head latent attention (MLA).
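DeepSeek's exact MoE configuration isn't described here, but the cost-saving intuition behind Mixture-of-Experts is easy to show with a toy routed layer (all sizes and weights below are made up): only the top-k experts actually run for a given token, so per-token compute stays small even as total parameters grow.

```python
import math
import random

def moe_layer(x, experts, gate, top_k=2):
    """Toy Mixture-of-Experts layer: score every expert with the gate,
    run only the top_k highest-scoring experts, and mix their outputs
    by softmax weight."""
    scores = [sum(xi * gi for xi, gi in zip(x, row)) for row in gate]
    top = sorted(range(len(experts)), key=lambda e: scores[e])[-top_k:]
    mx = max(scores[e] for e in top)
    w = [math.exp(scores[e] - mx) for e in top]
    total = sum(w)
    out = [0.0] * len(x)
    for weight, e in zip(w, top):
        # Only these top_k expert matrix multiplies actually execute;
        # the other experts' parameters sit idle for this token.
        y = [sum(xi * wij for xi, wij in zip(x, row)) for row in experts[e]]
        out = [o + (weight / total) * yi for o, yi in zip(out, y)]
    return out

random.seed(0)
d, n_experts = 8, 4
x = [random.gauss(0, 1) for _ in range(d)]
experts = [[[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]
           for _ in range(n_experts)]
gate = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_experts)]
y = moe_layer(x, experts, gate)
assert len(y) == d
```

Here 4 experts hold the parameters but only 2 run per token; production MoE models push that ratio much further, which is where the cost savings come from.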

According to Bernstein analysts, DeepSeek's pricing was 20-40 times cheaper than OpenAI's equivalent models. The competitive pressure has already forced OpenAI to cut prices and release a scaled-down model, while Google's Gemini has introduced discounted access tiers.
Microsoft

Microsoft Trims More CPUs From Windows 11 Compatibility List (theregister.com) 95

Microsoft has updated its CPU compatibility list for Windows 11 24H2, excluding pre-11th-generation Intel processors for OEMs building new PCs. The Register reports: Windows 11 24H2 has been available to customers for months, yet Microsoft felt compelled in its February update to confirm that builders, specifically, must use Intel's 11th-generation or later silicon when building brand new PCs to run its most recent OS iteration. "These processors meet the design principles around security, reliability, and the minimum system requirements for Windows 11," Microsoft says.

Intel's 11th-generation chips arrived in 2020 and were discontinued last year. It would be surprising, if not unheard of, for OEMs to build machines with unsupported chips. Intel has already transitioned many pre-11th-generation chips to "a legacy software support model," so Microsoft's decision to omit the chips from the OEM list is understandable. However, this could be seen as a creeping problem: chips from earlier generations appeared, until very recently, in the lists of supported Intel processors for Windows 11 22H2 and 23H2.

This new OEM list may add to worries of some users looking at the general hardware compatibility specs for Windows 11 and wondering if the latest information means that even the slightly newer hardware in their org's fleet will soon no longer meet the requirements of Microsoft's flagship operating system. It's a good question, and the answer -- currently -- appears to be that those "old" CPUs are still suitable. Microsoft has a list of hardware compatibility requirements that customers can check, and they have not changed much since the outcry when they were first published.
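An org auditing its fleet against the new floor could start with something like the sketch below. The generation-parsing heuristic is an assumption for illustration, not Microsoft's method, and it only handles desktop-style Core model numbers:

```python
import re

MIN_INTEL_CORE_GEN = 11  # floor for new OEM builds on Windows 11 24H2

def intel_core_generation(cpu_name):
    """Heuristic: pull the generation out of a Core iX model number,
    e.g. i7-10700K -> 10, i9-9900K -> 9. Returns None if no match."""
    m = re.search(r"\bi[3579]-(\d{4,5})", cpu_name)
    if m is None:
        return None
    return int(m.group(1)[:-3])  # digits before the 3-digit SKU

def meets_oem_floor(cpu_name):
    gen = intel_core_generation(cpu_name)
    return gen is not None and gen >= MIN_INTEL_CORE_GEN

print(meets_oem_floor("Intel Core i7-10700K"))  # False: 10th gen
print(meets_oem_floor("Intel Core i5-11400"))   # True: 11th gen
```

Mobile model numbers (e.g. i5-1135G7) encode the generation differently, so a real audit would check against Microsoft's published supported-processor lists rather than parse names.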

Robotics

China's Electric-Vehicle-To-Humanoid-Robot Pivot (technologyreview.com) 37

"[O]ur intrepid China reporter, Caiwei Chen, has identified a new trend unfolding within China's tech scene: Companies that were dominant in electric vehicles are betting big on translating that success into developing humanoid robots," writes MIT Technology Review's James O'Donnell. "I spoke with her about what she found out and what it might mean for Trump's policies and the rest of the globe..." An anonymous reader quotes an excerpt from the report: Your story looks at electric-vehicle makers in China that are starting to work on humanoid robots, but I want to ask about a crazy stat. In China, 53% of vehicles sold are either electric or hybrid, compared with 8% in the US. What explains that?

Price is a huge factor -- there are countless EV brands competing at different price points, making them both affordable and high-quality. Government incentives also play a big role. In Beijing, for example, trading in an old car for an EV gets you 10,000 RMB (about $1,500), and that subsidy was recently doubled. Plus, finding public charging and battery-swapping infrastructure is much less of a hassle than in the US.

You open your story noting that China's recent New Year Gala, watched by billions of people, featured a cast of humanoid robots, dancing and twirling handkerchiefs. We've covered how sometimes humanoid videos can be misleading. What did you think?

I would say I was relatively impressed -- the robots showed good agility and synchronization with the music, though their movements were simpler than human dancers'. The one trick that is supposed to impress the most is the part where they twirl the handkerchief with one finger, toss it into the air, and then catch it perfectly. This is the signature of the Yangko dance, and having performed it once as a child, I can attest to how difficult the trick is even for a human! There was some skepticism on the Chinese internet about how this was achieved and whether they used additional reinforcement like a magnet or a string to secure the handkerchief, and after watching the clip too many times, I tend to agree.

President Trump has already imposed tariffs on China and is planning even more. What could the implications be for China's humanoid sector?

Unitree's H1 and G1 models are already available for purchase and were showcased at CES this year. Large-scale US deployment isn't happening yet, but China's lower production costs make these robots highly competitive. Given that 65% of the humanoid supply chain is in China, I wouldn't be surprised if robotics becomes the next target in the US-China tech war.

In the US, humanoid robots are getting lots of investment, but there are plenty of skeptics who say they're too clunky, finicky, and expensive to serve much use in factory settings. Are attitudes different in China?

Skepticism exists in China too, but I think there's more confidence in deployment, especially in factories. With an aging population and a labor shortage on the horizon, there's also growing interest in medical and caregiving applications for humanoid robots.

DeepSeek revived the conversation about chips and the way the US seeks to control where the best chips end up. How do the chip wars affect humanoid-robot development in China?

Training humanoid robots currently doesn't demand as much computing power as training large language models, since there isn't enough physical movement data to feed into models at scale. But as robots improve, they'll need high-performance chips, and US sanctions will be a limiting factor. Chinese chipmakers are trying to catch up, but it's a challenge.

Data Storage

Sandisk Puts Petabyte SSDs On the Roadmap (tomshardware.com) 28

SanDisk aims to produce petabyte-scale SSDs through its new UltraQLC platform, though the company has not specified a release timeline. The technology, it said, combines SanDisk's BICS 8 QLC 3D NAND with a proprietary 64-channel controller featuring hardware accelerators that offload storage functions from firmware to reduce latency and improve reliability.

The initial UltraQLC drives will use 2Tb NAND chips to reach 128TB capacities, with future iterations targeting 256TB, 512TB, and eventually 1PB as higher-density NAND becomes available. The controller dynamically adjusts power based on workload and employs an advanced bus multiplexer to handle increased data loads from high-density QLC stacks, the company said.
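The roadmap above is largely a die-density exercise: capacity grows either by packing more NAND dies into a drive or by waiting for denser dies. A minimal back-of-the-envelope sketch (assuming drives are built purely from raw NAND dies, ignoring over-provisioning and controller overhead -- an illustration, not Sandisk's actual design):

```python
# Rough capacity arithmetic for the UltraQLC roadmap.
# Assumption: drive capacity = die count x die density, with no
# over-provisioning or metadata overhead.

TBITS_PER_TBYTE = 8  # 1 terabyte = 8 terabits

def dies_needed(drive_tb: int, die_density_tbit: int) -> int:
    """NAND dies required to reach drive_tb terabytes at the given die density."""
    return drive_tb * TBITS_PER_TBYTE // die_density_tbit

# Initial 128 TB drives on today's 2 Tb dies:
print(dies_needed(128, 2))    # 512 dies
# A 1 PB (1024 TB) drive on the same 2 Tb dies:
print(dies_needed(1024, 2))   # 4096 dies
# The same 1 PB drive if density quadruples to a hypothetical 8 Tb die:
print(dies_needed(1024, 8))   # 1024 dies
```

The jump from 512 to 4096 dies shows why petabyte drives wait on "higher-density NAND becoming available" rather than simply stacking more of today's chips.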

Supercomputing

The IRS Is Buying an AI Supercomputer From Nvidia (theintercept.com) 150

According to The Intercept, the IRS is set to purchase an Nvidia SuperPod AI supercomputer to enhance its machine learning capabilities for tasks like fraud detection and taxpayer behavior analysis. From the report: With Elon Musk's so-called Department of Government Efficiency installing itself at the IRS amid a broader push to replace federal bureaucracy with machine-learning software, the tax agency's computing center in Martinsburg, West Virginia, will soon be home to a state-of-the-art Nvidia SuperPod AI computing cluster. According to the previously unreported February 5 acquisition document, the setup will combine 31 separate Nvidia servers, each containing eight of the company's flagship Blackwell processors designed to train and operate artificial intelligence models that power tools like ChatGPT. The hardware has not yet been purchased or installed, nor is a price listed, but SuperPod systems reportedly start at $7 million. The contract materials note that the setup will include a substantial memory upgrade from Nvidia.

Though small compared to the massive AI-training data centers deployed by companies like OpenAI and Meta, the SuperPod is still a powerful and expensive setup using the most advanced technology offered by Nvidia, whose chips have facilitated the global machine-learning spree. While the hardware can be used in many ways, it's marketed as a turnkey means of creating and querying an AI model. Last year, the MITRE Corporation, a federally funded military R&D lab, acquired a $20 million SuperPod setup to train bespoke AI models for use by government agencies, touting the purchase as a "massive increase in computing power" for the United States.

How exactly the IRS will use its SuperPod is unclear. An agency spokesperson said the IRS had no information to share on the supercomputer purchase, including which presidential administration ordered it. A 2024 report by the Treasury Inspector General for Tax Administration identified 68 different AI-related projects underway at the IRS; the Nvidia cluster is not named among them, though many were redacted. But some clues can be gleaned from the purchase materials. "The IRS requires a robust and scalable infrastructure that can handle complex machine learning (ML) workloads," the document explains. "The Nvidia Super Pod is a critical component of this infrastructure, providing the necessary compute power, storage, and networking capabilities to support the development and deployment of large-scale ML models."

The document notes that the SuperPod will be run by the IRS Research, Applied Analytics, and Statistics division, or RAAS, which leads a variety of data-centric initiatives at the agency. While no specific uses are cited, it states that this division's Compliance Data Warehouse project, which is behind this SuperPod purchase, has previously used machine learning for automated fraud detection, identity theft prevention, and generally gaining a "deeper understanding of the mechanisms that drive taxpayer behavior."
