Youtube

YouTube Expands AI Deepfake Detection To Politicians, Government Officials, and Journalists 43

YouTube is expanding its AI deepfake detection tools to a pilot group of politicians, government officials, and journalists, allowing them to identify and request removal of unauthorized AI-generated videos impersonating them. TechCrunch reports: The technology itself launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests. Similar to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. These tools are sometimes used to try to spread misinformation and manipulate people's perception of reality, as they leverage the deepfaked personas of notable figures -- like politicians or other government officials -- to say and do things in these AI videos that they didn't in real life.
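TechCrunch doesn't detail how the matching works, but likeness-detection systems of this kind are generally described as comparing face embeddings extracted from uploaded video frames against a reference embedding for the enrolled person. The sketch below is a minimal, purely illustrative version of that idea; the vectors, function names, and threshold are all assumptions for illustration, not YouTube's implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_likeness_match(reference, candidate, threshold=0.9):
    """Flag a candidate face embedding as a possible likeness match.

    `reference` would come from the enrolled person's verified selfie;
    `candidate` from a face detected in an uploaded video. The threshold
    here is arbitrary; real systems calibrate it empirically.
    """
    return cosine_similarity(reference, candidate) >= threshold

# Toy vectors standing in for face embeddings.
enrolled = [0.2, 0.8, 0.5]
deepfake_frame = [0.21, 0.79, 0.52]   # near-identical direction -> match
unrelated_frame = [0.9, -0.1, 0.3]    # different face -> no match

print(is_likeness_match(enrolled, deepfake_frame))   # True
print(is_likeness_match(enrolled, unrelated_frame))  # False
```

Production systems use learned face-embedding models and adversarially tested thresholds; the embedding-comparison step is the only part this sketch shares with them.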

With the new pilot program, YouTube aims to balance users' free expression with the risks associated with AI technology that can generate a convincing likeness of a public figure. [...] [Leslie Miller, YouTube's vice president of Government Affairs and Public Policy] explained that not all of the detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression. The company noted it's advocating for these protections at a federal level, too, with its support for the NO FAKES Act in D.C., which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness.

To use the new tool, eligible pilot testers must first prove their identity by uploading a selfie and a government ID. They can then create a profile, view the matches that show up, and optionally request their removal. YouTube says it plans to eventually give people the ability to prevent uploads of violating content before they go live or, possibly, allow them to monetize those videos, similar to how its Content ID system works. The company would not confirm which politicians or officials would be among its initial testers, but said the goal is to make the technology broadly available over time.
Government

EFF, Ubuntu and Other Distros Discuss How to Respond to Age-Verification Laws (9to5linux.com) 168

System76 isn't the only one criticizing new age-verification laws. The blog 9to5Linux published an "informal" look at other discussions in various Linux communities. Earlier this week, Ubuntu developer Aaron Rainbolt proposed on the Ubuntu mailing list an optional D-Bus interface (org.freedesktop.AgeVerification1) that can be implemented by arbitrary applications as a distro sees fit, but Canonical responded that the company does not yet have a solution to announce for age declaration in Ubuntu. "Canonical is aware of the legislation and is reviewing it internally with legal counsel, but there are currently no concrete plans on how, or even whether, Ubuntu will change in response," said Jon Seager, VP Engineering at Canonical. "The recent mailing list post is an informal conversation among Ubuntu community members, not an announcement. While the discussion contains potentially useful ideas, none have been adopted or committed to by Canonical."
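Rainbolt's proposal, as described, names only the interface (org.freedesktop.AgeVerification1) and leaves the implementation to each distro. Purely as a hypothetical illustration of what such a provider might expose, here is a plain-Python stand-in; the method name and its semantics are invented, and a real service would be registered on the D-Bus session bus via a binding such as dbus-python or GDBus rather than instantiated directly like this.

```python
# Hypothetical sketch of an org.freedesktop.AgeVerification1 provider.
# The mailing-list proposal names only the interface; everything below
# is invented for illustration.

AGE_VERIFICATION_IFACE = "org.freedesktop.AgeVerification1"

class AgeVerificationProvider:
    """Stand-in for a service that applications could query over D-Bus
    instead of each implementing their own age checks."""

    def __init__(self, verified_age=None):
        # None means the user has made no age declaration at all.
        self._verified_age = verified_age

    def IsAtLeast(self, minimum_age: int) -> bool:
        """Answer 'is the user at least N years old?' without revealing
        the actual age, a data-minimising design often proposed for
        age assurance."""
        if self._verified_age is None:
            return False
        return self._verified_age >= minimum_age

provider = AgeVerificationProvider(verified_age=17)
print(provider.IsAtLeast(13))  # True
print(provider.IsAtLeast(18))  # False
```

Keeping the check as a yes/no predicate, rather than exposing a birthdate, is one way a distro-level service could satisfy an age-gating requirement while disclosing as little as possible to applications.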

Similar talks are underway in the Fedora and Linux Mint communities in case the California Digital Age Assurance Act and similar laws from other states and countries come to be enforced. At the same time, other OS developers, like MidnightBSD, have decided to exclude California from desktop use entirely.

Slashdot contacted Hayley Tsukayama, Director of State Affairs at EFF, who says their organization "has long warned against age-gating the internet. Such mandates strike at the foundation of the free and open internet."

And there's another problem. "Many of these mandates imagine technology that does not currently exist." Such poorly thought-out mandates, in truth, cannot achieve the purported goal of age verification. Often, they are easy to circumvent and many also expose consumers to real data breach risk.

These burdens fall particularly heavily on developers who aren't at large, well-resourced companies, such as those developing open-source software. Not recognizing the diversity of software development when thinking about liability in these proposals effectively limits software choices — and at a time when computational power is being rapidly concentrated in the hands of the few. That harms users' and developers' right to free expression, their digital liberties, privacy, and ability to create and use open platforms...

Rather than creating age gates, a well-crafted privacy law that empowers all of us — young people and adults alike — to control how our data is collected and used would be a crucial step in the right direction.

Encryption

Ireland Wants To Give Its Cops Spyware, Ability To Crack Encrypted Messages (theregister.com) 48

The Irish government is planning to bolster its police's ability to intercept communications, including encrypted messages, and provide a legal basis for spyware use. From a report: The Communications (Interception and Lawful Access) Bill is being framed as a replacement for the current legislation that governs digital communication interception. The Department of Justice, Home Affairs, and Migration said in an announcement this week the existing Postal Packets and Telecommunications Messages (Regulation) Act 1993 "predates the telecoms revolution of the last 20 years."

As well as updating laws passed more than three decades ago, the government was keen to emphasize that a key ambition for the bill is to empower law enforcement to intercept all forms of communications. The bill will bring communications from IoT devices, email services, and electronic messaging platforms into scope, "whether encrypted or not."

Like certain other governments that want to compel encrypted messaging services to unscramble packets of interest, Ireland's announcement did not explain exactly how it plans to do this. However, it promised to implement a robust legal framework, alongside all necessary privacy and security safeguards, if these proposals do ultimately become law. It also vowed to establish structures to ensure "the maximum possible degree of technical cooperation between state agencies and communication service providers."

Social Networks

Supreme Court Hacker Posted Stolen Government Data On Instagram (techcrunch.com) 12

An anonymous reader quotes a report from TechCrunch: Last week, Nicholas Moore, 24, a resident of Springfield, Tennessee, pleaded guilty to repeatedly hacking into the U.S. Supreme Court's electronic document filing system. At the time, there were no details about the specifics of the hacking crimes Moore was admitting to. On Friday, a newly filed document -- first spotted by Court Watch's Seamus Hughes -- revealed more details about Moore's hacks. Per the filing, Moore hacked not only into the Supreme Court's systems, but also into the network of AmeriCorps, a government agency that runs stipended volunteer programs, and the systems of the Department of Veterans Affairs, which provides healthcare and welfare to military veterans.

Moore accessed those systems using stolen credentials of users who were authorized to access them. Once he gained access to those victims' accounts, Moore accessed and stole their personal data and posted some of it to his Instagram account: @ihackthegovernment. In the case of the Supreme Court victim, identified as GS, Moore posted their name and "current and past electronic filing records." [...] According to the court document, Moore faces a maximum sentence of one year in prison and a maximum fine of $100,000.

Censorship

US Bars Five Europeans It Says Pressured Tech Firms To Censor American Viewpoints Online (apnews.com) 169

An anonymous reader quotes a report from the Associated Press: The State Department announced Tuesday it was barring five Europeans it accused of leading efforts to pressure U.S. tech firms to censor or suppress American viewpoints. The Europeans, characterized by Secretary of State Marco Rubio as "radical" activists and "weaponized" nongovernmental organizations, fell afoul of a new visa policy announced in May to restrict the entry of foreigners deemed responsible for censorship of protected speech in the United States. "For far too long, ideologues in Europe have led organized efforts to coerce American platforms to punish American viewpoints they oppose," Rubio posted on X. "The Trump Administration will no longer tolerate these egregious acts of extraterritorial censorship."

The five Europeans were identified by Sarah Rogers, the under secretary of state for public diplomacy, in a series of posts on social media. [...] The five Europeans named by Rogers are: Imran Ahmed, chief executive of the Centre for Countering Digital Hate; Josephine Ballon and Anna-Lena von Hodenberg, leaders of HateAid, a German organization; Clare Melford, who runs the Global Disinformation Index; and former EU Commissioner Thierry Breton, who was responsible for digital affairs. Rogers in her post on X called Breton, a French business executive and former finance minister, the "mastermind" behind the EU's Digital Services Act, which imposes a set of strict requirements designed to keep internet users safe online. This includes flagging harmful or illegal content like hate speech. She referred to Breton warning Musk of a possible "amplification of harmful content" in broadcasting his livestream interview with Trump in August 2024, when Trump was running for president.

Social Networks

Like Australia, Denmark Plans to Severely Restrict Social Media Use for Teenagers (apnews.com) 92

"As Australia began enforcing a world-first social media ban for children under 16 years old this week, Denmark is planning to follow its lead," reports the Associated Press, "and severely restrict social media access for young people." The Danish government announced last month that it had secured an agreement by three governing coalition and two opposition parties in parliament to ban access to social media for anyone under the age of 15. Such a measure would be the most sweeping step yet by a European Union nation to limit use of social media among teens and children.

The Danish government's plans could become law as soon as mid-2026. The proposed measure would give some parents the right to let their children access social media from age 13, local media reported, but the ministry has not yet fully shared the plans... [A] new "digital evidence" app, announced by the Digital Affairs Ministry last month and expected to launch next spring, will likely form the backbone of the Danish plans. The app will display an age certificate to ensure users comply with social media age limits, the ministry said.
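The ministry has not published how the "digital evidence" app's age certificate would work. One common pattern for such certificates is a signed over/under-age assertion that reveals no birthdate; the toy Python sketch below illustrates that pattern using an HMAC, and the field names, key handling, and symmetric scheme are all assumptions, not Denmark's design.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"  # placeholder secret

def issue_age_certificate(user_id: str, is_over_15: bool) -> dict:
    """Issuer side: sign an over/under-age claim (not a birthdate),
    minimising the data any relying platform learns."""
    claim = json.dumps({"sub": user_id, "over15": is_over_15},
                       sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}

def verify_age_certificate(cert: dict) -> bool:
    """Platform side: accept the user only if the signature checks out
    and the signed claim says they meet the age limit."""
    expected = hmac.new(SHARED_KEY, cert["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["sig"]):
        return False  # tampered or forged certificate
    return json.loads(cert["claim"])["over15"]

cert = issue_age_certificate("user-123", is_over_15=True)
print(verify_age_certificate(cert))  # True
```

A real deployment would use asymmetric signatures so that verifying platforms never hold the issuer's signing key.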

The article also notes Malaysia "is expected to ban social media accounts for people under the age of 16 starting at the beginning of next year, and Norway is also taking steps to restrict social media access for children and teens.

"China -- which manufactures many of the world's digital devices -- has set limits on online gaming time and smartphone time for kids."
Censorship

Taiwan Cries Censorship As Government Bans Rednote (taipeitimes.com) 38

Longtime Slashdot reader hackingbear writes: Taiwan's government has ordered a one-year block of Xiaohongshu, a popular mainland Chinese-owned social media app also known as RedNote, citing its failure to cooperate with authorities over fraud-related concerns. Taiwan's Ministry of the Interior on Thursday cited the refusal of Xiaohongshu, which has no business presence on the island, to cooperate with authorities as the basis for the ban, claiming that the platform has been linked to more than 1,700 fraud-related cases that resulted in financial losses of 247.7 million Taiwanese dollars ($7.9 million). "Due to the inability to obtain necessary data in accordance with the law, law enforcement authorities have encountered significant obstacles in investigations, creating a de facto legal vacuum," the ministry said in a statement.

Cheng Li-wun, chairwoman of Taiwan's opposition Chinese Nationalist Party (KMT), decried the government's plan to suspend access to Xiaohongshu for one year as censorship. "Many people online are already asking 'How to climb over the firewall to access Xiaohongshu,'" Cheng posted on social media. Separately, Meta was facing fines earlier this year for failing to disclose information on individuals who funded advertisements on its social media platforms. "Meta failed to fully disclose information regarding who paid for the advertisement and who benefited from it," Deputy Minister Lin of the Ministry of Digital Affairs said at a news conference on June 18.

If MODA decides to impose the fine, it would mark the second such penalty against Meta in Taiwan, following a NT$1 million ($33,381) fine issued in May for violating the Fraud Crime Hazard Prevention Act by failing to disclose information on individuals who commissioned and funded two Facebook advertisements. Meta's Threads platform was also included in the regulatory framework following nearly 1,900 fraud-related reports associated with the platform, with 718 confirmed as scams. Xiaohongshu has surged in popularity among young Taiwanese in recent years, amassing 3 million users on the island of 23 million people.

Google

Singapore Orders Apple, Google To Prevent Government Spoofing on Messaging Platforms (reuters.com) 8

An anonymous reader shares a report: Singapore's police have ordered Apple and Google to prevent the spoofing of government agencies on their messaging platforms, the home affairs ministry said on Tuesday. The order under the nation's Online Criminal Harms Act came after the police observed scams on Apple's iMessage and Google Messages purporting to be from companies such as the local postal service SingPost. While government agencies have registered with a local SMS registry so only they can send messages with the "gov.sg" name, this does not currently apply to the iMessage and Google Messages platforms.
China

Dutch Hand Back Control of Chinese-Owned Chipmaker Nexperia (bloomberg.com) 12

An anonymous reader quotes a report from Bloomberg: The Dutch government suspended its powers over chipmaker Nexperia, restoring control to its Chinese owner (paywalled; alternative source) and defusing a standoff with Beijing that had begun to hamper automotive production around the world. The order that gave the Netherlands powers to block or revise decisions at Nexperia was dropped as "a show of goodwill," Economic Affairs Minister Vincent Karremans said Wednesday in a post on social media site X.

Bloomberg had reported earlier this month that the Netherlands was prepared to take the step if chip deliveries from the company's site in China could be confirmed. The move marks a significant de-escalation of a dispute that underscored the global nature of supply chains and highlighted Beijing's growing leverage. Even though Nexperia's chips aren't advanced and the company only operates one facility in China, the spat disrupted automakers from Honda Motor Co. to Volkswagen AG.

The reversal by the Dutch government was set in motion after a breakthrough in talks earlier that involved Chinese and Dutch officials, with input from Germany, the European Union as well as the US. To help resolve the stalemate, Beijing agreed to loosen export restrictions from Nexperia's Chinese plant, the largest of its kind in the world. The Dutch economic affairs ministry sent a delegation to Beijing this week to negotiate a "mutually agreeable solution," according to a ministry statement.

Social Networks

Denmark's Government Aims To Ban Access To Social Media For Children Under 15 (apnews.com) 35

An anonymous reader quotes a report from the Associated Press: Denmark's government on Friday announced an agreement to ban access to social media for anyone under 15, ratcheting up pressure on Big Tech platforms as concerns grow that kids are getting too swept up in a digitized world of harmful content and commercial interests. The move would give some parents -- after a specific assessment -- the right to let their children access social media from age 13.

It wasn't immediately clear how such a ban would be enforced: Many tech platforms already restrict pre-teens from signing up. Officials and experts say such restrictions don't always work. Such a measure would be among the most sweeping steps yet by a European Union government to limit use of social media among teens and younger children, which has drawn concerns in many parts of an increasingly online world.
"We've given the tech giants so many chances to stand up and to do something about what is happening on their platforms. They haven't done it," said Caroline Stage, Denmark's minister for digital affairs. "So now we will take over the steering wheel and make sure that our children's futures are safe."

"I can assure you that Denmark will hurry, but we won't do it too quickly because we need to make sure that the regulation is right and that there is no loopholes for the tech giants to go through," Stage said.
Security

Proton Mail Suspended Journalist Accounts At Request of Cybersecurity Agency (theintercept.com) 77

An anonymous reader quotes a report from The Intercept: The company behind the Proton Mail email service, Proton, describes itself as a "neutral and safe haven for your personal data, committed to defending your freedom." But last month, Proton disabled email accounts belonging to journalists reporting on security breaches of various South Korean government computer systems following a complaint by an unspecified cybersecurity agency. After a public outcry, the journalists' accounts were eventually reinstated multiple weeks later -- but the reporters and editors involved still want answers on how and why Proton decided to shut down the accounts in the first place.

Martin Shelton, deputy director of digital security at the Freedom of the Press Foundation, highlighted that numerous newsrooms use Proton's services as alternatives to something like Gmail "specifically to avoid situations like this," pointing out that "While it's good to see that Proton is reconsidering account suspensions, journalists are among the users who need these and similar tools most." Newsrooms like The Intercept, the Boston Globe, and the Tampa Bay Times all rely on Proton Mail for emailed tip submissions. Shelton noted that perhaps Proton should "prioritize responding to journalists about account suspensions privately, rather than when they go viral." On Reddit, Proton's official account stated that "Proton did not knowingly block journalists' email accounts" and that the "situation has unfortunately been blown out of proportion."

The two journalists whose accounts were disabled were working on an article published in the August issue of the long-running hacker zine Phrack. The story described how a sophisticated hacking operation -- what's known in cybersecurity parlance as an APT, or advanced persistent threat -- had wormed its way into a number of South Korean computer networks, including those of the Ministry of Foreign Affairs and the military Defense Counterintelligence Command, or DCC. The journalists, who published their story under the names Saber and cyb0rg, describe the hack as being consistent with the work of Kimsuky, a notorious North Korean state-backed APT sanctioned by the U.S. Treasury Department in 2023. As they pieced the story together, emails viewed by The Intercept show that the authors followed cybersecurity best practices and conducted what's known as responsible disclosure: notifying affected parties that a vulnerability has been discovered in their systems prior to publicizing the incident.
Phrack said the account suspensions created a "real impact to the author. The author was unable to answer media requests about the article." Phrack noted that the co-authors were already working with affected South Korean organizations on responsible disclosure and system fixes. "All this was denied and ruined by Proton," Phrack stated.

Phrack editors said that the incident leaves them "concerned what this means to other whistleblowers or journalists. The community needs assurance that Proton does not disable accounts unless Proton has a court order or the crime (or ToS violation) is apparent."
News

Video Platform Kick Investigated Over Streamer's Death (bbc.com) 47

French prosecutors have opened an investigation into the Australian video platform Kick over the death of a content creator during a live stream. From a report: Raphael Graven -- also known as Jean Pormanove -- was found dead in a residence near the city of Nice last week. He was known for videos in which he endured apparent violence and humiliation. The Paris prosecutor said the investigation would look into whether Kick knowingly broadcast "videos of deliberate attacks on personal integrity."

The BBC has approached Kick for comment. A spokesperson for the platform previously said the company was "urgently reviewing" the circumstances around Mr Graven's death. The prosecutor's investigation will also seek to determine whether Kick complied with the European Union's Digital Services Act, and the obligation on platforms to notify the authorities if the life or safety of individuals is in question. In a separate announcement, France's minister for digital affairs, Clara Chappaz, said the government would sue the platform for "negligence" over its failure to block "dangerous content", according to the AFP news agency.

Businesses

US Signals Intention To Rethink H-1B Job Lottery (theregister.com) 162

The US Department of Homeland Security (DHS) and the US Citizenship and Immigration Services (USCIS) intend to reevaluate how H-1B visas are issued, according to a regulatory filing. From a report: The notice, filed on Thursday with the US Office of Management and Budget's Office of Information and Regulatory Affairs (OIRA), seeks the statutory review of a proposed rule titled "Weighted Selection Process for Registrants and Petitioners Seeking To File Cap-Subject H-1B Petitions."

Once the review is complete, which could be a matter of days or weeks, the text of the rule is expected to be published in the US Federal Register. Based on the rule title, it appears the government intends to change the system for allocating H-1B visas from the current lottery to some system that will favor applicants who meet specified criteria, possibly related to skills.

The H-1B visa program, which reached its Fiscal 2026 cap on Friday, allows skilled guest workers to come work in the US. As of 2019, there were about 600,000 H-1B workers in the US, according to USCIS. The foreign worker program is beloved by technology companies, ostensibly to hire talent not readily available from American workers. But H-1B -- along with the Optional Practical Training (OPT) program -- has long been criticized for making it easier to undercut US worker wages, limiting labor rights for immigrants, and for persistent abuse of the rules by outsourcing companies.

Government

California AI Policy Report Warns of 'Irreversible Harms' 52

An anonymous reader quotes a report from Time Magazine: While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause "potentially irreversible harms," a new report commissioned by California Governor Gavin Newsom has warned. "The opportunity to establish effective AI governance frameworks may not remain open indefinitely," says the report, which was published on June 17 (PDF). Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost for inaction at this current moment could be "extremely high." [...]

"Foundation model capabilities have rapidly advanced since Governor Newsom vetoed SB 1047 last September," the report states. The industry has shifted from large language AI models that merely predict the next word in a stream of text toward systems trained to solve complex problems and that benefit from "inference scaling," which allows them more time to process information. These advances could accelerate scientific research, but also potentially amplify national security risks by making it easier for bad actors to conduct cyberattacks or acquire chemical and biological weapons. The report points to Anthropic's Claude 4 models, released just last month, which the company said might be capable of helping would-be terrorists create bioweapons or engineer a pandemic. Similarly, OpenAI's o3 model reportedly outperformed 94% of virologists on a key evaluation. In recent months, new evidence has emerged showing AI's ability to strategically lie, appearing aligned with its creators' goals during training but displaying other objectives once deployed, and exploit loopholes to achieve its goals, the report says. While "currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm," the report says.

While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could actually "reduce compliance burdens on developers and avoid a patchwork approach" by providing a blueprint for other states, while keeping the public safer. It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. It "steers clear" of some of the more divisive provisions of SB 1047, like the requirement for a "kill switch" or shutdown mechanism to quickly halt certain AI systems in case of potential harm, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace and a lead writer of the report.

Instead, the approach centers around enhancing transparency, for example through legally protecting whistleblowers and establishing incident reporting systems, so that lawmakers and the public have better visibility into AI's progress. The goal is to "reap the benefits of innovation. Let's not set artificial barriers, but at the same time, as we go, let's think about what we're learning about how it is that the technology is behaving," says Cuellar, who co-led the report. The report emphasizes this visibility is crucial not only for public-facing AI applications, but for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge. "The underlying approach here is one of 'trust but verify,'" Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. That's a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) to conduct pre-deployment tests. It's an approach that acknowledges the "substantial expertise inside industry," Singer says, but "also underscores the importance of methods of independently verifying safety claims."
Government

CISA Loses Another Senior Exec (theregister.com) 34

An anonymous reader quotes a report from The Register: The US Cybersecurity and Infrastructure Security Agency has lost another senior leader: executive director Bridget Bean departed on Wednesday. Bean, who served as the de facto agency boss for five months between former CISA director Jen Easterly's departure in January and Madhu Gottumukkala's appointment to the deputy director post last month, said she was "officially retiring from Federal service once again" in a LinkedIn post. "My time at CISA has been truly remarkable," she wrote. "Having had the privilege to serve as the Senior Official Performing the Duties of Director of CISA for 5 months has been a profound honor."

CISA's executive leadership page now lists Gottumukkala as its acting director, and the agency remains without a Senate-confirmed leader. President Trump nominated Sean Plankey to serve as the agency's director, and his nomination is scheduled for consideration (PDF) by the Senate's Homeland Security and Governmental Affairs Committee today. However, his appointment still requires a full Senate vote. Senator Ron Wyden (D-OR) has said he will continue to block Plankey's confirmation until CISA releases an unclassified report on American telecommunications networks' weak security.

At the time of her departure, Bean had spent three and a half years with CISA and more than three decades with the federal government, including a job as the Federal Emergency Management Agency's third-ranking official. Before accepting the executive director post, she was CISA's first chief integration officer. In this position, she "led the integration of the agency's operations and ensured CISA's frontline of regional staff seamlessly supported the critical infrastructure that Americans rely on every hour of every day," according to her bio on the agency's website. [...] Bean's retirement comes during a talent exodus from CISA -- and other federal government agencies -- with some folks getting fired and others taking the Trump administration's buyout offer to resign from public service. As of May 30, the heads of five of CISA's six operational divisions and six of its 10 regional offices had left the agency, and around 1,000 people, nearly one-third of its total staff, have reportedly left CISA since Trump took office.

Government

CISA Budget Faces Possible $500 Million Cut (theregister.com) 50

President Trump's proposed 2026 budget seeks to cut nearly $500 million from CISA, accusing the agency of prioritizing censorship over cybersecurity and election protection. "The proposed cuts -- which are largely symbolic at this stage as they need to be approved by Congress -- are framed as a purge of the so-called 'censorship industrial complex,' a term the White House uses to describe CISA's work countering misinformation," reports The Register. From the report: In its fiscal 2024 budget request, the agency had asked [PDF] for a total of just over $3 billion to safeguard the nation's online security across both government and private sectors. The enacted budget that year was about $34 million lower than the previous year's. Now, a deep cut has been proposed [PDF], as the Trump administration decries the agency's past work tackling the spread of misinformation on the web by America's enemies, as well as the agency's efforts safeguarding election security. [...]

"The budget eliminates programs focused on so-called misinformation and propaganda as well as external engagement offices such as international affairs," it reads [PDF]. "These programs and offices were used as a hub in the censorship industrial complex to violate the First Amendment, target Americans for protected speech, and target the President. CISA was more focused on censorship than on protecting the nation's critical systems, and put them at risk due to poor management and inefficiency, as well as a focus on self-promotion."

Google

Google Says DOJ Breakup Would Harm US In 'Global Race With China' (cnbc.com) 55

Google has argued in court that the U.S. Department of Justice's proposal to break up its Chrome and Android businesses would weaken national security and harm the country's position in the global AI race, particularly against China. CNBC reports: The remedies trial in Washington, D.C., follows a judge's ruling in August that Google has held a monopoly in its core market of internet search, the most-significant antitrust ruling in the tech industry since the case against Microsoft more than 20 years ago. The Justice Department has called for Google to divest its Chrome browser unit and open its search data to rivals.

Google said in a blog post on Monday that such a move is not in the best interest of the country as the global battle for supremacy in artificial intelligence rapidly intensifies. In the first paragraph of the post, Google named China's DeepSeek as an emerging AI competitor. The DOJ's proposal would "hamstring how we develop AI, and have a government-appointed committee regulate the design and development of our products," Lee-Anne Mulholland, Google's vice president of regulatory affairs, wrote in the post. "That would hold back American innovation at a critical juncture. We're in a fiercely competitive global race with China for the next generation of technology leadership, and Google is at the forefront of American companies making scientific and technological breakthroughs."

United Kingdom

UK Laws Are Not 'Fit For Social Media Age' (independent.co.uk) 48

An anonymous reader quotes a report from the New York Times: British laws restricting what the police can say about criminal cases are "not fit for the social media age (source paywalled; alternative source)," a British government committee said in a report released Monday that highlighted how unchecked misinformation stoked riots last summer. Violent disorder, fueled by the far right, affected several towns and cities for days after a teenager killed three girls on July 29 at a Taylor Swift-themed dance class in Southport, England. In the hours after the stabbings, false claims that the attacker was an undocumented Muslim immigrant spread rapidly online. In a report looking into the riots, a parliamentary committee said a lack of information from the authorities after the attack "created a vacuum where misinformation was able to grow." The report blamed decades-old British laws, aimed at preventing jury bias, that stopped the police from correcting false claims. By the time the police announced the suspect was British-born, those false claims had reached millions.

The Home Affairs Committee, which brings together lawmakers from across the political spectrum, published its report after questioning police chiefs, government officials and emergency workers over four months of hearings. Axel Rudakubana, who was sentenced to life in prison for the attack, was born and raised in Britain by a Christian family from Rwanda. A judge later found there was no evidence he was driven by a single political or religious ideology, but that he was obsessed with violence. [...] The committee's report acknowledged that it was impossible to determine "whether the disorder could have been prevented had more information been published." But it concluded that the lack of information after the stabbing "created a vacuum where misinformation was able to grow, further undermining public confidence," and that the law on contempt was not "fit for the social media age."

AI

Hollywood Urges Trump To Not Let AI Companies 'Exploit' Copyrighted Works (variety.com) 105

An anonymous reader quotes a report from Variety: More than 400 Hollywood creative leaders signed an open letter to the Trump White House's Office of Science and Technology Policy, urging the administration to not roll back copyright protections at the behest of AI companies. The filmmakers, writers, actors, musicians and others -- which included Ben Stiller, Mark Ruffalo, Cynthia Erivo, Cate Blanchett, Cord Jefferson, Paul McCartney, Ron Howard and Taika Waititi -- were submitting comments for the Trump administration's U.S. AI Action Plan. The letter was penned specifically in response to recent submissions to the Office of Science and Technology Policy from OpenAI and Google, which asserted that U.S. copyright law allows (or should allow) AI companies to train their systems on copyrighted works without obtaining permission from (or compensating) rights holders.

"We firmly believe that America's global AI leadership must not come at the expense of our essential creative industries," the letter says in part. The letter claims that "AI companies are asking to undermine this economic and cultural strength by weakening copyright protections for the films, television series, artworks, writing, music and voices used to train AI models at the core of multibillion-dollar corporate valuations." [...] The letter says Google and OpenAI "are arguing for a special government exemption so they can freely exploit America's creative and knowledge industries, despite their substantial revenues and available funds. There is no reason to weaken or eliminate the copyright protections that have helped America flourish."
You can read the full statement and list of signatories here.

China

OpenAI Warns Limiting AI Access To Copyrighted Content Could Give China Advantage 74

OpenAI has warned the U.S. government that restricting AI models from learning from copyrighted material would threaten America's technological leadership against China, according to a proposal submitted [PDF] to the Office of Science and Technology Policy for the AI Action Plan.

In its March 13 document, OpenAI argues its AI training aligns with the fair use doctrine, saying its models don't replicate works but extract "patterns, linguistic structures, and contextual insights" without harming the commercial value of the original content. "If the PRC's developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over. America loses, as does the success of democratic AI," OpenAI stated.

The Microsoft-backed startup criticized European and UK approaches that allow copyright holders to opt out of AI training, claiming these restrictions hinder innovation, particularly for smaller companies with limited resources. The proposal comes as China-based DeepSeek recently released an AI model with capabilities comparable to American systems despite development at a fraction of the cost.
