AI

50% of Consumers Prefer Brands That Avoid GenAI Content (nerds.xyz) 31

Slashdot reader BrianFagioli writes: According to the research firm Gartner, 50% of U.S. consumers say they would prefer to do business with brands that avoid using GenAI in consumer-facing content such as advertising and promotional messaging. The survey of 1,539 Americans, conducted in October 2025, also found growing skepticism about the reliability of online information, with 61% saying they frequently question whether information they use for everyday decisions is trustworthy... Gartner found that 68% of consumers often wonder whether the content they see online is real, while fewer people now rely on intuition alone to judge credibility [only 27%]. Instead, more consumers are actively verifying information and checking sources.
Gartner's senior principal analyst suggested discretion for brands trying to use AI. "The brands that win will be the ones that use AI in ways customers can immediately recognize as helpful, while being transparent about when AI is used, what it's doing, and giving customers a clear choice to opt out."
The Military

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com) 42

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War — and all the developments over the past few days. (After that department's negotiations with Anthropic had failed, they announced they'd stop using Anthropic's technology and threatened to designate it a "Supply-Chain Risk to National Security". Then they'd reached a deal for OpenAI's technology — though Altman says it includes OpenAI's own similar prohibitions against using their products for domestic mass surveillance and requiring "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."

Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..." Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety — building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I had to pick only one...

I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years). And they stressed that with OpenAI's deal with the Department of War, "We control how we train the models and what types of requests the models refuse." Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and we have the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.

DRM

FSF Says Nintendo's New DRM Allows Them to Remotely Render User Devices 'Permanently Unusable' (fsf.org) 61

"In the lead up to its Switch 2 console release, Nintendo updated its user agreement," writes the Free Software Foundation, warning that Nintendo now claims "broad authority to make consoles owned by its customers permanently unusable."

"Under Nintendo's most aggressive digital restrictions management (DRM) update to date, game console owners are now required to give Nintendo the unilateral right to revoke access to games, security updates, and the Internet, at its sole discretion." The new agreement states: "You acknowledge that if you fail to comply with [Nintendo's restrictions], Nintendo may render the Nintendo Account Services and/or the applicable Nintendo device permanently unusable in whole or in part...."

There are probably other reasons that Nintendo has used, and will use, to justify bricking game consoles, but here are some that have been reported:

— "Tampering" with hardware or software in pretty much any way;
— Attempting to play a back-up game;
— Playing a "used" game; or
— Use of a third-party game or accessory...


Nintendo's promise to block a user from using their game console isn't just an empty threat: it has already been wielded against many users. For example, within a month of the Switch 2's release, one user unknowingly purchased an open-box return that had been bricked, and despite functional hardware, it was unusable for many games. In another case, a user installing updates for game cartridges purchased via a digital marketplace had their console disabled. Though it's unclear exactly why they were banned, it's possible that the cartridge's previous owner made a copy and an online DRM check determined that the current and previous owners' uses were both "fraudulent." The user only had their console restored by appealing to Nintendo directly and providing evidence of their purchase, a laborious process.

Nintendo's new console-banning spree is just one instance of the threat that nonfree software and DRM pose to users. DRM is but one injustice posed by nonfree software, and the target of the FSF's Defective by Design campaign. As with all software, users ought to be able to freely copy, study, and modify the programs running on their devices. Proprietary software developers actively oppose and antagonize their users. In the case of Nintendo, this means punishing legitimate users and burdening them with proving that their use is "acceptable." Console users shouldn't have to tread so carefully with a console that they own, nor, should they misstep, have to beg Nintendo to allow them to use their consoles again.

Government

White House Prepares Executive Order To Block State AI Laws (politico.com) 81

An anonymous reader quotes a report from Politico: The White House is preparing to issue an executive order as soon as Friday that tells the Department of Justice and other federal agencies to prevent states from regulating artificial intelligence, according to four people familiar with the matter and a leaked draft of the order obtained by POLITICO. The draft document, confirmed as authentic by three people familiar with the matter, would create an "AI Litigation Task Force" at the DOJ whose "sole responsibility" would be to challenge state AI laws.

Government lawyers would be directed to challenge state laws on the grounds that they unconstitutionally regulate interstate commerce, are preempted by existing federal regulations or otherwise at the attorney general's discretion. The task force would consult with administration officials, including the special adviser for AI and crypto -- a role currently occupied by tech investor David Sacks.

The executive order, in the draft obtained by POLITICO, would also empower Commerce Secretary Howard Lutnick to publish a review of "onerous" state AI laws within 90 days and restrict federal broadband funds to states whose AI laws are found to be objectionable. It would direct the Federal Trade Commission to investigate whether state AI laws that "require alterations to the truthful outputs of AI models" are blocked by the FTC Act. And it would order the Federal Communications Commission to begin work on a reporting and disclosure standard for AI models that would preempt conflicting state laws.

Communications

ISPs Created So Many Fees That FCC Will Kill Requirement To List Them All (arstechnica.com) 110

FCC Chairman Brendan Carr says Internet service providers shouldn't have to list every fee they charge. From a report: Responding to a request from cable and telecom lobby groups, he is proposing to eliminate a rule that requires ISPs to itemize various fees in broadband price labels that must be made available to consumers.

The rule took effect in April 2024 after the FCC rejected ISPs' complaints that listing every fee they created would be too difficult. The rule applies specifically to recurring monthly fees "that providers impose at their discretion, i.e., charges not mandated by a government."

ISPs could comply with the rule either by listing the fees or by dropping the fees altogether and, if they choose, raising their overall prices by a corresponding amount. But the latter option wouldn't fit with the strategy of enticing customers with a low advertised price and hitting them with the real price on their monthly bills. The broadband price label rules were created to stop ISPs from advertising misleadingly low prices.

This week, Carr scheduled an October 28 vote on a Notice of Proposed Rulemaking (NPRM) that proposes eliminating several of the broadband-label requirements. One of the rules in line for removal requires ISPs to "itemize state and local passthrough fees that vary by location." The FCC would seek public comment on the plan before finalizing it.

China

Pentagon Can Call DJI a Chinese Military Company, Court Rules (theverge.com) 47

DJI has lost its lawsuit against the U.S. Department of Defense, failing to remove its designation as a Chinese Military Company. US District Court Judge Paul Friedman ruled the Pentagon has broad discretion to make such designations, finding sufficient evidence that DJI qualifies as a "military-civil fusion contributor" based on its recognition by China's National Development and Reform Commission as a National Enterprise Technology Center. The designation provides DJI substantial government benefits including cash subsidies, special financial support and tax benefits.

The judge rejected several of the DoD's other claims for insufficient evidence and noted the department confused two different Chinese industrial zones when attempting to prove DJI's factories were in state-sponsored areas. DJI faces a total import ban on new products this December and US customs has already stopped many consumer drone shipments. The company says it is evaluating legal options.
Privacy

NYT To Start Searching Deleted ChatGPT Logs After Beating OpenAI In Court (arstechnica.com) 33

An anonymous reader quotes a report from Ars Technica: Last week, OpenAI raised objections in court, hoping to overturn a court order requiring the AI company to retain all ChatGPT logs "indefinitely," including deleted and temporary chats. But Sidney Stein, the US district judge reviewing OpenAI's request, immediately denied OpenAI's objections. He was seemingly unmoved by the company's claims that the order forced OpenAI to abandon "long-standing privacy norms" and weaken privacy protections that users expect based on ChatGPT's terms of service. Rather, Stein suggested that OpenAI's user agreement specified that their data could be retained as part of a legal process, which Stein said is exactly what is happening now.

The order was issued by magistrate judge Ona Wang just days after news organizations, led by The New York Times, requested it. The news plaintiffs claimed the order was urgently needed to preserve potential evidence in their copyright case, alleging that ChatGPT users are likely to delete chats where they attempted to use the chatbot to skirt paywalls to access news content. A spokesperson told Ars that OpenAI plans to "keep fighting" the order, but the ChatGPT maker seems to have few options left. They could possibly petition the Second Circuit Court of Appeals for a rarely granted emergency order that could intervene to block Wang's order, but the appeals court would have to consider Wang's order an extraordinary abuse of discretion for OpenAI to win that fight.

In the meantime, OpenAI is negotiating a process that will allow news plaintiffs to search through the retained data. Perhaps the sooner that process begins, the sooner the data will be deleted. And that possibility puts OpenAI in the difficult position of having to choose between either caving to some data collection to stop retaining data as soon as possible or prolonging the fight over the order and potentially putting more users' private conversations at risk of exposure through litigation or, worse, a data breach. [...]

Both sides are negotiating the exact process for searching through the chat logs, with both parties seemingly hoping to minimize the amount of time the chat logs will be preserved. For OpenAI, sharing the logs risks revealing instances of infringing outputs that could further spike damages in the case. The logs could also expose how often outputs attribute misinformation to news plaintiffs. But for news plaintiffs, accessing the logs is not considered key to their case -- perhaps providing additional examples of copying -- but could help news organizations argue that ChatGPT dilutes the market for their content. That could weigh against the fair use argument, as a judge opined in a recent ruling that evidence of market dilution could tip an AI copyright case in favor of plaintiffs.

Communications

Supreme Court Rejects Challenge To FCC Broadband Subsidy Program (nbcnews.com) 58

The Supreme Court ruled Friday that the FCC's Universal Service Fund can continue operating, rejecting claims that the program's funding mechanism violates the Constitution. In a 6-3 decision written by Justice Elena Kagan, the court found that Congress did not exceed its authority when it enacted the 1996 law establishing the fund and that the FCC could delegate administration to a private corporation. The Universal Service Fund subsidizes telecommunications services for low-income consumers, rural health care providers, schools and libraries through fees generally passed on to customers that raise billions of dollars annually.

The program is administered by the Universal Service Administrative Company, a nonprofit the FCC designated to run the fund. Conservative advocacy group Consumers' Research challenged the structure, arguing that "a private company is taxing Americans in amounts that total billions of dollars every year, under penalty of law, without true governmental accountability."

The Fifth Circuit Court of Appeals ruled in favor of Consumers' Research, prompting the FCC to petition the Supreme Court for review. Kagan wrote that Congress "sufficiently guided and constrained the discretion that it lodged with the FCC to implement the universal-service contribution scheme," adding that the FCC "retained all decision-making authority within that sphere." She concluded that "nothing in those arrangements, either separately or together, violates the Constitution." The challengers argued the program violates the "nondelegation doctrine," a conservative legal theory that says Congress has limited powers to delegate its lawmaking authority to the executive branch.
AI

Tesla Begins Driverless Robotaxi Service in Austin, Texas (theguardian.com) 110

With no one behind the steering wheel, a Tesla robotaxi passes Guero's Taco Bar in Austin, Texas, making a right turn onto Congress Avenue.

Today is the day Austin became the first city in the world to see Tesla's self-driving robotaxi service, reports The Guardian: Some analysts believe that the robotaxis will only be available to employees and invitees initially. For the CEO, Tesla's rollout is slow. "We could start with 1,000 or 10,000 [robotaxis] on day one, but I don't think that would be prudent," he told CNBC in May. "So, we will start with probably 10 for a week, then increase it to 20, 30, 40."

The billionaire has said the driverless cars will be monitored remotely... [Posting on X.com] Musk said the date was "tentatively" 22 June but that this launch date would be "not real self-driving", which would have to wait nearly another week... Musk said he planned to have one thousand Tesla robotaxis on Austin roads "within a few months" and then he would expand to other cities in Texas and California.

Musk posted on X that riders on launch day would be charged a flat fee of $4.20, according to Reuters. And "In recent days, Tesla has sent invites to a select group of Tesla online influencers for a small and carefully monitored robotaxi trial..." As the date of the planned robotaxi launch approached, Texas lawmakers moved to enact rules on autonomous vehicles in the state. Texas Governor Greg Abbott, a Republican, on Friday signed legislation requiring a state permit to operate self-driving vehicles. The law does not take effect until September 1, but the governor's approval of it on Friday signals state officials from both parties want the driverless-vehicle industry to proceed cautiously... The law softens the state's previous anti-regulation stance on autonomous vehicles. A 2017 Texas law specifically prohibited cities from regulating self-driving cars...

The law requires autonomous-vehicle operators to get approval from the Texas Department of Motor Vehicles before operating on public streets without a human driver. It also gives state authorities the power to revoke permits if they deem a driverless vehicle "endangers the public," and requires firms to provide information on how police and first responders can deal with their driverless vehicles in emergency situations. The law's requirements for getting a state permit to operate an "automated motor vehicle" are not particularly onerous but require a firm to attest it can safely operate within the law... Compliance remains far easier than in some states, most notably California, which requires extensive submission of vehicle-testing data under state oversight.

Tesla "planned to operate only in areas it considered the safest," according to the article, and "plans to avoid bad weather, difficult intersections, and will not carry anyone below the age of 18."

More details from UPI: To get started using the robotaxis, users must download the Robotaxi app and use their Tesla account to log in, where it then functions like most ridesharing apps...

"Riders may not always be delivered to their intended destinations or may experience inconveniences, interruptions, or discomfort related to the Robotaxi," the company wrote in a disclaimer in its terms of service. "Tesla may modify or cancel rides in its discretion, including for example due to weather conditions." The terms of service include a clause that Tesla will not be liable for "any indirect, consequential, incidental, special, exemplary, or punitive damages, including lost profits or revenues, lost data, lost time, the costs of procuring substitute transportation services, or other intangible losses" from the use of the robotaxis.

Their article includes a link to the robotaxi's complete Terms of Service: To the fullest extent permitted by law, the Robotaxi, Robotaxi app, and any ride are provided "as is" and "as available" without warranties of any kind, either express or implied... The Robotaxi is not intended to provide transportation services in connection with emergencies, for example emergency transportation to a hospital... Tesla's total liability for any claim arising from or relating to Robotaxi or the Robotaxi app is limited to the greater of the amount paid by you to Tesla for the Robotaxi ride giving rise to the claim, and $100... Tesla may modify these Terms in our discretion, effective upon posting an updated version on Tesla's website. By using a Robotaxi or the Robotaxi app after Tesla posts such modifications, you agree to be bound by the revised Terms.
China

Chinese Robotaxis Have Government Black Boxes, Approach US Quality (forbes.com) 43

An anonymous reader quotes a report from Forbes: Robotaxi development is speeding ahead at a fast pace in China, but we don't hear much about it in the USA, where the news focuses mostly on Waymo, with a bit about Zoox, Motional, May, trucking projects and other domestic players. China has 4 main players with robotaxi service, dominated by Baidu (the Chinese Google). A recent session at last week's Ride AI conference in Los Angeles revealed some details about the different regulatory regime in China, and featured a report from a Chinese-American YouTuber who has taken on a mission to ride in the different vehicles.

Zion Maffeo, deputy general counsel for Pony.AI, provided some details on regulations in China. While Pony began with U.S. operations, its public operations are entirely in China, and it does only testing in the USA. Famously it was one of the few companies to get a California "no safety driver" test permit, but then lost it after a crash, and later regained it. Chinese authorities at many levels keep a close watch over Chinese robotaxi companies. They must get approval for all levels of operation, which control where they can test and operate, and how much supervision is needed. Operation begins with testing with a safety driver behind the wheel (as almost everywhere in the world), with eventual graduation to having the safety driver in the passenger seat but with an emergency stop. Then they move to having a supervisor in the back seat before they can test with nobody in the vehicle, usually limited to an area with simpler streets.

The big jump can then come to allow testing with nobody in the vehicle, but with full time monitoring by a remote employee who can stop the vehicle. From there they can graduate to taking passengers, and then expanding the service to more complex areas. Later they can go further, and not have full time remote monitoring, though there do need to be remote employees able to monitor and assist part time. Pony has a permit allowing it to have 3 vehicles per remote operator, and has one for 15 vehicles in process, but they declined comment on just how many vehicles they actually have per operator. Baidu also did not respond to queries on this. [...] In addition, Chinese jurisdictions require that the system in a car independently log any "interventions" by safety drivers in a sort of "black box" system. These reports are regularly given to regulators, though they are not made public. In California, companies must file an annual disengagement report, but they have considerable leeway on what they consider a disengagement so the numbers can't be readily compared. Chinese companies have no discretion on what is reported, and they may notify authorities of a specific objection if they wish to declare that an intervention logged in their black box should not be counted.
On her first trip, YouTuber Sophia Tung found Baidu's 5th generation robotaxi to offer a poor experience in ride quality, wait time, and overall service. However, during a return trip she tried Baidu's 6th generation vehicle in Wuhan and rated it as the best among Chinese robotaxis, approaching the quality of Waymo.

Submission + - Chinese Robotaxis Have Government Black Boxes, Approach US Quality (forbes.com)

An anonymous reader writes: Robotaxi development is proceeding at a fast pace in China, but we don’t hear much about it in the USA, where the news focuses mostly on Waymo, with a bit about Zoox, Motional, May, trucking projects, and other domestic players. China has four main players offering robotaxi service, dominated by Baidu (the Chinese Google). A recent session at last week’s Ride AI conference in Los Angeles revealed some details about the different regulatory regime in China, and featured a report from a Chinese-American YouTuber who has taken on a mission to ride in the different vehicles.

There are strong arguments against such strict reporting. Safety drivers are told to intervene whenever they have any doubt, which means they will frequently intervene when it is not necessary. Because companies with mandatory reporting of all interventions want to keep their numbers down, they may, even unconsciously, discourage interventions. They also don’t want to have to count things like bathroom breaks, which have no bearing on safety, creating the wrong incentive. On the other hand, giving companies full leeway on what counts led to essentially useless reports in California. The right answer is hard. This stricter regulation reportedly also has its own Chinese “flavor,” and personal relationships are important to getting permits and deploying. Even so, it’s not slowing things down much, if at all.
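The difference between the two reporting regimes described above can be sketched in a few lines (an illustrative model only; the record fields and counting rules are hypothetical, not taken from any actual regulatory system):

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    """One safety-driver takeover, as a black-box recorder might log it."""
    timestamp: float         # seconds since the start of the shift
    reason: str              # e.g. "doubt", "software_fault", "bathroom_break"
    company_objection: bool  # company flags the event as not safety-relevant

def mandatory_count(log):
    """Chinese-style reporting: every logged intervention is reported;
    the company may only attach an objection, not omit the record."""
    reported = len(log)
    objected = sum(1 for i in log if i.company_objection)
    return reported, objected

def discretionary_count(log, counts_as_disengagement):
    """California-style reporting: the company decides which events count."""
    return sum(1 for i in log if counts_as_disengagement(i))

log = [
    Intervention(120.0, "doubt", company_objection=True),
    Intervention(900.0, "software_fault", company_objection=False),
    Intervention(1800.0, "bathroom_break", company_objection=True),
]

print(mandatory_count(log))  # all 3 reported, with 2 objections on file
print(discretionary_count(log, lambda i: i.reason == "software_fault"))  # 1
```

Under the mandatory scheme, the regulator sees every record plus the company's objections; under the discretionary scheme, the predicate itself is chosen by the company, which is exactly why the resulting numbers can't be compared across jurisdictions.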

Role Playing (Games)

After DDOS Attacks, Blizzard Rolls Back Hardcore WoW Deaths For the First Time (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: World of Warcraft Classic's Hardcore mode has set itself apart from the average MMO experience simply by making character death permanent across the entire in-game realm. For years, Blizzard has not allowed any appeals or rollbacks for these Hardcore mode character deaths, even when such deaths came as the direct result of a server disconnection or gameplay bug. Now, Blizzard says it's modifying that policy somewhat in response to a series of "unprecedented distributed-denial-of-service (DDOS) attacks" undertaken "with the singular goal of disrupting players' experiences." The World of Warcraft developer says it may now resurrect Classic Hardcore characters "at our sole discretion" when those deaths come "in a mass event which we deem inconsistent with the integrity of the game." WoW Classic's Hardcore mode became a hotspot for streamers, especially members of the OnlyFangs Guild, who embraced the challenge that one mistake could end a character's run. However, as Ars Technica reports, a series of DDOS attacks timed to their major livestreamed raids led to character deaths and widespread frustration, prompting streamer sodapoppin to declare the guild's end.

Blizzard responded by updating its Hardcore policy to resurrect characters lost specifically to DDOS attacks. "Recently, we have experienced unprecedented distributed-denial-of-service (DDOS) attacks that impacted many Blizzard game services, including Hardcore realms, with the singular goal of disrupting players' experiences," WoW Classic Associate Production Director Clay Stone wrote in a public message. "As we continue our work to further strengthen the resilience of WoW realms and our rapid response time, we're taking steps to resurrect player-characters that were lost as a result of these attacks."

Biotech

Theranos Founder Elizabeth Holmes' Fraud Convictions Upheld (msnbc.com) 101

"Elizabeth Holmes' fraud conviction has been upheld by a federal appellate panel," writes Slashdot reader ClickOnThis. MSNBC reports: A three-judge panel of the 9th U.S. Circuit Court of Appeals on Monday affirmed the convictions, sentences and nine-figure restitution ordered against both Holmes and Theranos president, Ramesh "Sunny" Balwani. [...] Theranos was supposedly going to revolutionize medical laboratory testing with the ability to run fast, accurate and affordable tests with just a drop of blood from a finger prick. "But the vision sold by Holmes and Balwani was nothing more than a mirage," 9th Circuit Judge Jacqueline H. Nguyen wrote (PDF) for the panel, adding that the "grandiose achievements touted by Holmes and Balwani were half-truths and outright lies."

Holmes was convicted of crimes related to fraud against investors, while the jury acquitted her or hung on other counts. Balwani was convicted on all counts at his trial. The federal panel rejected a slew of arguments from both defendants, including that their trials featured improper testimony from Theranos employees. While the ruling is a major setback for the defendants, they can further appeal to a larger, en banc panel of 9th Circuit judges and to the Supreme Court, which generally has broad discretion over whether to accept cases for review.

Submission + - How Life Aboard A Navy Aircraft Carrier Changed When High-Speed Internet Arrived (twz.com)

SonicSpike writes: As it battled Houthi threats around the Red Sea last year, the USS Abraham Lincoln (CVN-72) also served as a testbed for vastly increasing the level of internet connectivity aboard the Navy’s deployed ships. Now we are learning specific details about how this mammoth change in at-sea connectedness impacted everything from how sailors went about their lives during a grueling deployment to how the ship and its air wing brought its firepower to bear on the enemy.

The F-35 Joint Strike Fighters assigned to the carrier offer a case in point for what more shipboard bandwidth — provided by commercial providers like Starlink and OneWeb — can mean at the tactical level. Jets with the embarked Marine Fighter Attack Squadron 314 took on critical mission data file updates in record time last fall due to the carrier’s internet innovations, a capability that is slated to expand across the fleet.

“This file offers intelligence updates and design enhancements that enable pilots to identify and counter threats in specific operational environments,” the Navy said in an October release announcing the feat. “The update incorporated more than 100 intelligence changes and multiple design improvements, significantly enhancing the aircraft’s survivability and lethality.”

During Lincoln’s cruise, White was transferring at download speeds of 1 gigabyte per second, with 200 megabytes on the upload, he said, provided to the 5,000 sailors on board for personal and work use.

White said there was not one equipment failure aboard Lincoln related to connectivity in the past two years, and that 780 terabytes of data was transferred during the five-and-a-half month cruise.

“I set a goal for a petabyte, but I missed that,” he said. “So there’s room for my relief to excel.”

Lincoln averaged four to eight terabytes of transferred data a day, 50 times greater than the fleet’s current capabilities. His team managed 7,000 IP addresses, with two full-time system administrators, one on during the day and one at night.
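As a rough consistency check on those figures (assuming a five-and-a-half-month cruise is about 165 days):

```python
# Sanity-check the reported transfer figures from the Lincoln deployment.
total_tb = 780          # terabytes transferred over the cruise
cruise_days = 5.5 * 30  # ~165 days for a five-and-a-half-month deployment

avg_tb_per_day = total_tb / cruise_days
print(round(avg_tb_per_day, 1))  # ~4.7 TB/day, within the stated 4-8 TB range

# The petabyte goal (1,000 TB) White mentioned would have required a
# higher sustained daily average than the cruise achieved.
print(round(1000 / cruise_days, 1))  # ~6.1 TB/day needed
```

The 780 TB total and the stated four-to-eight terabytes per day are mutually consistent, which lends credibility to the reported peak figures.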

To be sure, the system was turned off at the commander’s discretion, particularly when Lincoln was in parts of the Red Sea’s weapons engagement zone, and its use always took a backseat to the mission.

“We are not going to get into the details, but this is not counter-detectable,” Lincoln’s commanding officer, Capt. Pete “Repete” Riebe, told WEST attendees. “They did not know our location from what we were using. Now, when we went deep into the weapons engagement zone, we turned it all off. We turned the email traffic off, we turned the WiFi off.”

“Sailors being up on their WiFi, being connected to home, is really what made that doable in this day and age,” he said.

White said the average age of an embarked Lincoln sailor was 20.8, and Riebe noted that to attract young people into service, the Navy needs to recognize the innate connection they have to their devices.

“The next generation of sailors grew up with a cell phone in their hand, and they are uncomfortable without it,” Riebe said. “I don’t necessarily like that, but that’s reality, and if we want to compete for the best folks coming into the Navy, we need to offer them bandwidth at sea.”

Having better connectivity also helped with the ship’s administrative functions, Riebe said, making medical, dental and other work far easier than they have been in the past.

“All of that requires bandwidth, and [White] provided it to the ship, and we’re able to make the ship run more smoothly, more efficiently,” he said.

A sailor who can FaceTime with family back home carries less non-Navy stress while focusing on the life-or-death duties at hand, White said.

“What we tried to do was enable a safe space for those online connections, to allow sailors to continue their continuity of life,” he said. “When it’s time to turn those connections off, the sailors are ready to run to the fire. They are ready to run to the fight, and that is what we saw on Abraham Lincoln.”

This beefed-up bandwidth allowed 38 sailors to witness the birth of their child, while others were able to watch their kids’ sporting events, White said. Several crew members pursued doctorate and master’s degrees while deployed due to better internet, while others were able to deal with personal or legal issues they had left behind back home. One officer was able to commission his wife remotely from the ship.

AT&T

AT&T Promises Bill Credits For Future Outages (arstechnica.com) 19

An anonymous reader quotes a report from Ars Technica: AT&T, following last year's embarrassing botched update that kicked every device off its wireless network and blocked over 92 million phone calls, is now promising full-day bill credits to mobile customers for future outages that last at least 60 minutes and meet certain other criteria. A similar promise is being made to fiber customers for unplanned outages lasting at least 20 minutes, but only if the customer uses an AT&T-provided gateway. The "AT&T Guarantee" announced today has caveats that can make it possible for a disruption to not be covered. AT&T says the promised mobile bill credits are "for wireless downtime lasting 60 minutes or more caused by a single incident impacting 10 or more towers."

The full-day bill credits do not include a prorated amount for the taxes and fees imposed on a monthly bill. The "bill credit will be calculated using the daily rate customer is charged for wireless service only (excludes taxes, fees, device payments, and any add-on services)," AT&T said. If an outage lasts more than 24 hours, a customer will receive another full-day bill credit for each additional day. If only nine or fewer AT&T towers aren't functioning, a customer won't get a credit even if they lose service for an hour. The guarantee kicks in when a "minimum 10 towers [are] out for 60 or more minutes resulting from a single incident," and the customer "was connected to an impacted tower at the time the outage occurs," and "loses service for at least 60 consecutive minutes as a result of the outage."
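Read literally, the stated criteria amount to a small decision rule. A sketch (hypothetical logic built from the quoted terms; the 30-day billing divisor is an assumption, since AT&T does not say how the daily rate is derived):

```python
import math

def wireless_credit_days(towers_out, outage_minutes,
                         was_connected, lost_service_minutes):
    """Number of full-day bill credits under the stated mobile criteria:
    a single incident taking out >= 10 towers for >= 60 minutes, with the
    customer connected to an impacted tower and without service for
    >= 60 consecutive minutes. One credit, plus one more for each
    additional day beyond the first 24 hours."""
    qualifies = (towers_out >= 10 and outage_minutes >= 60
                 and was_connected and lost_service_minutes >= 60)
    if not qualifies:
        return 0
    extra_days = max(0, math.ceil((outage_minutes - 24 * 60) / (24 * 60)))
    return 1 + extra_days

def credit_amount(monthly_rate, billing_days=30):
    """Daily service rate only; taxes, fees, device payments, and
    add-ons are excluded (billing_days=30 is an assumed divisor)."""
    return monthly_rate / billing_days

# A 90-minute outage hitting 12 towers earns one full-day credit.
print(wireless_credit_days(12, 90, True, 90))    # 1
# Nine towers is below the threshold: no credit, however long the outage.
print(wireless_credit_days(9, 120, True, 120))   # 0
# A 25-hour outage crosses into a second day: two credits.
print(wireless_credit_days(20, 25 * 60, True, 25 * 60))  # 2
```

The nine-tower case makes the fine print concrete: an individual customer can lose service for hours yet receive nothing if the incident footprint is too small.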

The guarantee "excludes events beyond the control of AT&T, including but not limited to, natural disasters, weather-related events, or outages caused by third parties." AT&T says it will determine "in its sole discretion" whether the disruption is "a qualifying" network outage. "Consumers will automatically receive a bill credit equaling a full day of service and we'll reach out to our small business customers with options to help make it right," AT&T said. When there's an outage, AT&T said it will "notify you via e-mail or SMS to inform you that you've been impacted. Once the interruption has been resolved, we'll contact you with details about your bill credit." If AT&T fails to provide the promised credit for any reason, customers will have to call AT&T or visit an AT&T store.

To qualify for the similar fiber-outage promise, "customers must use AT&T-provided gateways," the firm said. There are other caveats that can prevent a home Internet customer from getting a bill credit. AT&T said the fiber-outage promise "excludes events beyond the control of AT&T, including but not limited to, natural disasters, weather-related events, loss of service due to downed or cut cable wires at a customer residence, issues with wiring inside customer residence, and power outages at customer premises. Also excludes outages resulting from planned maintenance." AT&T notes that some residential fiber customers in multi-dwelling units "have an account with AT&T but are not billed by AT&T for Internet service." In the case of outages, these customers would not get bill credits but would be given the option to redeem a reward card that's valued at $5 or more.

Social Networks

Tech Platforms Diverge on Erasing Criminal Suspects' Digital Footprints (nytimes.com) 99

Social media giants confronted a familiar dilemma over user content moderation after the arrest Monday of Luigi Mangione, the suspect in the killing of UnitedHealthcare's CEO, highlighting the platforms' varied approaches to managing the digital footprints of criminal suspects.

Meta quickly removed Mangione's Facebook and Instagram accounts under its "dangerous organizations and individuals" policy, while his account on X underwent a brief suspension before being reinstated with a premium subscription. LinkedIn maintained his profile, stating it did not violate platform policies. His Reddit account was suspended in line with the platform's policy on high-profile criminal suspects, while his Goodreads profile fluctuated between public and private status.

The New York Times adds: When someone goes from having a private life to getting public attention, online accounts they intended for a small circle of friends or acquaintances are scrutinized by curious strangers -- and journalists.

In some cases, these newly public figures or their loved ones can shut down the accounts or make them private. Others, like Mr. Mangione, who has been charged with murder, are cut off from their devices, leaving their digital lives open for the public's consumption. Either way, tech companies have discretion in what happens to the account and its content. Section 230 of the Communications Decency Act protects companies from legal liability for posts made by users.
