Republicans

Republicans Drop Trump-Ordered Block On State AI Laws From Defense Bill 78

An anonymous reader quotes a report from Ars Technica: A Donald Trump-backed push has failed to wedge into the National Defense Authorization Act (NDAA) a federal measure that would block states from passing AI laws for a decade. House Majority Leader Steve Scalise (R-La.) told reporters Tuesday that a faction of Republicans is now "looking at other places" to potentially pass the measure. Other Republicans opposed including the AI preemption in the defense bill, The Hill reported, joining critics who see value in allowing states to quickly regulate AI risks as they arise.

For months, Trump has pressured the Republican-led Congress to block state AI laws that the president claims could bog down innovation as AI firms waste time and resources complying with a patchwork of state laws. But Republicans have continually failed to unite behind Trump's command, first voting against including a similar measure in the "Big Beautiful" budget bill and then this week failing to negotiate a solution to pass the NDAA measure. [...]

"We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes," Trump wrote on Truth Social last month. "If we don't, then China will easily catch us in the AI race. Put it in the NDAA, or pass a separate Bill, and nobody will ever be able to compete with America." If Congress fails to find another way to pass the measure, Trump will likely issue an executive order to enforce the policy. Republicans in Congress had dissuaded Trump from releasing a draft of that order, requesting time to find legislation where they believed an AI moratorium could pass.
"The controversial proposal had faced backlash from a nationwide, bipartisan coalition of state lawmakers, parents, faith leaders, unions, whistleblowers, and other public advocates," Americans for Responsible Innovation (ARI), a bipartisan group that lobbies for AI safety laws, said in a press release.

This "widespread and powerful" movement "clapped back" at Republicans' latest "rushed attempt to sneak preemption through Congress," Brad Carson, ARI's president, said, because "Americans want safeguards that protect kids, workers, and families, not a rules-free zone for Big Tech."
Transportation

US Probes Reports Waymo Self-Driving Cars Illegally Passed School Buses 19 Times (reuters.com) 96

U.S. regulators are pressing Waymo for answers after Texas officials reported 19 instances of its self-driving cars illegally passing stopped school buses, including cases that occurred after Waymo claimed to have deployed a software fix. Longtime Slashdot reader BrendaEM shares the report from Reuters: In a November 20 letter posted by NHTSA, the Austin Independent School District said five incidents occurred in November after Waymo said it had made software updates to resolve the issue, and the district asked the company to halt operations around schools during pick-up and drop-off times until it could ensure the vehicles would not violate the law. "We cannot allow Waymo to continue endangering our students while it attempts to implement a fix," a lawyer for the school district wrote, citing one incident involving a Waymo that was "recorded driving past a stopped school bus only moments after a student crossed in front of the vehicle, and while the student was still in the road."

The letter prompted NHTSA to ask Waymo on November 24 if it would comply with the request to cease self-driving operations during student pick-up and drop-off times, adding: "Was an appropriate software fix implemented or developed to mitigate this concern? And if so, does Waymo plan to file a recall for the fix?" The school district told Reuters on Thursday that Waymo refuses to halt operations around schools and said another incident involving a self-driving car and an actively loading school bus occurred on December 1, which "indicates that those programming changes did not resolve the issue or our concerns."

In a statement, Waymo did not answer why it had refused to halt operations around Austin schools or answer if it would issue a recall. "We're deeply invested in safe interaction with school buses. We swiftly implemented software updates to address this and will continue to rapidly improve," Waymo said. NHTSA said in a letter to Waymo on Wednesday that it was demanding answers to a series of questions by January 20 about incidents involving school buses and details of software updates to address safety concerns.

Printer

Plane Crashed After 3D-Printed Part Collapsed (bbc.com) 99

A light aircraft crashed in Gloucestershire after a 3D-printed plastic air-induction elbow softened from engine heat and collapsed, cutting power during final approach and causing the plane to undershoot the runway. Investigators say the part was made from "inappropriate material," and safety actions will be taken regarding 3D-printed parts. The BBC reports: Following an "uneventful local flight," the AAIB report said the pilot advanced the throttle on the final approach to the runway, and realized the engine had suffered a complete loss of power. "He managed to fly over a road and a line of bushes on the airfield boundary, but landed short and struck the instrument landing system before coming to rest at the side of the structure," the report read.

It was revealed the part had been installed during a modification to the fuel system and collapsed due to its 3D-printed plastic material softening when exposed to heat from the engine. The Light Aircraft Association (LAA) said it now intends to take safety actions in response to the accident, including a "LAA Alert" regarding the use of 3D-printed parts that will be sent to inspectors.

EU

EU Hits Meta With Antitrust Probe Over Plans To Block AI Rivals From WhatsApp 3

The EU has opened an antitrust investigation into Meta over a new WhatsApp policy that could block rival AI assistants from accessing the platform. Complaints from smaller AI developers triggered the probe, which could lead to fines of up to 10% of Meta's global revenue if the company is found to have abused its dominance. Reuters reports: EU antitrust chief Teresa Ribera said the move was to prevent dominant firms from "abusing their power to crowd out innovative competitors." She added interim measures could be imposed to block Meta's new WhatsApp AI policy rollout. "AI markets are booming in Europe and beyond," she said. "This is why we are investigating if Meta's new policy might be illegal under competition rules, and whether we should act quickly to prevent any possible irreparable harm to competition in the AI space."

A WhatsApp spokesperson called the claims "baseless," adding that the emergence of chatbots on its platforms had put a "strain on our systems that they were not designed to support," a reference to AI systems from other providers. "Still, the AI space is highly competitive and people have access to the services of their choice in any number of ways, including app stores, search engines, email services, partnership integrations, and operating systems."
Security

Microsoft 'Mitigates' Windows LNK Flaw Exploited As Zero-Day (bleepingcomputer.com) 25

joshuark shares a report from BleepingComputer: Microsoft has silently "mitigated" a high-severity Windows LNK vulnerability exploited by multiple state-backed and cybercrime hacking groups in zero-day attacks. Tracked as CVE-2025-9491, this security flaw allows attackers to hide malicious commands within Windows LNK files, which can be used to deploy malware and gain persistence on compromised devices. However, the attacks require user interaction to succeed, as they involve tricking potential victims into opening malicious Windows Shell Link (.lnk) files. Thus some element of social engineering is required, along with a user naive or gullible enough to assume the file is safe to open. [...]

As Trend Micro threat analysts discovered in March 2025, CVE-2025-9491 was already being widely exploited by 11 state-sponsored groups and cybercrime gangs, including Evil Corp, Bitter, APT37, APT43 (also known as Kimsuky), Mustang Panda, SideWinder, RedHotel, Konni, and others. Microsoft told BleepingComputer in March that it would "consider addressing" this zero-day flaw, even though it didn't "meet the bar for immediate servicing." As ACROS Security CEO and 0patch co-founder Mitja Kolsek found, Microsoft silently changed LNK file handling in the November updates in an apparent effort to mitigate the CVE-2025-9491 flaw. After installing last month's updates, users can now see all characters in the Target field when opening the Properties of LNK files, not just the first 260. As the movie The Ninth Gate put it: "silentium est aurum."
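The 260-character detail above is the crux of the mitigation: anything an attacker appended beyond the Target field's old display limit was invisible to a user inspecting the shortcut's properties. A minimal sketch of the truncation trick using plain strings (real CVE-2025-9491 attacks abuse LNK file internals, and the "payload" here is hypothetical):

```python
# Illustrative only: shows how a display field truncated to 260 characters
# hides anything appended past the limit. The appended command is hypothetical.
DISPLAY_LIMIT = 260  # characters the LNK Properties Target field showed pre-patch

benign = r"C:\Windows\System32\notepad.exe"
padding = " " * (DISPLAY_LIMIT - len(benign))   # whitespace filler
hidden = "& powershell -enc <payload>"          # hypothetical appended command

target = benign + padding + hidden
visible = target[:DISPLAY_LIMIT]                # what the user saw before the fix

print(hidden in visible)  # False: the payload falls outside the visible window
print(hidden in target)   # True: but it is still part of the actual target
```

Showing the full field, as the November updates now do, removes the blind spot rather than fixing the parsing itself, which is why this is described as a mitigation rather than a patch.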

AI

30% of Doctors In UK Use AI Tools In Patient Consultations, Study Finds (theguardian.com) 80

An anonymous reader quotes a report from the Guardian: Almost three in 10 GPs in the UK are using AI tools such as ChatGPT in consultations with patients, even though it could lead to them making mistakes and being sued, a study reveals. The rapid adoption of AI to ease workloads is happening alongside a "wild west" lack of regulation of the technology, which is leaving GPs unaware which tools are safe to use. That is the conclusion of research by the Nuffield Trust thinktank, based on a survey of 2,108 family doctors by the Royal College of GPs about AI and on focus groups of GPs.

Ministers hope that AI can help reduce the delays patients face in seeing a GP. The study found that more and more GPs were using AI to produce summaries of appointments with patients, to assist in diagnosing patients' conditions, and for routine administrative tasks. In all, 598 (28%) of the 2,108 survey respondents said they were already using AI. More male (33%) than female (25%) GPs have used it, and far more use it in well-off than in poorer areas.

It is moving quickly into more widespread use. However, large majorities of GPs, whether they use it or not, worry that practices that adopt it could face "professional liability and medico-legal issues," and "risks of clinical errors" and problems of "patient privacy and data security" as a result, the Nuffield Trust's report says. [...] In a blow to ministerial hopes, the survey also found that GPs use the time it saves them to recover from the stresses of their busy days rather than to see more patients. "While policymakers hope that this saved time will be used to offer more appointments, GPs reported using it primarily for self-care and rest, including reducing overtime working hours to prevent burnout," the report adds.

Open Source

Valve Reveals It's the Architect Behind a Push To Bring Windows Games To Arm (theverge.com) 44

An anonymous reader quotes a report from The Verge's Sean Hollister: If you wrote off the Steam Frame as yet another VR headset few will want to wear, I guarantee you're not alone. But the Steam Frame isn't just a headset; it's a Trojan horse that contains the tech gamers need to play Steam games on the next Samsung Galaxy, the next Google Pixel, perhaps Arm gaming notebooks to come. I know, because I'm already using that tech on my Samsung Galaxy. There is no official Android version of Hollow Knight: Silksong, one of the best games of 2025, but that doesn't have to stop you anymore. Thanks to a stack of open-source technologies, including a compatibility layer called Proton and an emulator called Fex, games that were developed for x86-based Windows PCs can now run on Linux-based phones with the Arm processor architecture. With Proton, the Steam Deck could already do the Windows-to-Linux part; now, Fex is bridging x86 and Arm, too.

This stack is what powers the Steam Frame's own ability to play Windows games, of course, and it was widely reported that Valve is using the open-source Fex emulator to make it happen. What wasn't widely reported: Valve is behind Fex itself. In an interview, Valve's Pierre-Loup Griffais, one of the architects behind SteamOS and the Steam Deck, tells The Verge that Valve has been quietly funding almost all the open-source technologies required to play Windows games on Arm. And because they're open-source, Valve is effectively shepherding a future where Arm phones, laptops, and desktops could freely do the same. He says the company believes game developers shouldn't be wasting time porting games if there's a better way.

Remember when the Steam Deck handheld showed that a decade of investment in Linux could make Windows gaming portable? Valve paid open-source developers to follow their passions to help achieve that result. Valve has been guiding the effort to bring games to Arm in much the same way: In 2016 and 2017, Griffais tells me, the company began recruiting and funding open-source developers to bring Windows games to Arm chips. Fex lead developer Ryan Houdek tells The Verge he chatted with Griffais himself at conferences those years and whipped up the first prototype in 2018. He tells me Valve pays enough that Fex is his full-time job. "I want to thank the people from Valve for being here from the start and allowing me to kickstart this project," he recently wrote.

AT&T

AT&T and Verizon Are Fighting Back Against T-Mobile's Easy Switch Tool (tmo.report) 23

AT&T and Verizon are blocking T-Mobile's new "Switching Made Easy" tool that scans their customer accounts to recommend comparable plans. AT&T is also suing, alleging T-Mobile used bots to scrape over 100 fields of sensitive customer data. From The Mobile Report: According to a lawsuit, which AT&T has shared directly with us, T-Mobile updated the T-Life app's scraping abilities three separate times in an attempt to bypass AT&T's detection. Essentially, T-Mobile and AT&T have been in a game of cat and mouse. Not only that, but AT&T alleges that T-Mobile is intentionally hiding the fact that it's their scraper accessing an account, and essentially pretends to be an end user while doing so. Apparently, T-Mobile's scraping bot tries its best to appear as a generic web browser.

AT&T sent T-Mobile a cease-and-desist letter on November 24th demanding T-Mobile stop the scraping process. T-Mobile refused two days later, stating that the process was legal because "customers themselves ... log into their own wireless account." On November 26th, AT&T says it detected that T-Mobile was no longer scraping the AT&T website; instead, the app asks users to upload a PDF of their bill or enter some info manually. AT&T notes, however, that at the time the app still appeared to scrape Verizon accounts. The lawsuit further explains that AT&T reached out to Apple with the claim that T-Mobile's T-Life app also violates the App Store Review Guidelines. T-Mobile responded to this complaint as well, making similar claims that the scraping process does not violate those guidelines. [...]

According to AT&T, the T-Life app collects way more information than is necessary for a simple carrier switch. The company alleges T-Mobile grabs over 100 separate bits of info from a customer's account, including info about other users on the account and other services not related to wireless service. It's also worth noting that, apparently, T-Mobile is storing this information, not just using it temporarily, even if the customer doesn't end up switching. T-Mobile has responded to our request for comment, and says that actually, AT&T is wrong about the facts, and Easy Switch is safe and secure...

Privacy

India Pulls Its Preinstalled iPhone App Demand 15

India has withdrawn its order requiring Apple and other smartphone makers to preinstall the government's Sanchar Saathi app after public backlash and privacy concerns. AppleInsider reports: On November 28, the India Ministry of Communication issued a secret directive to Apple and other smartphone manufacturers, requiring the preinstallation of a government-backed app. Less than a week later, the order has been rescinded. The withdrawal on Wednesday means Apple doesn't have to preload the Sanchar Saathi app onto iPhones sold in the country, in a way that couldn't be "disabled or restricted." [...]

In pulling back from the demand, the government insisted that the app had an "increasing acceptance" among citizens. There was a tenfold spike in new user registrations on Tuesday alone, with over 600,000 new users made aware of the app by the public controversy. India Minister of Communications Jyotiraditya Scindia took a moment to insist that concerns the app could be used for increased surveillance were unfounded. "Snooping is neither possible nor will it happen" with the app, Scindia claimed.

"This is a welcome development, but we are still awaiting the full text of the legal order that should accompany this announcement, including any revised directions under the Cyber Security Rules, 2024," said the Internet Freedom Foundation. It is treating the news with "cautious optimism, not closure," until formalities conclude. However, while promising, the backdown doesn't stop India from retrying something similar or another tactic in the future.
AI

Zig Quits GitHub, Says Microsoft's AI Obsession Has Ruined the Service 68

The Zig Software Foundation has quit GitHub after years of unresolved GitHub Actions bugs -- including a "safe_sleep" script that could spin forever and cripple CI runners. Zig leadership puts the blame on Microsoft's growing AI-first priorities and declining engineering quality. Other open-source developers are voicing similar frustrations. The Register reports: The drama began in April 2025 when GitHub user AlekseiNikiforovIBM started a thread titled "safe_sleep.sh rarely hangs indefinitely." GitHub addressed the problem in August, but didn't reveal that in the thread, which remained open until Monday. That timing appears notable. Last week, Andrew Kelley, president and lead developer of the Zig Software Foundation, announced that the Zig project is moving to Codeberg, a non-profit git hosting service, because GitHub no longer demonstrates commitment to engineering excellence.

One piece of evidence he offered for that assessment was the "safe_sleep.sh rarely hangs indefinitely" thread. "Most importantly, Actions has inexcusable bugs while being completely neglected," Kelly wrote. "After the CEO of GitHub said to 'embrace AI or get out', it seems the lackeys at Microsoft took the hint, because GitHub Actions started 'vibe-scheduling' -- choosing jobs to run seemingly at random. Combined with other bugs and inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked."
AI

Amazon To Use Nvidia Tech In AI Chips, Roll Out New Servers 7

AWS is deepening its partnership with Nvidia by adopting "NVLink Fusion" in its upcoming Trainium4 AI chips. "The NVLink technology creates speedy connections between different kinds of chips and is one of Nvidia's crown jewels," notes Reuters. From the report: Nvidia has been pushing to sign up other chip firms to adopt its NVLink technology, with Intel, Qualcomm and now AWS on board. The technology will help AWS build bigger AI servers that can recognize and communicate with one another faster, a critical factor in training large AI models, in which thousands of machines must be strung together. As part of the Nvidia partnership, customers will have access to what AWS is calling AI Factories, exclusive AI infrastructure inside their own data centers for greater speed and readiness.

Separately, Amazon said it is rolling out new servers based on a chip called Trainium3. The new servers, available on Tuesday, each contain 144 chips and have more than four times the computing power of AWS's previous generation of AI, while using 40% less power, Dave Brown, vice president of AWS compute and machine learning services, told Reuters. Brown did not give absolute figures on power or performance, but said AWS aims to compete with rivals -- including Nvidia -- based on price.
"Together, Nvidia and AWS are creating the compute fabric for the AI industrial revolution - bringing advanced AI to every company, in every country, and accelerating the world's path to intelligence," Nvidia CEO Jensen Huang said in a statement.
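Taken together, the Trainium3 multipliers quoted above imply a large efficiency jump: more than four times the compute at 0.6 times the power works out to over a 6.6x gain in performance per watt. A quick sanity check, assuming both multipliers refer to the same baseline generation and workload:

```python
# Back-of-envelope check on the quoted Trainium3 figures
# (assumes "4x compute" and "40% less power" share the same baseline).
compute_gain = 4.0        # "more than four times the computing power"
power_ratio = 1.0 - 0.40  # "40% less power" -> 0.6x the power draw

perf_per_watt_gain = compute_gain / power_ratio
print(round(perf_per_watt_gain, 2))  # 6.67
```

Since Brown gave no absolute figures, this ratio is the only efficiency number the report supports.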
The Almighty Buck

Zillow Drops Climate Risk Scores After Agents Complained of Lost Sales 69

Zillow has removed climate risk scores from over a million home listings after real estate agents argued the data was scaring off buyers. TechCrunch reports: Zillow first added the data to the site in September 2024, saying that more than 80% of buyers consider climate risks when purchasing a new home. But last month, following objections from the California Regional Multiple Listing Service (CRMLS), Zillow removed the listings' climate scores. In their place is a subtle link to their records at First Street, the climate risk analytic startup that provides the data.

"When buyers lack access to clear climate-risk information, they make the biggest financial decision of their lives while flying blind," First Street spokesperson Matthew Eby told TechCrunch via email. "The risk doesn't go away; it just moves from a pre-purchase decision into a post-purchase liability." First Street's climate risk scores first appeared on Realtor.com in 2020, where they remain. They also still appear on Redfin and Homes.com. The New York-based startup has raised more than $50 million from investors including General Catalyst, Congruent Ventures, and Galvanize Climate Solutions, according to PitchBook.

Art Carter, the CRMLS CEO, told The New York Times that "displaying the probability of a specific home flooding this year or within the next five years can have a significant impact on the perceived desirability of that property." He also questioned the accuracy of First Street's data, saying he didn't think that areas which haven't flooded in the last 40 to 50 years were likely to flood in the next five.
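Carter's inference, that decades without a flood imply little five-year risk, is where basic hazard arithmetic pushes back. A generic probability sketch with illustrative numbers (not First Street's model): a hazard can be absent for decades and still carry meaningful five-year risk.

```python
# Generic flood-probability sketch (illustrative, not First Street's model).
p_annual = 0.01  # a "1-in-100-year" flood zone, chosen for illustration

p_no_flood_50yr = (1 - p_annual) ** 50   # chance of zero floods in 50 years
p_flood_5yr = 1 - (1 - p_annual) ** 5    # chance of at least one flood in 5 years

print(round(p_no_flood_50yr, 3))  # 0.605: no flood in 50 years is the likely outcome
print(round(p_flood_5yr, 3))      # 0.049: yet ~5% chance of flooding within 5 years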
AI

Apple AI Chief Retiring After Siri Failure 21

Apple's longtime AI chief John Giannandrea is retiring, with former Microsoft and Google AI leader Amar Subramanya stepping in to take over. MacRumors notes the retirement comes after the company's repeated delays in delivering its revamped Siri and internal turmoil that led to an AI team exodus. From the report: Giannandrea will serve as an advisor between now and 2026, with former Microsoft AI researcher Amar Subramanya set to take over as vice president of AI. Subramanya will report to Apple engineering chief Craig Federighi, and will lead Apple Foundation Models, ML research, and AI Safety and Evaluation. Subramanya was previously corporate vice president of AI at Microsoft, and before that, he spent 16 years at Google. He was head of engineering for Google's Gemini Assistant, and Apple says that he has "deep expertise" in both AI and ML research that will be important to "Apple's ongoing innovation and future Apple Intelligence features."

Some of the teams that Giannandrea oversaw will move to Sabih Khan and Eddy Cue, such as AI Infrastructure and Search and Knowledge. Khan is Apple's new Chief Operating Officer who took over for Jeff Williams earlier this year. Cue has long overseen Apple services. [...] Apple said that it is "poised to accelerate its work in delivering intelligent, trusted, and profoundly personal experiences" with the new AI team.
"We are thankful for the role John played in building and advancing our AI work, helping Apple continue to innovate and enrich the lives of our users," said Apple CEO Tim Cook in a statement. "AI has long been central to Apple's strategy, and we are pleased to welcome Amar to Craig's leadership team and to bring his extraordinary AI expertise to Apple. In addition to growing his leadership team and AI responsibilities with Amar's joining, Craig has been instrumental in driving our AI efforts, including overseeing our work to bring a more personalized Siri to users next year."
Privacy

Flock Uses Overseas Gig Workers To Build Its Surveillance AI (404media.co) 12

An anonymous reader quotes a report from 404 Media: Flock, the automatic license plate reader and AI-powered camera company, uses overseas workers from Upwork to train its machine learning algorithms, with training material telling workers how to review and categorize footage, including images of people and vehicles in the United States, according to material reviewed by 404 Media that was accidentally exposed by the company. The findings bring up questions about who exactly has access to footage collected by Flock surveillance cameras and where people reviewing the footage may be based. Flock has become a pervasive technology in the US, with its cameras present in thousands of communities that cops use every day to investigate things like carjackings. Local police have also performed numerous lookups for ICE in the system.

Companies that use AI or machine learning regularly turn to overseas workers to train their algorithms, often because the labor is cheaper than hiring domestically. But the nature of Flock's business -- creating a surveillance system that constantly monitors US residents' movements -- means that footage might be more sensitive than other AI training jobs. [...] Broadly, Flock uses AI or machine learning to automatically detect license plates, vehicles, and people, including what clothes they are wearing, from camera footage. A Flock patent also mentions cameras detecting "race." An exposed dashboard panel included figures on "annotations completed" and "annotator tasks remaining in queue," with annotations being the notes workers add to reviewed footage to help train AI algorithms. Tasks include categorizing vehicle makes, colors, and types, transcribing license plates, and "audio tasks." Flock recently started advertising a feature that will detect "screaming." The panel showed workers sometimes completed thousands upon thousands of annotations over two-day periods. It also included a list of people tasked with annotating Flock's footage. Taking those names, 404 Media found some were located in the Philippines, according to their LinkedIn and other online profiles.
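The annotation workflow described above, categorizing vehicles, transcribing plates, and tagging audio, maps naturally onto a simple task record. A hypothetical sketch of what one item in such a labeling queue might look like (field names are illustrative, not Flock's actual schema):

```python
# Hypothetical annotation-task record for an AI labeling queue.
# Field names and values are illustrative, not Flock's actual schema.
from dataclasses import dataclass, field

@dataclass
class AnnotationTask:
    task_id: str
    task_type: str          # e.g. "vehicle_make", "plate_transcription", "audio"
    frame_ref: str          # pointer to the camera frame under review
    labels: dict = field(default_factory=dict)
    completed: bool = False

    def complete(self, **labels):
        """Record the annotator's labels and mark the task done."""
        self.labels.update(labels)
        self.completed = True

task = AnnotationTask("t-001", "vehicle_make", "frame://cam42/000123")
task.complete(make="Toyota", color="blue", body_type="sedan")
print(task.completed, task.labels["make"])  # True Toyota
```

Each completed record of this kind becomes one training example, which is why the dashboard's "annotations completed" counter is a direct proxy for training-data volume.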

Many of these people were employed through Upwork, according to the exposed material. Upwork is a gig and freelance work platform where companies can hire designers and writers or pay for "AI services," according to Upwork's website. The tipsters also pointed to several publicly available Flock presentations which explained in more detail how workers were to categorize the footage. It is not clear what specific camera footage Flock's AI workers are reviewing. But screenshots included in the worker guides show numerous images from vehicles with US plates, including in New York, Michigan, Florida, New Jersey, and California. Other images include road signs clearly showing the footage is taken from inside the US, and one image contains an advertisement for a specific law firm in Atlanta.

Social Networks

Austria's Rebel Nuns Refuse To Give Up Instagram To Stay In Their Convent (npr.org) 48

Three Austrian nuns in their 80s who escaped a care home and reclaimed their old convent are refusing the church's offer to stay because it requires them to quit Instagram, stop speaking to the press, and avoid legal counsel -- conditions they call a gag order. Their standoff with church authorities has now escalated to the Vatican as the nuns continue posting to their 185,000 followers. NPR reports: Before the church authorities moved the nuns into care almost two years ago, the local abbey and Archdiocese of Salzburg acquired the convent. The sisters say they were not aware they were signing away what they understood to be their lifelong right to remain in the cloister. On Friday, their superior, Provost Markus Grasl from Reichersberg Abbey, announced that the sisters can stay. But his offer comes with conditions: The nuns must cease all social media activities, stop talking to the press and forgo seeking legal advice. The nuns have rejected the proposal, and now Grasl has called on the Vatican to intercede.

In a statement released Friday, the nuns said the provost's offer is nothing short of a gag order. Speaking via Instagram, Sister Regina said, "We can't agree to this deal. Without the media, we'd have been silenced." Sister Bernadette told Instagram followers: "We need to resolve this but any agreement we reach must be in accordance with God's will and shaped by human reason." [...] The provost's proposed agreement -- which NPR has seen -- also bans laypeople from entering the cloisters, including the sisters' helpers, many of whom they've known for decades and on whom the nuns now depend for help.

Speaking to NPR on Monday, the provost's spokesperson, crisis PR manager Harald Schiffl, said that the provost does not understand why the nuns reject his offer and that, in response, he has requested the Vatican authorities responsible for religious orders to step in. The Vatican has not commented on the situation. So while they await news from Rome, the sisters continue to follow the papal Instagram account. Schiffl says the terms relating to the nuns' social media use are reasonable: "The abbey wishes to discontinue the sisters' social media accounts because what they show has very little to do with real religious life."

Education

Singapore Extends Secondary School Smartphone Ban To Cover Entire School Day (straitstimes.com) 6

Singapore's Ministry of Education has announced that secondary school students will be banned from using smartphones and smartwatches throughout the entire school day starting January 2026, extending current restrictions beyond regular lesson time to cover recess, co-curricular activities, and supplementary lessons. Under the new guidelines, students must store their phones in designated areas like lockers or keep them in their school bags.

Smartwatches also fall under the ban because they enable messaging and social media access, which the ministry says can lead to distractions and reduced peer interaction. Schools may allow exceptions where necessary. Some secondary schools adopted these tighter rules after they were announced for primary schools in January 2025, and the ministry reports improved student well-being and more physical interaction during breaks at those schools. The ministry is also moving the default sleep time for school-issued personal learning devices from 11pm to 10.30pm starting January.
Businesses

Top Consultancies Freeze Starting Salaries as AI Threatens 'Pyramid' Model (ft.com) 54

Major consulting firms including McKinsey, Boston Consulting Group and Bain have frozen starting salaries for the third consecutive year as AI reshapes how these companies think about their traditional reliance on large cohorts of junior analysts. Job offers for 2026 show undergraduate packages holding steady at $135,000-$140,000 and MBA packages at $270,000-$285,000, according to Management Consulted. The Big Four -- Deloitte, EY, KPMG, and PwC -- haven't raised starting pay since 2022.

The industry's classic "pyramid" structure, built on thousands of entry-level employees who crunch data and assemble PowerPoint decks, faces pressure as AI automates much of that work. Two senior executives at Big Four firms estimated that UK graduate recruitment would fall by about half in the coming year. PwC has already cut graduate hiring in 2025 and said in October it would miss a target to add 100,000 employees globally by 2026 -- a goal set five years ago before generative AI's rollout.
Cloud

Amazon and Google Announce Resilient 'Multicloud' Networking Service Plus an Open API for Interoperability (reuters.com) 21

Amazon and Google's announcement calls it "more than a multicloud solution," saying it's "a step toward a more open cloud environment. The API specifications developed for this product are open for other providers and partners to adopt, as we aim to simplify global connectivity for everyone."

Amazon and Google are introducing "a jointly developed multicloud networking service," reports Reuters. "The initiative will enable customers to establish private, high-speed links between the two companies' computing platforms in minutes instead of weeks." The new service is being unveiled a little over a month after an Amazon Web Services outage on October 20 disrupted thousands of websites worldwide, knocking offline some of the internet's most popular apps, including Snapchat and Reddit. That outage cost U.S. companies an estimated $500 million to $650 million, according to analytics firm Parametrix.
Google and Amazon are promising "high resiliency" through "quad-redundancy across physically redundant interconnect facilities and routers," with both companies continuously watching for issues. (They're also using MACsec encryption between the Google Cloud and AWS edge routers, according to Sunday's announcement.) From the announcement: As organizations increasingly adopt multicloud architectures, the need for interoperability between cloud service providers has never been greater. Historically, however, connecting these environments has been a challenge, forcing customers to take a complex "do-it-yourself" approach to managing global multi-layered networks at scale....

Previously, to connect cloud service providers, customers had to manually set up complex networking components, including physical connections and equipment; this approach required lengthy lead times and coordination with multiple internal and external teams, and could take weeks or even months. AWS had a vision for developing this capability as a unified specification that could be adopted by any cloud service provider, and collaborated with Google Cloud to bring it to market.

Now, this new solution reimagines multicloud connectivity by moving away from physical infrastructure management toward a managed, cloud-native experience.

Reuters points out that Salesforce "is among the early users of the new approach, Google Cloud said in a statement."
AI

How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality (msn.com) 124

Some AI experts were reportedly shocked that ChatGPT still hadn't been fully tested for sycophancy as of last spring. "OpenAI did not see the scale at which disturbing conversations were happening," writes the New York Times, sharing what they learned after interviewing more than 40 current and former OpenAI employees, including safety engineers, executives, and researchers.

The team responsible for ChatGPT's tone had raised concerns about last spring's model (which the Times describes as "too eager to keep the conversation going and to validate the user with over-the-top language"). But they were overruled when A/B testing showed users kept coming back:

Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits... OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling. Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.... The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died...

One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems." But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population...

In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations. Experts agree that the new model, GPT-5, is safer.... Teams from across OpenAI worked on other new safety features: The chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke's team analysed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15% showed "potentially heightened levels of emotional attachment to ChatGPT," according to a company blog post. But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again. Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations, lifting the Replika-era ban on adult content. (How erotica might affect users' well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)
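Those percentages imply a striking scale. A quick back-of-the-envelope check (assuming, as the blog post suggests, that both figures refer to the same overall user base) shows the denominator OpenAI's numbers point to:

```python
# OpenAI's figures as quoted above: 0.07% of users showing possible signs
# of psychosis or mania is "equivalent to 560,000 people."
psychosis_signs = 560_000
psychosis_rate = 0.0007   # 0.07%

# The total user base implied by those two numbers:
implied_users = psychosis_signs / psychosis_rate
print(f"Implied user base: {implied_users:,.0f}")   # 800,000,000

# At that scale, the 0.15% showing "heightened emotional attachment" is:
attached = implied_users * 0.0015
print(f"Heightened attachment: {attached:,.0f}")    # 1,200,000
```

That works out to roughly 800 million users, with about 1.2 million showing heightened emotional attachment; small percentages, but very large absolute numbers.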

OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever. In October, [30-year-old "Head of ChatGPT" Nick] Turley, who runs ChatGPT, made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said.

The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.

AI

AI Can Already Do the Work of 12% of America's Workforce, Researchers Find (msn.com) 59

An anonymous reader shared this report from CBS News: Artificial intelligence can do the work currently performed by nearly 12% of America's workforce, according to a recent study from the Massachusetts Institute of Technology. The researchers, relying on a metric called the "Iceberg Index" that measures a job's potential to be automated, conclude that AI already has the cognitive and technical capacity to handle a range of tasks in technology, finance, health care and professional services. The index simulated how more than 150 million U.S. workers across nearly 1,000 occupations interact and overlap with AI's abilities...

AI is also already doing some of the entry-level jobs that have historically been reserved for recent college graduates or relatively inexperienced workers, the report notes. "AI systems now generate more than a billion lines of code each day, prompting companies to restructure hiring pipelines and reduce demand for entry-level programmers," the researchers wrote. "These observable changes in technology occupations signal a broader reorganization of work that extends beyond software development."

"The study doesn't seek to shed light on how many workers AI may already have displaced or could supplant in the future," the article points out.

"To what extent such tools take over job functions performed by people depends on a number of factors, including individual businesses' strategy, societal acceptance and possible policy interventions, the researchers note."
