Programming

Fewer US College Students Major in CS. More Choose Data Science, Engineering (yahoo.com) 26

"From 2008 to 2024, the number of four-year computer science degrees granted rose about fivefold..." reports the Washington Post. Then in 2025 CS suddenly dropped from the fourth-largest undergraduate major to sixth, the paper reports (citing data from the nonprofit National Student Clearinghouse, which compiles numbers from 97% of U.S. universities).

The 54,000-student drop was "the biggest one-year drop of any major discipline going back to at least 2020." But what major are they choosing instead? Sarah Karamarkovich, a research associate with the National Student Clearinghouse, pointed to an explanation from the data that we had overlooked. Enrollments in two interdisciplinary majors, data analytics and data science, topped a combined 35,000 in the fall of 2025. That was up from a few hundred when those disciplines were broken out into their own majors in 2020. Those relatively new categories reflect colleges' zeal to create specialized majors, including in AI, data science, robotics and cybersecurity. Some of those disciplines may be counted in the national enrollment data as computer science. Others are not.

The numbers suggest that some of the disappearing computer science majors didn't flee so much as they splintered into related disciplines.... The 8 percent decline in computer science majors last fall was nearly mirrored by a 7.3 percent increase in engineering majors, according to the National Student Clearinghouse data. Within engineering, mechanical and electrical engineering major enrollments increased by the largest absolute amounts — a jump of 11 percent and 14 percent, respectively.

Movies

Only Half of Americans Went To a Movie Theater In 2025, Study Finds (variety.com) 162

A Pew Research Center survey found that only 53% of U.S. adults went to a movie theater in the past year, while 7% said they've never seen a movie in a theater at all. "The findings reflected a domestic box office still fighting to regain its footing since the COVID-19 pandemic, when ticket sales collapsed 81% in 2020 due to theater closures," reports Variety. From the report: In 2025, moviegoers in the U.S. and Canada bought 769.2 million tickets, less than half of the all-time peak of roughly 1.6 billion tickets sold in 2002, according to data from Nash Information Services. However, an August 2025 study fielded by NRG/National Research Group showed that 77% of Americans ages 12-74 went to see at least one movie in a theater in the previous 12 months.

Box office revenue peaked at an inflation-adjusted $16.4 billion in 2002, and annual ticket revenue held relatively steady through the 2000s and 2010s before falling to under $3 billion in 2020 when theaters closed for months. Last year, U.S. theaters sold just over $9 billion worth of tickets, per media analytics firm Comscore. The number represents a recovery, but nowhere near a full one, as ticket sales have been lagging around 20% below pre-pandemic levels.

The Courts

AI Startup Sues Ex-CEO Saying He Took 41GB of Email, Lied On Resume (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Hayden AI, a San Francisco startup that makes spatial analytics tools for cities worldwide, has sued its co-founder and former CEO, alleging that he stole a large quantity of proprietary information in the days leading up to his ouster from the company in September 2024. In a lawsuit filed late last month in San Francisco Superior Court but only made public this week, Hayden AI claims that former CEO Chris Carson undertook what it called "numerous fraudulent actions," which include "forged board signatures, unauthorized stock sales, and improper allocation of personal expenses." [...] Hayden AI, which is worth $464 million according to an estimated valuation on PitchBook, has asked the court to impose preliminary injunctive relief, requiring Carson to either return or destroy the data he allegedly stole. Specifically, the lawsuit alleges that Carson secretly sold over $1.2 million in company stock, forged board signatures, and copied 41GB of proprietary company emails before being fired in September 2024. The complaint also claims Carson fabricated key parts of his resume, including a PhD and military service. It's a "carefully constructed fraud," says Hayden AI.

"That is a lie," the complaint states. "Carson does not hold a PhD from Waseda or any other university. In 2007, he was not obtaining a PhD but was operating 'Splat Action Sports,' a paintball equipment business in a Florida strip mall."

Businesses

Accenture Acquires Ookla, Downdetector As Part of $1.2 Billion Deal (theregister.com) 15

Accenture is acquiring Downdetector parent company Ookla from Ziff Davis in a $1.2 billion deal to bolster its network analytics and visibility tools for telecoms, hyperscalers, and enterprises. "The deal, which will transfer all of Ziff Davis's Connectivity division to Accenture, includes Ookla's Speedtest, Ekahau, and RootMetrics," notes The Register: "Modern networks have evolved from simple infrastructure into business-critical platforms," said Accenture CEO Julie Sweet in a canned statement. "Without the ability to measure performance, organizations cannot optimize experience, revenue, or security." Ookla is meant to let them do just that.

Data captured at the network and device layer are used to enhance fraud prevention in banking, smart home monitoring, and traffic optimization in retail, Accenture said. Ookla's platform, which lets users test their own connectivity speed, captures more than 1,000 attributes per test and provides the foundation for those analytics, Accenture said.

AI

Anthropic's Claude Passes ChatGPT, Now #1, on Apple's 'Top Apps' Chart After Pentagon Controversy (engadget.com) 36

"Anthropic may have lost out on doing business with the US government," reports Engadget, "but it's gained enough popularity to earn the number one spot on the App Store's Top Free Apps leaderboard."

"Anthropic's Claude AI assistant had already leaped to the #2 slot on Apple's chart by late Friday," CNBC reported Saturday: The rise in popularity suggests that Anthropic is benefiting from its presence in news headlines, stemming from its refusal to have its models used for mass domestic surveillance or for fully autonomous weapons... OpenAI's ChatGPT sat at No. 1 on the App Store rankings on Saturday, while Google's Gemini was at No. 3... On Jan. 30, [Claude] was ranked No. 131 in the U.S., and it bounced between the top 20 and the top 50 for much of February, according to data from analytics company Sensor Tower... [And Friday night, for 85.3 million followers] pop singer Katy Perry posted a screenshot of Anthropic's Pro subscription for consumers, with a heart superimposed over it.

On Sunday, Engadget reported Anthropic's "very public spat" with the Pentagon "led to a wave of user support that finally allowed Claude to dethrone OpenAI's ChatGPT on the App Store as the most downloaded free app."

On Friday, Anthropic posted: "We are deeply grateful to our users, and to the industry peers, policymakers, veterans, and members of the public who have voiced their support in recent days. Thank you."

AI

OpenAI's Lead Is Contracting as AI Competition Intensifies (bigtechnology.com) 28

OpenAI's rivals are cutting into ChatGPT's lead. From a report: The top chatbot's market share fell from 69.1% to 45.3% between January 2025 and January 2026 among daily U.S. users of its mobile app. Gemini, in the same time period, rose from 14.7% to 25.1% and Grok rose from 1.6% to 15.2%.

The data, obtained by Big Technology from mobile insights firm Apptopia, indicates the chatbot race has tightened meaningfully over the past year with Google's surge showing up in the numbers. Overall, the chatbot market increased 152% since last January, according to Apptopia, with ChatGPT exhibiting healthy download growth.

On desktop and mobile web, a similar pattern appears, according to analytics firm Similarweb. Visits to ChatGPT went from 3.8 billion to 5.7 billion between January 2025 and January 2026, a 50% increase, while visits to Gemini went from 267.7 million to 2 billion, a 647% increase. ChatGPT is still far and away the leader in visits, but it has company in the race now.

The Media

Is Google Prioritizing YouTube and X Over News Publishers on Discover? (pressgazette.co.uk) 32

Earlier this month, the media site Press Gazette reported that Google "is increasingly prioritising AI summaries, X posts and YouTube videos" on its "Discover" feed (which appears on the leftmost homescreen page of many Android phones and the Google app's homepage).

"The changes could be devastating for publishers who rely heavily on Discover for referral traffic. And it looks set to accelerate a global trend of declining traffic to publishers from both Google search and Discover." Xavi Beumala from website analytics platform Marfeel warned in a research update: "Google Discover is no longer a publisher-first surface. It's becoming an AI platform with YouTube and X absorbing real estate that once went to newsrooms..." [They warn later that "This is not a marginal UI experiment. It is a reallocation of feed real estate away from links and toward inline YouTube plays and generated summaries."] Google says it prioritises "helpful, reliable, people-first content". Unlike Google News, there is no requirement that Google Discover showcase bona fide publisher websites.

In recent months fake news stories published by fraudulent website publishers have been promoted on Google Discover, reaping tens of millions of clicks. Google said it was working on a "fix" for this issue...

Facebook, Instagram and TikTok content may also start flowing into the Discover feed in future. When Google announced the addition of posts from X, Instagram and YouTube Shorts in September, it said there would be "more platforms to come".

Microsoft

Microsoft 365 Endured 9+ Hours of Outages Thursday (crn.com) 36

Early Friday "there were nearly 113 incidents of people reporting issues with Microsoft 365 as of 1:05 a.m. ET," reports Reuters. But that's down "from over 15,890 reports at its peak a day earlier, according to Downdetector." Reuters points out the outage affected antivirus software Microsoft Defender and data governance software Microsoft Purview, while CRN notes it also impacted "a number of Microsoft 365 services" including Outlook and Exchange online: During the outage, Outlook users received a "451 4.3.2 temporary server issue" error message when attempting to send or receive email. Users did not have the ability to send and receive email through Exchange Online, including notification emails from Microsoft Viva Engage, according to the vendor. Other issues that cropped up include an inability to send and receive subscription email through [analytics platform] Microsoft Fabric, collect message traces, search within SharePoint online and Microsoft OneDrive and create chats, meetings, teams, channels or add members in Microsoft Teams...
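The "451 4.3.2" reply users saw is a transient SMTP failure: per SMTP convention, 4xx codes mean "try again later" while 5xx codes are permanent rejections, so well-behaved mail clients queue and retry rather than bounce. A minimal sketch of that classification (the helper name is ours, for illustration only):

```python
def smtp_disposition(reply: str) -> str:
    """Classify an SMTP reply line by its 3-digit code.

    4xx replies (like the "451 4.3.2 temporary server issue" seen
    during the outage) are transient: keep the message queued and
    retry. 5xx replies are permanent failures; 2xx indicate success.
    """
    code = int(reply.split()[0])
    if 200 <= code < 300:
        return "delivered"
    if 400 <= code < 500:
        return "retry-later"   # transient: sender should retry
    if 500 <= code < 600:
        return "bounce"        # permanent: return to sender
    raise ValueError(f"unexpected SMTP reply code: {code}")

print(smtp_disposition("451 4.3.2 temporary server issue"))  # → retry-later
print(smtp_disposition("250 2.0.0 OK"))                      # → delivered
```

This retry semantics is also why mail "recovered" gradually once the infrastructure was restored: queued messages drained as retries started succeeding.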

As with past cloud outages with other vendors, even after Microsoft fixed the issues, recovery efforts by its users to return to a normal state took additional time... Microsoft confirmed in a post on X [Thursday] at 4:14 p.m. ET that it "restored the affected infrastructure to a (healthy) state" but "further load balancing is required to mitigate impact...." The company reported "residual imbalances across the environment" at 7:02 p.m., "restored access to the affected services" and stable mail flow at 12:33 a.m. Jan. 23. At that time, Microsoft still saw a "small number of remaining affected services" without full service stability. The company declared impact from the event "resolved" at 1:29 p.m. Eastern. Microsoft sent out another X post at 8:20 a.m. telling users experiencing residual issues that "clearing local DNS caches or temporarily lowering DNS TTL values may help ensure a quicker remediation...."

Microsoft said in an admin center update that [Thursday's] outage was "caused by elevated service load resulting from reduced capacity during maintenance for a subset of North America hosted infrastructure." Furthermore, Microsoft noted that during "ongoing efforts to rebalance traffic" it introduced a "targeted load balancing configuration change intended to expedite the recovery process, which incidentally introduced additional traffic imbalances associated with persistent impact for a portion of the affected infrastructure." US itek's David Stinner said it appears that Microsoft did not have enough capacity on its backup system while doing maintenance on its main system. "It looks like the backup system was overloaded, and it brought the system down while they were still doing maintenance on the main system," he said. "That is why it took so many hours to get back up and running. If your primary system is down for maintenance and your backup system fails due to capacity issues, then it is going to take a while to get your primary system back up and running."

"This was not Microsoft's first outage of 2026," the article notes, "with the vendor handling access issues with Teams, Outlook and other M365 services on Wednesday, a Copilot issue on Jan. 15 plus an Azure outage earlier in the month..."

Businesses

Amazon Is Buying America's First New Copper Output In More Than a Decade (wsj.com) 35

An anonymous reader quotes a report from the Wall Street Journal: Amazon is turning to an Arizona mine that last year became the first new source of U.S. copper in more than a decade, to meet its data centers' ravenous appetite for the industrial metal. The mine was restarted as a proving ground for Rio Tinto's new method of unlocking low-grade copper deposits. Rio signed a two-year supply pact with Amazon Web Services, a vote of confidence for its Nuton venture, which uses bacteria and acid to extract copper from ore that was previously uneconomical to process. The move by Amazon is the latest example of a technology company rushing to secure the power and critical materials necessary to build and operate artificial-intelligence data centers. The Nuton copper will satisfy only a sliver of Amazon's needs. The biggest data centers each require tens of thousands of metric tons of copper for all the wires, busbars, circuit boards, transformers and other electrical components housed there. The 14,000 metric tons of copper cathode that Rio expects the Arizona Nuton project to yield over four years wouldn't be enough for one of those facilities.

Rio deployed its bioleaching process in the recent restart of a mine east of Tucson and has partnerships to take the technology to several others in the Americas. The idea is to uncork the low-grade ore left behind at old mines and is key to Rio's plans to boost output when new discoveries are harder than ever to bring online and copper demand is surging. [...] "We work at the commodity level to find lower carbon solutions to drive our business growth," said Chris Roe, Amazon's director of worldwide carbon. "That means steel, and that means concrete, and it absolutely means copper with regard to our data centers." Roe said the copper will be routed to companies that produce components for Amazon's data centers. As part of the deal, Amazon is supplying Rio with cloud-computing and data analytics to optimize Nuton's recovery rates and help the miner expand production.

Crime

Italy's Privacy Watchdog, Scourge of US Big Tech, Hit By Corruption Probe (reuters.com) 10

The powerful data privacy watchdog in Italy long known for aggressively policing U.S. and Chinese AI giants is under investigation for possible corruption and embezzlement. Reuters reports: Rome prosecutors are investigating the agency's president, Pasquale Stanzione, and three other board members over alleged excessive spending and possible corruption behind its decisions, Italian news agencies including ANSA as well as the judicial source, who did not wish to be named, said. Stanzione, when asked by reporters to comment on the investigation, said he was "absolutely serene."

The opposition 5-Star Movement said the agency's credibility had been undermined and called for Stanzione to resign. Stanzione declined to answer when asked repeatedly by reporters whether he would step down. The data privacy authority, known in Italy as the Garante, is one of the European Union's most proactive regulators in assessing AI platform compliance with the bloc's data privacy regime. It frequently takes initiatives -- such as requesting information or imposing fines or bans -- on matters affecting high-tech multinationals operating in the country.

Power

America's Biggest Power Grid Operator Has an AI Problem - Too Many Data Centers (msn.com) 61

America's largest power-grid operator, PJM, which delivers electricity to 67 million people across a 13-state region from New Jersey to Kentucky, is approaching a supply crisis as AI data centers in Northern Virginia's "Data Center Alley" consume electricity at an unprecedented rate.

The nonprofit expects demand to grow by 4.8% annually over the next decade. Mark Christie, former chairman of the Federal Energy Regulatory Commission, said the reliability risk that was once "on the horizon" is now "across the street." Dominion Energy, the utility serving parts of Virginia, has received requests from data-center developers requiring more than 40 gigawatts of electricity -- roughly twice its Virginia network capacity at the end of 2024. Older power plants are going out of service faster than new ones can be built, and the grid could max out during periods of high demand, forcing rolling blackouts during heat waves or deep freezes.

In November, efforts to establish new rules for data centers stalled when PJM, tech companies, power suppliers and utilities couldn't agree on a plan. Monitoring Analytics, the firm that oversees the market, warned that unless data centers bring their own power supply, "PJM will be in the position of allocating blackouts rather than ensuring reliability."

AI

AI Fails at Most Remote Work, Researchers Find (msn.com) 39

A new study "compared how well top AI systems and human workers did at hundreds of real work assignments," reports the Washington Post.

They add that at least one example "illustrates a disconnect three years after the release of ChatGPT that has implications for the whole economy." AI can accomplish many impressive tasks involving computer code, documents or images. That has prompted predictions that human work of many kinds could soon be done by computers alone. Bentley University and Gallup found in a survey [PDF] last year that about three-quarters of Americans expect AI to reduce the number of U.S. jobs over the next decade. But economic data shows the technology largely has not replaced workers.

To understand what work AI can do on its own today, researchers collected hundreds of examples of projects posted on freelancing platforms that humans had been paid to complete. They included tasks such as making 3D product animations, transcribing music, coding web video games and formatting research papers for publication. The research team then gave each task to AI systems such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. The best-performing AI system successfully completed only 2.5 percent of the projects, according to the research team from Scale AI, a start-up that provides data to AI developers, and the Center for AI Safety, a nonprofit that works to understand risks from AI. "Current models are not close to being able to automate real jobs in the economy," said Jason Hausenloy, one of the researchers on the Remote Labor Index study...

The results, which show how AI systems fall short, challenge predictions that the technology is poised to soon replace large portions of the workforce... The AI systems failed on nearly half of the Remote Labor Index projects by producing poor-quality work, and they left more than a third incomplete. Nearly 1 in 5 had basic technical problems such as producing corrupt files, the researchers found.

One test involved creating an interactive dashboard for data from the World Happiness Report, according to the article. "At first glance, the AI results look adequate. But closer examination reveals errors, such as countries inexplicably missing data, overlapping text and legends that use the wrong colors — or no colors at all."

The researchers say AI systems are hobbled by a lack of memory, and are also weak on "visual" understanding.

IOS

iOS 26 Shows Unusually Slow Adoption Months After Release (macrumors.com) 61

Apple's iOS 26 appears to be witnessing the slowest adoption rate in recent memory, with third-party analytics from StatCounter indicating that only 15 to 16% of active iPhones worldwide are running the operating system nearly four months after its September release. The figures stand in stark contrast to iOS 18, which had reached approximately 63% adoption by January 2025, and iOS 17, which hit 54% by January 2024. iOS 16 had surpassed 60% by January 2023.

StatCounter's breakdown for January 2026 shows iOS 26.1 accounting for roughly 10.6% of devices, iOS 26.2 at about 4.6%, and the original iOS 26.0 at 1.1%. More than 60% of iPhones tracked by the analytics firm remain on iOS 18.

MacRumors' own visitor data tells a similar story: 89.3% of the site's readers were on iOS 18 during the first week of January 2025, but only 25.7% are running iOS 26 during the same period this year. iOS 26 introduced Liquid Glass, a sweeping visual redesign that replaces much of the traditional opaque interface with translucent layers, blurred backgrounds, and dynamic depth effects.

United Kingdom

UK Urged To Unplug From US Tech Giants as Digital Sovereignty Fears Grow (theregister.com) 53

An anonymous reader shares a report: The Open Rights Group is warning politicians that the UK is leaning far too heavily on US tech companies to run critical systems, and wants the Cybersecurity and Resilience Bill to force a rethink.

The digital rights outfit says the bill, which is due to receive its second reading in the House of Commons today, represents a rare opportunity to force the government to confront what it sees as a strategic blind spot: the UK's reliance on companies such as Amazon, Google, Microsoft, and data analytics biz Palantir for everything from cloud hosting to sensitive public sector systems.

"Just as relying on one country for the UK's energy needs would be risky and irresponsible, so is overreliance on US companies to supply the bulk of our digital infrastructure," said James Baker, platform power programme manager at Open Rights Group. He argued that digital infrastructure has become an extension of geopolitical power, and the UK is increasingly vulnerable to decisions taken far beyond Westminster's control.

AI

Does AI Really Make Coders Faster? (technologyreview.com) 139

One developer tells MIT Technology Review that AI tools weaken the coding instincts he used to have. And beyond that, "It's just not fun sitting there with my work being done for me."

But is AI making coders faster? "After speaking to more than 30 developers, technology executives, analysts, and researchers, MIT Technology Review found that the picture is not as straightforward as it might seem..." For some developers on the front lines, initial enthusiasm is waning as they bump up against the technology's limitations. And as a growing body of research suggests that the claimed productivity gains may be illusory, some are questioning whether the emperor is wearing any clothes.... Data from the developer analytics firm GitClear shows that most engineers are producing roughly 10% more durable code — code that isn't deleted or rewritten within weeks — since 2022, likely thanks to AI. But that gain has come with sharp declines in several measures of code quality. Stack Overflow's survey also found trust and positive sentiment toward AI tools falling significantly for the first time. And most provocatively, a July study by the nonprofit research organization Model Evaluation & Threat Research (METR) showed that while experienced developers believed AI made them 20% faster, objective tests showed they were actually 19% slower...

Developers interviewed by MIT Technology Review generally agree on where AI tools excel: producing "boilerplate code" (reusable chunks of code repeated in multiple places with little modification), writing tests, fixing bugs, and explaining unfamiliar code to new developers. Several noted that AI helps overcome the "blank page problem" by offering an imperfect first stab to get a developer's creative juices flowing. It can also let nontechnical colleagues quickly prototype software features, easing the load on already overworked engineers. These tasks can be tedious, and developers are typically glad to hand them off. But they represent only a small part of an experienced engineer's workload. For the more complex problems where engineers really earn their bread, many developers told MIT Technology Review, the tools face significant hurdles...

The models also just get things wrong. Like all LLMs, coding models are prone to "hallucinating" — it's an issue built into how they work. But because the code they output looks so polished, errors can be difficult to detect, says James Liu, director of software engineering at the advertising technology company Mediaocean. Put all these flaws together, and using these tools can feel a lot like pulling a lever on a one-armed bandit. "Some projects you get a 20x improvement in terms of speed or efficiency," says Liu. "On other things, it just falls flat on its face, and you spend all this time trying to coax it into granting you the wish that you wanted and it's just not going to..." There are also more specific security concerns. Researchers have discovered a worrying class of hallucinations where models reference nonexistent software packages in their code. Attackers can exploit this by creating packages with those names that harbor vulnerabilities, which the model or developer may then unwittingly incorporate into software.

Other key points from the article:
  • LLMs can only hold limited amounts of information in context windows, so "they struggle to parse large code bases and are prone to forgetting what they're doing on longer tasks."
  • "While an LLM-generated response to a problem may work in isolation, software is made up of hundreds of interconnected modules. If these aren't built with consideration for other parts of the software, it can quickly lead to a tangled, inconsistent code base that's hard for humans to parse and, more important, to maintain."
  • "Accumulating technical debt is inevitable in most projects, but AI tools make it much easier for time-pressured engineers to cut corners, says GitClear's Harding. And GitClear's data suggests this is happening at scale..."
  • "As models improve, the code they produce is becoming increasingly verbose and complex, says Tariq Shaukat, CEO of Sonar, which makes tools for checking code quality. This is driving down the number of obvious bugs and security vulnerabilities, he says, but at the cost of increasing the number of 'code smells' — harder-to-pinpoint flaws that lead to maintenance problems and technical debt."

Yet the article cites a recent Stanford University study that found employment among software developers aged 22 to 25 dropped nearly 20% between 2022 and 2025, "coinciding with the rise of AI-powered coding tools."

The story is part of MIT Technology Review's new Hype Correction series of articles about AI.


AI

Chinese University Collected More AI Patents Than MIT, Stanford, Princeton and Harvard Combined (bloomberg.com) 33

Tsinghua University collected 4,986 AI and machine learning patents between 2005 and the end of 2024. The Beijing institution received more than 900 patents in 2024 alone. The total exceeds the combined patent count from MIT, Stanford, Princeton and Harvard during the same period. China now accounts for more than half of all active patent families globally in AI and machine learning fields, according to data analytics service LexisNexis.

The university also has more AI research papers among the 100 most cited than any other school at last count. The US still holds the most influential AI patents and the top performing models. Harvard and MIT consistently rank ahead of Tsinghua in patent influence. American institutions produced 40 notable AI models in 2024 compared to 15 from Chinese organizations, according to Stanford's AI Index Report. China's share of the world's elite AI researchers -- the top 2% -- rose from 10% in 2019 to 26% in 2022. The US share fell from 35% to 28% during the same period, according to the Information Technology & Innovation Foundation.

Google

Did ChatGPT Conversations Leak... Into Google Search Console Results? (arstechnica.com) 51

"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: the search-traffic tool for webmasters, Google Search Console.

Though it normally shows the short phrases or keywords typed into Google which led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private." Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200 odd queries — including "some pretty crazy ones." (Web optimization consultant Slobodan Manić helped Packer investigate...) Packer points out that nobody clicked "share" or was given an option to prevent their chats from being exposed. Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports... "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes."

To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console.
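A site owner could spot such leaked prompts in their own exported Search Console queries with a simple length heuristic, since ordinary Google searches are short keyword phrases while the leaked chats ran to sentence length, sometimes 300+ characters. A minimal sketch (the thresholds and sample queries are our illustrative guesses, not from Packer's analysis):

```python
def looks_like_chat_prompt(query: str, max_len: int = 100, max_words: int = 12) -> bool:
    """Flag a search query that reads more like a chatbot prompt.

    Heuristic only: long, many-word queries are unusual for Google
    search but typical of conversational prompts.
    """
    return len(query) > max_len or len(query.split()) > max_words

# Hypothetical rows from a Search Console performance export.
queries = [
    "best pizza brooklyn",
    "my business partner wants to dissolve our LLC but we never signed "
    "an operating agreement and I don't know what my rights are, what "
    "should I say to him at our meeting tomorrow?",
]
flagged = [q for q in queries if looks_like_chat_prompt(q)]
print(len(flagged))  # → 1
```

Anything flagged this way would still need manual review, but it narrows hundreds of thousands of rows down to the handful of anomalies Packer describes.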

"Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console."

AI

Slashdot Reader Mocks Databricks 'Context-Aware AI Assistant' for Odd Bar Chart 17

Long-time Slashdot reader theodp took a good look at the images on a promotional web page for Databricks' "context-aware AI assistant": If there was an AI Demo Hall of Shame, the first inductee would have to be Amazon. Their demo tried to support its CEO's claims that Amazon Q Code Transformation AI saved it 4,500 developer-years and an additional $260 million in "annualized efficiency gains" by automatically and accurately upgrading code to a more current version of Java. But it showcased a program that didn't even spell "Java" correctly. (It was instead called 'Jave')...

Today's nominee for the AI Demo Hall of Shame is analytics platform Databricks, for the NYC Taxi Trips Analysis it's been showcasing on its Data Science page since last November. It earns the nomination not only for its choice of a completely trivial case study that requires no 'Data Science' skills — find and display the ten most expensive and longest taxi rides — but also for the horrible AI-generated bar chart used to present the results of that simple ranking, which deserves its own spot in the Graph Hall of Shame. In response to a prompt of "Now create a new bar chart with matplotlib for the most expensive trips," the Databricks AI Assistant dutifully complies with the ill-advised request, spewing out Python code to display the ten rides on a nonsensical bar chart whose continuous x-axis hides points sharing the same distance. (One might also question why no annotation is provided to call out or explain the three trips with a distance of 0 miles that are among the ten most expensive rides, with fares of $260, $188, and $105.)
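The x-axis flaw described above is easy to reproduce. The sketch below uses made-up trip data (the distances and fares are illustrative placeholders, not Databricks' actual dataset): when trip distance is passed directly to `matplotlib`'s `bar()` as a continuous x coordinate, bars for trips that share a distance — such as the three 0-mile rides — are drawn on top of each other, so fewer than ten bars are visible. Plotting one bar per trip at categorical positions and labeling the ticks with the distances keeps all ten rides visible.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical "ten most expensive trips"; note the three 0-mile rides.
distances = [0.0, 0.0, 0.0, 1.2, 1.2, 3.5, 5.1, 8.0, 8.0, 12.4]
fares = [260, 188, 105, 95, 90, 88, 85, 80, 78, 75]

# Flawed approach: distance as a continuous x-axis. Bars sharing a
# distance overlap at the same x position, hiding each other.
fig, ax = plt.subplots()
ax.bar(distances, fares)
visible_positions = len({patch.get_x() for patch in ax.patches})

# Better approach: one categorical slot per trip, ticks labeled with
# the distance, so ties (including the 0-mile trips) stay visible.
fig2, ax2 = plt.subplots()
positions = range(len(fares))
ax2.bar(positions, fares)
ax2.set_xticks(list(positions))
ax2.set_xticklabels([f"{d} mi" for d in distances], rotation=45)

# Ten bars were drawn, but only six distinct x positions are visible.
print(visible_positions, len(ax.patches))
```

Running this shows ten bar patches collapsing onto six distinct x positions in the flawed chart, which is exactly why the ranking's ties vanish from view.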

Looked at with a critical eye, these examples — used to sell data scientists, educators, management, investors, and Wall Street on AI — would likely raise eyebrows rather than impress their intended audiences.
Education

AI-Generated Lesson Plans Fall Short On Inspiring Students, Promoting Critical Thinking (theconversation.com) 50

An anonymous reader quotes a report from The Conversation: When teachers rely on commonly used artificial intelligence chatbots to devise lesson plans, it does not result in more engaging, immersive or effective learning experiences compared with existing techniques, we found in our recent study. The AI-generated civics lesson plans we analyzed also left out opportunities for students to explore the stories and experiences of traditionally marginalized people. The allure of generative AI as a teaching aid has caught the attention of educators. A Gallup survey from September 2025 found that 60% of K-12 teachers are already using AI in their work, with the most common reported use being teaching preparation and lesson planning. [...]

For our research, we began collecting and analyzing AI-generated lesson plans to get a sense of what kinds of instructional plans and materials these tools provide to teachers. We decided to focus on AI-generated lesson plans for civics education because it is essential for students to learn productive ways to participate in the U.S. political system and engage with their communities. To collect data for this study, in August 2024 we prompted three GenAI chatbots -- the GPT-4o model of ChatGPT, Google's Gemini 1.5 Flash model and Microsoft's latest Copilot model -- to generate two sets of lesson plans for eighth grade civics classes based on Massachusetts state standards. One was a standard lesson plan and the other a highly interactive lesson plan.

We garnered a dataset of 311 AI-generated lesson plans, featuring a total of 2,230 activities for civic education. We analyzed the dataset using two frameworks designed to assess educational material: Bloom's taxonomy and Banks' four levels of integration of multicultural content. Bloom's taxonomy is a widely used educational framework that distinguishes between "lower-order" thinking skills, including remembering, understanding and applying, and "higher-order" thinking skills -- analyzing, evaluating and creating. Using this framework to analyze the data, we found 90% of the activities promoted only a basic level of thinking for students. Students were encouraged to learn civics through memorizing, reciting, summarizing and applying information, rather than through analyzing and evaluating information, investigating civic issues or engaging in civic action projects.

When examining the lesson plans using Banks' four levels of integration of multicultural content model (PDF), which was developed in the 1990s, we found that the AI-generated civics lessons featured a rather narrow view of history -- often leaving out the experiences of women, Black Americans, Latinos and Latinas, Asian and Pacific Islanders, disabled individuals and other groups that have long been overlooked. Only 6% of the lessons included multicultural content. These lessons also tended to focus on heroes and holidays rather than deeper explorations of understanding civics through multiple perspectives. Overall, we found the AI-generated lesson plans to be decidedly boring, traditional and uninspiring. If civics teachers used these AI-generated lesson plans as is, students would miss out on active, engaged learning opportunities to build their understanding of democracy and what it means to be a citizen.

Transportation

Miami Is Testing a Self-Driving Police Car That Can Launch Drones (thedrive.com) 47

Miami-Dade County is piloting a self-driving police car built by PolicingLab and powered by Perrone Robotics, equipped with 360-degree cameras, AI analytics, license plate readers, and even drone-launch capabilities. The Drive reports: "Designed as a force multiplier, the PUG combines advanced autonomy from Perrone Robotics with AI-driven analytics, real-time crime data, and a suite of sensors including 360-degree cameras, thermal imaging, license plate recognition, and drone launch capabilities," [says the PolicingLab's announcement.] "Its role: extend deputy resources, improve efficiency, and enhance community safety without additional cost to Miami-Dade taxpayers," it continued.

For starters, this is merely a pilot program being sponsored by PolicingLab, not a standard addition to the department's fleet. And second, at least initially, it's being soft-launched as a feeler for the Sheriff's public affairs folks. It'll be posted up at public and media events in order to "gather feedback" before the department considers whether to press it into service. Once it's actually brought online, PolicingLab says the squad car will offer several benefits to the department: "The 12-month pilot will evaluate outcomes such as improved response times, enhanced deterrence, officer safety, and stronger public trust," it said. "Results will inform whether and how the program expands, potentially serving as a national model for agencies across the country."

In other words, PolicingLab expects that the data collected about real-world policing will more than offset the costs of building and supporting the car in the long run, but if these are ever pressed into regular service, you can bet they'll come with hefty subscription and support costs, even if they do eliminate expensive human labor (and judgment) from the situation.
