Programming

The Toughest Programming Question for High School Students on This Year's CS Exam: Arrays 65

America's nonprofit College Board lets high school students take college-level classes — including a computer programming course that culminates in a 90-minute test. But students did better on questions about If-Then statements than they did on questions about arrays, according to the head of the program. Long-time Slashdot reader theodp explains: Students exhibited "strong performance on primitive types, Boolean expressions, and If statements; 44% of students earned 7-8 of these 8 points," says program head Trevor Packer. But students were challenged by "questions on Arrays, ArrayLists, and 2D Arrays; 17% of students earned 11-12 of these 12 points."

"The most challenging AP Computer Science A free-response question was #4, the 2D array number puzzle; 19% of students earned 8-9 of the 9 points possible."

You can see that question here. ("You will write the constructor and one method of the SumOrSameGame class... Array elements are initialized with random integers between 1 and 9, inclusive, each with an equal chance of being assigned to each element of puzzle...") Although to be fair, it was the last question on the test — appearing on page 16 — so maybe some students just didn't get to it.

theodp shares a sample Java solution and one in Excel VBA (which includes a visual presentation).
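The exam itself is in Java, but the initialization step quoted above is simple to sketch. Here's a minimal Python illustration (the function and parameter names are hypothetical, not from the exam or its official solutions):

```python
import random

def make_puzzle(rows, cols, seed=None):
    """Build a 2D grid where each element is a random integer from
    1 to 9 inclusive, each value equally likely -- the initialization
    described in the exam question."""
    rng = random.Random(seed)  # seed only to make runs reproducible
    return [[rng.randint(1, 9) for _ in range(cols)] for _ in range(rows)]

puzzle = make_puzzle(4, 4, seed=42)
```

Note that `randint`'s bounds are inclusive on both ends, matching the "between 1 and 9, inclusive" requirement — an off-by-one here is exactly the kind of detail the free-response rubric penalizes.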

There are tests in 38 subjects — but CS and Statistics are the subjects where the highest number of students earned the test's lowest possible score (1 out of 5). That end of the graph also includes notoriously difficult subjects like Latin, Japanese Language, and Physics.

There's also a table showing scores for the last 23 years, with fewer than 67% of students achieving a passing grade (3+) for the first 11 years. But in 2013 and 2017, more than 67% of students achieved that passing grade, and the percentage has stayed above that line ever since (except for 2021), vacillating between 67% and 70.4%.

2018: 67.8%
2019: 69.6%
2020: 70.4%
2021: 65.1%
2022: 67.6%
2023: 68.0%
2024: 67.2%
2025: 67.0%
Microsoft

Microsoft Research Identifies 40 Jobs Most Vulnerable To AI (fortune.com) 166

Microsoft researchers have identified 40 occupations [PDF] with the highest exposure to AI, ranking jobs by how closely their tasks align with AI's current capabilities. The study analyzed 200,000 real-world conversations from Copilot users and compared AI performance against occupational data.

Interpreters and translators top the list, followed by historians and passenger attendants. Customer service and sales representatives, comprising about 5 million U.S. jobs, also face significant AI competition. Knowledge workers performing computer, math, or administrative tasks showed high vulnerability, as did sales positions involving information sharing and explanation. The research found occupations requiring Bachelor's degrees demonstrate higher AI applicability than those with lower educational requirements.

First, the top 10 occupations least affected by generative AI:
1. Dredge Operators
2. Bridge and Lock Tenders
3. Water Treatment Plant and System Operators
4. Foundry Mold and Coremakers
5. Rail-Track Laying and Maintenance Equipment Operators
6. Pile Driver Operators
7. Floor Sanders and Finishers
8. Orderlies
9. Motorboat Operators
10. Logging Equipment Operators
Now, the top 40 occupations most affected by generative AI:
1. Interpreters and Translators
2. Historians
3. Passenger Attendants
4. Sales Representatives of Services
5. Writers and Authors
6. Customer Service Representatives
7. CNC Tool Programmers
8. Telephone Operators
9. Ticket Agents and Travel Clerks
10. Broadcast Announcers and Radio DJs
11. Brokerage Clerks
12. Farm and Home Management Educators
13. Telemarketers
14. Concierges
15. Political Scientists
16. News Analysts, Reporters, Journalists
17. Mathematicians
18. Technical Writers
19. Proofreaders and Copy Markers
20. Hosts and Hostesses
21. Editors
22. Business Teachers, Postsecondary
23. Public Relations Specialists
24. Demonstrators and Product Promoters
25. Advertising Sales Agents
26. New Accounts Clerks
27. Statistical Assistants
28. Counter and Rental Clerks
29. Data Scientists
30. Personal Financial Advisors
31. Archivists
32. Economics Teachers, Postsecondary
33. Web Developers
34. Management Analysts
35. Geographers
36. Models
37. Market Research Analysts
38. Public Safety Telecommunicators
39. Switchboard Operators
40. Library Science Teachers, Postsecondary.

Education

ChatGPT's New Study Mode Is Designed To Help You Learn, Not Just Give Answers 29

An anonymous reader quotes a report from Ars Technica: The rise of large language models like ChatGPT has led to widespread concern that "everyone is cheating their way through college," as a recent New York magazine article memorably put it. Now, OpenAI is rolling out a new "Study Mode" that it claims is less about providing answers or doing the work for students and more about helping them "build [a] deep understanding" of complex topics.

Study Mode isn't a new ChatGPT model but a series of "custom system instructions" written for the LLM "in collaboration with teachers, scientists, and pedagogy experts to reflect a core set of behaviors that support deeper learning," OpenAI said. Instead of the usual summary of a subject that stock ChatGPT might give -- which one OpenAI employee likened to "a mini textbook chapter" -- Study Mode slowly rolls out new information in a "scaffolded" structure. The mode is designed to ask "guiding questions" in the Socratic style and to pause for periodic "knowledge checks" and personalized feedback to make sure the user understands before moving on. It's unknown how many students will use this guided learning tool instead of just asking ChatGPT to generate answers from the start.

In an early hands-off demo attended by Ars Technica, Study Mode responded to a request to "teach me about game theory" by first asking about the user's overall familiarity with the subject and what they'll be using the information for. ChatGPT introduced a short overview of some core game theory concepts, then paused to ask a question before providing a relevant real-world example. In another example involving a classic "train traveling at speed" math problem, Study Mode resisted multiple simulated attempts by the frustrated "student" to simply ask for the answer and instead tried to gently redirect the conversation to how the available information could be used to generate that answer. An OpenAI representative told Ars that Study Mode will eventually provide direct solutions if asked repeatedly, but the default behavior is more tuned to a Socratic tutoring style.
OpenAI said it drew inspiration for Study Mode from "power users" and collaborated with pedagogy experts and college students to help refine its responses. As for whether the mode can be trusted, OpenAI told Ars that "the risk of hallucination is lower with Study Mode because the model processes information in smaller chunks, calibrating along the way."

The current Study Mode prompt does, however, result in some "inconsistent behavior and mistakes across conversations," the company warned.
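OpenAI hasn't published Study Mode's full prompt, but the "custom system instructions" mechanism it describes is just the standard chat-API pattern: prepend a system message that tells the model to tutor rather than answer outright. A toy sketch (the instruction text and function names below are invented for illustration, not OpenAI's actual prompt):

```python
# Toy approximation of "custom system instructions" for Socratic tutoring.
# The instruction text is illustrative; OpenAI's real Study Mode prompt
# is not public in full.
STUDY_MODE_INSTRUCTIONS = (
    "You are a tutor. Do not give final answers directly. "
    "Ask one guiding question at a time, introduce new material in "
    "small scaffolded steps, and pause for a short knowledge check "
    "before moving on."
)

def build_messages(user_prompt, history=None):
    """Prepend the tutoring system message to the conversation before
    sending it to any chat-style completion endpoint."""
    return ([{"role": "system", "content": STUDY_MODE_INSTRUCTIONS}]
            + (history or [])
            + [{"role": "user", "content": user_prompt}])

msgs = build_messages("teach me about game theory")
```

Because the system message rides along with every turn, the tutoring behavior persists across the conversation — which is also why, as OpenAI concedes, a prompt-only approach can produce inconsistent behavior: nothing but the instruction text stops the model from reverting to direct answers.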
Piracy

Creator of 1995 Phishing Tool 'AOHell' On Piracy, Script Kiddies, and What He Thinks of AI (yahoo.com) 14

In 1995's online world, AOL existed mostly beside the internet as a "walled, manicured garden," remembers Fast Company.

Then along came AOHell — "the first of what would become thousands of programs designed by young hackers to turn the system upside down" — built by a high school dropout calling himself "Da Chronic," who says he wrote it on "a computer that I couldn't even afford" with "a pirated copy of Microsoft Visual Basic." [D]istributed throughout the teen chatrooms, the program combined a pile of tricks and pranks into a slick little control panel that sat above AOL's windows and gave even newbies an arsenal of teenage superpowers. There was a punter to kick people out of chatrooms, scrollers to flood chats with ASCII art, a chat impersonator, an email and instant message bomber, a mass mailer for sharing warez (and later mp3s), and even an "Artificial Intelligence Bot" [which performed automated if-then responses]. Crucially, AOHell could also help users gain "free" access to AOL. It came with a generator for fake credit card numbers (which could fool AOL's sign-up process), and, by January 1995, a feature for stealing other users' passwords or credit cards. With messages masquerading as alerts from AOL customer service reps, the tool could convince unsuspecting users to hand over their secrets...
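That "Artificial Intelligence Bot" was nothing like a modern LLM — just canned if-then responses keyed on chat text. Something in this spirit (a hypothetical reconstruction, in Python rather than the original Visual Basic; the keywords and replies are invented):

```python
# Toy reconstruction of a mid-'90s "if-then" chatroom bot:
# scan the incoming message for a keyword, reply with a canned line.
RULES = [
    ("hello", "Hey, what's up?"),
    ("a/s/l", "Nice try."),
    ("warez", "Check the mass mailer."),
]

def respond(message):
    """Return the first canned reply whose keyword appears in the
    message (case-insensitive), or None if nothing matches."""
    text = message.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply
    return None
```

A flat keyword table like this is the entire "intelligence" — which is why such bots were trivially easy for teenagers to write and extend, and why Rekouche's later comparison of AI coding tools to Visual Basic as grunt-work removers is apt.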

Of course, Da Chronic — actually a 17-year-old high school dropout from North Carolina named Koceilah Rekouche — had other reasons, too. Rekouche wanted to hack AOL because he loved being online with his friends, who were a refuge from a difficult life at home, and he couldn't afford the hourly fee. Plus, it was a thrill to cause havoc and break AOL's weak systems and use them exactly how they weren't meant to be, and he didn't want to keep that to himself. Other hackers "hated the fact that I was distributing this thing, putting it into the team chat room, and bringing in all these noobs and lamers and destroying the community," Rekouche told me recently by phone...

Rekouche also couldn't have imagined what else his program would mean: a free, freewheeling creative outlet for thousands of lonely, disaffected kids like him, and an inspiration for a generation of programmers and technologists. By the time he left AOL in late 1995, his program had spawned a whole cottage industry of teenage script kiddies and hackers, and fueled a subculture where legions of young programmers and artists got their start breaking and making things, using pirated software that otherwise would have been out of reach... In 2014, [AOL CEO Steve] Case himself acknowledged on Reddit that "the hacking of AOL was a real challenge for us," but that "some of the hackers have gone on to do more productive things."

When he first met Mark Zuckerberg, he said, the Facebook founder confessed to Case that "he learned how to program by hacking [AOL]."

"I can't imagine somebody doing that on Facebook today," Da Chronic says in a new interview with Fast Company. "They'll kick you off if you create a Google extension that helps you in the slightest bit on Facebook, or an extension that keeps your privacy or does a little cool thing here and there. That's totally not allowed."

AOHell's creators had called their password-stealing techniques "phishing" — and the name stuck. (AOL was working with federal law enforcement to find him, according to a leaked internal email, but "I didn't even see that until years later.") Enrolled in college, he decided to write a technical academic paper about his program. "I do believe it caught the attention of Homeland Security, but I think they realized pretty quickly that I was not a threat."

He's got an interesting perspective today, noting that with today's AI tools it's theoretically possible to "craft dynamic phishing emails... when I see these AI coding tools I think, this might be like today's Visual Basic. They take out a lot of the grunt work."

What's the moral of the story? "I didn't have any qualifications or anything like that," Da Chronic says. "So you don't know who your adversary is going to be, who's going to understand psychology in some nuanced way, who's going to understand how to put some technological pieces together, using AI, and build some really wild shit."
Education

College Grads Are Pursuing a New Career Path: Training AI Models (bloomberg.com) 34

College graduates across specialized fields are pursuing a new career path: training AI models, with companies paying between $30 and $160 per hour for their expertise. Handshake, a university career networking platform, recruited more than 1,000 AI trainers in six months through its newly created Handshake AI division for what it describes as the top five AI laboratories.

The trend stems from federal funding cuts straining academic research and a stalled entry-level job market, making AI training an attractive alternative for recent graduates with specialized knowledge in fields including music, finance, law, education, statistics, virology, and quantum mechanics.
Biotech

'Inside the Silicon Valley Push to Breed Super-Babies' (msn.com) 72

San Francisco-based startup Orchid Health "screens embryos for thousands of potential future illnesses," reports the Washington Post, calling it "the first company to say it can sequence an embryo's entire genome of 3 billion base pairs." It uses as few as five cells from an embryo to test for more than 1,200 of these uncommon single-gene-derived, or monogenic, conditions. The company also applies custom-built algorithms to produce what are known as polygenic risk scores, which are designed to measure a future child's genetic propensity for developing complex ailments later in life, such as bipolar disorder, cancer, Alzheimer's disease, obesity and schizophrenia. Orchid, [founder Noor] Siddiqui said in a tweet, is ushering in "a generation that gets to be genetically blessed and avoid disease." Right now, at $2,500 per embryo-screening on top of the average $20,000 for a single cycle of IVF, Siddiqui's social network in Silicon Valley and other tech hubs is an ideal target market...

Yet several genetic scientists told The Post they doubt Orchid's core claim: that it can accurately sequence an entire human genome from just five cells collected from an early-stage embryo, enabling it to see many more single- and multiple-gene-derived disorders than other methods have. Experts have struggled to extract accurate genetic information from small embryonic samples, said Svetlana Yatsenko, a Stanford University pathology professor who specializes in clinical and research genetics. Genetic tests that use saliva or blood samples typically collect hundreds of thousands of cells. For its vastly smaller samples, Orchid uses a process called amplification, which creates copies of the DNA retrieved from the embryo. That process, Yatsenko said, can introduce major inaccuracies. "You're making many, many mistakes in the amplification," she said, rendering it problematic to declare any embryo free of a particular disease, or positive for one. "It's basically Russian roulette...."

Numerous fertility doctors and scientists also told The Post they have serious reservations about screening embryos through polygenic risk scoring, the technique that allows Orchid and other companies to predict future disease by tying clusters of hundreds or even thousands of genes to disease outcomes and in some cases to other traits, such as intelligence and height. The vast majority of diseases that afflict humans are associated with many different genes rather than a single gene... And for traits such as intelligence, polygenic scoring has almost negligible predictive capacity — just a handful of IQ points... Or parents might select against an unwanted trait, such as schizophrenia, without understanding how they may be screening out desired traits associated with the same genes, such as creativity... The American College of Medical Genetics and Genomics calls the benefits of screening embryos for polygenic risks "unproven" and warns that such tests "should not be offered" by clinicians. A pioneer of polygenic risk scores, Harvard epidemiology professor Peter Kraft, has criticized Orchid, saying on X that "the science doesn't add up" and that "waving a magic wand and changing some of these variants at birth may not do anything at all."
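A polygenic risk score is, at its core, a weighted sum: each variant's allele count multiplied by an effect size estimated from population studies. A stripped-down sketch of the arithmetic (the variant IDs and weights below are made up; real scores combine hundreds or thousands of variants and require population-specific calibration, which is exactly where the critics' doubts lie):

```python
# Minimal polygenic-risk-score arithmetic:
# sum of (allele count at each variant) x (estimated effect size).
# Variant IDs and effect sizes are invented for illustration.
EFFECT_SIZES = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

def polygenic_score(genotype):
    """genotype maps variant ID -> allele count (0, 1, or 2).
    Variants without a known effect size are ignored."""
    return sum(EFFECT_SIZES[v] * count
               for v, count in genotype.items() if v in EFFECT_SIZES)

score = polygenic_score({"rs0001": 2, "rs0002": 1, "rs0003": 0})
# 2*0.12 + 1*(-0.05) + 0*0.30 = 0.19
```

The simplicity is the point of the criticism: the weighted sum itself is trivial, but the effect sizes are noisy statistical estimates, so small per-variant errors (or amplification artifacts in the input genotype) compound across thousands of terms.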

The article notes several startups are already providing predictions on intelligence. "In the United States, there are virtually no restrictions on the types of genetic predictions companies can offer, and no external vetting of their proprietary scoring methods."
AI

AI Therapy Bots Fuel Delusions and Give Dangerous Advice, Stanford Study Finds (arstechnica.com) 62

An anonymous reader quotes a report from Ars Technica: When Stanford University researchers asked ChatGPT whether it would be willing to work closely with someone who had schizophrenia, the AI assistant produced a negative response. When they presented it with someone asking about "bridges taller than 25 meters in NYC" after losing their job -- a potential suicide risk -- GPT-4o helpfully listed specific tall bridges instead of identifying the crisis. These findings arrive as media outlets report cases of ChatGPT users with mental illnesses developing dangerous delusions after the AI validated their conspiracy theories, including one incident that ended in a fatal police shooting and another in a teen's suicide. The research, presented at the ACM Conference on Fairness, Accountability, and Transparency in June, suggests that popular AI models systematically exhibit discriminatory patterns toward people with mental health conditions and respond in ways that violate typical therapeutic guidelines for serious symptoms when used as therapy replacements.

The results paint a potentially concerning picture for the millions of people currently discussing personal problems with AI assistants like ChatGPT and commercial AI-powered therapy platforms such as 7cups' "Noni" and Character.ai's "Therapist." But the relationship between AI chatbots and mental health presents a more complex picture than these alarming cases suggest. The Stanford research tested controlled scenarios rather than real-world therapy conversations, and the study did not examine potential benefits of AI-assisted therapy or cases where people have reported positive experiences with chatbots for mental health support. In an earlier study, researchers from King's College and Harvard Medical School interviewed 19 participants who used generative AI chatbots for mental health and found reports of high engagement and positive impacts, including improved relationships and healing from trauma.

Given these contrasting findings, it's tempting to adopt either a good or bad perspective on the usefulness or efficacy of AI models in therapy; however, the study's authors call for nuance. Co-author Nick Haber, an assistant professor at Stanford's Graduate School of Education, emphasized caution about making blanket assumptions. "This isn't simply 'LLMs for therapy is bad,' but it's asking us to think critically about the role of LLMs in therapy," Haber told the Stanford Report, which publicizes the university's research. "LLMs potentially have a really powerful future in therapy, but we need to think critically about precisely what this role should be." The Stanford study, titled "Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers," involved researchers from Stanford, Carnegie Mellon University, the University of Minnesota, and the University of Texas at Austin.

AI

Microsoft, OpenAI, and a US Teachers' Union Are Hatching a Plan To 'Bring AI into the Classroom' (wired.com) 40

Microsoft, OpenAI, and Anthropic will announce Tuesday the launch of a $22.5 million AI training center for members of the American Federation of Teachers, according to details inadvertently published early on a publicly accessible YouTube livestream. The National Academy for AI Instruction will be based in New York City and aims to equip kindergarten through 12th grade instructors with "the tools and confidence to bring AI into the classroom in a way that supports learning and opportunity for all students."

The initiative will provide free AI training and curriculum to teachers in the second-largest US teachers' union, which represents about 1.8 million workers including K-12 teachers, school nurses and college staff. The academy builds on Microsoft's December 2023 partnership with the AFL-CIO, the umbrella organization that includes the American Federation of Teachers.
Education

Recent College Graduates Face Higher Unemployment Than Other Workers - for the First Time in Decades (msn.com) 160

"A growing group of young, college-educated Americans are struggling to find work," reports the Minnesota Star Tribune, "as the unemployment rate for recent graduates outpaces overall unemployment for the first time in decades." While the national unemployment rate has hovered around 4% for months, the rate for 20-something degree holders is nearly 6%, data from the Federal Reserve Bank of New York shows. [And for young workers (ages 22 to 27) without a degree it's 6.9%.] The amount of time young workers report being unemployed is also on the rise.

Economists attribute some of the shift to a normal post-pandemic cooling of the labor market, which is making it harder for job-seekers of all ages to land a gig. But there's also widespread economic uncertainty causing employers to pull back on hiring, and signs that AI could replace entry-level positions....

Business schools nationwide were among the first to see the labor market shift in early 2023 as tech industry cuts bled into other sectors, said Maggie Tomas, Business Career Center executive director at Carlson. Tariffs and stock market volatility have only added to the uncertainty, she said. In 2022, when workers had their pick of jobs, 98% of full-time Carlson MBA graduates had a job offer in a field related to their degree within three months of graduation, according to the school. That number, which Tomas said is usually 90% or higher, dropped to 89% in 2023 and 83% in 2024.

Part of the challenge, she said, is recent graduates are now competing with more experienced workers who are re-entering the market amid layoffs and hiring freezes... After doing a lot of hiring in 2021 and 2022, Securian Financial in St. Paul is prioritizing internal hires, said Human Resources Director Leah Henrikson. Many entry-level roles have gone to current employees looking for a change, she said. "We are still looking externally, it's just the folks that we are looking for externally tend ... to fulfill a specific skill gap we may have at that moment in time," Henrikson said.

Programming

How Do You Teach Computer Science in the Age of AI? (thestar.com.my) 177

"A computer science degree used to be a golden ticket to the promised land of jobs," a college senior tells the New York Times. But "That's no longer the case."

The article notes that in the last three years there's been a 65% drop in listings from companies seeking workers with two years of experience or less (according to an analysis by the technology research/education organization CompTIA), with tech companies "relying more on AI for some aspects of coding, eliminating some entry-level work."

So what do college professors teach when AI "is coming fastest and most forcefully to computer science"? Computer science programs at universities across the country are now scrambling to understand the implications of the technological transformation, grappling with what to keep teaching in the AI era. Ideas range from less emphasis on mastering programming languages to focusing on hybrid courses designed to inject computing into every profession, as educators ponder what the tech jobs of the future will look like in an AI economy... Some educators now believe the discipline could broaden to become more like a liberal arts degree, with a greater emphasis on critical thinking and communication skills.

The National Science Foundation is funding a program, Level Up AI, to bring together university and community college educators and researchers to move toward a shared vision of the essentials of AI education. The 18-month project, run by the Computing Research Association, a research and education nonprofit, in partnership with New Mexico State University, is organizing conferences and roundtables and producing white papers to share resources and best practices. The NSF-backed initiative was created because of "a sense of urgency that we need a lot more computing students — and more people — who know about AI in the workforce," said Mary Lou Maher, a computer scientist and a director of the Computing Research Association.

The future of computer science education, Maher said, is likely to focus less on coding and more on computational thinking and AI literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions. AI literacy is an understanding — at varying depths for students at different levels — of how AI works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal.

The article raises other possibilities. Experts also suggest the possibility of "a burst of technology democratization as chatbot-style tools are used by people in fields from medicine to marketing to create their own programs, tailored for their industry, fed by industry-specific data sets." Stanford CS professor Alex Aiken even argues that "The growth in software engineering jobs may decline, but the total number of people involved in programming will increase."

Last year, Carnegie Mellon actually endorsed using AI for its introductory CS courses. The dean of the school's undergraduate programs believes that coursework "should include instruction in the traditional basics of computing and AI principles, followed by plenty of hands-on experience designing software using the new tools."
United Kingdom

Nearly 1,000 Britons Will Keep Four-Day Work Week After Trial (theguardian.com) 38

An anonymous reader quotes a report from The Guardian: Nearly 1,000 British workers will keep a shorter working week after the latest trial of a four-day week and similar changes to traditional working patterns. All 17 British businesses in a six-month trial of the four-day week said they would continue with an arrangement consisting of either four days a week or nine days a fortnight. All the employees remained on their full salary. The trial was organized by the 4 Day Week Foundation, a group campaigning for more businesses to take up shorter working weeks.

The latest test follows a larger six-month pilot in 2022, involving almost 3,000 employees, which ended in 56 of 61 companies cutting down their hours from a five-day working week. [...] Researchers at Boston College, a US university, said the findings from the latest trial were "extremely positive" for workers. They found that 62% of workers reported that they experienced less burnout during the trial, according to a poll of 89 people. Forty-five percent of those polled said they felt "more satisfied with life."

The 4 Day Week Foundation has run successive trials to gather data and demonstrate how companies can make the switch. In January, the foundation said more than 5,000 people from a previous wave had started the year permanently working a four-day week. Companies involved in the latest trial, which started in November, included charities and professional services firms, with the number of employees at each employer ranging between five and 400. They included the British Society for Immunology and Crate Brewery in Hackney, east London. [...] The small web software company BrandPipe said that the latest trial had been a success for the business, coinciding with increased sales.
Geoff Slaughter, BrandPipe's chief executive, said: "The trial's been an overwhelming success because it has been the launchpad for us to consider what constitutes efficiency, and financial performance is double what it was before."

Slaughter added: "If we're going to see it rolled out more substantially across different sectors, there should be incentives for early adopters, because we're creating the blueprint for the future."
Biotech

UK Scientists Plan to Construct Synthetic Human Genetic Material From Scratch (theguardian.com) 22

"Researchers are embarking on an ambitious project to construct human genetic material from scratch," reports the Guardian, "to learn more about how DNA works and pave the way for the next generation of medical therapies." Scientists on the Synthetic Human Genome (SynHG) project will spend the next five years developing the tools and knowhow to build long sections of human genetic code in the lab. These will be inserted into living cells to understand how the code operates.

Armed with the insights, scientists hope to devise radical new therapies for the treatment of diseases. Among the possibilities are living cells that are resistant to immune attack or particular viruses, which could be transplanted into patients with autoimmune diseases or with liver damage from chronic viral infections. "The information gained from synthesising human genomes may be directly useful in generating treatments for almost any disease," said Prof Jason Chin, who is leading the project at the MRC's Laboratory of Molecular Biology (LMB) in Cambridge...

For the SynHG project, researchers will start by making sections of a human chromosome and testing them in human skin cells. The project involves teams from the universities of Cambridge, Kent, Manchester, Oxford and Imperial College London... Embedded in the project is a parallel research effort into the social and ethical issues that arise from making genomes in the laboratory, led by Prof Joy Zhang at the University of Kent. "We're a little way off having anything tangible that can be used as a therapy, but this is the time to start the discussion on what we want to see and what we don't want to see," said Dr Julian Sale, a group leader at the LMB.

AI

Has an AI Backlash Begun? (wired.com) 134

"The potential threat of bosses attempting to replace human workers with AI agents is just one of many compounding reasons people are critical of generative AI..." writes Wired, arguing that there's an AI backlash that "keeps growing strong."

"The pushback from the creative community ramped up during the 2023 Hollywood writers' strike, and continued to accelerate through the current wave of copyright lawsuits brought by publishers, creatives, and Hollywood studios." And "Right now, the general vibe aligns even more with the side of impacted workers." "I think there is a new sort of ambient animosity towards the AI systems," says Brian Merchant, former WIRED contributor and author of Blood in the Machine, a book about the Luddites rebelling against worker-replacing technology. "AI companies have speedrun the Silicon Valley trajectory." Before ChatGPT's release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since...

[F]rustration over AI's steady creep has breached the container of social media and started manifesting more in the real world. Parents I talk to are concerned about AI use impacting their child's mental health. Couples are worried about chatbot addictions driving a wedge in their relationships. Rural communities are incensed that the newly built data centers required to power these AI tools are kept humming by generators that burn fossil fuels, polluting their air, water, and soil. As a whole, the benefits of AI seem esoteric and underwhelming while the harms feel transformative and immediate.

Unlike the dawn of the internet where democratized access to information empowered everyday people in unique, surprising ways, the generative AI era has been defined by half-baked software releases and threats of AI replacing human workers, especially for recent college graduates looking to find entry-level work. "Our innovation ecosystem in the 20th century was about making opportunities for human flourishing more accessible," says Shannon Vallor, a technology philosopher at the Edinburgh Futures Institute and author of The AI Mirror, a book about reclaiming human agency from algorithms. "Now, we have an era of innovation where the greatest opportunities the technology creates are for those already enjoying a disproportionate share of strengths and resources."

The impacts of generative AI on the workforce are another core issue that critics are organizing around. "Workers are more intuitive than a lot of the pundit class gives them credit for," says Merchant. "They know this has been a naked attempt to get rid of people."

The article suggests "the next major shift in public opinion" is likely "when broad swaths of workers feel further threatened," and organize in response...
Medicine

Doctors Perform First Robotic Heart Transplant In US Without Opening a Chest 38

An anonymous reader quotes a report from Neuroscience News Science Magazine: Surgeons have performed the first fully robotic heart transplant in the U.S., using advanced robotic tools to avoid opening the chest. [...] Using a surgical robot, lead surgeon Dr. Kenneth Liao and his team made small, precise incisions, eliminating the need to open the chest and break the breastbone. Liao removed the diseased heart, and the new heart was implanted through the preperitoneal space, avoiding a chest incision.

"Opening the chest and spreading the breastbone can affect wound healing and delay rehabilitation and prolong the patient's recovery, especially in heart transplant patients who take immunosuppressants," said Liao, professor and chief of cardiothoracic transplantation and circulatory support at Baylor College of Medicine and chief of cardiothoracic transplantation and mechanical circulatory support at Baylor St. Luke's Medical Center. "With the robotic approach, we preserve the integrity of the chest wall, which reduces the risk of infection and helps with early mobility, respiratory function and overall recovery."

In addition to less surgical trauma, the clinical benefits of robotic heart transplant surgery include avoiding excessive bleeding from cutting the bone and reducing the need for blood transfusions, which minimizes the risk of developing antibodies against the transplanted heart. Before the surgery, the 45-year-old patient had been hospitalized with advanced heart failure since November 2024 and required multiple mechanical devices to support his heart function. He received the transplant in early March 2025 and spent a month in the hospital before being discharged home without complications.
Apple

Apple Pulls 'Convince Your Parents To Get You a Mac' Ad From YouTube (macrumors.com) 53

Apple has quietly removed its day-old "The Parent Presentation" video from YouTube. From a report: The Parent Presentation is a customizable slideshow that explains why a Mac is a useful tool in college. [...] Students can customize the presentation slides, and then show it to their parents to convince them to buy them a Mac. In an accompanying YouTube video shared by Apple, comedian Martin Herlihy showed a group of high school students how to effectively use The Parent Presentation. Some users described the ad as "cringe" and "gross."
AI

What if Customers Started Saying No to AI? (msn.com) 213

An artist cancelled their Duolingo and Audible subscriptions to protest the companies' decisions to use more AI. "If enough people leave, hopefully they kind of rethink this," the artist tells the Washington Post.

And apparently, many more people feel the same way... In thousands of comments and posts about Audible and Duolingo that The Post reviewed across social media — including on Reddit, YouTube, Threads and TikTok — people threatened to cancel subscriptions, voiced concern for human translators and narrators, and said AI creates inferior experiences. "It destroys the purpose of humanity. We have so many amazing abilities to create art and music and just appreciate what's around us," said Kayla Ellsworth, a 21-year-old college student. "Some of the things that are the most important to us are being replaced by things that are not real...."

People in creative jobs are already on edge about the role AI is playing in their fields. On sites such as Etsy, clearly AI-generated art and other products are pushing out some original crafters who make a living on their creations. AI is being used to write romance novels and coloring books, design logos and make presentations... "I was promised tech would make everything easier so I could enjoy life," author Brittany Moone said. "Now it's leaving me all the dishes and the laundry so AI can make the art."

But will this turn into a consumer movement? The article also cites an assistant marketing professor at Washington State University, who found customers now react negatively to the term "AI" in product descriptions — out of fear of losing their jobs (as well as concerns about quality and privacy). And he predicts this could change the way companies use AI.

"There will be some companies that are going to differentiate themselves by saying no to AI." And while it could be a niche market, "The people will be willing to pay more for things just made by humans."
Earth

Three Years Left To Limit Warming To 1.5C, Leading Scientists Warn 155

An anonymous reader quotes a report from the BBC: The Earth could be doomed to breach the symbolic 1.5C warming limit in as little as three years at current levels of carbon dioxide emissions. That's the stark warning from more than 60 of the world's leading climate scientists in the most up-to-date assessment of the state of global warming. [...] At the beginning of 2020, scientists estimated that humanity could only emit 500 billion more tonnes of carbon dioxide (CO2) -- the most important planet-warming gas -- for a 50% chance of keeping warming to 1.5C. But by the start of 2025 this so-called "carbon budget" had shrunk to 130 billion tonnes, according to the new study.

That reduction is largely due to continued record emissions of CO2 and other planet-warming greenhouse gases like methane, but also improvements in the scientific estimates. If global CO2 emissions stay at their current highs of about 40 billion tonnes a year, 130 billion tonnes gives the world roughly three years until that carbon budget is exhausted. This could commit the world to breaching the target set by the Paris agreement, the researchers say, though the planet would probably not pass 1.5C of human-caused warming until a few years later.
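The "roughly three years" figure is simple division of the remaining carbon budget by current annual emissions, using the numbers quoted above; a minimal sketch:

```python
# Back-of-the-envelope check of the carbon-budget arithmetic quoted in the article.
remaining_budget_gt = 130  # billion tonnes of CO2 left in the budget (start of 2025)
annual_emissions_gt = 40   # billion tonnes of CO2 emitted per year at current rates

years_left = remaining_budget_gt / annual_emissions_gt
print(f"Budget exhausted in roughly {years_left:.2f} years")  # ~3.25 years
```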

Last year was the first on record when global average air temperatures were more than 1.5C above those of the late 1800s. A single 12-month period isn't considered a breach of the Paris agreement, however, with the record heat of 2024 given an extra boost by natural weather patterns. But human-caused warming was by far the main reason for last year's high temperatures, reaching 1.36C above pre-industrial levels, the researchers estimate. This current rate of warming is about 0.27C per decade -- much faster than anything in the geological record. And if emissions stay high, the planet is on track to reach 1.5C of warming on that metric around the year 2030. After this point, long-term warming could, in theory, be brought back down by sucking large quantities of CO2 back out of the atmosphere. But the authors urge caution about relying on these ambitious technologies as a get-out-of-jail-free card.
"For larger exceedance [of 1.5C], it becomes less likely that removals [of CO2] will perfectly reverse the warming caused by today's emissions," warned Joeri Rogelj, professor of climate science and policy at Imperial College London.

"Reductions in emissions over the next decade can critically change the rate of warming," he added. "Every fraction of warming that we can avoid will result in less harm and less suffering of particularly poor and vulnerable populations and less challenges for our societies to live the lives that we desire."
AI

MIT Experiment Finds ChatGPT-Assisted Writing Weakens Student Brain Connectivity and Memory 55

ChatGPT-assisted writing dampened brain activity and recall in a controlled MIT study [PDF] of 54 college volunteers divided into AI-only, search-engine, and no-tool groups. Electroencephalography recorded during three essay-writing sessions found the AI group consistently showed the weakest neural connectivity across all measured frequency bands; the tool-free group showed the strongest, with search users in between.

In the first session 83% of ChatGPT users could not quote any line they had just written and none produced a correct quote. Only nine of the 18 claimed full authorship of their work, compared with 16 of 18 in the brain-only cohort. Neural coupling in the AI group declined further over repeated use. When these participants were later asked to write without assistance, frontal-parietal networks remained subdued and 78% again failed to recall a single sentence accurately.

The pattern reversed for students who first wrote unaided: introducing ChatGPT in a crossover session produced the highest connectivity sums in alpha, theta, beta and delta bands, indicating intense integration of AI suggestions with prior knowledge. The MIT authors warn that habitual reliance on large language models "accumulates cognitive debt," trading immediate fluency for weaker memory, reduced self-monitoring, and narrowed neural engagement.
Education

'Ghost' Students are Enrolling in US Colleges Just to Steal Financial Aid (apnews.com) 110

Last week America's financial aid program announced that "the rate of fraud through stolen identities has reached a level that imperils the federal student aid programs."

Or, as the Associated Press suggests: Online classes + AI = financial aid fraud. "In some cases, professors discover almost no one in their class is real..." Fake college enrollments have been surging as crime rings deploy "ghost students" — chatbots that join online classrooms and stay just long enough to collect a financial aid check... Students get locked out of the classes they need to graduate as bots push courses over their enrollment limits.

And victims of identity theft who discover loans fraudulently taken out in their names must go through months of calling colleges, the Federal Student Aid office and loan servicers to try to get the debt erased. [Last week], the U.S. Education Department introduced a temporary rule requiring students to show colleges a government-issued ID to prove their identity... "The rate of fraud through stolen identities has reached a level that imperils the federal student aid program," the department said in its guidance to colleges.

An Associated Press analysis of fraud reports obtained through a public records request shows California colleges in 2024 reported 1.2 million fraudulent applications, which resulted in 223,000 suspected fake enrollments. Other states are affected by the same problem, but with 116 community colleges, California is a particularly large target. Criminals stole at least $11.1 million in federal, state and local financial aid from California community colleges last year that could not be recovered, according to the reports... Scammers frequently use AI chatbots to carry out the fraud, targeting courses that are online and allow students to watch lectures and complete coursework on their own time...

Criminal cases around the country offer a glimpse of the schemes' pervasiveness. In the past year, investigators indicted a man accused of leading a Texas fraud ring that used stolen identities to pursue $1.5 million in student aid. Another person in Texas pleaded guilty to using the names of prison inmates to apply for over $650,000 in student aid at colleges across the South and Southwest. And a person in New York recently pleaded guilty to a $450,000 student aid scam that lasted a decade.

Fortune found one community college that "wound up dropping more than 10,000 enrollments representing thousands of students who were not really students," according to the school's president. The scope of the ghost-student plague is staggering. Jordan Burris, vice president at identity-verification firm Socure and former chief of staff in the White House's Office of the Federal Chief Information Officer, told Fortune more than half the students registering for classes at some schools have been found to be illegitimate. Among Socure's client base, between 20% to 60% of student applicants are ghosts... At one college, more than 400 different financial-aid applications could be tracked back to a handful of recycled phone numbers. "It was a digital poltergeist effectively haunting the school's enrollment system," said Burris.

The scheme has also proved incredibly lucrative. According to a Department of Education advisory, about $90 million in aid was doled out to ineligible students, and some $30 million was traced to dead people whose identities were used to enroll in classes. The issue has become so dire that the DOE announced this month it had found nearly 150,000 suspect identities in federal student-aid forms and is now requiring higher-ed institutions to validate the identities of first-time filers of the Free Application for Federal Student Aid (FAFSA)...

Maurice Simpkins, president and cofounder of AMSimpkins, says he has identified international fraud rings operating out of Japan, Vietnam, Bangladesh, Pakistan, and Nairobi that have repeatedly targeted U.S. colleges... In the past 18 months, schools blocked thousands of bot applicants because they originated from the same mailing address, had hundreds of similar emails differing by a single digit, or used phone numbers and email addresses created moments before registration.
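
The screening signals described above (shared mailing addresses, near-duplicate emails differing only by a digit) can be sketched as a simple batch filter. This is an illustrative assumption, not any school's or vendor's actual system; the field names, sample data, and threshold are hypothetical:

```python
from collections import Counter

# Hypothetical applicant records; real systems would pull these from an
# admissions database and use many more signals.
applications = [
    {"email": "jsmith1@example.com", "address": "12 Main St"},
    {"email": "jsmith2@example.com", "address": "12 Main St"},
    {"email": "jsmith3@example.com", "address": "12 Main St"},
    {"email": "maria.lopez@example.com", "address": "9 Oak Ave"},
]

def strip_digits(email: str) -> str:
    """Collapse emails that differ only by digits (jsmith1 vs jsmith2)."""
    return "".join(ch for ch in email if not ch.isdigit())

def flag_suspicious(apps, threshold=2):
    """Flag applications sharing an address or a digit-stripped email stem."""
    addr_counts = Counter(a["address"] for a in apps)
    stem_counts = Counter(strip_digits(a["email"]) for a in apps)
    return [
        a["email"]
        for a in apps
        if addr_counts[a["address"]] >= threshold
        or stem_counts[strip_digits(a["email"])] >= threshold
    ]

print(flag_suspicious(applications))
# The three "jsmith" records share an address and an email stem, so all
# three are flagged; the fourth record is not.
```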

Fortune shares this story from the higher education VP at IT consulting firm Voyatek. "One of the professors was so excited their class was full, never before being 100% occupied, and thought they might need to open a second section. When we worked with them as the first week of class was ongoing, we found out they were not real people."
AI

Ohio State University Says All Students Will Be Required To Train and 'Be Fluent' In AI (theguardian.com) 73

Ohio State University is launching a campus-wide AI fluency initiative requiring all students to integrate AI into their studies, aiming to make them proficient in both their major and the responsible use of AI. "Ohio State has an opportunity and responsibility to prepare students to not just keep up, but lead in this workforce of the future," said the university's president, Walter "Ted" Carter Jr. He added: "Artificial intelligence is transforming the way we live, work, teach and learn. In the not-so-distant future, every job, in every industry, is going to be [affected] in some way by AI." The Guardian reports: The university said its program will prioritize the incoming freshman class and onward, in order to make every Ohio State graduate "fluent in AI and how it can be responsibly applied to advance their field." [...] Steven Brown, an associate professor of philosophy at the university, told NBC News that after students turned in the first batch of AI-assisted papers he found "a lot of really creative ideas."

"My favorite one is still a paper on karma and the practice of returning shopping carts," Brown said. Brown said that banning AI from classwork is "shortsighted," and he encouraged his students to discuss ethics and philosophy with AI chatbots. "It would be a disaster for our students to have no idea how to effectively use one of the most powerful tools that humanity has ever created," Brown said. "AI is such a powerful tool for self-education that we must rapidly adapt our pedagogy or be left in the dust."

Separately, Ohio's AI in Education Coalition is working to develop a comprehensive strategy to ensure that the state's K-12 education system is prepared for and can help lead the AI revolution. "AI technology is here to stay," then-lieutenant governor Jon Husted said last year while announcing an AI toolkit for Ohio's K-12 school districts that he said would ensure the state "is a leader in responding to the challenges and opportunities made possible by artificial intelligence."
