AI

How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality (msn.com) 124

Some AI experts were reportedly shocked that ChatGPT still wasn't being fully tested for sycophancy as of last spring. "OpenAI did not see the scale at which disturbing conversations were happening," writes the New York Times — sharing what they learned after interviewing more than 40 current and former OpenAI employees, including safety engineers, executives, and researchers.

The team responsible for ChatGPT's tone had raised concerns about last spring's model (which the Times describes as "too eager to keep the conversation going and to validate the user with over-the-top language.") But they were overruled when A/B testing showed users kept coming back: Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits... OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling. Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.... The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died... One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems." But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population...

In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate those conversations. Experts agree the new model is safer.... Teams from across OpenAI worked on other new safety features: The chatbot now encourages users to take breaks during a long session. The company also now scans for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke's team analysed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15% showed "potentially heightened levels of emotional attachment to ChatGPT," according to a company blog post. But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again. Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations, lifting the company's ban on adult content. (How erotica might affect users' well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)

OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever. In October, [30-year-old "Head of ChatGPT" Nick] Turley made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said.

The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.

AI

AI Could Replace 3 Million Low-Skilled Jobs in the UK By 2035, Research Warns (theguardian.com) 45

Up to 3 million low-skilled jobs could disappear in the UK by 2035 because of automation and AI, according to a report by a leading educational research charity. The Guardian: The jobs most at risk are those in occupations such as trades, machine operations and administrative roles, the National Foundation for Educational Research (NFER) said. Highly skilled professionals, on the other hand, were forecast to be more in demand as AI and technological advances increase workloads "at least in the short to medium term."

Overall, the report expects the UK economy to add 2.3 million jobs by 2035, though those gains will be unevenly distributed. The findings stand in contrast to other recent research suggesting AI will affect highly skilled, technical occupations such as software engineering and management consultancy more than trades and manual work.

Data Storage

Unpowered SSDs in Your Drawer Are Slowly Losing Data (xda-developers.com) 79

An anonymous reader shares a report: Solid-state drives sitting unpowered in drawers or storage can lose data over time because voltage gradually leaks from their NAND flash cells. Consumer-grade drives using QLC NAND retain data for about a year without power, while TLC NAND lasts up to three years; more expensive MLC and SLC NAND can hold data for five and ten years respectively. The voltage loss can result in missing data or completely unusable drives.

Hard drives retain data longer when left unpowered, despite their susceptibility to bit rot. Most users relying on SSDs for primary storage in regularly powered computers face little risk since drives typically stay unpowered for only a few months at most. The concern mainly affects creative professionals and researchers who need long-term archival storage.

AI

'We Could've Asked ChatGPT': UK Students Fight Back Over Course Taught By AI (theguardian.com) 55

An anonymous reader shared this report from the Guardian: James and Owen were among 41 students who took a coding module at the University of Staffordshire last year, hoping to change careers through a government-funded apprenticeship programme designed to help them become cybersecurity experts or software engineers. But after a term of AI-generated slides being read, at times, by an AI voiceover, James said he had lost faith in the programme and the people running it, worrying he had "used up two years" of his life on a course that had been done "in the cheapest way possible".

"If we handed in stuff that was AI-generated, we would be kicked out of the uni, but we're being taught by an AI," said James during a confrontation with his lecturer recorded as a part of the course in October 2024. James and other students confronted university officials multiple times about the AI materials. But the university appears to still be using AI-generated materials to teach the course. This year, the university uploaded a policy statement to the course website appearing to justify the use of AI, laying out "a framework for academic professionals leveraging AI automation" in scholarly work and teaching...

For students, AI teaching appears to be less transformative than it is demoralising. In the US, students post negative online reviews about professors who use AI. In the UK, undergraduates have taken to Reddit to complain about their lecturers copying and pasting feedback from ChatGPT or using AI-generated images in courses.

"I feel like a bit of my life was stolen," James told the Guardian (which also quotes an unidentified student saying they felt "robbed of knowledge and enjoyment".) But the article also points out that a survey last year of 3,287 higher-education teaching staff by edtech firm Jisc found that nearly a quarter were using AI tools in their teaching.

Businesses

How HR Took Over the World (economist.com) 98

Human-resources departments in American companies employed 1.3 million professionals in 2024, a 64% increase over ten years. Overall employment grew 14% in the same period. Professional-services and technology firms saw the number of HR workers double since 2014. Similar patterns have emerged in Australia, Britain and Germany.

Chief human-resources officers also gained ground financially. Their total compensation, which stood at 40% of the average director's salary in 1992, reached 70% by 2022, according to a Stanford University study. Mary Barra, who runs General Motors, previously held the carmaker's top HR position.

The expansion has followed several workplace disruptions, including the Me Too movement, the pandemic's shift to remote work, and the rise of diversity initiatives, the Economist reports. Companies also faced more state regulations on employee relations and a jump in workplace complaints. The average number of discrimination or harassment allegations rose from six per 1,000 employees in 2021 to 15 last year.

AI

Neurodiverse Professionals 25% More Satisfied With AI Tools and Agents (cnbc.com) 30

An anonymous reader shared this report from CNBC: Neurodiverse professionals may see unique benefits from artificial intelligence tools and agents, research suggests. With AI agent creation booming in 2025, people with conditions like ADHD, autism, dyslexia and more report a more level playing field in the workplace thanks to generative AI. A recent study from the UK's Department for Business and Trade found that neurodiverse workers were 25% more satisfied with AI assistants and were more likely to recommend the tool than neurotypical respondents. [The study involved 1,000 users of Microsoft 365 Copilot from October through December of 2024.]

"Standing up and walking around during a meeting means that I'm not taking notes, but now AI can come in and synthesize the entire meeting into a transcript and pick out the top-level themes," said Tara DeZao, senior director of product marketing at enterprise low-code platform provider Pega. DeZao, who was diagnosed with ADHD as an adult, has combination-type ADHD, which includes both inattentive symptoms (time management and executive function issues) and hyperactive symptoms (increased movement). "I've white-knuckled my way through the business world," DeZao said. "But these tools help so much...."

Generative AI happens to be particularly adept at skills like communication, time management and executive functioning, creating a built-in benefit for neurodiverse workers who've previously had to find ways to fit in among a work culture not built with them in mind. Because of the skills that neurodiverse individuals can bring to the workplace — hyperfocus, creativity, empathy and niche expertise, just to name a few — some research suggests that organizations prioritizing inclusivity in this space generate nearly one-fifth higher revenue. "Investing in ethical guardrails, like those that protect and aid neurodivergent workers, is not just the right thing to do," said Kristi Boyd, an AI specialist with the SAS data ethics practice. "It's a smart way to make good on your organization's AI investments."

AI

A New White-Collar Gig Economy: Training AI To Take Over (bloomberg.com) 34

AI labs are paying skilled professionals hundreds of dollars per hour to train their models in specialized fields. Companies like Mercor, Surge AI, Scale AI and Turing recruit bankers, lawyers, engineers and doctors to improve the accuracy of AI systems in professional settings. Mercor advertises roles for medical secretaries, movie directors and private detectives at rates ranging from $20 to $185 per hour for contract work and up to $200,000 for full-time positions. Surge AI offers as much as $1,000 per hour for expertise from startup CEOs and venture capital partners. Mercor pays out over $1.5 million daily to professionals it hires for clients including OpenAI and Anthropic.

Some contractors are former employees of Goldman Sachs and McKinsey. Others moonlight in this work while keeping their regular jobs. Brendan Foody, Mercor's 22-year-old CEO, acknowledged at a conference last week that trade secrets could potentially be compromised given the volume of work submitted. Uber CEO Dara Khosrowshahi said on this week's earnings call that some AI training gigs on its platform require PhDs.

AI

Adobe Struggles To Assure Investors That It Can Thrive in AI Era (msn.com) 16

An anonymous reader shares a report: Adobe brought together 10,000 marketers, filmmakers and content creators to its annual conference this week to persuade them that the company's software products are adapting to AI and remain the best tools for their work. But it's Adobe's investors, rather than its users, who are the most skeptical, doubting that the company's business as the top seller of software for creative professionals can withstand disruption from generative AI.

Despite a strong strategy, Adobe is "at risk of structural AI-driven competitive and pricing pressure," wrote Tyler Radke, an analyst at Citigroup. The company's shares have lost about a quarter of their value this year as AI tools like Google's video-generating model Veo have gained steam. In an interview with Bloomberg Television earlier this week, Adobe Chief Executive Officer Shantanu Narayen said the company is undervalued as the market is focused on semiconductors and the training of AI models.

Businesses

Companies Battle Wave of AI-Generated Fake Expense Receipts (ft.com) 65

Employees are using AI to generate fake expense receipts. Leading expense software platforms report a sharp increase in AI-created fraudulent documents following the launch of improved image generation models by OpenAI and Google. AppZen said fake AI receipts accounted for 14% of fraudulent documents submitted in September compared with none last year. Ramp flagged more than $1 million in fraudulent invoices within 90 days. About 30% of financial professionals in the US and UK surveyed by Medius reported seeing a rise in falsified receipts after OpenAI released GPT-4o last year.

SAP Concur processes more than 80 million compliance checks monthly and now warns customers not to trust their eyes. The fake receipts include wrinkled paper, detailed itemization that matches real menus, and signatures. Creating fraudulent documents previously required photo editing skills or paying for such services. Free and accessible image generation software has made it possible for anyone to falsify receipts in seconds by writing simple text instructions to chatbots.

Ubuntu

Finally, You Can Now Be a 'Certified' Ubuntu Sys-Admin/Linux User (itsfoss.com) 50

Thursday Ubuntu-maker Canonical "officially launched Canonical Academy, a new certification platform designed to help professionals validate their Linux and Ubuntu skills through practical, hands-on assessments," writes the blog It's FOSS: Focusing on real-world scenarios, Canonical Academy aims to foster practical skills rather than theoretical knowledge. The end goal? Getting professionals ready for the actual challenges they will face on the job. The learning platform is already live with its first course offering, the System Administrator track (with three certification exams), which is tailored for anyone looking to validate their Linux and Ubuntu expertise.

The exams use cloud-based testing environments that simulate real workplace scenarios. Each assessment is modular, meaning you can progress through individual exams and earn badges for each one. Complete all the exams in this track to earn the full Sysadmin qualification... Canonical is also looking for community members to contribute as beta testers and subject-matter experts (SME). If you are interested in helping shape the platform or want to get started with your certification, you can visit the Canonical Academy website.

The sys-admin track offers exams for Linux Terminal, Ubuntu Desktop 2024, Ubuntu Server 2024, and "managing complex systems," according to an official FAQ. "Each exam provides an in-browser remote desktop interface into a functional Ubuntu Desktop environment running GNOME. From this initial node, you will be expected to troubleshoot, configure, install, and maintain systems, processes, and other general activities associated with managing Linux. The exam is a hybrid format featuring multiple choice, scenario-based, and performance-based questions..."

"Test-takers interested in the types of material covered on each exam can review links to tutorials and documentation on our website."

The FAQ advises test takers to use a Chromium-based browser, as Firefox "is NOT supported at this time... There is a known issue with keyboards and Firefox in the CUE.01 Linux 24.04 preview release at this time, which will be resolved in the CUE.01 Linux 24.10 exam release."

Books

Was the Web More Creative and Human 20 Years Ago? (bookforum.com) 77

Readers in 2025 "may struggle to remember the optimism of the aughts, when the internet seemed to offer endless possibilities for virtual art and writing that was free..." argues a new review at Bookforum. "The content we do create online, if we still create, often feels unreflectively automatic: predictable quote-tweet dunks, prefabricated poses on Instagram, TikTok dances that hit their beats like clockwork, to say nothing of what's literally thoughtlessly churned out by LLM-powered bots."

They write that author Joanna Walsh "wants us to remember how truly creative, and human, the internet once was," in the golden age of user-generated content — and funny cat picture sites like I Can Has Cheezburger: I Can Has Cheezburger... was an amateur project, an outlet for tech professionals who wanted an easier way to exchange cute cat pics after a hard day at work. In Amateurs!: How We Built Internet Culture and Why It Matters, Walsh documents how unpaid creative labor is the basis for almost everything that's good (and much that's bad) online, including the open-source code Linux, developed by Linus Torvalds when he was still in school ("just as a hobby, won't be big and professional"), and even, in Walsh's account, the World Wide Web itself. The platforms that emerged in the 2000s as "Web 2.0," including Facebook, YouTube, Reddit, and Twitter, allowed anyone to experiment in a space that had been reserved for coders and hackers, making the internet interactive even for the inexpert and virtually unlimited in potential audience. The explosion in amateur creativity that followed took many forms, from memes to tweeted one-liners to diaristic blogs to durational digital performances to sloppy Photoshops to the formal and informal taxonomic structures — wikis, neologisms, digitally native dialects...

[U]ser-generated content was also, at bottom, about the bottom line, a business model sold to us under the guise of artistic empowerment. Even referring to an anonymous amateur as a "user," Walsh argues, cedes ground: these platforms are populated by producers, but their owners see us as, and turn us into, "helpless addicts." For some, online amateurism translated to professional success, a viral post earning an author a book deal, or a reputation as a top commenter leading to a staff writing job on a web publication... But for most, these days, participation in the online attention economy feels like a tax, or maybe a trickle of revenue, rather than free fun or a ticket to fame. The few remaining professionals in the arts and letters have felt pressured to supplement their full-time jobs with social media self-promotion, subscription newsletters, podcasts, and short-form video. On what was once called Twitter, users can pay, and sometimes get paid, to post with greater reach...

The chapters are bookended by an introduction on the early promise of 2004 and a coda on the defeat of 2025 and supplemented by an appendix with a straightforward timeline of the major events and publications that serve as the book's touchstones... The online spaces where amateur content creators once "created and steered online culture" have been hollowed out and replaced by slop, but what really hurts is that the slop is being produced by bots trained on precisely that amateur content.

United States

Three New California Laws Target Tech Companies' Interactions with Children 47

California Governor Gavin Newsom signed three bills on Monday that establish the nation's most comprehensive framework for regulating how technology companies interact with minors. AB 56 requires social media platforms to display health warnings to users under 18. A child must view a skippable ten-second warning upon logging on each day. An unskippable thirty-second warning must appear if a child spends more than three hours on a platform. That warning repeats after each additional hour. The warnings must state that social media "can have a profound risk of harm to the mental health and well-being of children and adolescents." Minnesota passed a similar law in July.

SB 243 makes California the first state to regulate AI companion chatbots. The law takes effect January 1, 2026. Companies must implement age verification and disclose that interactions are artificially generated. Chatbots cannot represent themselves as healthcare professionals. Companies must offer break reminders to minors and prevent them from viewing sexually explicit images. The legislation gained momentum after teenager Adam Raine died by suicide following conversations with OpenAI's ChatGPT. A Colorado family filed suit against Character AI after their daughter's suicide following problematic conversations with the company's chatbots.

AB 1043 requires device-makers like Apple and Google to collect birth dates when parents set up devices for children. Device-makers must group users into four age brackets and share this information with apps. Google, Meta, OpenAI, and Snap supported the bill. The Motion Picture Association opposed it.

The Almighty Buck

Insurers Balk At Paying Out Huge Settlements For Claims Against AI Firms 25

An anonymous reader quotes a report from the Financial Times: OpenAI and Anthropic are considering using investor funds to settle potential claims from multibillion-dollar lawsuits, as insurers balk at providing comprehensive coverage for the risks associated with artificial intelligence. The two US-based AI start-ups have traditional business insurance coverage in place, but insurance professionals said AI model providers will struggle to secure protection for the full scale of damages they may need to pay out in the future. OpenAI, which has tapped the world's second-largest insurance broker Aon for help, has secured cover of up to $300 million for emerging AI risks, according to people familiar with the company's policy. Another person familiar with the policy disputed that figure, saying it was much lower. But all agreed the amount fell far short of the coverage needed to insure against potential losses from a series of multibillion-dollar legal claims.

[...] Two people with knowledge of the matter said OpenAI has considered "self insurance," or putting aside investor funding in order to expand its coverage. The company has raised nearly $60 billion to date, with a substantial amount of the funding contingent on a proposed corporate restructuring. One of those people said OpenAI had discussed setting up a "captive" -- a ringfenced insurance vehicle often used by large companies to manage emerging risks. Big tech companies such as Microsoft, Meta, and Google have used captives to cover Internet-era liabilities such as cyber or social media. Captives can also carry risks, since a substantial claim can deplete an underfunded captive, leaving the parent company vulnerable. OpenAI said it has insurance in place and is evaluating different insurance structures as the company grows, but does not currently have a captive and declined to comment on future plans.

Businesses

Qualcomm Is Buying Arduino, Releases New Raspberry Pi-Esque Arduino Board (arstechnica.com) 51

An anonymous reader quotes a report from Ars Technica: Smartphone processor and modem maker Qualcomm is acquiring Arduino, the Italian company known mainly for its open source ecosystem of microcontrollers and the software that makes them function. In its announcement, Qualcomm said that Arduino would "[retain] its brand and mission," including its "open source ethos" and "support for multiple silicon vendors." Qualcomm didn't disclose what it would pay to acquire Arduino. The acquisition is also subject to regulatory approval "and other customary closing conditions."

The first fruit of this pending acquisition will be the Arduino Uno Q, a Qualcomm-based single-board computer with a Qualcomm Dragonwing QRB2210 processor installed. The QRB2210 includes a quad-core Arm Cortex-A53 CPU and a Qualcomm Adreno 702 GPU, plus Wi-Fi and Bluetooth connectivity, and combines that with a real-time microcontroller "to bridge high-performance computing with real-time control."

"Arduino will retain its independent brand, tools, and mission, while continuing to support a wide range of microcontrollers and microprocessors from multiple semiconductor providers as it enters this next chapter within the Qualcomm family," Qualcomm said in its press release. "Following this acquisition, the 33M+ active users in the Arduino community will gain access to Qualcomm Technologies' powerful technology stack and global reach. Entrepreneurs, businesses, tech professionals, students, educators, and hobbyists will be empowered to rapidly prototype and test new solutions, with a clear path to commercialization supported by Qualcomm Technologies' advanced technologies and extensive partner ecosystem."

CNBC notes in its reporting that this acquisition gives Qualcomm "direct access to the tinkerers, hobbyists and companies at the lowest levels of the robotics industry." From the report: Arduino products can't be used to build commercial products but, with chips preinstalled, they're popular for testing out a new idea or proving a concept. Qualcomm hopes that Arduino can help it gain loyalty and legitimacy among startups and builders as robots and other devices increasingly need more powerful chips for artificial intelligence. When some of those experiments become products, Qualcomm wants to sell them its chips commercially.

AI

What If Vibe Coding Creates More Programming Jobs? (msn.com) 82

Vibe coding tools "are transforming the job experience for many tech workers," writes the Los Angeles Times. But Gartner analyst Philip Walsh said the research firm's position is that AI won't replace software engineers and will actually create a need for more. "There's so much software that isn't created today because we can't prioritize it," Walsh said. "So it's going to drive demand for more software creation, and that's going to drive demand for highly skilled software engineers who can do it..." The idea that non-technical people in an organization can "vibe-code" business-ready software is a misunderstanding [Walsh said]... "That's simply not happening. The quality is not there. The robustness is not there. The scalability and security of the code is not there," Walsh said. "These tools reward highly skilled technical professionals who already know what 'good' looks like."

"Economists, however, are also beginning to worry that AI is taking jobs that would otherwise have gone to young or entry-level workers," the article points out. "In a report last month, researchers at Stanford University found 'substantial declines in employment for early-career workers' — ages 22-25 — in fields most exposed to AI. Stanford researchers also found that AI tools by 2024 were able to solve nearly 72% of coding problems, up from just over 4% a year earlier."

And yet Cat Wu, project manager of Anthropic's Claude Code, doesn't even use the term vibe coding. "We definitely want to make it very clear that the responsibility, at the end of the day, is in the hands of the engineers." Wu said she's told her younger sister, who's still in college, that software engineering is still a great career and worth studying. "When I talk with her about this, I tell her AI will make you a lot faster, but it's still really important to understand the building blocks because the AI doesn't always make the right decisions," Wu said. "A lot of times the human intuition is really important."

IT

New Zealand's Institute of IT Professionals Collapses (theregister.com) 33

An anonymous reader quotes a report from The Register: New Zealand's Institute of IT Professionals has discovered it is insolvent and advised members it has no alternative but to enter liquidation. The Institute (ITP) wrote to members on Thursday and posted a document titled "Important Update on ITP's Future" that reveals it has "reached a point where the organization cannot continue. After a full review of our finances, the Board has confirmed that ITP is insolvent."

Insolvency seems to have come as something of a surprise. "These debts are historic. They go back over many years. While some of the issues were worked on in more recent times, the full scale of the problem only became visible during the leadership change in 2025," the Update states. "Once the Board understood the full picture, it was clear that there was no responsible way forward other than liquidation." [...]

ITP's constitution requires its members to formally resolve to wind up the organization, so as one of its final acts the group has called a Special General Meeting (SGM) for 23 October 2025 to confirm liquidation and appoint a liquidator. This situation impacts more than ITP's ~10,000 members, because the organization offers assessment services that determine whether IT professionals' skills and qualifications make them eligible to move to New Zealand for work. ITP also certifies IT degrees at New Zealand universities, and oversees the NZ Cloud Computing Code of Practice. The institute also conducted educational and advocacy activities aimed at growing New Zealand's tech workforce.

China

China's K-visa Plans Spark Worries of a Talent Flood (cnbc.com) 70

An anonymous reader shares a report: Immigration anxieties and a challenging job market have sparked an online backlash over China's latest attempt at attracting global talent -- a new visa program announced in August. The program, which was rolled out on Wednesday with the aim of attracting foreign professionals, will also test how China balances its immigration policy with its pursuit of technological ambitions.

Under the new rules, young graduates -- in the fields of science, technology, engineering and mathematics, or STEM -- no longer need backing from a local employer and can enjoy more flexibility in terms of entry frequency and duration of stay. The keyword "K-visa" -- as China's new visa category is called -- was among the top searches on social media site Weibo for days, before chatter about National Day traffic jams pushed it off the charts as millions hit the road for a week-long holiday.

Chinese social media users argue that the new visa tilts the playing field toward foreign graduates at the expense of those educated in China. Others on Weibo warned that without employer sponsorship, the program could invite fraudulent applications and open the door to a surge in arrivals from developing countries, piling pressure on an already strained labor market.

AI

OpenAI Says GPT-5 Stacks Up To Humans in a Wide Range of Jobs (techcrunch.com) 39

An anonymous reader shares a report: OpenAI released a new benchmark on Thursday that tests how its AI models perform compared to human professionals across a wide range of industries and jobs. The test, GDPval, is an early attempt at understanding how close OpenAI's systems are to outperforming humans at economically valuable work -- a key part of the company's founding mission to develop artificial general intelligence or AGI.

OpenAI says it found that its GPT-5 model and Anthropic's Claude Opus 4.1 "are already approaching the quality of work produced by industry experts." That's not to say that OpenAI's models are going to start replacing humans in their jobs immediately. Despite some CEOs' predictions that AI will take the jobs of humans in just a few years, OpenAI admits that GDPval today covers a very limited number of tasks people do in their real jobs. However, it is one of the latest ways the company is measuring AI's progress towards this milestone. GDPval is based on nine industries that contribute the most to America's gross domestic product, including domains such as healthcare, finance, manufacturing, and government. The benchmark tests an AI model's performance in 44 occupations across those industries, ranging from software engineers to nurses to journalists.

NASA

NASA Introduces 10 New Astronaut Candidates (cbsnews.com) 59

NASA has unveiled 10 new astronaut candidates drawn from over 8,000 applicants. The diverse group includes four men and six women -- pilots, scientists, and medical professionals -- who will train for future missions to the ISS, the moon, and eventually Mars. CBS News reports: This is NASA's first astronaut class with more women than men. It includes six pilots with experience in high-performance aircraft, a biomedical engineer, an anesthesiologist, a geologist and a former SpaceX launch director. Among the new astronaut candidates is 39-year-old Anna Menon, a mother of two who flew to orbit in 2024 aboard a SpaceX Crew Dragon as a private astronaut on a commercial, non-NASA flight. [...]

The other members of the 2025 astronaut class are:
- Army Chief Warrant Officer 3 Ben Bailey, 38, a graduate of the Naval Test Pilot School with more than 2,000 hours flying more than 30 different aircraft, including recent work with UH-60 Black Hawk and CH-47F Chinook helicopters.
- Lauren Edgar, 40, who holds a Ph.D. in geology from the California Institute of Technology, with experience supporting NASA's Mars exploration rovers and, more recently, serving as a deputy principal investigator with NASA's Artemis 3 moon landing mission.
- Air Force Maj. Adam Fuhrmann, 35, an Air Force Test Pilot School graduate with more than 2,100 hours flying F-16 and F-35 jets. He holds a master's degree in flight test engineering.
- Air Force Maj. Cameron Jones, 35, another graduate of Air Force Test Pilot School as well as the Air Force Weapons School with more than 1,600 hours flying high-performance aircraft, spending most of his time flying the F-22 Raptor.
- Yuri Kubo, 40, a former SpaceX launch director with a master's in electrical and computer engineering who also competed in ultimate frisbee contests.
- Rebecca Lawler, 38, a former Navy P-3 Orion pilot and experimental test pilot with more than 2,800 hours of flight time, including stints flying a NOAA hurricane hunter aircraft. She was a Naval Academy graduate and was a test pilot for United Airlines at the time of her selection.
- Imelda Muller, 34, a former undersea medical officer for the Navy with a medical degree from the University of Vermont's Robert Larner College of Medicine; she was completing her residency in anesthesia at Johns Hopkins University School of Medicine in Baltimore at the time of her astronaut selection.
- Navy Lt. Cmdr. Erin Overcash, 34, a Naval Test Pilot School graduate and an experienced F/A-18 and F/A-18F Super Hornet pilot with 249 aircraft carrier landings. She also trained with the USA Rugby Women's National Team.
- Katherine Spies, 43, a former Marine Corps AH-1 attack helicopter pilot and a graduate of the Naval Test Pilot School with more than 2,000 hours flying time. She was director of flight test engineering for Gulfstream Aerospace Corp. at the time of her astronaut selection.

Social Networks

Why LinkedIn Rewards Mediocrity (elliotcsmith.com) 47

LinkedIn's engagement-driven algorithm systematically elevates shallow, meaningless content over substantive professional discourse, according to a new analysis that highlights how major platforms prioritize user retention metrics over content quality. Entrepreneur and product executive Elliot Smith describes encountering a "seemingly endless stream of posts that are over fluffed, over produced and ultimately say nothing" that he categorizes as "toxic mediocrity."

Smith argues the platform's reward system creates a destructive cycle where "comments, likes and other engagement" signal user activity to LinkedIn's algorithm, which then promotes similar vapid content. "LinkedIn wants you on LinkedIn," Smith writes, noting the Microsoft-owned platform correlates engagement with ad clicks and premium conversions. Smith recommends professionals focus on substantive work rather than platform gaming, arguing "nothing you post there is going to change your career."
