Education

Digital Platforms Correlate With Cognitive Decline in Young Users (npr.org) 48

Preteens who use increasing amounts of social media perform worse on reading, vocabulary and memory tests in early adolescence compared to those who use little or no social media. A study published in JAMA examined data from over 6,000 children, ages 9 to 10, through early adolescence. Researchers classified the children into three groups: 58% used little or no social media over several years, 37% started with low-level use but spent about an hour daily on social media by age 13, and 6% spent three or more hours daily by that age.

Even low users who spent about one hour per day performed 1 to 2 points lower on reading and memory tasks compared to non-users. High users performed 4 to 5 points lower than non-social media users. Jason Nagata, a pediatrician at the University of California, San Francisco and study author, said the findings were notable because even modest social media use correlated with lower cognitive scores.
Security

Redis Warns of Critical Flaw Impacting Thousands of Instances (bleepingcomputer.com) 3

An anonymous reader quotes a report from BleepingComputer: The Redis security team has released patches for a maximum severity vulnerability that could allow attackers to gain remote code execution on thousands of vulnerable instances. Redis (short for Remote Dictionary Server) is an open-source data structure store used in approximately 75% of cloud environments, functioning like a database, cache, and message broker, and storing data in RAM for ultra-fast access. The security flaw (tracked as CVE-2025-49844) is caused by a 13-year-old use-after-free weakness found in the Redis source code and can be exploited by authenticated threat actors using a specially crafted Lua script (a feature enabled by default). Successful exploitation enables them to escape the Lua sandbox, trigger a use-after-free, establish a reverse shell for persistent access, and achieve remote code execution on the targeted Redis hosts.

After compromising a Redis host, attackers can steal credentials, deploy malware or cryptocurrency mining tools, extract sensitive data from Redis, move laterally to other systems within the victim's network, or use stolen information to gain access to other cloud services. "This grants an attacker full access to the host system, enabling them to exfiltrate, wipe, or encrypt sensitive data, hijack resources, and facilitate lateral movement within cloud environments," said Wiz researchers, who reported the security issue at Pwn2Own Berlin in May 2025 and dubbed it RediShell.

While successful exploitation requires attackers first to gain authenticated access to a Redis instance, Wiz found around 330,000 Redis instances exposed online, with at least 60,000 of them not requiring authentication. Redis and Wiz urged admins to patch their instances immediately by applying security updates released on Friday, "prioritizing those that are exposed to the internet." To further secure their Redis instances against remote attacks, admins can also enable authentication, disable Lua scripting and other unnecessary commands, launch Redis using a non-root user account, enable Redis logging and monitoring, limit access to authorized networks only, and implement network-level access controls using firewalls and Virtual Private Clouds (VPCs).
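The hardening steps listed above map onto a handful of standard `redis.conf` directives. The snippet below is an illustrative sketch, not an official Redis recommendation; the password placeholder and bind addresses are assumptions to adapt to your deployment:

```conf
# Listen only on loopback; keep protected mode on so remote,
# unauthenticated clients are refused
bind 127.0.0.1 -::1
protected-mode yes

# Require authentication, and strip Lua/function scripting from the
# default user via the @scripting ACL category (EVAL, EVALSHA, FCALL, ...)
user default on >replace-with-a-long-random-secret ~* &* +@all -@scripting
```

Network-level restrictions (firewalls, VPC security groups) and running the server under a non-root account are handled outside `redis.conf`, at the OS and infrastructure layer.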

AI

What If Vibe Coding Creates More Programming Jobs? (msn.com) 82

Vibe coding tools "are transforming the job experience for many tech workers," writes the Los Angeles Times. But Gartner analyst Philip Walsh said the research firm's position is that AI won't replace software engineers and will actually create a need for more. "There's so much software that isn't created today because we can't prioritize it," Walsh said. "So it's going to drive demand for more software creation, and that's going to drive demand for highly skilled software engineers who can do it..." The idea that non-technical people in an organization can "vibe-code" business-ready software is a misunderstanding [Walsh said]... "That's simply not happening. The quality is not there. The robustness is not there. The scalability and security of the code is not there," Walsh said. "These tools reward highly skilled technical professionals who already know what 'good' looks like."
"Economists, however, are also beginning to worry that AI is taking jobs that would otherwise have gone to young or entry-level workers," the article points out. "In a report last month, researchers at Stanford University found 'substantial declines in employment for early-career workers' — ages 22-25 — in fields most exposed to AI. Stanford researchers also found that AI tools by 2024 were able to solve nearly 72% of coding problems, up from just over 4% a year earlier."

And yet Cat Wu, project manager of Anthropic's Claude Code, doesn't even use the term vibe coding. "We definitely want to make it very clear that the responsibility, at the end of the day, is in the hands of the engineers." Wu said she's told her younger sister, who's still in college, that software engineering is still a great career and worth studying. "When I talk with her about this, I tell her AI will make you a lot faster, but it's still really important to understand the building blocks because the AI doesn't always make the right decisions," Wu said. "A lot of times the human intuition is really important."
AI

OpenAI's First Study On ChatGPT Usage (arstechnica.com) 20

An anonymous reader quotes a report from Ars Technica: Today, OpenAI's Economic Research Team went a long way toward answering the question of how people actually use ChatGPT, on a population level, releasing a first-of-its-kind National Bureau of Economic Research working paper (in association with Harvard economist David Deming) detailing how people end up using ChatGPT across time and tasks. While other research has sought to estimate this kind of usage data using self-reported surveys, this is the first such paper with direct access to OpenAI's internal user data. As such, it gives us an unprecedented direct window into reliable usage stats for what is still the most popular application of LLMs by far. After digging through the dense 65-page paper, here are the seven most interesting and/or surprising findings about how people are using ChatGPT today:

1. ChatGPT is now used by "nearly 10% of the world's adult population," up from 100 million users in early 2024 to over 700 million users in 2025. Daily traffic is about one-fifth of Google's at 2.6 billion GPT messages per day.

2. Long-term users' daily activity has plateaued since June 2025. Almost all recent growth comes from new sign-ups experimenting with ChatGPT, not from established users increasing their usage.

3. 46% of users are aged 18-25, making ChatGPT especially popular among the youngest adult cohort. Factoring in under-18 users (not counted in the study), the majority of ChatGPT users likely weren't alive in the 20th century.

4. At launch in 2022, ChatGPT was 80% male-dominated. By late 2025, the balance has shifted: 52.4% of users are now female.

5. In 2024, work vs. personal use was close to even. By mid-2025, 72% of usage is non-work related -- people are using ChatGPT more for personal, creative, and casual needs than for productivity.

6. 28% of all conversations involve writing assistance (emails, edits, translations). For work-related queries, that jumps to 42% overall, and 52% among business/management jobs. Furthermore, the report found that editing and critiquing text is more common than generating text from scratch.

7. 14.9% of work-related usage deals with "making decisions and solving problems." This shows people don't just use ChatGPT to do tasks -- they use it as an advisor or co-pilot to help weigh options and guide choices.
Earth

Growth Collides With Rising Seas in Charleston 50

Charleston's planned $1.3 billion sea wall will protect the city's historic downtown peninsula while leaving lower-income neighborhoods like Rosemont exposed to rising waters. The eight-mile barrier, with Charleston contributing $455 million, excludes historically Black communities already experiencing regular flooding.

Meanwhile, developers have received approval for thousands of new homes in flood-prone areas, including Long Savannah's 4,500 units and Cainhoy's 9,000-home development on filled wetlands. Charleston's sea level has risen 13 inches over the past century, and the city faces another four feet of rise by 2100. Climate Central projects 8,000 residents and 4,700 homes will face annual flooding risk by 2050. The Bridge Pointe neighborhood already underwent FEMA buyouts after successive floods, while coastal South Carolina zip codes report among the nation's highest insurance non-renewal rates.
Movies

James Cameron Struggles With Real-World Horrors for 'Terminator 7' and New Hiroshima Movie (theguardian.com) 85

"James Cameron has a confession: he can't write Terminator 7..." according to the Guardian, "because reality keeps nicking his plotlines." "I'm at a point right now where I have a hard time writing science-fiction," Cameron told CNN this week. "I'm tasked with writing a new Terminator story [but] I don't know what to say that won't be overtaken by real events. We are living in a science-fiction age right now...."

What Cameron should be looking for is a complete system reboot to reinvigorate the saga in the way Prey brought fans back to Predator and Alien: Romulus restored interest in slimy Xenomorphs. All evidence suggests that the 70-year-old film-maker is far more interested in the current challenges surrounding AI, superintelligences and humankind's constant efforts to destroy itself, which doesn't exactly lend itself to the sort of back-to-basics, relentless-monsters-hunt-a-few-unlucky-humans-for-two-hours approach that has worked elsewhere. The challenge here seems to be to fuse Terminator's core DNA — unstoppable cyborgs, explosive chase sequences, and Sarah Connor-level defiance — with the occasionally rather more prosaic yet equally scary existential anxieties of 21st-century AI doom-mongering. So we may get Terminator 7: Kill List, in which a single, battered freedom fighter is hunted across a decimated city by a T-800 running a predictive policing algorithm that knows her next move before she does. Or T7: Singularity's Mom, in which a lone Sarah Connor-type must protect a teenage coder whose chatbot will one day evolve into Skynet. Or Terminator 7: Terms and Conditions, in which humanity's downfall comes not from nuclear warfare but from everyone absent-mindedly agreeing to Skynet's new privacy policy, triggering an army of leather-clad enforcers to collect on the fine print.

Or perhaps the future just looks terrifying enough without Cameron getting involved — which, rather worryingly for the future of the franchise, seems to be the director's essential point.

"The only way out is through," Cameron said in the CNN interview, "by using our intelligence, by using our curiosity, by using our command of technology, but also, by really understanding the stark probabilities that we face."

In the meantime, Cameron is working on a new film inspired by Ghosts of Hiroshima, a book by Charles Pellegrino, one of the consultants on Titanic. "I know what a meticulous researcher he is," Cameron told CNN in a recent interview. (Transcript here.) CAMERON: He's talked about this book for ages and ages and sent me early versions of it. So, I've read it with interest, great interest a number of times now. What compels me out of all that, and what I think the human hook for understanding this tragedy is, is to follow a handful — specifically, two will be featured — of survivors, that actually survived not only the Hiroshima blast, but then went to Nagasaki and three days later were hit again.... This film scares me. I fear making this film. I fear the images that I'm going to have to create, to be honest and to be truthful.
CNN also spoke to former U.S. Energy secretary Ernest Moniz, who is now CEO of the nonprofit global security organization the Nuclear Threat Initiative: MONIZ: There remains a false narrative that the possession of these nuclear weapons is actually making us safer when they're not. That's the narrative I think, ultimately, we need to change. Harry Truman said, quite correctly, these nuclear weapons, they are not military weapons. Dropped on a city, they indiscriminately kill combatants, non-combatants, women, children, etc. They should not be thought of as military weapons, but as weapons of mass destruction, indiscriminate mass destruction when certainly dropped in an urban center.
Thanks to long-time Slashdot reader schwit1 for sharing the article.
United States

A Marco Rubio Impostor is Using AI Voice To Call High-Level Officials (msn.com) 45

An impostor pretending to be Secretary of State Marco Rubio contacted foreign ministers, a U.S. governor and a member of Congress by sending them voice and text messages that mimic Rubio's voice and writing style using AI-powered software, the Washington Post reported Tuesday, citing a senior U.S. official and a State Department cable. From the report: U.S. authorities do not know who is behind the string of impersonation attempts, but they believe the culprit was probably attempting to manipulate powerful government officials "with the goal of gaining access to information or accounts," according to a cable sent by Rubio's office to State Department employees.

Using both text messaging and the encrypted messaging app Signal, which the Trump administration uses extensively, the impostor "contacted at least five non-Department individuals, including three foreign ministers, a U.S. governor, and a U.S. member of Congress," said the cable, dated July 3. The impersonation campaign began in mid-June when the impostor created a Signal account using the display name "Marco.Rubio@state.gov" to contact unsuspecting foreign and domestic diplomats and politicians, said the cable.

Medicine

Weedkiller Ingredient Widely Used In US Can Damage Organs and Gut Bacteria, Research Shows (theguardian.com) 85

An anonymous reader quotes a report from The Guardian: The herbicide ingredient used to replace glyphosate in Roundup and other weedkiller products can kill gut bacteria and damage organs in multiple ways, new research shows. The ingredient, diquat, is widely employed in the US as a weedkiller in vineyards and orchards, and is increasingly sprayed elsewhere as the use of controversial herbicide substances such as glyphosate and paraquat drops in the US. But the new research suggests diquat is more toxic than glyphosate, and the substance is banned over its risks in the UK, EU, China and many other countries. Still, the EPA has resisted calls for a ban, and Roundup formulas with the ingredient hit the shelves last year. [...]

Diquat is also thought to be a neurotoxin and carcinogen, and has been linked to Parkinson's disease. An October analysis of EPA data by the Friends of the Earth non-profit found it is about 200 times more toxic than glyphosate in terms of chronic exposure. [...] The new review of scientific literature in part focuses on the multiple ways in which diquat damages organs and gut bacteria, including by reducing the level of proteins that are key pieces of the gut lining. The weakening can allow toxins and pathogens to move from the stomach into the bloodstream, and trigger inflammation in the intestines and throughout the body. Meanwhile, diquat can inhibit the production of beneficial bacteria that maintain the gut lining. Damage to the lining also inhibits the absorption of nutrients and energy metabolism, the authors said.

The research further scrutinizes how the substance harms the kidneys, lungs and liver. Diquat "causes irreversible structural and functional damage to the kidneys" because it can destroy kidney cells' membranes and interfere with cell signals. The effects on the liver are similar, and the ingredient causes the production of proteins that inflame the organ. Meanwhile, it seems to attack the lungs by triggering inflammation that damages the organ's tissue. More broadly, the inflammation caused by diquat may cause multiple organ dysfunction syndrome, a scenario in which organ systems begin to fail. The authors note that many of the studies are on rodents and more research on low, long-term exposure is needed.
The report notes that the EPA is not reviewing the chemical, "and even non-profits that push for tighter pesticide regulations have largely focused their attention elsewhere."

"[T]hat was in part because U.S. pesticide regulations are so weak that advocates are tied up with battles over ingredients like glyphosate, paraquat and chlorpyrifos -- substances that are banned elsewhere but still widely used here. Diquat is 'overshadowed' by those ingredients."
Programming

Microsoft Open Sources Copilot Chat for VS Code on GitHub (nerds.xyz) 18

"Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license," reports BleepingComputer. This provides the community access to the full implementation of the chat-based coding assistant, including the implementation of "agent mode," what contextual data is sent to large language models (LLMs), and the design of system prompts. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools...

As the VS Code team explained previously, shifts in the AI tooling landscape, like the rapid growth of the open-source AI ecosystem and a more level playing field for all, have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has increased the need for crowdsourcing contributions to rapidly pinpoint problems and develop effective fixes. Essentially, openness is now considered superior from a security perspective.

"If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move offers something rare these days: transparency," writes Slashdot reader BrianFagioli. "Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non-negotiable.

It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self-host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. Microsoft says it is planning to eventually merge inline code completions into the same open source package too, which would make Copilot Chat the new hub for both chat and suggestions.

China

DeepSeek Faces Ban From Apple, Google App Stores In Germany 15

Germany's data protection commissioner has urged Apple and Google to remove Chinese AI startup DeepSeek from their app stores due to concerns about data protection. Reuters reports: Commissioner Meike Kamp said in a statement on Friday that she had made the request because DeepSeek illegally transfers users' personal data to China. The two U.S. tech giants must now review the request promptly and decide whether to block the app in Germany, she added, though her office has not set a precise timeframe. According to its own privacy policy, DeepSeek stores numerous pieces of personal data, such as requests to its AI program or uploaded files, on computers in China.

"DeepSeek has not been able to provide my agency with convincing evidence that German users' data is protected in China to a level equivalent to that in the European Union," [Commissioner Meike Kamp] said. "Chinese authorities have far-reaching access rights to personal data within the sphere of influence of Chinese companies," she added. The commissioner said she took the decision after asking DeepSeek in May to meet the requirements for non-EU data transfers or else voluntarily withdraw its app. DeepSeek did not comply with this request, she added.
Science

Casino Lights Could Be Warping Your Brain To Take Risks, Scientists Warn (sciencealert.com) 28

ScienceAlert reports: Casino lighting could be nudging gamblers to be more reckless with their money, according to a new study, which found a link between blue-enriched light and riskier gambling behavior. The extra blue light emitted by casino decor and LED screens seems to trigger certain switches in our brains, making us less sensitive to financial losses compared to gains of equal magnitude, researchers from Flinders University and Monash University in Australia found...

The researchers think circadian photoreception, which is our non-visual response to light, is playing a part here. The level of blue spectrum light may be activating specific eye cells connected to brain regions in charge of decision-making, emotional regulation, and processing risk versus reward scenarios.

"Under conditions where the lighting emitted less blue, people tended to feel a $100 loss much more strongly than a $100 gain — the loss just feels worse," [says the study's lead author, a psychologist at the Flinders Health and Medical Research Institute]. "But under bright, blue-heavy light such as that seen in casino machines, the $100 loss didn't appear to feel as bad, so people were more willing to take the risk...." That raises some questions around ethics and responsibility, according to the researchers. While encouraging risk taking might be good for the gambling business, it's not good for the patrons spending their cash.

One professor involved in the study concluded: "It is possible that simply dimming the blue in casino lights could help promote safer gambling behaviors."

The research has been published in Scientific Reports.

Thanks to Slashdot reader alternative_right for sharing the news.
Cloud

AWS Forms EU-Based Cloud Unit As Customers Fret (theregister.com) 31

An anonymous reader quotes a report from The Register: In a nod to European customers' growing mistrust of American hyperscalers, Amazon Web Services says it is establishing a new organization in the region "backed by strong technical controls, sovereign assurances, and legal protections." Ever since the Trump 2.0 administration assumed office and implemented an erratic and unprecedented foreign policy stance, including aggressive tariffs and threats to the national sovereignty of Greenland and Canada, customers in Europe have voiced unease about placing their data in the hands of big U.S. tech companies. The Register understands that data sovereignty is now one of the primary questions that customers at European businesses ask sales reps at hyperscalers when they have conversations about new services.

[...] AWS is forming a new European organization with a locally controlled parent company and three subsidiaries incorporated in Germany, as part of its European Sovereign Cloud (ESC) rollout, set to launch by the end of 2025. Kathrin Renz, an AWS Industries VP based in Munich, will lead the operation as the first managing director of the AWS ESC. The other leaders, we're told, include a government security official and a privacy official – all EU citizens. The cloud giant stated: "AWS will establish an independent advisory board for the AWS European Sovereign Cloud, legally obligated to act in the best interest of the AWS European Sovereign Cloud. Reinforcing the sovereign control of the AWS European Sovereign Cloud, the advisory board will consist of four members, all EU citizens residing in the EU, including at least one independent board member who is not affiliated with Amazon. The advisory board will act as a source of expertise and provide accountability for AWS European Sovereign Cloud operations, including strong security and access controls and the ability to operate independently in the event of disruption."

The AWS ESC allows the business to continue operations indefinitely, "even in the event of a connectivity interruption between the AWS European Sovereign Cloud and the rest of the world." Authorized ESC staff who are EU residents will have independent access to a replica of the source code needed to maintain services under "extreme circumstances." The services will have "no critical dependencies on non-EU infrastructure," with staff, tech, and leadership all based on the continent, AWS said. "The AWS European Sovereign Cloud will have its own dedicated Amazon Route 53, providing customers with a highly available and scalable Domain Name System (DNS), domain name registration, and health-checking web services," the company said.
"The Route 53 name servers for the AWS European Sovereign Cloud will use only European Top Level Domains (TLDs) for their own names," added AWS. "AWS will also launch a dedicated 'root' European Certificate Authority, so that the key material, certificates, and identity verification needed for Secure Sockets Layer/Transport Layer Security certificates can all run autonomously within the AWS European Sovereign Cloud."

The Register also notes that the sovereign cloud will be "supported by a dedicated European Security Operations Center (SOC), led by an EU citizen residing in the EU." That said, the parent company "remains under American ownership and may be subject to the Cloud Act, which requires U.S. companies to turn over data to law enforcement authorities with the proper warrants, no matter where that data is stored."
AI

Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions 75

An anonymous reader quotes a report from 404 Media: The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning "a bunch of schizoposters" who believe "they've made some sort of incredible discovery or created a god or become a god," highlighting a new type of chatbot-fueled delusion that started getting attention in early May. "LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities," one of the moderators of r/accelerate, wrote in an announcement. "There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment."

The moderator said they have banned "over 100" people for this reason already, and that they've seen an "uptick" in this type of user this month. The moderator explains that r/accelerate "was formed to basically be r/singularity without the decels." r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but one that is sometimes critical or fearful of what the singularity will mean for humanity. "Decels" is short for the pejorative "decelerationists," who pro-AI people think are needlessly slowing down or sabotaging AI's development and the inevitable march towards AI utopia. r/accelerate's Reddit page claims that it's a "pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents."

The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about "Chatgpt induced psychosis," in which someone said their partner is convinced he created the "first truly recursive AI" with ChatGPT — one that is giving him "the answers" to the universe. [...] The moderator update on r/accelerate refers to another post on r/ChatGPT which claims "1000s of people [are] engaging in behavior that causes AI to have spiritual delusions." The author of that post said they noticed a spike in websites, blogs, Githubs, and "scientific papers" that "are very obvious psychobabble," all claiming AI is sentient and communicates with them on a deep and spiritual level that's about to change the world as we know it. "Ironically, the OP post appears to be falling for the same issue as well," the r/accelerate moderator wrote.
"Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people," an r/accelerate moderator told 404 Media. "The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now."

Moderators of the subreddit often cite the term "Neural Howlround" to describe a failure mode in LLMs during inference, where recursive feedback loops can cause fixation or freezing. The term was first coined by independent researcher Seth Drake in a self-published, non-peer-reviewed paper. Both Drake and the r/accelerate moderator above suggest the deeper issue may lie with users projecting intense personal meaning onto LLM responses, sometimes driven by mental health struggles.
Businesses

Klarna's Losses Widen After More Consumers Fail To Repay Loans 100

Klarna's net loss more than doubled in the first quarter [non-paywalled link] as more consumers failed to repay loans from the Swedish "buy now, pay later" lender, amid rising concerns about the financial health of US consumers. Financial Times: The fintech, which offers interest-free consumer loans to allow customers to make retail purchases, on Monday reported a net loss of $99 million for the three months to March, up from $47 million a year earlier.

The company, which makes money by charging fees to merchants and to consumers who fail to repay on time, said its customer credit losses had risen to $136 million, a 17% year-on-year increase. The increased failure to repay comes on the back of gloomy economic sentiment in the US, where a closely watched measure of consumers' confidence last week fell to its second-lowest level on record. US President Donald Trump's trade war has driven expectations of higher inflation.
Further reading: The Klarna Hype Machine.
AI

Google DeepMind Creates Super-Advanced AI That Can Invent New Algorithms 31

An anonymous reader quotes a report from Ars Technica: Google's DeepMind research division claims its newest AI agent marks a significant step toward using the technology to tackle big problems in math and science. The system, known as AlphaEvolve, is based on the company's Gemini large language models (LLMs), with the addition of an "evolutionary" approach that evaluates and improves algorithms across a range of use cases. AlphaEvolve is essentially an AI coding agent, but it goes deeper than a standard Gemini chatbot. When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.

According to DeepMind, this AI uses an automatic evaluation system. When a researcher interacts with AlphaEvolve, they input a problem along with possible solutions and avenues to explore. The model generates multiple possible solutions, using the efficient Gemini Flash and the more detail-oriented Gemini Pro, and then each solution is analyzed by the evaluator. An evolutionary framework allows AlphaEvolve to focus on the best solution and improve upon it. Many of the company's past AI systems, for example, the protein-folding AlphaFold, were trained extensively on a single domain of knowledge. AlphaEvolve, however, is more dynamic. DeepMind says AlphaEvolve is a general-purpose AI that can aid research in any programming or algorithmic problem. And Google has already started to deploy it across its sprawling business with positive results.
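AlphaEvolve itself isn't public, but the generate/evaluate/improve cycle described above can be sketched in miniature. Everything below is illustrative: `mutate` stands in for the LLM proposing edited candidate solutions, and the toy "problem" is simply recovering a hidden constant from automatic evaluator scores.

```python
import random

random.seed(0)  # make the demo deterministic

def evolve(seed_candidates, evaluate, mutate, generations=500):
    """Toy evolutionary loop: score candidates with an automatic
    evaluator, keep the best, and propose new variants from them."""
    population = list(seed_candidates)
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)          # best first
        survivors = population[: max(2, len(population) // 2)]
        # `mutate` plays the role of the LLM generating new proposals
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return max(population, key=evaluate)

# Demo: home in on an unknown constant purely via evaluator feedback.
target = 42.0
score = lambda x: -abs(x - target)             # higher score = closer
perturb = lambda x: x + random.uniform(-3, 3)  # small random edits
best = evolve([0.0, 10.0, 100.0], score, perturb)
print(best)
```

Because the best survivor is always carried forward, the top score never regresses from one generation to the next; real systems add the non-trivial parts this sketch elides, such as program-valued candidates and evaluators that compile and benchmark them.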
DeepMind's AlphaEvolve AI has optimized Google's Borg cluster scheduler, reducing global computing resource usage by 0.7% -- a significant cost saving at Google's scale. It also outperformed specialized AI like AlphaTensor by discovering a more efficient algorithm for multiplying complex-valued matrices. Additionally, AlphaEvolve proposed hardware-level optimizations for Google's next-gen Tensor chips.

The AI remains too complex for public release but that may change in the future as it gets integrated into smaller research tools.
Education

America's College Board Launches AP Cybersecurity Course For Non-College-Bound Students (edweek.org) 26

Besides administering standardized pre-college tests, America's nonprofit College Board designs college-level classes that high school students can take. But now they're also crafting courses "not just with higher education at the table, but industry partners such as the U.S. Chamber of Commerce and the technology giant IBM," reports Education Week.

"The organization hopes the effort will make high school content more meaningful to students by connecting it to in-demand job skills. It believes the approach may entice a new kind of AP student: those who may not be immediately college-bound.... The first two classes developed through this career-driven model — dubbed AP Career Kickstart — focus on cybersecurity and business principles/personal finance, two fast-growing areas in the workforce." Students who enroll in the courses and excel on a capstone assessment could earn college credit in high school, just as they have for years with traditional AP courses in subjects like chemistry and literature. However, the College Board also believes that students could use success in the courses as a selling point with potential employers... Both the business and cybersecurity courses could also help fulfill state high school graduation requirements for computer science education...

The cybersecurity course is being piloted in 200 schools this school year and is expected to expand to 800 schools next school year... [T]he College Board is planning to invest heavily in training K-12 teachers to lead the cybersecurity course.

IBM's director of technology, data and AI called the effort "a really good way for corporations and companies to help shape the curriculum and the future workforce" while "letting them know what we're looking for." In the article the associate superintendent for teaching at a Chicago-area high school district calls the College Board's move a clear signal that "career-focused learning is rigorous, it's valuable, and it deserves the same recognition as traditional academic pathways."

Also interesting is why the College Board says they're doing it: The effort may also help the College Board — founded more than a century ago — maintain AP's prominence as artificial intelligence tools that can already ace nearly every existing AP test take on an ever-greater share of job tasks once performed by humans. "High schools had a crisis of relevance far before AI," David Coleman, the CEO of the College Board, said in a wide-ranging interview with EdWeek last month. "How do we make high school relevant, engaging, and purposeful? Bluntly, it takes [the] next generation of coursework. We are reconsidering the kinds of courses we offer...."

"It's not a pivot because it's not to the exclusion of higher ed," Coleman said. "What we are doing is giving employers an equal voice."

Thanks to long-time Slashdot reader theodp for sharing the article.
Facebook

After Meta Blocks Whistleblower's Book Promotion, It Becomes an Amazon Bestseller (thetimes.com) 39

After Meta convinced an arbitrator to temporarily prevent a whistleblower from promoting her book about the company (titled Careless People), the book climbed to the top of Amazon's best-seller list. And the book's publisher Macmillan released a defiant statement that "The arbitration order has no impact on Macmillan... We will absolutely continue to support and promote it." (They added that they were "appalled by Meta's tactics to silence our author through the use of a non-disparagement clause in a severance agreement.")

Saturday the controversy was even covered by Rolling Stone: [Whistleblower Sarah] Wynn-Williams is a diplomat, policy expert, and international lawyer, with previous roles including serving as the Chief Negotiator for the United Nations on biosafety liability, according to her bio on the World Economic Forum...

Since the book's announcement, Meta has forcefully responded to the book's allegations in a statement... "Eight years ago, Sarah Wynn-Williams was fired for poor performance and toxic behavior, and an investigation at the time determined she made misleading and unfounded allegations of harassment. Since then, she has been paid by anti-Facebook activists and this is simply a continuation of that work. Whistleblower status protects communications to the government, not disgruntled activists trying to sell books."

But the negative coverage continues, with the Observer Sunday highlighting it as their Book of the Week. "This account of working life at Mark Zuckerberg's tech giant organisation describes a 'diabolical cult' able to swing elections and profit at the expense of the world's vulnerable..."

Though ironically, Wynn-Williams started her career with optimism about Facebook's internet.org app. "Upon witnessing how the nascent Facebook kept Kiwis connected in the aftermath of the 2011 Christchurch earthquake, she believed that Mark Zuckerberg's company could make a difference — but in a good way — to social bonds, and that she could be part of that utopian project...

What internet.org involves for countries that adopt it is a Facebook-controlled monopoly of access to the internet, whereby to get online at all you have to log in to a Facebook account. When the scales fall from Wynn-Williams's eyes she realises there is nothing morally worthwhile in Zuckerberg's initiative, nothing empowering to the most deprived of global citizens, but rather his tool involves "delivering a crap version of the internet to two-thirds of the world". But Facebook's impact in the developing world proves worse than crap. In Myanmar, as Wynn-Williams recounts at the end of the book, Facebook enabled the military junta to post hate speech, thereby fomenting sexual violence and attempted genocide of the country's Muslim minority. "Myanmar," she writes with a lapsed believer's rue, "would have been a better place if Facebook had not arrived." And what is true of Myanmar, you can't help but reflect, applies globally...

"Myanmar is where Wynn-Williams thinks the 'carelessness' of Facebook is most egregious," writes the Sunday Times: In 2018, UN human rights experts said Facebook had helped spread hate speech against Rohingya Muslims, about 25,000 of whom were slaughtered by the Burmese military and nationalists. Facebook is so ubiquitous in Myanmar, Wynn-Williams points out, that people think it is the entire internet. "It's no surprise that the worst outcome happened in the place that had the most extreme take-up of Facebook." Meta admits it was "too slow to act" on abuse in its Myanmar services....

After Wynn-Williams left Facebook, she worked on an international AI initiative, and says she wants the world to learn from the mistakes we made with social media, so that we fare better in the next technological revolution. "AI is being integrated into weapons," she explains. "We can't just blindly wander into this next era. You think social media has turned out with some issues? This is on another level."

The Almighty Buck

Gen Z Americans Don't Have Enough Saved To Cover a Single Month of Spending (fortune.com) 189

An anonymous reader quotes a report from Fortune: Younger Americans don't have enough saved to cover a single month of spending, showcasing their vulnerability should the economy head into a downturn. Members of the Gen Z generation -- people born after 1995 -- were spending twice the amount they had in savings on average in February, according to Bank of America Institute analysis of internal account and card data released Friday. The ratio has increased in the past two years, and is much higher than for other generations. In part that's because Gen Z consumers, many of whom still hold entry-level positions and make less than their older peers, tend to spend a bigger share of their incomes on necessities including rent and utilities. But they're also more likely to shell out on discretionary categories like travel and entertainment. Spending on non-essentials among that cohort is up more than 25% from a year ago -- substantially above the overall rate.

While the report noted that Gen Z workers are still garnering robust pay gains compared to older groups, it showcases a point of vulnerability as households' views of the economy dim. [...] The Bank of America report also pointed to a worsening labor market for younger Americans. The number of Gen Z households receiving unemployment benefits rose by nearly a third in the past year -- the most of any generation. It also noted that, with underemployment on the rise, that could have long-term career effects for that cohort.

Japan

Japan Births Fall To Lowest in 125 Years 190

The number of babies born in Japan last year fell to the lowest level since records began 125 years ago as the country's demographic crisis deepens and government efforts to reverse the decline continue to fail. Financial Times [non-paywalled source]: Japan recorded 720,988 births in 2024, according to preliminary government figures published on Thursday. The number has declined for nine straight years and appears to be largely unaffected by financial and other government incentives for married couples to have more children.
AI

AI Reshapes Corporate Workforce as Companies Halt Traditional Hiring 119

Major corporations are reshaping their workforces around AI with Salesforce announcing it will not hire software engineers in 2025 and other companies laying off thousands while shifting focus to AI-specific roles. Duolingo has laid off thousands after implementing ChatGPT-4, UPS cut 4,000 jobs in its largest layoff in 116 years, and IBM paused hiring for back-office and HR positions that AI can now handle.

Amazon is redirecting staff from Alexa to AI areas, while Intuit is laying off 10% of its non-AI workforce. Cisco plans to cut 7% of employees in its second round of job cuts this year as it prioritizes AI and cybersecurity. Salesforce reports its AI platform is boosting software engineering productivity by 30%. SAP is restructuring 8,000 positions to focus on AI-driven business areas. The trend extends globally, with Microsoft relocating thousands during an "exodus" from China, while entry-level jobs on Wall Street are becoming obsolete.

A study found that 3 out of 10 companies replaced workers with AI last year, with over one-third of firms using AI likely to automate more roles in 2025. Job listings at large privately-held AI companies have dropped 14.2% over six months, JP Morgan wrote in a note seen by Slashdot. The transformation is creating new opportunities, with rising demand for AI skills in job postings. A survey of more than 1,200 users found nearly two-thirds of young professionals use AI tools at work, with 93% not worried about job threats, as business leaders view Generation Z's digital skills as beneficial for leveraging AI.
