Books

Sweden Swaps Screens For Books In the Classroom (arstechnica.com) 68

An anonymous reader quotes a report from Ars Technica: In 2023, the Swedish government announced that the country's schools would be going back to basics, emphasizing skills such as reading and writing, particularly in early grades. After mostly being sidelined, physical books are now being reintroduced into classrooms, and students are learning to write the old-fashioned way: by hand, with a pencil or pen, on sheets of paper. The Swedish government also plans to make schools cellphone-free throughout the country.

Educational authorities have been investing heavily. Last year alone, the education ministry allocated $83 million to purchase textbooks and teachers' guides. In a country with about 11 million people, the aim is for every student to have a physical textbook for each subject. The government also put $54 million towards the purchase of fiction and non-fiction books for students.

These moves represent a dramatic pivot from previous decades, during which Sweden -- and many other nations -- moved away from physical books in favor of tablets and digital resources in an effort to prepare students for life in an online world. Perhaps unsurprisingly, the Nordic country's efforts have sparked a debate on the role of digital technology in education, one that extends well beyond the country's borders. US parents in districts that have adopted digital technology to a great extent may be wondering if educators will reverse course, too.
As for why Sweden is pivoting away from digital devices, researcher Linda Fälth said the move was driven by several factors, including concerns over whether the digitization of classrooms had been evidence-based. "There was also a broader cultural reassessment," Fälth said. "Sweden had positioned itself as a frontrunner in digital education, but over time concerns emerged about screen time, distraction, reduced deep reading, and the erosion of foundational skills such as sustained attention and handwriting."

Fälth noted that proponents of reform believe that "basic skills -- especially reading, writing, and numeracy -- must be firmly established first, and that physical textbooks are often better suited for that purpose."

Further reading: Digital Platforms Correlate With Cognitive Decline in Young Users
Desktops (Apple)

Windows PCs Crash Three Times As Often As Macs, Report Says (techspot.com) 186

A workplace-device study says Windows PCs crash significantly more often than Macs, lag further behind on patching and encryption in some sectors, and are typically replaced sooner. TechSpot reports: Omnissa's 2026 State of Digital Workspace report outlines the IT challenges that various organizations face from the growing use of AI and the heterogeneous deployment of enterprise devices. The relative instability of Windows and Android is a recurring theme throughout the report. The company gathered telemetry from clients located across the globe in retail, healthcare, finance, education, government, and other sectors throughout 2025. The data suggests that IT administrators face frustrating security gaps due to inconsistent patching across a diverse mosaic of devices and operating systems.

Employee workflow disruption, often due to software issues, is one area of concern. The report found that Windows devices were forced to shut down 3.1 times more often than Macs. Windows programs also froze 7.5 times more often than macOS apps and needed to be restarted more than twice as often. Certain industries were also alarmingly lax in securing Windows and Android devices. More than half of Windows and Android devices in healthcare and pharma were five major operating system updates behind, likely leaving them more vulnerable to errors and malware. More than half of the desktops and mobile devices used for education were also unencrypted, putting students' privacy at risk.

Macs also last longer, being replaced every five years on average, compared to every three years for Windows PCs. Despite a recent backlash against Windows, driven by a push for digital sovereignty in countries such as Germany, Windows use on government devices actually doubled last year. Meanwhile, Macs using Apple's M-series chips showcase a significant thermal advantage, with an average temperature of 40.1 degrees Celsius, while Intel processors run at 65.2 degrees.

Robotics

Melania Trump Welcomes Humanoid Robot At White House Summit 94

Longtime Slashdot reader theodp writes: In Melania and the Robot, the New York Times reports on First Lady Melania Trump's inaugural Fostering the Future Together Coalition Summit, which brought together international leaders, First Spouses from around the world, tech leaders, educators, and nonprofits to collaborate on practical solutions that expand access to educational tools while strengthening protections for children in digital environments (Day 2 WH summary). The Times begins:

"On Wednesday, Mrs. Trump appeared at the White House alongside Figure 3, a humanoid, A.I.-powered robot whose uses, according to the company that makes it, include fetching towels, carrying groceries and serving champagne. But Mrs. Trump joins tech executives and some researchers in envisioning a world beyond robot butlery. She is interested in how these robots could cut it as educators. Both clad in shades of white, the first lady and the visiting robot walked into a gathering of first spouses from around the world, a group that included Sara Netanyahu of Israel, Olena Zelenska of Ukraine, and Brigitte Macron of France. The dulcet tones from a (presumably human) military orchestra played as the first lady and her guest entered the event. Both lady and robot extolled the virtues of further integrating robots into the educational and social lives of children. In the history of modern first-lady initiatives, which have included building a national book festival (Laura Bush), reshuffling the food pyramid (Michelle Obama) and advocating for free community college (Jill Biden), Mrs. Trump's involvement of a humanoid robot in education policy was a first."

"Figure 3 delivered brief remarks and delivered salutations in several languages. With its sleek black-and-white appearance, Figure 3 would fit right in with the first lady's branding aesthetic, which includes a self-titled coffee table book and movie, not least because the name "MELANIA" was emblazoned on the side of its glossy plastic head. After Figure 3 teetered gingerly away, Mrs. Trump looked around the room and told them that the future looked a lot like what they had just witnessed. 'The future of A.I. is personified,' she told her audience. 'It will be formed in the shape of humans. Very soon artificial intelligence will move from our mobile phones to humanoids that deliver utility.' She invited her guests to envision a future in which a robot philosopher educated children."
Robotics

Qualcomm's New Arduino Ventuno Q Is an AI-Focused Computer Designed For Robotics (engadget.com) 25

Qualcomm and Arduino have unveiled the Arduino Ventuno Q, a new AI-focused single-board computer built for robotics and edge systems. Engadget reports: Called the Arduino Ventuno Q, it uses Qualcomm's Dragonwing IQ8 processor along with a dedicated STM32H5 low-latency microcontroller (MCU). "Ventuno Q is engineered specifically for systems that move, manipulate and respond to the physical world with precision and reliability," the company wrote on the product page. The Ventuno Q is more sophisticated (and expensive) than Arduino's usual AIO boards, thanks to the Dragonwing IQ8 processor, which includes an 8-core Arm Cortex CPU, an Adreno A623 GPU and a Hexagon Tensor NPU that can hit up to 40 TOPS. It also comes with 16GB of LPDDR5 RAM, along with 64GB of eMMC storage and an M.2 NVMe Gen 4 slot to expand that. Other features include Wi-Fi 6, Bluetooth 5.3, 2.5Gbps Ethernet and USB camera support.

The Ventuno Q includes Arduino App Lab, with pre-trained AI models including LLMs, VLMs, ASR, gesture recognition, pose estimation and object tracking, all running offline. It's designed for AI systems that run entirely offline, like smart kiosks, healthcare assistants and traffic flow analysis, along with edge AI vision and sensing systems. It also supports a full robotics stack, combining vision processing with deterministic motor control for precise vision and manipulation. It's also ideal for education and research in areas like computer vision, generative AI and prototyping at the edge, according to Arduino.
Further reading: Up Next for Arduino After Qualcomm Acquisition: High-Performance Computing
Books

Hyperion Author Dan Simmons Dies From Stroke At 77 (arstechnica.com) 17

Author Dan Simmons, best known for the epic sci-fi novel Hyperion and its sequels, has died at 77 following a stroke. Ars Technica's Eric Berger remembers Simmons, writing: Simmons, who worked in elementary education before becoming an author in the 1980s, produced a broad portfolio of writing that spanned several genres, including horror fiction, historical fiction, and science fiction. Often, his books included elements of all of these. This obituary will focus on what is generally considered his greatest work, and what I believe is possibly the greatest science fiction novel of all time, Hyperion.

Published in 1989, Hyperion is set in a far-flung future in which human settlement spans hundreds of planets. The novel feels both familiar, in that its structure follows Chaucer's Canterbury Tales, and utterly unfamiliar in its strange, far-flung setting.
Simmons' Hyperion appeared in an Ask Slashdot story back in 2008, when Slashdot reader willyhill asked for tips on how Slashdotters track down great sci-fi. If you're in the mood for a little nostalgia, or just want to browse the thread for book recommendations, it's well worth revisiting.
Education

Microsoft: Computer Programming Is Dying, Long Live AI Literacy 104

theodp writes: On Tuesday, Microsoft GM of Education and Workforce Policy (and former Code.org Chief Academic Officer) Pat Yongpradit posted an obituary of sorts for coders. "Computer programmers and software developers are codified differently in the BLS [Bureau of Labor Statistics] data," Yongpradit wrote. "The modern AI-infused world needs less computer programmers (coders) and more software developers (more holistic and higher level). So when folks say that there is less hiring of computer programmers, they are right. But there will be more hiring of software developers, especially those who have adopted an AI-forward mindset and skillset. [...] The number of just pure computer programming roles has already been declining due to reasons like outsourcing, AI will just accelerate the decline."

On Wednesday, Yongpradit's colleague Allyson Knox, Senior Director of Education and Workforce Policy at Microsoft, put another AI nail in the coder coffin, testifying before the House Committee on Education -- the Workforce Subcommittee on Early Childhood, Elementary, and Secondary Education on Building an AI-ready America: Teaching in the Age of AI. "Thank you to Chairman Tim Walberg, Ranking Member Bobby Scott, Chair Kevin Kiley, Ranking Member Suzanne Bonamici and members of the Subcommittee for the opportunity to share Microsoft perspective and that of the educators and parents we hear from every day across the country," Knox wrote in a LinkedIn post.

"Three themes continue to emerge throughout these discussions: 1. Educators want support to build AI literacy and critical thinking skills. 2. Schools need guidance and guardrails to ensure student data is protected and adults remain in control. 3. Teachers want classroom-ready tools, and a voice in shaping them. If we focus on these priorities, we can help ensure AI expands opportunity for every student across the United States."

Yongpradit and Knox report up to Microsoft President Brad Smith, who last July told Code.org CEO Hadi Partovi it was time for the tech-backed nonprofit to "switch hats" from coding to AI as Microsoft announced a new $4 billion initiative to advance AI education. Smith's thoughts on the extraordinary promise of AI in education were cited by Knox in her 2026 Congressional testimony. Interestingly, Knox argued for the importance of computer programming literacy in her 2013 Congressional testimony at a hearing on Our Nation of Builders: Training the Builders of the Future. "Congress needs to come up with fresh ideas on how we can continue to train the next generation of builders, programmers, manufacturers, technicians and entrepreneurs," Rep. Lee Terry said to open the discussion.

So, are reports of computer programming's imminent death greatly exaggerated?
Education

What's the Point of School When AI Can Do Your Homework? 153

An anonymous reader quotes a report from 404 Media: There's a new agentic AI called Einstein that will, according to its developers, live the life of a student for them. Einstein's website claims that the AI will attend lectures for you, write your papers, and even log into EdTech platforms like Canvas to take tests and participate in discussions. Educators told me that Einstein is just one of many AI tools that can do homework for students, but that it should be read as a warning to schools, which students increasingly view as a place to gain a diploma and status rather than for the value of education itself.

If an AI can go to school for you, what's the point of going to school? For Advait Paliwal, Brown dropout and co-creator of Einstein, there isn't one. "I think about horses," he said. "They used to pull carriages, but when cars came around, I'd argue horses became a lot more free," he said. "They can do whatever they want now. It would be weird if horses revolted and said 'no, I want to pull carriages, this is my purpose in life.'" But humans aren't horses. "This is much bigger than Einstein," Matthew Kirschenbaum told 404 Media. "Einstein is symptomatic. I doubt we'll be talking about Einstein, as such, in a year. But it's symptomatic of what's about to descend on higher ed and secondary ed as well."

[...] The attractiveness of agentic AIs is a symptom of a decades-long trend in higher education. "Universities by and large adopted a transactive model of education," Kirschenbaum said. "Students see their diploma as a credential. They pay tuition and at the end of four years, sometimes five years, they receive the credential and, in theory at least, that is then the springboard to economic stability and prosperity." Paliwal seems to agree. He told 404 Media that he attempted to change the university from the inside while working as a TA, but felt stymied by politics. "The only way to force these institutions to evolve is to bring reality to their face. And usually the loudest critics are the ones who can't do their own job well and live in fear of automation," he said.
"I think we really need to question what learning even is and whether traditional educational institutions are actually helping or harming us," said Paliwal. "We're seeing a rise in unemployment across degree holders because of AI, and that makes me question whether this is really what humans are born to do. We've been brainwashed as a society into valuing ourselves by the output of our productive work, and I think humanity is a lot more beautiful than that. Is it really education if we're just memorizing things to perform a task well?"

Kirschenbaum added: "What we're finding is that if forms of education can be transacted then we've just about arrived at the point where autonomous software AI agents are capable of performing the transaction on your behalf," he said. "And so the whole educational paradigm has come back to essentially bite itself in the ass."
AI

America's Peace Corps Announces 'Tech Corps' Volunteers to Help Bring AI to Foreign Countries (engadget.com) 49

Over 240,000 Americans have volunteered for Peace Corps projects in 142 countries since the program began more than half a century ago.

But now the agency is launching a new initiative — called Tech Corps. "It's the Peace Corps, but make it AI," explains Engadget: The Peace Corps' latest proposal will recruit STEM graduates or those with professional experience in the artificial intelligence sector and send them to participating host countries.

According to the press release, volunteers will be placed in Peace Corps countries that are part of the American AI Exports Program, which was created last year by an executive order from President Trump as a way to bolster the US' grip on the AI market abroad. Tech Corps members will be tasked with using AI to resolve issues related to agriculture, education, health and economic development. The program will offer its members 12- to 27-month in-person assignments or virtual placements, which will include housing, healthcare, a living stipend and a volunteer service award if the corps member is placed overseas.

"American technology to power prosperity," reads the headline at Tech Corps web site. ("Build the tech nations depend on... See the world. Be the future."

The site says they're recruiting "service-minded technologists to serve in the Peace Corps to help countries around the world harness American AI to enhance opportunity and prosperity for their citizens." (And experienced technology professionals can donate 5-15 hours a week "to mentor and support projects on-the-ground.")
AI

Code.org President Steps Down Citing 'Upending' of CS By AI 15

Long-time Slashdot reader theodp writes: Last July, as Microsoft pledged $4 billion to advance AI education in K-12 schools, Microsoft President Brad Smith told nonprofit Code.org CEO/Founder Hadi Partovi it was time to "switch hats" from coding to AI. He added that "the last 12 years have been about the Hour of Code, but the future involves the Hour of AI." On Friday, Code.org announced leadership changes to make it so.

"I am thrilled to announce that Karim Meghji will be stepping into the role of President & CEO," Partovi wrote on LinkedIn. "Having worked closely with Karim over the last 3.5 years as our CPO, I have complete confidence that he possesses the perfect balance of historical context and 'founder-level' energy to lead us into an AI-centric future."

In a separate LinkedIn post, Code.org co-founder Cameron Wilson explained why he was transitioning to an executive advisor role. "Our community is entering a new chapter as AI changes and upends computer science as a discipline and society at large. Code.org's mission is still the same, however, we are starting a new chapter focused on ensuring students can thrive in the Age of AI. This new chapter will bring new opportunities, new problems to solve, and new communities to engage."

The Code.org leadership changes come just weeks after Code.org confirmed it had laid off about 14% of its staff, explaining it had "made the difficult decision to part ways with 18 colleagues as part of efforts to ensure our long-term sustainability." January also saw Code.org Chief Academic Officer Pat Yongpradit jump to Microsoft, where he now helps "lead Microsoft's global strategy to put people first in an age of AI by shaping education and workforce policy" as a member of Microsoft's Global Education and Workforce Policy team.
AI

India Tells University To Leave AI Summit After Presenting Chinese Robot as Its Own (reuters.com) 11

An anonymous reader shares a report: An Indian university has been asked to vacate its stall at the country's flagship AI summit after a staff member was caught presenting a commercially available robotic dog made in China as its own creation, two government sources said.

"You need to meet Orion. This has been developed by the Centre of Excellence at Galgotias University," Neha Singh, a professor of communications, told state-run broadcaster DD News this week in remarks that have since gone viral.

But social media users quickly identified the robot as the Unitree Go2, sold by China's Unitree Robotics for about $2,800 and widely used in research and education globally. The episode has drawn sharp criticism and has cast an uncomfortable spotlight on India's artificial intelligence ambitions.

Education

Bill Introduced To Replace West Virginia's New CS Course Graduation Requirement With Computer Literacy Proficiency 51

theodp writes: West Virginia lawmakers on Tuesday introduced House Bill 5387 (PDF), which would repeal the state's recently enacted mandatory stand-alone computer science graduation requirement and replace it with a new computer literacy proficiency requirement. Not too surprisingly, the Bill is being opposed by tech-backed nonprofit Code.org, which lobbied for the WV CS graduation requirement (PDF) just last year. Code.org recently pivoted its mission to emphasize the importance of teaching AI education alongside traditional CS, teaming up with tech CEOs and leaders last year to launch a national campaign to mandate CS and AI courses as graduation requirements.

"It would basically turn the standalone computer science course requirement into a computer literacy proficiency requirement that's more focused on digital literacy," lamented Code.org as it discussed the Bill in a Wednesday conference call with members of the Code.org Advocacy Coalition, including reps from Microsoft's Education and Workforce Policy team. "It's mostly motivated by a variety of different issues coming from local superintendents concerned about, you know, teachers thinking that students don't need to learn how to code and other things. So, we are addressing all of those. We are talking with the chair and vice chair of the committee a week from today to try to see if we can nip this in the bud." Concerns were also raised on the call about how widespread the desire for more computing literacy proficiency (over CS) might be, as well as about legislators who are associating AI literacy more with digital literacy than CS.

The proposed move from a narrower CS focus to a broader goal of computer literacy proficiency in WV schools comes just months after the UK's Department for Education announced a similar curriculum pivot to broader digital literacy, abandoning the narrower 'rigorous CS' focus that was adopted more than a decade ago in response to a push by a 'grassroots' coalition that included Google, Microsoft, UK charities, and other organizations.
Science

2 To 3 Cups of Coffee a Day May Reduce Dementia Risk. But Not if It's Decaf. (nytimes.com) 109

If you think your daily doses of espresso or Earl Grey sharpen your mind, you just might be right, new science suggests. The New York Times: A large new study provides evidence of cognitive benefits from coffee and tea -- if it's caffeinated and consumed in moderation: two to three cups of coffee or one to two cups of tea daily.

People who drank that amount for decades had lower chances of developing dementia than people who drank little or no caffeine, the researchers reported. They followed 131,821 participants for up to 43 years. "This is a very large, rigorous study conducted long term among men and women that shows that drinking two or three cups of coffee per day is associated with reduced risk of dementia," said Aladdin Shadyab, an associate professor of public health and medicine at the University of California, San Diego, who wasn't involved in the study.

The findings, published Monday in JAMA, don't prove caffeine causes these beneficial effects, and it's possible other attributes protected caffeine drinkers' brain health. But independent experts said the study adjusted for many other factors, including health conditions, medication, diet, education, socioeconomic status, family history of dementia, body mass index, smoking and mental illness.

AI

OpenAI Starts Running Ads in ChatGPT (openai.com) 70

OpenAI has started testing ads inside ChatGPT for logged-in adult users on the Free and Go subscription tiers in the United States, the company said. The Plus, Pro, Business, Enterprise and Education tiers remain ad-free. Ads are matched to users based on conversation topics, past chats, and prior ad interactions, and appear clearly labeled as "sponsored" and visually separated from ChatGPT's organic responses.

OpenAI says the ads do not influence ChatGPT's answers, and advertisers receive only aggregate performance data like view and click counts rather than access to individual conversations. Users under 18 do not see ads, and ads are excluded from sensitive topics such as health, mental health, and politics. Free-tier users can opt out of ads in exchange for fewer daily messages.

Further reading: Anthropic Pledges To Keep Claude Ad-free, Calls AI Conversations a 'Space To Think'.
The Internet

Dave Farber Dies at Age 91 (seclists.org) 17

The mailing list for the North American Network Operators' Group discusses Internet infrastructure issues like routing, IP address allocation, and containing malicious activity. This morning there was another message: We are heartbroken to report that our colleague — our mentor, friend, and conscience — David J. Farber passed away suddenly at his home in Roppongi, Tokyo. He left us on Saturday, Feb. 7, 2026, at the too-young age of 91...

Dave's career began with his education at Stevens Institute of Technology, which he loved deeply and served as a Trustee. He joined the legendary Bell Labs during its heyday, and worked at the Rand Corporation. Along the way, among countless other activities, he served as Chief Technologist of the U.S. Federal Communications Commission; became a proficient (instrument-rated) pilot; and was an active board member of the Electronic Frontier Foundation, a digital civil-liberties organization.

His professional accomplishments and impact are almost endless, but often captured by one moniker: "grandfather of the Internet," acknowledging the foundational contributions made by his many students at the University of California, Irvine; the University of Delaware; the University of Pennsylvania; and Carnegie Mellon University. In 2018, at the age of 83, Dave moved to Japan to become Distinguished Professor at Keio University and Co-Director of the Keio Cyber Civilization Research Center (CCRC). He loved teaching, and taught his final class on January 22, 2026... Dave thrived in Japan in every way...

It's impossible to summarize a life and career as rich and long as Dave's in our few words here. And each of us, even those who knew him for decades, represent just one facet of his life. But because we are here at its end, we have the sad duty of sharing this news.

Farber once said that "At both Bell Labs and Rand, I had the privilege, at a young age, of working with and learning from giants in our field. Truly I can say (as have others) that I have done good things because I stood on the shoulders of those giants. In particular, I owe much to Dr. Richard Hamming, Paul Baran and George Mealy."
Android

Why Google's Android for PC Launch May Be Messy and Controversial (theverge.com) 53

Google's much-anticipated plan to merge Android and ChromeOS into a single operating system called Aluminium is shaping up to be a drawn-out, complicated transition that could leave existing Chromebook users behind, according to previously unreported court documents in the Google search antitrust case.

The new OS won't be compatible with all existing Chromebook hardware, and Google will be forced to maintain ChromeOS through at least 2033 to honor its 10-year support commitment to current users -- meaning two parallel operating systems running for years.

The timeline itself is messier than Google has let on publicly, the filings suggest. Sameer Samat, Google's head of Android, called the merger "something we're super excited about for next year" last September, but court filings describe the "fastest path" to market as offering Aluminium to "commercial trusted testers" in late 2026 before a full release in 2028.

Enterprise and education customers -- the segments where Chromebooks currently dominate -- are slated for 2028 as well. Columbia computer science professor Jason Nieh, who interviewed Google engineers as a witness in the case, testified that Aluminium requires a heavier software stack and more powerful hardware to run.
Stats

AI Use at Work Has Increased, Gallup Poll Finds (apnews.com) 53

An anonymous reader shared this report from the Associated Press: American workers adopted artificial intelligence into their work lives at a remarkable pace over the past few years, according to a new poll. Some 12% of employed adults say they use AI daily in their job, according to a Gallup Workforce survey conducted this fall of more than 22,000 U.S. workers.

The survey found roughly one-quarter say they use AI at least frequently, which is defined as at least a few times a week, and nearly half say they use it at least a few times a year. That compares with 21% who were using AI at least occasionally in 2023, when Gallup began asking the question, and points to the impact of the widespread commercial boom that ChatGPT sparked for generative AI tools that can write emails and computer code, summarize long documents, create images or help answer questions...

While frequent AI use is on the rise with many employees, AI adoption remains higher among those working in technology-related fields. About 6 in 10 technology workers say they use AI frequently, and about 3 in 10 do so daily. The share of Americans working in the technology sector who say they use AI daily or regularly has grown significantly since 2023, but there are indications that AI adoption could be starting to plateau after an explosive increase between 2024 and 2025...

A separate Gallup Workforce survey from 2025 found that even as AI use is increasing, few employees said it was "very" or "somewhat" likely that new technology, automation, robots or AI will eliminate their job within the next five years. Half said it was "not at all likely," but that has decreased from about 6 in 10 in 2023.

A bar chart lists the sectors most likely to be using AI at their jobs:
  1. Technology (77%)
  2. Finance (64%)
  3. College/University (63%)
  4. Professional Services (62%)
  5. K-12 Education (56%)
  6. Community/Social Services (43%)
  7. Government/Public Policy (42%)
  8. Manufacturing (41%)
  9. Health Care (41%)
  10. Retail (33%)
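As a quick sketch (my own, not from the Gallup report), the chart data above can be dropped into a few lines of Python to render a text bar chart and double-check the ordering:

```python
# Sector-level AI adoption figures from the Gallup Workforce survey,
# as listed in the bar chart above (percent of workers using AI at work).
sectors = [
    ("Technology", 77),
    ("Finance", 64),
    ("College/University", 63),
    ("Professional Services", 62),
    ("K-12 Education", 56),
    ("Community/Social Services", 43),
    ("Government/Public Policy", 42),
    ("Manufacturing", 41),
    ("Health Care", 41),
    ("Retail", 33),
]

# Render a simple text bar chart: one '#' per two percentage points.
for name, pct in sectors:
    print(f"{name:<26} {'#' * (pct // 2)} {pct}%")
```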

AI

The Risks of AI in Schools Outweigh the Benefits, Report Says (npr.org) 33

This month saw results from a yearlong global study of "potential negative risks that generative AI poses to students". The study (by the Brookings Institution's Center for Universal Education) also suggests how to prevent risks and maximize benefits: After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits.
"At the top of Brookings' list of risks is the negative effect AI can have on children's cognitive growth," reports NPR — "how they learn new skills and perceive and solve problems." The report describes a kind of doom loop of AI dependence, where students increasingly off-load their own thinking onto the technology, leading to the kind of cognitive decline or atrophy more commonly associated with aging brains... As one student told the researchers, "It's easy. You don't need to (use) your brain." The report offers a surfeit of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking and even creativity. And this could have enormous consequences if these young people grow into adults without learning to think critically...

Survey responses revealed deep concern that use of AI, particularly chatbots, "is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health," the report says. One of the many problems with kids' overuse of AI is that the technology is inherently sycophantic — it has been designed to reinforce users' beliefs... Winthrop offers an example of a child interacting with a chatbot, "complaining about your parents and saying, 'They want me to wash the dishes — this is so annoying. I hate my parents.' The chatbot will likely say, 'You're right. You're misunderstood. I'm so sorry. I understand you.' Versus a friend who would say, 'Dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal.' That right there is the problem."

AI did have some advantages, the article points out: The report says another benefit of AI is that it allows teachers to automate some tasks: "generating parent emails ... translating materials, creating worksheets, rubrics, quizzes, and lesson plans" — and more. The report cites multiple research studies that found important time-saving benefits for teachers, including one U.S. study that found that teachers who use AI save an average of nearly six hours a week and about six weeks over the course of a full school year...

AI can also help make classrooms more accessible for students with a wide range of learning disabilities, including dyslexia. But "AI can massively increase existing divides" too, [warns Rebecca Winthrop, one of the report's authors and a senior fellow at Brookings]. That's because the free AI tools that are most accessible to students and schools can also be the least reliable and least factually accurate... "[T]his is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."

The report calls for more research and makes several recommendations (including "holistic" learning and "AI tools that teach, not tell"). But this may be its most important: "Provide a clear vision for ethical AI use that centers human agency..."

"We find that AI has the potential to benefit or hinder students, depending on how it is used."
AI

South Korea Launches Landmark Laws To Regulate AI 7

An anonymous reader quotes a report from the Korea Herald: South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to formally establish safety requirements for high-performance, or so-called frontier, AI systems -- a move that sets the country apart in the global regulatory landscape. According to the Ministry of Science and ICT, the new law is designed primarily to foster growth in the domestic AI sector, while also introducing baseline safeguards to address potential risks posed by increasingly powerful AI technologies. Officials described the inclusion of legal safety obligations for frontier AI as a world-first legislative step.

The act lays the groundwork for a national-level AI policy framework. It establishes a central decision-making body -- the Presidential Council on National Artificial Intelligence Strategy -- and creates a legal foundation for an AI Safety Institute that will oversee safety and trust-related assessments. The law also outlines wide-ranging support measures, including research and development, data infrastructure, talent training, startup assistance, and help with overseas expansion.

To reduce the initial burden on businesses, the government plans to implement a grace period of at least one year. During this time, it will not carry out fact-finding investigations or impose administrative sanctions. Instead, the focus will be on consultations and education. A dedicated AI Act support desk will help companies determine whether their systems fall within the law's scope and how to respond accordingly. Officials noted that the grace period may be extended depending on how international standards and market conditions evolve. The law applies only to three areas: high-impact AI, safety obligations for high-performance AI, and transparency requirements for generative AI.

Enforcement under the Korean law is intentionally light. It does not impose criminal penalties. Instead, it prioritizes corrective orders for noncompliance, with fines -- capped at 30 million won ($20,300) -- issued only if those orders are ignored. This, the government says, reflects a compliance-oriented approach rather than a punitive one. Transparency obligations for generative AI largely align with those in the EU, but Korea applies them more narrowly. Content that could be mistaken for real, such as deepfake images, video or audio, must clearly disclose its AI-generated origin. For other types of AI-generated content, invisible labeling via metadata is allowed. Personal or noncommercial use of generative AI is excluded from regulation.
"This is not about boasting that we are the first in the world," said Kim Kyeong-man, deputy minister of the office of artificial intelligence policy at the ICT ministry. "We're approaching this from the most basic level of global consensus."

Korea's approach differs from the EU by defining "high-performance AI" using technical thresholds like cumulative training compute, rather than regulating based on how AI is used. As a result, Korea believes no current models meet the bar for regulation, while the EU is phasing in broader, use-based AI rules over several years.
Education

Google Begins Offering Free SAT Practice Tests Powered By Gemini (arstechnica.com) 14

An anonymous reader quotes a report from Ars Technica: It's no secret that students worldwide use AI chatbots to do their homework and avoid learning things. On the flip side, students can also use AI as a tool to beef up their knowledge and plan for the future with flashcards or study guides. Google hopes its latest Gemini feature will help with the latter. The company has announced that Gemini can now create free SAT practice tests and coach students to help them get higher scores. As a standardized test, the content of the SAT follows a predictable pattern. So there's no need to use a lengthy, personalized prompt to get Gemini going. Just say something like, "I want to take a practice SAT test," and the chatbot will generate one complete with clickable buttons, graphs, and score analysis.

Of course, generative AI can go off the rails and provide incorrect information, which is a problem when you're trying to learn things. However, Google says it has worked with education firms like The Princeton Review to ensure the AI-generated tests resemble what students will see in the real deal. The interface for Gemini's practice tests includes scoring and the ability to review previous answers. If you are unclear on why a particular answer is right or wrong, the questions have an "Explain answer" button right at the bottom. After you finish the practice exam, the custom interface (which looks a bit like Gemini's Canvas coding tool) can help you follow up on areas that need improvement.
Google says support for the SAT is just the start, "with more tests coming in the future."
AI

Palantir CEO Says AI To Make Large-Scale Immigration Obsolete (mercurynews.com) 203

AI will displace so many jobs that it will eliminate the need for mass immigration, according to Palantir CEO Alex Karp. Bloomberg: "There will be more than enough jobs for the citizens of your nation, especially those with vocational training," said Karp, speaking at a World Economic Forum panel in Davos, Switzerland on Tuesday. "I do think these trends really do make it hard to imagine why we should have large-scale immigration unless you have a very specialized skill."

Karp, who holds a PhD in philosophy, used himself as an example of the type of "elite" white-collar worker most at risk of disruption. Vocational workers will be more valuable "if not irreplaceable," he said, criticizing the idea that higher education is the ultimate benchmark of a person's talents and employability.
