Submission + - Code.org Unveils Activities for Inaugural Hour of AI

theodp writes: Twelve years after it unveiled activities for the inaugural Hour of Code in 2013, tech-backed nonprofit Code.org has unveiled activities for next month's inaugural Hour of AI. From the press release, Hour of AI Unveils 100+ Free Activities to Help Demystify AI for Educators, Families, and Kids:

Today, Code.org and CSforALL unveiled the activity catalog for the first annual Hour of AI, which takes place during Computer Science Education Week (December 8–14, 2025). More than 50 leading tech companies, nonprofits, and foundations are contributing to a suite of activities that will help learners around the world explore the power and possibilities of AI through creativity, play, and problem-solving.

"The next generation can't afford to be passive users of AI – they must be active shapers of it," said Hadi Partovi, CEO and co-founder of Code.org. "The Hour of AI and its roster of incredible partners are empowering students to explore, create, and take ownership of the technology that is shaping their future."

Building on more than a decade of global excitement around the Hour of Code, the Hour of AI marks a new chapter that helps students move from consuming AI to creating with it. With engaging activities from partners like Google, [Microsoft-owned] Minecraft Education, LEGO Education, Scratch Foundation, and Khan Academy, students will have the opportunity to see how AI and computer science work hand-in-hand to fuel imagination, innovation, and impact.

Submission + - New, more stable qubits could simplify dreamed-of quantum computers (science.org)

sciencehabit writes: The long road to building a fully functioning quantum computer may have shortened thanks to a new version of a gizmo called a superconducting qubit. The new qubit can maintain its delicate quantum states for more than 1 millisecond, three times the previous best for such a device. Reported in Nature, the result suggests a full-fledged quantum computer may need far fewer qubits than previously thought. Most important, the advance was made not by redesigning the qubit, but by improving the materials from which it was fashioned.

“This is great for the field and I’m glad that they published enough data that we really know how [the qubit] is working,” says John Martinis, a physicist at the University of California, Santa Barbara, who in October shared the Nobel Prize in Physics for demonstrating quantum effects in electrical circuits. “To me, that’s the best part.”

Submission + - UK Secondary Schools Pivoting from Narrowly Focused CS Curriculum to AI Literacy

theodp writes: The UK Department for Education is "replacing its narrowly focused computer science GCSE with a broader, future-facing computing GCSE [General Certificate of Secondary Education] and exploring a new qualification in data science and AI for 16–18-year-olds." The move aims to correct the unintended consequences of a shift made more than a decade ago, when the existing ICT (Information and Communications Technology) curriculum, which focused on basic digital skills, was replaced by a more rigorous Computer Science curriculum at the behest of major tech firms and advocacy groups seeking to address concerns about the UK’s programming talent pipeline.

The UK pivot from rigorous CS to AI literacy comes as tech-backed nonprofit Code.org leads a similar shift in the U.S., pivoting from its original 2013 mission calling for rigorous CS for U.S. K-12 students to a new mission that embraces AI literacy. Code.org next month will replace its flagship Hour of Code event with a new Hour of AI "designed to bring AI education into the mainstream" with the support of its partners, including Microsoft, Google, and Amazon. Code.org has pledged to engage 25 million learners with the new Hour of AI this school year.

Comment Integrity Staffing Solutions, Inc. v. Busk (Score 3, Interesting) 181

Don't count on help from the Supreme Court on this. In Integrity Staffing Solutions, Inc. v. Busk, 574 U.S. 27 (2014), the Court unanimously ruled that time spent by workers waiting to undergo anti-employee-theft security screenings is not "integral and indispensable" to their work, and thus not compensable under the Fair Labor Standards Act.
 
Jesse Busk was among several workers employed by the temp agency Integrity Staffing Solutions to work in Amazon.com's warehouse in Nevada to help package and fulfill orders. At the end of each day, they had to spend about 25 minutes waiting to undergo anti-theft security checks before leaving. Busk and his fellow workers sued their employer, claiming they were entitled to be paid for those 25 minutes under the Fair Labor Standards Act. They argued that the time waiting could have been reduced if more screeners were added, or shifts were staggered so workers did not have to wait for the checks at the same time. Furthermore, since the checks were made to prevent employee theft, they only benefited the employers and the customers, not the employees themselves.

Submission + - UK Replacing Narrowly Focused CS GCSE in Pivot to AI Literacy for Schoolkids

theodp writes: The UK Department for Education announced this week that it is "replacing the narrowly focused computer science GCSE with a broader, future-facing computing GCSE [General Certificate of Secondary Education] and exploring a new qualification in data science and AI for 16–18-year-olds." The move aims to correct the unintended consequences of a shift made more than a decade ago, when the existing ICT (Information and Communications Technology) curriculum, which focused on basic digital skills, gave way to a more rigorous Computer Science curriculum. That overhaul came at the behest of major tech firms and advocacy groups like Google, Microsoft, and the British Computer Society, who pushed for it to address concerns about the UK’s programming talent pipeline (a similar U.S. talent pipeline crisis was declared around the same time).

From the Government Response to the Curriculum and Assessment Review: "We will rebalance the computing curriculum as the Review suggests, to ensure pupils develop essential digital literacy whilst retaining important computer science content. Through the reformed curriculum, pupils will know from a young age how computers can be trained using data and they will learn essential digital skills such as AI literacy."

The UK pivot from rigorous CS to AI literacy comes as tech-backed nonprofit Code.org orchestrates a similar move in the U.S., pivoting from its original 2013 mission calling for rigorous CS for U.S. K-12 students to a new mission that embraces AI literacy. Code.org next month will replace its flagship Hour of Code event with a new Hour of AI "designed to bring AI education into the mainstream," supported by AI giants and Code.org donors Microsoft, Google, and Amazon. In September, at a White House AI Education Task Force meeting led by First Lady Melania Trump and attended by U.S. Secretary of Education Linda McMahon and Google CEO Sundar Pichai (OpenAI CEO Sam Altman was spotted in the audience), Code.org pledged to engage 25 million learners in the new Hour of AI this school year, build AI pathways in 25 states, and launch a free high school AI course for 400,000 students by 2028.

Submission + - The Largest Theft In Human History?

theodp writes: In OpenAI Moves To Complete Potentially The Largest Theft In Human History, Zvi Mowshowitz opines on the 'recapitalization' of OpenAI. Mowshowitz writes:

"OpenAI is now set to become a Public Benefit Corporation, with its investors entitled to uncapped profit shares. Its nonprofit foundation will retain some measure of control and a 26% financial stake [valued at approximately $130 billion], in sharp contrast to its previous stronger control and much, much larger effective financial stake. The value transfer is in the hundreds of billions, thus potentially the largest theft in human history. [...] I am in no way surprised by OpenAI moving forward on this, but I am deeply disgusted and disappointed they are being allowed (for now) to do so."

"Many media and public sources are calling this a win for the nonprofit. [...] This is mostly them being fooled. They’re anchoring on OpenAI’s previous plan to far more fully sideline the nonprofit. This is indeed a big win for the nonprofit compared to OpenAI’s previous plan. But the previous plan would have been a complete disaster, an all but total expropriation. It’s as if a mugger demanded all your money, you talked them down to giving up half your money, and you called that exchange a ‘change that recapitalized you.’"

Mowshowitz also points to an OpenAI announcement, The Next Chapter of the Microsoft–OpenAI Partnership, which describes how Microsoft will fare from the deal: "Microsoft holds an investment in OpenAI Group PBC valued at approximately $135 billion, representing roughly 27 percent on an as-converted diluted basis, inclusive of all owners—employees, investors, and the OpenAI Foundation."
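As a quick sanity check on the quoted figures (a minimal sketch using only the percentages and valuations from the two announcements, not any official accounting), both the Foundation's ~26%/$130 billion stake and Microsoft's ~27%/$135 billion stake imply the same total valuation for OpenAI Group PBC:

```python
# Back-of-the-envelope check: both quoted stakes imply the same
# total valuation for OpenAI Group PBC.
foundation_stake, foundation_value = 0.26, 130e9  # ~26%, ~$130B
microsoft_stake, microsoft_value = 0.27, 135e9    # ~27%, ~$135B

print(f"Implied by Foundation stake: ${foundation_value / foundation_stake / 1e9:.0f}B")
print(f"Implied by Microsoft stake:  ${microsoft_value / microsoft_stake / 1e9:.0f}B")
# Both print $500B, consistent with OpenAI's widely reported valuation.
```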

Submission + - Alien worlds may be able to make their own water (science.org)

sciencehabit writes: From enabling life as we know it to greasing the geological machinery of plate tectonics, water can have a huge influence on a planet’s behavior. But how do planets get their water? An infant world might be bombarded by icy comets and waterlogged asteroids, for instance, or it could form far enough from its host star that water can precipitate as ice. However, certain exoplanets pose a puzzle to astronomers: alien worlds that closely orbit their scorching home stars yet somehow appear to hold significant amounts of water.

A new series of laboratory experiments, published today in Nature, has revealed a deceptively straightforward solution to this enigma: These planets make their own water. Using diamond anvils and pulsed lasers, researchers managed to re-create the intense temperatures and pressures present at the boundary between these planets’ hydrogen atmospheres and molten rocky cores. Water emerged as the minerals cooked within the hydrogen soup.

Because this kind of geologic cauldron could theoretically boil and bubble for billions of years, the mechanism could even give hellishly hot planets bodies of water—implying that ocean worlds, and the potentially habitable ones among them, may be more common than scientists previously thought. “They can basically be their own water engines,” says Quentin Williams, an experimental geochemist at the University of California, Santa Cruz, who was not involved with the new work.

Submission + - AI hallucinates because it's trained to fake answers it doesn't know (science.org)

sciencehabit writes: Earlier today, OpenAI completed a controversial restructuring of its for-profit arm into a public benefit corporation: the latest gust in a whirlwind that has swept up hundreds of billions of dollars of global investment for artificial intelligence (AI) tools.

But even as the AI company—founded as a nonprofit, now valued at $500 billion—completes its long-awaited restructuring, a nagging issue with its core offering remains unresolved: hallucinations. Large language models (LLMs) such as those that underpin OpenAI’s popular ChatGPT platform are prone to confidently spouting factually incorrect statements. These blips are often attributed to bad input data, but in a preprint posted last month, a team from OpenAI and the Georgia Institute of Technology proves that even with flawless training data, LLMs can never be all-knowing—in part because some questions are just inherently unanswerable.

However, that doesn’t mean hallucinations are inevitable. Models could just say three magic words: I don’t know. So why don’t they?

The root problem, the researchers say, may lie in how LLMs are trained. They learn to bluff because their performance is ranked using standardized benchmarks that reward confident guesses and penalize honest uncertainty. In response, the team calls for an overhaul of benchmarking so that accuracy and self-awareness count as much as confidence.
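The incentive is easy to see with a toy expected-score calculation (a minimal sketch of the argument, not code from the preprint; the probabilities and penalty values are illustrative). Under the binary grading most benchmarks use, guessing strictly dominates abstaining; once wrong answers carry a penalty, admitting uncertainty can win:

```python
# Toy expected-score calculation illustrating why binary-graded
# benchmarks train models to bluff. Illustrative numbers only.

def expected_scores(p_correct: float, wrong_penalty: float) -> dict:
    """Expected score for 'guess' vs. 'abstain' when the model thinks
    its best guess is right with probability p_correct."""
    guess = p_correct * 1.0 - (1 - p_correct) * wrong_penalty
    abstain = 0.0  # "I don't know" earns nothing either way
    return {"guess": guess, "abstain": abstain}

# Standard benchmark: wrong answers cost nothing.
print(expected_scores(p_correct=0.2, wrong_penalty=0.0))
# guess = 0.2 beats abstain = 0.0, so guessing always wins and models bluff

# Uncertainty-aware grading: wrong answers cost 1 point.
print(expected_scores(p_correct=0.2, wrong_penalty=1.0))
# guess drops to about -0.6, so "I don't know" now scores higher
```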

Although some experts find the preprint technically compelling, reactions to its suggested remedy vary. Some even question how far OpenAI will go in taking its own medicine to train its models to prioritize truthfulness over engagement. The awkward reality may be that if ChatGPT admitted “I don’t know” too often, then users would simply seek answers elsewhere. That could be a serious problem for a company that is still trying to grow its user base and achieve profitability. “Fixing hallucinations would kill the product,” says Wei Xing, an AI researcher at the University of Sheffield.

Submission + - Code.org Vows to Shape Policy to Prep Kids for AI as CS Shifts Away from Coding

theodp writes: "This year marks a pivotal moment, for Code.org and for the future of education," explains tech-backed nonprofit Code.org's just released 2024-25 Impact Report. "AI is reshaping every aspect of our world, yet most students still lack the opportunity to learn how it works, manage it, or shape its future. For over a decade, Code.org has expanded access to computer science education worldwide, serving as a trusted partner for policymakers, educators, and advocates. Now, as the focus of computer science shifts from coding to AI, we are evolving to prepare every student for an AI-powered world. [...] As this year’s impact shows, Code.org is driving change at every level — from classrooms to statehouses to ministries of education worldwide. [...] When we first launched Hour of Code in 2013, it changed how the world saw computer science. Today, AI is transforming the future of work across every field, yet most classrooms aren’t ready to teach students AI literacy. [...] That’s why, in 2025, the Hour of Code is becoming the Hour of AI, a bold, global event designed to move learners from AI consumers to confident, creative problem-solvers. [...] Our ambitious goal for the 2025-26 school year: Engage 25 million learners, mobilize 100,000 educators, and partner with 1,000 U.S. districts. The Hour of AI is only the beginning. In the year ahead, we will continue building tools, shaping policy, and inspiring movements to ensure every student, everywhere, has the opportunity to not just use AI, but to understand it, shape it, and lead with it."

Interestingly, Code.org's pivot from coding to AI literacy comes as former R.I. Governor and past U.S. Secretary of Commerce Gina Raimondo (an early member of Code.org's Governors for CS partnership who was all in on K-12 CS in 2016) suggested the Computer Science for All initiative might have been a dud. “For a long time, everyone said, ‘let’s make everybody a coder,’” Raimondo said at a Harvard Institute of Politics forum. “We’re going to predict this is where the skills are going to be. Everyone should be a software coder. I don’t know, it doesn’t look necessarily like a super idea right now with AI.”

As it pivots from coding to AI with the blessing of its tech donors, the Code.org Impact Report notes the nonprofit spent a staggering $276.8 million on its K-12 CS efforts from 2013-2025, including $41M for Diversity and Global Marketing, $69.9M for Curriculum + Learning Platform, $122.8M for Partnership + Professional Learning, $25M for Government Affairs, and $18.1M for Global Curriculum (the nonprofit reported assets of $75M in an Aug. 2024 IRS filing).
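For what it's worth, the category figures do sum to the headline total, a trivial check using only the numbers quoted from the report:

```python
# Spending categories quoted from the Code.org Impact Report
# (amounts in millions of dollars); they sum to the $276.8M headline.
spend = {
    "Diversity and Global Marketing": 41.0,
    "Curriculum + Learning Platform": 69.9,
    "Partnership + Professional Learning": 122.8,
    "Government Affairs": 25.0,
    "Global Curriculum": 18.1,
}
print(f"Total: ${sum(spend.values()):.1f}M")  # Total: $276.8M
```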

Submission + - Analytics Platform Databricks Joins Amazon, Microsoft in AI Demo Hall of Shame

theodp writes: If there were an AI Demo Hall of Shame, the first inductee would have to be Amazon. To support its CEO's claims that Amazon Q Code Transformation AI saved the company 4,500 developer-years and an additional $260 million in 'annualized efficiency gains' by automatically and accurately upgrading code to a more current version of Java, Amazon showcased a demo program that didn't even spell 'Java' correctly (it was instead called 'Jave'). Also worthy of a spot is Microsoft, whose AI demo of a Copilot-driven Excel school exam analysis for educators reassured a teacher they needn't be concerned about a student who received a 27% test score, autogenerating a chart to back up its claim.

Today's nominee for the AI Demo Hall of Shame is analytics platform Databricks, for the NYC Taxi Trips Analysis it's been showcasing on its Data Science page since last November. It earns the nod not only for its choice of a completely trivial case study that requires no 'Data Science' skills (find and display the ten most expensive and longest taxi rides), but also for the horrible AI-generated bar chart used to present the results of that simple ranking, which deserves its own spot in the Graph Hall of Shame. In response to a prompt of "Now create a new bar chart with matplotlib for the most expensive trips," the Databricks AI Assistant dutifully complies with the ill-advised request, spewing out Python code that displays the ten rides on a nonsensical bar chart whose continuous x-axis hides points sharing the same distance. One might also question why no annotation is provided to call out or explain the three trips with a distance of 0 miles that are among the ten most expensive rides, with fares of $260, $188, and $105.
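To see why the chart fails, here's a minimal matplotlib sketch contrasting the two approaches (the $260, $188, and $105 zero-mile fares come from the demo; the remaining values are hypothetical stand-ins, since the actual taxi data isn't reproduced here):

```python
import matplotlib.pyplot as plt

# Ten most expensive trips: fares in dollars, distances in miles.
# The three zero-mile fares are from the Databricks demo; the rest
# are hypothetical placeholders for illustration.
fares = [260.0, 188.0, 105.0, 98.0, 95.0, 92.0, 90.0, 88.0, 85.0, 82.0]
miles = [0.0, 0.0, 0.0, 18.2, 21.5, 18.2, 0.5, 19.9, 21.5, 17.0]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))

# What the demo's chart does: bars placed at raw distance values on a
# continuous x-axis. Trips sharing a distance (e.g., the three 0-mile
# rides) land on the same x position, so their bars overlap and the
# chart can't show how many trips are there or what each one cost.
ax1.bar(miles, fares, width=0.4)
ax1.set_xlabel("trip distance (miles)")
ax1.set_ylabel("fare ($)")
ax1.set_title("Continuous x-axis: coincident trips hide each other")

# A sensible alternative for a top-10 ranking: a categorical axis with
# one bar per trip, so every fare stays visible regardless of distance.
ranks = [f"#{i + 1}" for i in range(len(fares))]
ax2.bar(ranks, fares)
ax2.set_xlabel("trip (ranked by fare)")
ax2.set_ylabel("fare ($)")
ax2.set_title("Categorical x-axis: all ten trips visible")

plt.tight_layout()
plt.show()
```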

Looked at with a critical eye, all three of these examples, used by Amazon (market cap $2.32 trillion), Microsoft (market cap $3.87 trillion), and Databricks (valuation $100+ billion) to sell data scientists, educators, management, investors, and Wall Street on AI, would likely raise eyebrows rather than impress their intended audiences. So, is AI fever so great that AI sells itself and companies needn't even bother reviewing their demos to see if they make sense?

Submission + - Former R.I. Governor Raimondo is Rethinking Coding Education Push in AI Era

theodp writes: As Governor of Rhode Island, the Boston Globe reports, Gina Raimondo made a relentless push to expand computer science in K-12 education, part of an effort to train more students to code. But during a forum at the Harvard Institute of Politics this week, the former R.I. Governor and past U.S. Secretary of Commerce suggested the Computer Science for All initiative might have been a dud (YouTube).

“For a long time, everyone said, ‘let’s make everybody a coder,’” Raimondo said. “We’re going to predict this is where the skills are going to be. Everyone should be a software coder. I don’t know, it doesn’t look necessarily like a super idea right now with AI.”

Raimondo was responding to a question about investing in research and development versus the government picking specific companies to invest in, the Globe notes. She was critical of President Trump’s strategy of having the United States take a stake in companies, although she defended the Biden administration’s handling of subsidies through the CHIPS and Science Act. “You could pick 100 different examples,” Raimondo said. “The government gets it wrong a lot.” Raimondo launched the computer science initiative as governor in 2016 to ensure that it was part of every student’s experience in Rhode Island. It was a trendy – and widely praised – strategy at the time.

Submission + - Tech Workers Versus Enshittification

theodp writes: Writing for the Communications of the ACM, Cory Doctorow makes the case for unionization in Tech Workers Versus Enshittification:

"Now that tech workers are as disposable as Amazon warehouse workers and drivers, as disposable as the factory workers in iPhone City, it’s only a matter of time until the job conditions are harmonized downward. Jeff Bezos doesn’t force his delivery drivers to relieve themselves in bottles because he hates delivery drivers. Jeff Bezos doesn’t allow his coders to use a restroom whenever they need to because he loves hackers. The factor that determines how Jeff Bezos treats workers is 'What is the worst treatment those workers can be forced to accept?'"

"Throughout the entire history of human civilization, there has only ever been one way to guarantee fair wages and decent conditions for workers: unions. Even non-union workers benefit from unions, because strong unions are the force that causes labor protection laws to be passed, which protect all workers. [...] Now is the time to get organized. Your boss has made it clear how you’d be treated if they had their way. They’re about to get it. Walking a picket line is a slog, to be sure, but picket lines beat piss bottles, hands down."

Submission + - Code.org Spent $276M to Get Kids Coding, Now It Wants to Get Them Using AI

theodp writes: "This year marks a pivotal moment, for Code.org and for the future of education," writes Code.org co-founder Hadi Partovi in his Letter From the CEO, explaining the tech-backed nonprofit's pivot to support a shift in focus from coding to AI. "AI is reshaping every aspect of our world, yet most students still lack the opportunity to learn how it works, manage it, or shape its future. For over a decade, Code.org has expanded access to computer science education worldwide, serving as a trusted partner for policymakers, educators, and advocates. Now, as the focus of computer science shifts from coding to AI, we are evolving to prepare every student for an AI-powered world. [...] In the year ahead, we’ll ignite the first-ever Hour of AI ["One moment. One world. Millions of futures to shape."] to engage more than 25 million learners, scale age-appropriate AI curriculum and tools to help put these skills within reach of every student and teacher, and continue shaping the global conversation through our leadership of AI education policy."

The letter introduces the newly released Code.org 2024-25 Impact Report, which reveals that sparking "global movements and grassroots campaigns" doesn't come cheap. A table that "shows the total cost breakdown of our headline achievements since founding" puts a staggering $276.8 million price tag on its efforts to date [2013-2025], including $41M for Diversity and Global Marketing, $69.9M for Curriculum + Learning Platform, $122.8M for Partnership + Professional Learning, $25M for Government Affairs, and $18.1M for Global Curriculum (a Code.org IRS filing reported assets of $75 million as of Aug. 2024). The report calls out Amazon, Google, Microsoft, the Ballmer Group, Kenneth C. Griffin, and an Anonymous donor for their "generous commitments" of $3+ million each to Code.org in 2024-2025. On its website, publicly supported charity Code.org credits six "Lifetime Supporters" for providing a minimum of $100 million combined: Amazon ($30M+, with AWS giving another $5M+), Microsoft ($30M+), Google ($10M+), Facebook ($10M+), Ballmer Group ($10M+), and Infosys ($10M+). Microsoft, whose President Brad Smith has been helping Code.org promote its AI pivot, is also the lead "AI Education Champion" sponsor of the new Hour of AI.

Submission + - "If Oberlin Won't Stand Up Against AI, Who Will?"

theodp writes: Writing in The Oberlin Review, Oberlin College student Kate Martin asks, "If Oberlin Won’t Stand Up Against AI, Who Will?" Martin begins: "As generative AI infiltrates our academic spaces more and more, liberal arts schools face a particularly troubling threat. Other types of institutions may be more focused on career preparation and, consequently, accept the experience of education as a means to an end. In that case, generative AI programs may be a welcome addition to the processes behind our academic products, so long as they streamline that process. But liberal arts schools are aiming at a loftier goal — one of thinking for its own sake, of growing our minds holistically, and situating our academic pursuits among a wider cultural conversation."

"As a student who quite literally signed up to follow this model of education, I found President Carmen Twillie Ambar’s statement about Oberlin’s emergent Year of AI Exploration deeply worrying. From its first line asking us to type a prompt into ChatGPT to discern its own greatness, it reads like a sales pitch. It frames AI as something omnipotent and inevitable: an emblem of innovation so juicy we need to overhaul all operations and reallocate funds just to step into its world of boundless potential. Let’s acknowledge the reality of the situation: Kids are no longer learning how to write, the planet is being sucked dry, and our collective value system about the very essence of creativity is buckling beneath the weight of the machine."

"Say we all learn to use AI 'responsibly.' What would that mean? When our entire job as students is to learn how to think, where would be a good spot to introduce an entity that is designed to think for us? The life cycle of a written product, from its onset as a spark in our minds to its final form as words on a page, is necessarily full of awkward stages. We push and pull at our ideas, wrestling with them through outlines and rough drafts, before they finally settle into a coherent shape. Well-meaning AI optimists see programs like ChatGPT as friendly companions that can smooth over the wrinkles in our path to well-packaged creative realization, without understanding that turbulence is precisely where our ideas and intellects thrive. Creativity is not throwing an idea into a void and watching it pop back out in neat, aesthetic form — it is a slow, embodied, iterative process that needs all parts of itself to function. Despite AI becoming more and more popular, Oberlin students, and, more generally, progressive young people in academia, are notably silent even as mental alarms are sounding in our heads."
