Submission + - Code.org Unveils Activities for Inaugural Hour of AI

theodp writes: Twelve years after it unveiled activities for the inaugural Hour of Code in 2013, tech-backed nonprofit Code.org has unveiled activities for next month's inaugural Hour of AI. From the press release, Hour of AI Unveils 100+ Free Activities to Help Demystify AI for Educators, Families, and Kids:

Today, Code.org and CSforALL unveiled the activity catalog for the first annual Hour of AI, which takes place during Computer Science Education Week (December 8–14, 2025). More than 50 leading tech companies, nonprofits, and foundations are contributing to a suite of activities that will help learners around the world explore the power and possibilities of AI through creativity, play, and problem-solving.

"The next generation can't afford to be passive users of AI – they must be active shapers of it," said Hadi Partovi, CEO and co-founder of Code.org. "The Hour of AI and its roster of incredible partners are empowering students to explore, create, and take ownership of the technology that is shaping their future."

Building on more than a decade of global excitement around the Hour of Code, the Hour of AI marks a new chapter that helps students move from consuming AI to creating with it. With engaging activities from partners like Google, [Microsoft-owned] Minecraft Education, LEGO Education, Scratch Foundation, and Khan Academy, students will have the opportunity to see how AI and computer science work hand-in-hand to fuel imagination, innovation, and impact.

Submission + - UK Secondary Schools Pivoting from Narrowly Focused CS Curriculum to AI Literacy

theodp writes: The UK Department for Education is "replacing its narrowly focused computer science GCSE with a broader, future-facing computing GCSE [General Certificate of Secondary Education] and exploring a new qualification in data science and AI for 16–18-year-olds." The move aims to correct the unintended consequences of a shift made more than a decade ago, when the ICT (Information and Communications Technology) curriculum, which focused on basic digital skills, was replaced with a more rigorous Computer Science curriculum at the behest of major tech firms and advocacy groups seeking to address concerns about the UK’s programming talent pipeline.

The UK pivot from rigorous CS to AI literacy comes as tech-backed nonprofit Code.org leads a similar shift in the U.S., pivoting from its original 2013 mission calling for rigorous CS for U.S. K-12 students to a new mission that embraces AI literacy. Next month, Code.org will replace its flagship Hour of Code event with a new Hour of AI "designed to bring AI education into the mainstream," supported by partners including Microsoft, Google, and Amazon. Code.org has pledged to engage 25 million learners with the new Hour of AI this school year.

Comment Integrity Staffing Solutions, Inc. v. Busk (Score 3, Interesting) 181

Don't count on help from the Supreme Court on this. Integrity Staffing Solutions, Inc. v. Busk, 574 U.S. 27 (2014), was a unanimous United States Supreme Court decision holding that time spent by workers waiting to undergo anti-theft security screenings is not "integral and indispensable" to their work, and thus not compensable under the Fair Labor Standards Act.
 
Jesse Busk was among several workers employed by the temp agency Integrity Staffing Solutions to work in Amazon.com's warehouse in Nevada to help package and fulfill orders. At the end of each day, they had to spend about 25 minutes waiting to undergo anti-theft security checks before leaving. Busk and his fellow workers sued their employer, claiming they were entitled to be paid for those 25 minutes under the Fair Labor Standards Act. They argued that the time waiting could have been reduced if more screeners were added, or shifts were staggered so workers did not have to wait for the checks at the same time. Furthermore, since the checks were made to prevent employee theft, they only benefited the employers and the customers, not the employees themselves.

Submission + - UK Replacing Narrowly Focused CS GCSE in Pivot to AI Literacy for Schoolkids

theodp writes: The UK Department for Education announced this week that it is "replacing the narrowly focused computer science GCSE with a broader, future-facing computing GCSE [General Certificate of Secondary Education] and exploring a new qualification in data science and AI for 16–18-year-olds." The move aims to correct the unintended consequences of a shift made more than a decade ago, when the ICT (Information and Communications Technology) curriculum, which focused on basic digital skills, was replaced with a more rigorous Computer Science curriculum at the behest of major tech firms and advocacy groups like Google, Microsoft, and the British Computer Society, who pushed for a curriculum overhaul to address concerns about the UK’s programming talent pipeline (a similar U.S. talent pipeline crisis was declared around the same time).

From the Government Response to the Curriculum and Assessment Review: "We will rebalance the computing curriculum as the Review suggests, to ensure pupils develop essential digital literacy whilst retaining important computer science content. Through the reformed curriculum, pupils will know from a young age how computers can be trained using data and they will learn essential digital skills such as AI literacy."

The UK pivot from rigorous CS to AI literacy comes as tech-backed nonprofit Code.org orchestrates a similar move in the U.S., pivoting from its original 2013 mission calling for rigorous CS for U.S. K-12 students to a new mission that embraces AI literacy. Next month, Code.org will replace its flagship Hour of Code event with a new Hour of AI "designed to bring AI education into the mainstream," supported by AI giants and Code.org donors Microsoft, Google, and Amazon. In September, at a White House AI Education Task Force meeting led by First Lady Melania Trump and attended by U.S. Secretary of Education Linda McMahon and Google CEO Sundar Pichai (OpenAI CEO Sam Altman was spotted in the audience), Code.org pledged to engage 25 million learners in the new Hour of AI this school year, build AI pathways in 25 states, and launch a free high school AI course for 400,000 students by 2028.

Submission + - The Largest Theft In Human History?

theodp writes: In OpenAI Moves To Complete Potentially The Largest Theft In Human History, Zvi Mowshowitz opines on the 'recapitalization' of OpenAI. Mowshowitz writes:

"OpenAI is now set to become a Public Benefit Corporation, with its investors entitled to uncapped profit shares. Its nonprofit foundation will retain some measure of control and a 26% financial stake [valued at approximately $130 billion], in sharp contrast to its previous stronger control and much, much larger effective financial stake. The value transfer is in the hundreds of billions, thus potentially the largest theft in human history. [...] I am in no way surprised by OpenAI moving forward on this, but I am deeply disgusted and disappointed they are being allowed (for now) to do so."

"Many media and public sources are calling this a win for the nonprofit. [...] This is mostly them being fooled. They’re anchoring on OpenAI’s previous plan to far more fully sideline the nonprofit. This is indeed a big win for the nonprofit compared to OpenAI’s previous plan. But the previous plan would have been a complete disaster, an all but total expropriation. It’s as if a mugger demanded all your money, you talked them down to giving up half your money, and you called that exchange a ‘change that recapitalized you.’"

Mowshowitz also points to an OpenAI announcement, The Next Chapter of the Microsoft–OpenAI Partnership, which describes how Microsoft will fare from the deal: "Microsoft holds an investment in OpenAI Group PBC valued at approximately $135 billion, representing roughly 27 percent on an as-converted diluted basis, inclusive of all owners—employees, investors, and the OpenAI Foundation."

Submission + - Code.org Vows to Shape Policy to Prep Kids for AI as CS Shifts Away from Coding

theodp writes: "This year marks a pivotal moment, for Code.org and for the future of education," explains tech-backed nonprofit Code.org's just released 2024-25 Impact Report. "AI is reshaping every aspect of our world, yet most students still lack the opportunity to learn how it works, manage it, or shape its future. For over a decade, Code.org has expanded access to computer science education worldwide, serving as a trusted partner for policymakers, educators, and advocates. Now, as the focus of computer science shifts from coding to AI, we are evolving to prepare every student for an AI-powered world. [...] As this year’s impact shows, Code.org is driving change at every level — from classrooms to statehouses to ministries of education worldwide. [...] When we first launched Hour of Code in 2013, it changed how the world saw computer science. Today, AI is transforming the future of work across every field, yet most classrooms aren’t ready to teach students AI literacy. [...] That’s why, in 2025, the Hour of Code is becoming the Hour of AI, a bold, global event designed to move learners from AI consumers to confident, creative problem-solvers. [...] Our ambitious goal for the 2025-26 school year: Engage 25 million learners, mobilize 100,000 educators, and partner with 1,000 U.S. districts. The Hour of AI is only the beginning. In the year ahead, we will continue building tools, shaping policy, and inspiring movements to ensure every student, everywhere, has the opportunity to not just use AI, but to understand it, shape it, and lead with it."

Interestingly, Code.org's pivot from coding to AI literacy comes as former R.I. Governor and past U.S. Secretary of Commerce Gina Raimondo — an early member of Code.org's Governors for CS partnership who was all in on K-12 CS in 2016 — suggested the Computer Science for All initiative might have been a dud. “For a long time, everyone said, ‘let’s make everybody a coder,’” Raimondo said at a Harvard Institute of Politics forum. “We’re going to predict this is where the skills are going to be. Everyone should be a software coder. I don’t know, it doesn’t look necessarily like a super idea right now with AI.”

As Code.org pivots from coding to AI with the blessing of its tech donors, its Impact Report notes that the nonprofit spent a staggering $276.8 million on its K-12 CS efforts from 2013-2025, including $41M for Diversity and Global Marketing, $69.9M for Curriculum + Learning Platform, $122.8M on Partnership + Professional Learning, $25M for Government Affairs, and $18.1M on Global Curriculum (the nonprofit reported assets of $75M in an Aug 2024 IRS filing).

Submission + - Analytics Platform Databricks Joins Amazon, Microsoft in AI Demo Hall of Shame

theodp writes: If there were an AI Demo Hall of Shame, the first inductee would have to be Amazon. The demo supporting its CEO's claims that Amazon Q Code Transformation AI saved the company 4,500 developer-years and an additional $260 million in 'annualized efficiency gains' by automatically and accurately upgrading code to a more current version of Java showcased a program that didn't even spell 'Java' correctly (it was instead called 'Jave'). Also worthy of a spot is Microsoft, whose AI demo of a Copilot-driven Excel school exam analysis for educators reassured a teacher they needn't be concerned about the student who received a 27% test score, autogenerating a chart to back up its claim.

Today's nominee for the AI Demo Hall of Shame is analytics platform Databricks, for the NYC Taxi Trips Analysis it has been showcasing on its Data Science page since last November. The nomination rests not only on its choice of a completely trivial case study that requires no 'Data Science' skills — find and display the ten most expensive and longest taxi rides — but also on the horrible AI-generated bar chart used to present the results of that simple ranking, a chart that deserves its own spot in a Graph Hall of Shame. In response to the prompt "Now create a new bar chart with matplotlib for the most expensive trips," the Databricks AI Assistant dutifully complies with the ill-advised request, spewing out Python code that displays the ten rides on a nonsensical bar chart whose continuous x-axis hides trips sharing the same distance (one might also question why no annotation is provided to call out or explain the three zero-mile trips among the ten most expensive rides, with fares of $260, $188, and $105).
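For illustration only, here's a minimal matplotlib sketch of the charting issue, using made-up fares and distances rather than the actual NYC Taxi data from the demo: when bars are keyed to a continuous distance axis, trips that share a distance (such as the three zero-mile rides) stack on one spot, whereas a categorical chart with one bar per trip keeps every ride visible.

    import matplotlib.pyplot as plt
    import pandas as pd

    # Hypothetical top-10 most expensive trips; NOT the actual NYC Taxi data.
    trips = pd.DataFrame({
        "fare": [260, 188, 105, 95, 90, 88, 85, 84, 82, 80],
        "distance": [0.0, 0.0, 0.0, 21.5, 19.8, 18.2, 20.1, 18.2, 17.6, 22.3],
    })

    # One bar per trip (categorical x-axis), with the distance noted in the label,
    # so trips with identical distances no longer overlap and vanish.
    labels = [f"#{i + 1} ({d} mi)" for i, d in enumerate(trips["distance"])]

    fig, ax = plt.subplots()
    ax.bar(labels, trips["fare"])
    ax.set_xlabel("Trip (ranked by fare; distance in label)")
    ax.set_ylabel("Fare ($)")
    ax.set_title("Ten most expensive trips")
    plt.xticks(rotation=45, ha="right")
    plt.tight_layout()
    plt.show()

Keeping the ranking on a categorical axis sidesteps the overlap entirely; annotating the zero-mile trips would address the other complaint.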

Looked at with a critical eye, all three of these examples, which Amazon (market cap $2.32 trillion), Microsoft (market cap $3.87 trillion), and Databricks (valued at $100+ billion) use to sell data scientists, educators, management, investors, and Wall Street on AI, would likely raise eyebrows rather than impress their intended audiences. So, is AI fever so great that it sells itself and companies needn't even bother reviewing their AI demos to see if they make sense?

Submission + - Former R.I. Governor Raimondo is Rethinking Coding Education Push in AI Era

theodp writes: As Governor of Rhode Island, the Boston Globe reports, Gina Raimondo made a relentless push to expand computer science in K-12 education, part of an effort to train more students to code. But during a forum at the Harvard Institute of Politics this week, the former R.I. Governor and past U.S. Secretary of Commerce suggested the Computer Science for All initiative might have been a dud (YouTube).

“For a long time, everyone said, ‘let’s make everybody a coder,’” Raimondo said. “We’re going to predict this is where the skills are going to be. Everyone should be a software coder. I don’t know, it doesn’t look necessarily like a super idea right now with AI.”

Raimondo was responding to a question about investing in research and development versus the government picking specific companies to invest in, the Globe notes. She was critical of President Trump’s strategy of having the United States take a stake in companies, although she defended the Biden administration’s handling of subsidies through the CHIPS and Science Act. “You could pick 100 different examples,” Raimondo said. “The government gets it wrong a lot.” Raimondo launched the computer science initiative as governor in 2016 to ensure that it was part of every student’s experience in Rhode Island. It was a trendy – and widely praised – strategy at the time.

Submission + - Tech Workers Versus Enshittification

theodp writes: Writing for the Communications of the ACM, Cory Doctorow makes the case for unionization in Tech Workers Versus Enshittification:

"Now that tech workers are as disposable as Amazon warehouse workers and drivers, as disposable as the factory workers in iPhone City, it’s only a matter of time until the job conditions are harmonized downward. Jeff Bezos doesn’t force his delivery drivers to relieve themselves in bottles because he hates delivery drivers. Jeff Bezos doesn’t allow his coders to use a restroom whenever they need to because he loves hackers. The factor that determines how Jeff Bezos treats workers is 'What is the worst treatment those workers can be forced to accept?'"

"Throughout the entire history of human civilization, there has only ever been one way to guarantee fair wages and decent conditions for workers: unions. Even non-union workers benefit from unions, because strong unions are the force that causes labor protection laws to be passed, which protect all workers. [...] Now is the time to get organized. Your boss has made it clear how you’d be treated if they had their way. They’re about to get it. Walking a picket line is a slog, to be sure, but picket lines beat piss bottles, hands down."

Submission + - Code.org Spent $276M to Get Kids Coding, Now It Wants to Get Them Using AI

theodp writes: "This year marks a pivotal moment, for Code.org and for the future of education," writes Code.org co-founder Hadi Partovi in his Letter From the CEO, explaining the tech-backed nonprofit's pivot to support a shift in focus from coding to AI. "AI is reshaping every aspect of our world, yet most students still lack the opportunity to learn how it works, manage it, or shape its future. For over a decade, Code.org has expanded access to computer science education worldwide, serving as a trusted partner for policymakers, educators, and advocates. Now, as the focus of computer science shifts from coding to AI, we are evolving to prepare every student for an AI-powered world. [...] In the year ahead, we’ll ignite the first-ever Hour of AI ["One moment. One world. Millions of futures to shape."] to engage more than 25 million learners, scale age-appropriate AI curriculum and tools to help put these skills within reach of every student and teacher, and continue shaping the global conversation through our leadership of AI education policy."

The letter introduces the newly released Code.org 2024-25 Impact Report, which reveals that sparking "global movements and grassroots campaigns" doesn't come cheap. A table that "shows the total cost breakdown of our headline achievements since founding" puts a staggering $276.8 million price tag on its efforts to date [2013-2025], which includes $41M for Diversity and Global Marketing, $69.9M for Curriculum + Learning Platform, $122.8M on Partnership + Professional Learning, $25M for Government Affairs, and $18.1M on Global Curriculum (a Code.org IRS filing reported assets of $75 million as of Aug 2024). The report calls out Amazon, Google, Microsoft, the Ballmer Group, Kenneth C. Griffin, and an Anonymous donor for their "generous commitments" of $3+ million each to Code.org in 2024-2025. On its website, publicly supported charity Code.org credits six "Lifetime Supporters" for providing a combined minimum of $100 million: Amazon ($30M+, with AWS giving another $5M+), Microsoft ($30M+), Google ($10M+), Facebook ($10M+), Ballmer Group ($10M+), and Infosys ($10M+). Microsoft, whose President Brad Smith has been helping Code.org promote its AI pivot, is also the lead "AI Education Champion" sponsor of the new Hour of AI.

Submission + - "If Oberlin Won't Stand Up Against AI, Who Will?"

theodp writes: Writing in The Oberlin Review, Oberlin College student Kate Martin asks, "If Oberlin Won’t Stand Up Against AI, Who Will?" Martin begins: "As generative AI infiltrates our academic spaces more and more, liberal arts schools face a particularly troubling threat. Other types of institutions may be more focused on career preparation and, consequently, accept the experience of education as a means to an end. In that case, generative AI programs may be a welcome addition to the processes behind our academic products, so long as they streamline that process. But liberal arts schools are aiming at a loftier goal — one of thinking for its own sake, of growing our minds holistically, and situating our academic pursuits among a wider cultural conversation."

"As a student who quite literally signed up to follow this model of education, I found President Carmen Twillie Ambar’s statement about Oberlin’s emergent Year of AI Exploration deeply worrying. From its first line asking us to type a prompt into ChatGPT to discern its own greatness, it reads like a sales pitch. It frames AI as something omnipotent and inevitable: an emblem of innovation so juicy we need to overhaul all operations and reallocate funds just to step into its world of boundless potential. Let’s acknowledge the reality of the situation: Kids are no longer learning how to write, the planet is being sucked dry, and our collective value system about the very essence of creativity is buckling beneath the weight of the machine."

"Say we all learn to use AI 'responsibly.' What would that mean? When our entire job as students is to learn how to think, where would be a good spot to introduce an entity that is designed to think for us? The life cycle of a written product, from its onset as a spark in our minds to its final form as words on a page, is necessarily full of awkward stages. We push and pull at our ideas, wrestling with them through outlines and rough drafts, before they finally settle into a coherent shape. Well-meaning AI optimists see programs like ChatGPT as friendly companions that can smooth over the wrinkles in our path to well-packaged creative realization, without understanding that turbulence is precisely where our ideas and intellects thrive. Creativity is not throwing an idea into a void and watching it pop back out in neat, aesthetic form — it is a slow, embodied, iterative process that needs all parts of itself to function. Despite AI becoming more and more popular, Oberlin students, and, more generally, progressive young people in academia, are notably silent even as mental alarms are sounding in our heads."

Comment Re:that vertical bar chart.. (Score 1) 39

Good catch! Copilot didn't "share its work" and I mistakenly assumed it was a bar chart rather than a histogram with an unfortunate choice of bar width and x-axis range for this score distribution. For comparison, here's a Plotly histogram makeover of the Copilot-generated chart with a bin width of .1 and an x-axis that displays the full range of possible scores, which makes the outlying low scores readily apparent.
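As a sketch of what that makeover might look like (with placeholder scores standing in for the demo's actual data, here expressed as fractions of 1, which is an assumption), a Plotly histogram with 0.1-wide bins and an x-axis spanning the full 0-to-1 range makes the lone low score stand out:

    import plotly.express as px

    # Placeholder exam scores as fractions of 1; NOT the actual scores from the demo.
    scores = [0.27, 0.55, 0.62, 0.68, 0.70, 0.73, 0.75, 0.78, 0.81,
              0.83, 0.85, 0.88, 0.90, 0.92, 0.95, 0.97, 1.00]

    fig = px.histogram(x=scores, labels={"x": "Exam score (fraction of 1)"})
    fig.update_traces(xbins=dict(start=0.0, end=1.0, size=0.1))  # 0.1-wide bins
    fig.update_xaxes(range=[0, 1.05])  # show the full range of possible scores
    fig.update_yaxes(title_text="Number of students")
    fig.show()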

Submission + - In Copilot in Excel Demo, AI Tells Teacher a 27% Exam Score Is of No Concern

theodp writes: It's unclear what exactly led to Thursday's announcement that Microsoft will provide free AI tools for Washington State schools, along with professional development and AI training for teachers and administrators statewide, in partnership with the Office of Superintendent of Public Instruction, the Washington Education Association, and the National Education Association. But WA state records do show that the WA STEM Education Innovation Alliance — which "brings together leaders from labor, education, government, and non-profit organizations" — was treated to a "Demonstration of Microsoft Education tool and the latest AI updates for educators and students" (video @2:11:00, slides-79MB pdf) by Microsoft Education Product Manager Mike Tholfsen at a March 2024 meeting. And a report on the future direction of the STEM Alliance, unveiled at the Alliance's November 2024 meeting at the Redmond Microsoft Conference Center, cited this earlier demo, noting that "Alliance members heard from experts across the AI field in Washington, including leaders from the University of Washington and Microsoft about how AI has the possibility to transform various job sectors and the education system."

The demo, which Microsoft's Tholfsen explained was a hit with the crowd at the Bett UK 2024 EdTech Conference (ppt, 220MB download), includes a segment on Copilot in Excel that is likely to resonate with AI-wary software developers. Not only does it illustrate how the realities of AI assistants sometimes fall maddeningly short of the promises, it also shows how AI vendors and customers alike sometimes forget to review promotional AI content closely in all the AI excitement. The Copilot in Excel segment purports to show how even teachers who were too 'afraid of' or 'intimidated' to use Excel in the past can now just use natural language prompts to conduct Excel analysis. But even a cursory glance at the example shown should raise eyebrows, as Copilot advises the teacher there are no 'outliers' in the exam scores for their 17 students, whose test scores range from 27% to 100% (apparently due to Copilot's choice of an outlier detection method ill-suited to a sample this small with scores spread this widely). Fittingly, the student whose 27% score is confidently but incorrectly deemed to be of no concern by Copilot is named after Michael Scott, the largely incompetent, unproductive, unprofessional boss of The Office (Microsoft also named the other exam takers after characters from The Office).
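For a rough sense of how a low score can slip past a mechanical outlier check, here's a purely illustrative sketch using hypothetical scores (the demo's actual numbers and Copilot's actual method aren't disclosed) and the common 1.5×IQR fence: with only 17 widely spread scores, the lower fence can fall below 27%, so nothing gets flagged even though the 27% clearly warrants a teacher's attention.

    import numpy as np

    # Hypothetical scores for 17 students (percent); NOT the data from the demo.
    scores = np.array([27, 40, 48, 55, 60, 65, 70, 72, 75, 78, 82, 85, 88, 90, 94, 98, 100])

    # Classic 1.5 * IQR fences, one common (and here misleading) outlier test.
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

    outliers = scores[(scores < lower) | (scores > upper)]
    print(f"Q1={q1}, Q3={q3}, fences=({lower}, {upper})")
    print("Flagged outliers:", outliers)  # empty array: the 27% score slips through

With scores this dispersed, fence-based rules say little; simply flagging scores below a pass threshold would arguably serve the teacher better.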

The additional Copilot student score 'analysis' touted by Microsoft in the demo is also less than impressive. It includes: 1. A vertical bar chart that fails to convey the test score distribution the way a histogram would (a rookie chart-choice mistake), 2. A horizontal bar chart of student scores that only displays every other student's name and shows no score values (a rookie formatting error), and 3. A pivot table copy of the original data sorted in descending order, generated in response to a teacher's prompt to rank the data (although simply clicking the Excel ribbon's sort button would have yielded similar results). So, will teachers — like programmers — be spending a significant amount of time in the future reviewing, editing, and refining the outputs of their AI agent helpers?
