Submission + - The Microsoft-OpenAI Files 1

theodp writes: GeekWire takes a look at AI’s defining alliance in The Microsoft-OpenAI Files, an epic story drawn from 200+ documents, many made public Friday in Elon Musk’s ongoing suit accusing OpenAI and its CEO Sam Altman of abandoning the nonprofit mission (Microsoft is also a defendant). Musk, who was an OpenAI co-founder, is seeking up to $134 billion in damages.

Previously undisclosed emails, messages, slide decks, reports, and deposition transcripts reveal how Microsoft pursued, rebuffed, and backed OpenAI at various moments over the past decade, ultimately shaping the course of the lab that launched the generative AI era. The latest round of documents, filed as exhibits in Musk's lawsuit, shows how Nadella and Microsoft's senior leadership team rally in a crisis, maneuver against rivals such as Google and Amazon, and talk about deals in private.

Even though Microsoft didn't have a seat on the OpenAI board, text messages between Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman, revealed in the latest filings, show just how influential Microsoft was after Altman's firing as CEO in Nov. 2023 (news of which sent Microsoft's stock plummeting). A day after the firing, Nadella sent Altman a detailed message from Brad Smith, Microsoft's president and top lawyer, explaining that Microsoft had created a new subsidiary called Microsoft RAI (Responsible Artificial Intelligence) Inc. from scratch — legal work done, papers ready to file as soon as the WA Secretary of State opened Monday morning — and was ready to capitalize and operationalize it to 'support Sam in whatever way is needed,' including absorbing the OpenAI team at a calculated cost of roughly $25 billion. (Altman's reply: "kk.") Just days later, as he planned his return as CEO to the now-reeling-from-Microsoft-punches nonprofit, Altman joined Microsoft's Nadella, Smith, and Kevin Scott in a text-messaging thread in which the four vetted prospective board members to replace those who had ousted Altman. Later that night, OpenAI announced Altman's return with the newly constituted board.

If you like stories with happy Microsoft endings: as part of an agreement clearing the way for OpenAI to restructure as a for-profit business, Microsoft in October received a 27% ownership stake in OpenAI worth approximately $135 billion, and it retains access to the AI startup's technology until 2032, including models that achieve AGI.

Submission + - Conference Table or Multi-Person Workstation Desk as a Dining Room Table?

theodp writes: While a house or apartment with a separate 'formal' dining room/area in addition to casual dining space is a nice-to-have, you may find you only use the space occasionally for get-togethers with family and friends. If you work from home or have kids who need a place to do their homework, have you eschewed or considered eschewing a traditional dining table for a conference room table or multi-person workstation desk that can do double-duty as your everyday workspace and occasional dining room table? If so, care to share how that worked out for you (aesthetically and functionally) and tips for what to consider (height, width, finish, power outlets, chair types, cost, etc.)?

Submission + - Microsoft Elevate for Educators Launches with 'AI Merit Badges' for Educators

theodp writes: Just days after Microsoft President Brad Smith moved to deflect White House criticism over AI Data Centers' insatiable demand for electricity (and electricians) driving Americans' utility bills higher, Microsoft announced Microsoft Elevate for Educators, "a program connecting educators with community, professional development, and AI tools to transform teaching" in an effort to "empower every school, educator, and student to thrive with confidence in an AI-powered future."

Towards that end, Microsoft is offering new AI-powered tools "to help schools worldwide prepare educators and students for an AI-driven future" and is also seeking to credential educators with its new Microsoft Elevate Educator Credential, as well as a new Microsoft Certified Instructional Technologist and Coach certification.

Taking a page from the Girl Scouts playbook, Microsoft is encouraging teachers to pursue the Microsoft Elevate Educator pathway (from "Explorer" to "Expert" to "Fellow"), leading to recognition "for their exceptional use of Microsoft tools and resources to enhance teaching and learning experiences." And there's also a Microsoft Elevate School journey, which leads from "Pathfinder" to "Showcase" to "Beacon." Hey, be true to your AI school!

Submission + - Code.org: Use AI in an Interview Without Our OK and You're Dead to Us

theodp writes: Code.org, the nonprofit backed by AI giants Microsoft, Google, and Amazon whose Hour of AI and free AI curriculum aim to make the world's K-12 schoolchildren AI literate, points job seekers to its AI Use Policy in Hiring, which promises dire consequences for those who use AI during interviews or take-home assignments without its OK.

Explaining "What’s Not Okay," Code.org writes: "While we support thoughtful use of AI, certain uses undermine fairness and honesty in the hiring process. We ask that candidates do not [...] use AI during interviews and take-home assignments without explicit consent from the interview team. Such use goes against our values of integrity and transparency and will result in disqualification from the hiring process."

Interestingly, Code.org CEO Partovi last year faced some blowback from educators over his LinkedIn post that painted schools that police AI use by students as dinosaurs. Partovi wrote, "Schools of the past define AI use as 'cheating.' Schools of the future define AI skills as the new literacy. Every desk-job employer is looking to hire workers who are adept at AI. Employers want the students who are best at this new form of 'cheating.'"

Submission + - LEGO Education Announces CS+AI K-8 Classroom Packs Priced at $2,049-$3,179

theodp writes: Offering a new report as evidence that K-8 teachers see the benefits of hands-on computer science and AI education but lack the right tools to engage students, LEGO Education on Monday announced its Hands-on Computer Science & AI Learning Solution for children in grades K-8.

From the press release: "Today, LEGO® Education announced a new hands-on solution and curriculum for computer science and artificial intelligence (AI) for K-8 classrooms that fosters collaboration, creativity, and learning outcomes. Shipping from April 2026, LEGO® Education Computer Science & AI enables schools and districts to expand critically needed access to computer science and AI education." The offerings include Computer Science & AI Kits for 24 students priced at $2,049 for grades K-2, $2,579 for grades 3-5, and $3,179 for grades 6-8.

Not to be outdone, Amazon on Monday announced it's bringing PartyRock — its no-code approach to AI creation — into the classroom to promote AI literacy in support of the White House’s AI education initiatives. "Rather than focusing on the mechanics of AI programming," Amazon explains, "PartyRock emphasizes creative problem-solving and conceptual understanding. Students articulate their ideas through natural language descriptions, and the playground transforms these descriptions into functional applications. This approach shifts the educational focus from syntax and coding structures to the more fundamental questions of what AI can do and how it can be directed to solve problems."

Submission + - Should Real-World Examples be Required for Standards and Other Mandates?

theodp writes: If someone wants to impose standards, forms, documentation requirements, and other mandates on others, it seems only fair that they should be able to — and required to — demonstrate it in action first, right? Without real-world examples of what is considered 'good', people are essentially asked to sign off on a black box without a clear idea of what is being demanded, how much work it may entail, and in the end how worthwhile it even may be.

Surprisingly, that's not how things tend to play out in practice in industry, academia, and other organizations. A case in point is the proposed new Computer Science + AI Standards for pre-kindergarten to high school students assembled by a consortium of educators, tech-backed nonprofits, and tech industry advisors that aims to shape how CS+AI is taught in classrooms. A Friday morning LinkedIn post from the Computer Science Teachers Association reminds educators that they have 72 hours to "help us improve them [the standards] by reviewing and completing our feedback form by 9am ET on Monday, January 12."

Under development since 2023, the 247-page standards document is chock full of students-should-be-able-to pronouncements for all grade levels but offers no concrete examples of what that looks like in practice in terms of acceptable student deliverables or teacher lesson plans — e.g., "Students should be able to create a functional, rule-based AI for a Non-Playable Character (NPC) using programming or visual scripting. Students’ implementation must be based on a recognized AI method (e.g., finite-state machine, behavior tree)."

As Ross Perot once said, the devil is in the details. So, in a world where more and more people specialize in governance, risk, and compliance jobs that involve specifying mandates for others to comply with, shouldn't it be a red flag if they can't show real-world examples of how to satisfy those mandates? If you require it, shouldn't you be able to demonstrate it? Otherwise, doesn't it signal that the mandate hasn't been validated — and open the door for those left to guess at what was meant to be told "that's not what I meant"?

Submission + - The GeekWire Stories that Defined 2025 (Spoiler Alert: AI Dominated)

theodp writes: In a year-end podcast, GeekWire looks back at the stories that defined 2025, with the "Most Popular" award going to "Coding is dead: UW computer science program rethinks curriculum for the AI era."

Not too surprisingly, AI dominated 2025's headlines. Mandates from tech company leaders to use AI — but with no playbook on how — are creating worker stress, prompting one tech veteran to comment on the brutality of tech cycles: "The challenge, and opportunity for leadership, is whether the [AI] bets actually compound into something durable, or just become another slide deck for next year’s reorg."

GeekWire notes that Microsoft President Brad Smith offered his own AI-is-real evidence to investors at Microsoft's Annual Shareholder Meeting in December. Smith explained that he had asked Copilot's Researcher Agent earlier that day to produce a report on an issue from seven or eight years ago, and it generated a 25-page report with 100 citations that so wowed his colleagues that they clamored for him to share the prompt he used, so they could all learn to use AI more effectively. While Smith shared neither the report nor the prompt in the webcast, the anecdote alone had his fellow Microsoft execs nodding and smiling in amazement. (GeekWire couldn't resist wondering aloud how many of the recipients used their AI agents to summarize the 25-page report rather than actually reading it.)

Submission + - Ready, Fire, Aim: As Schools Embrace AI, Skeptics Raise Concerns

theodp writes: "Fueled partly by American tech companies, governments around the globe are racing to deploy generative A.I. systems and training in schools and universities," reports the NY Times. "In early November, Microsoft said it would supply artificial intelligence tools and training to more than 200,000 students and educators in the United Arab Emirates. Days later, a financial services company in Kazakhstan announced an agreement with OpenAI to provide ChatGPT Edu, a service for schools and universities, for 165,000 educators in Kazakhstan. Last month, xAI, Elon Musk’s artificial intelligence company, announced an even bigger project with El Salvador: developing an A.I. tutoring system, using the company’s Grok chatbot, for more than a million students in thousands of schools there."

"In the United States, where states and school districts typically decide what to teach, some prominent school systems recently introduced popular chatbots for teaching and learning. In Florida alone, Miami-Dade County Public Schools, the nation’s third-largest school system, rolled out Google’s Gemini chatbot for more than 100,000 high school students. And Broward County Public Schools, the nation’s sixth-biggest school district, introduced Microsoft’s Copilot chatbot for thousands of teachers and staff members."

"Teachers currently have few rigorous studies to guide generative A.I. use in schools. Researchers are just beginning to follow the long-term effects of A.I. chatbots on teenagers and schoolchildren. 'Lots of institutions are trying A.I.,' said Drew Bent, the education lead at Anthropic. 'We’re at a point now where we need to make sure that these things are backed by outcomes and figure out what’s working and what’s not working.'"

Submission + - "Pull Over and Show Me Your Apple Wallet"

theodp writes: MacRumors reports that Apple plans to expand iPhone and Apple Watch driver's licenses to 7 more U.S. states (CT, KY, MS, OK, UT, AR, VA). A recent convert is the State of Illinois, whose website videos demo how you can use your Apple Wallet license to display proof of identity or age the next time you get carded by a cop, bartender, or TSA agent. The new states will join 13 others that already offer driver's licenses in the Wallet app (AZ, MD, CO, GA, OH, HI, CA, IA, NM, MT, ND, WV, IL).

There's certainly been a lot of foot-dragging by the states when it comes to embracing phone-based driver's licenses — Slashdot reported that Iowa was ready to launch a mobile app driver's license in 2014; Iowans finally got one nearly a decade later, in late 2023.

Comment Chris Lattner, Lex Fridman on Emojis + Code (Score 1) 83

Chris Lattner: Future of Programming and AI | Lex Fridman Podcast. FRIDMAN: "What's been the response [to emojis] so far?" LATTNER: "Somewhere between, 'Oh, wow, that makes sense. Cool, I like new things,' to 'Oh my god, you're killing my baby.' Like, what are you talking about? This can never be. Like, I can never handle this. How am I gonna type this? (imitates bees buzzing) like, all these things. And so this is something where I think that the world will get there. We don't have to bet the whole farm on this. I think we can provide both paths, but I think it'll be great."

Submission + - What Might Adding Emojis and Pictures to Text Programming Languages Look Like? 1

theodp writes: We all mix pictures, emojis and text freely in our communications. So, why not in our code? That's the premise of Fun With Python and Emoji: What Might Adding Pictures to Text Programming Languages Look Like? (two-image Bluesky explainer; full slides), which takes a look at what mixing emoji with Python and SQL might look like. A GitHub repo includes a Google Colab-ready Python notebook proof-of-concept that does rudimentary emoji-to-text translation via an IPython input transformer.
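The emoji-to-Python idea can be sketched with a plain string transform matching the signature IPython expects of an input transformer. This is a minimal illustration, not the notebook's actual code: the `EMOJI_TOKENS` mapping and the `deemojify` name are invented for this example.

```python
# Sketch of rudimentary emoji-to-text translation for Python source,
# shaped so it could be registered as an IPython input transformer.
# The emoji-to-token mapping below is hypothetical.
EMOJI_TOKENS = {
    "\U0001F501": "for",    # 🔁 -> loop keyword
    "\U0001F4E4": "print",  # 📤 -> output
    "\u2795": "+",          # ➕ -> addition
}

def deemojify(lines):
    """Replace known emoji with Python-text equivalents.

    Matches the (list[str]) -> list[str] shape IPython expects of
    callables appended to `ip.input_transformers_post`.
    """
    out = []
    for line in lines:
        for emoji, token in EMOJI_TOKENS.items():
            line = line.replace(emoji, token)
        out.append(line)
    return out

# In IPython/Colab, registration would look something like:
#   ip = get_ipython()
#   ip.input_transformers_post.append(deemojify)

if __name__ == "__main__":
    src = ["\U0001F501 i in range(3): \U0001F4E4(i \u2795 1)"]
    print(deemojify(src)[0])  # -> "for i in range(3): print(i + 1)"
```

Once registered, emoji-laden cell input is rewritten to ordinary Python before compilation, so the rest of the toolchain never sees the emoji at all.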

So, in the Golden Age of AI some 60+ years after Kenneth Iverson introduced chock-full-of-symbols APL, are valid technical reasons still keeping symbols and pictures out of code, or is their absence more of a programming dogma thing?
