United States

The Next Front in the US-China Battle Over Chips (nytimes.com)

A U.S.-born chip technology called RISC-V has become critical to China's ambitions. Washington is debating whether and how to limit the technology. From a report: It evolved from a university computer lab in California to a foundation for myriad chips that handle computing chores. RISC-V essentially provides a kind of common language for designing processors that are found in devices like smartphones, disk drives, Wi-Fi routers and tablets. RISC-V has ignited a new debate in Washington in recent months about how far the United States can or should go as it steadily expands restrictions on exporting technology to China that could help advance its military. That's because RISC-V, which can be downloaded from the internet for free, has become a central tool for Chinese companies and government institutions hoping to match U.S. prowess in designing semiconductors.
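Because the RISC-V specification is openly published, anyone can build tooling against it without a license. As a rough illustration of what an open instruction set means in practice (this sketch is not from the article), here is a minimal Python decoder for one RV32I I-type instruction word, following the field layout in the base spec:

```python
def decode_itype(word: int):
    """Decode a 32-bit RV32I I-type instruction word (e.g. ADDI).

    Field layout per the RISC-V base spec:
      imm[31:20] | rs1[19:15] | funct3[14:12] | rd[11:7] | opcode[6:0]
    """
    opcode = word & 0x7F
    rd     = (word >> 7)  & 0x1F
    funct3 = (word >> 12) & 0x7
    rs1    = (word >> 15) & 0x1F
    imm    = (word >> 20) & 0xFFF
    if imm & 0x800:            # sign-extend the 12-bit immediate
        imm -= 0x1000
    if opcode == 0x13 and funct3 == 0:
        return ("addi", rd, rs1, imm)
    return ("unknown", rd, rs1, imm)

# 0x00500093 encodes `addi x1, x0, 5`
print(decode_itype(0x00500093))  # → ('addi', 1, 0, 5)
```

Anyone, in any country, can write, sell, or give away a decoder, compiler, or processor core built from this freely downloadable spec, which is exactly why export restrictions on RISC-V are hard to design.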

Last month, the House Select Committee on the Chinese Communist Party -- in an effort spearheaded by Representative Mike Gallagher, Republican of Wisconsin -- recommended that an interagency government committee study potential risks of RISC-V. Congressional aides have met with members of the Biden administration about the technology, and lawmakers and their aides have discussed extending restrictions to stop U.S. citizens from aiding China on RISC-V, according to congressional staff members. The Chinese Communist Party is "already attempting to use RISC-V's design architecture to undermine our export controls," Representative Raja Krishnamoorthi of Illinois, the ranking Democrat on the House select committee, said in a statement. He added that RISC-V's participants should be focused on advancing technology and "not the geopolitical interests of the Chinese Communist Party."

Arm Holdings, a British company that sells competing chip technology, has also lobbied officials to consider restrictions on RISC-V, three people with knowledge of the situation said. Biden administration officials have concerns about China's use of RISC-V but are wary about potential complications with trying to regulate the technology, according to a person familiar with the discussions. The debate over RISC-V is complicated because the technology was patterned after open-source software, the free programs like Linux that allow any developer to view and modify the original code used to make them. Such programs have prompted multiple competitors to innovate and reduce the market power of any single vendor.

Education

US Department of Education Spending $4 Million To Teach 3,450 Kids CS Using Minecraft

theodp writes: Among the 45 winners of this year's Education Innovation and Research (EIR) program competitions is Creative Coders: Middle School CS Pathways Through Game Design (PDF). The U.S. Dept. of Education is providing the national nonprofit Urban Arts with $3,999,988 to "use materials and learning from its School of Interactive Arts program to create an engaging, game-based, middle school CS course using [Microsoft] Minecraft tools" for 3,450 middle schoolers (6th-8th grades) in New York and California with the help of "our industry partner Microsoft with the utilization of Minecraft Education."

From Urban Arts' winning proposal: "Because a large majority of children play video games regularly, teaching CS through video game design exemplifies CRT [Culturally Responsive Teaching], which has been linked to 'academic achievement, improved attendance, [and] greater interest in school.' The video game Minecraft has over 173 million users worldwide and is extremely popular with students at the middle school level; the Minecraft Education workspace we utilize in the Creative Coders curriculum is a familiar platform to any player of the original game. By leveraging students' personal interests and their existing 'funds of knowledge', we believe Creative Coders is likely to increase student participation and engagement."

Speaking of UA's EIR grant partner Microsoft, Urban Arts' Board of Directors includes Josh Reynolds, the Director of Modern Workplace for Microsoft Education, whose Urban Arts bio notes he "has led some of the largest game-based learning activations worldwide with Minecraft." Urban Arts' Gaming Pathways Educational Advisory Board includes Reynolds and Microsoft Sr. Account Executive Amy Brandt. And in his 2019 book Tools and Weapons, Microsoft President Brad Smith cited $50 million K-12 CS pledges made to Ivanka Trump by Microsoft and other Tech Giants as the key to getting Donald Trump to sign a $1 billion, five-year presidential order (PDF) "to ensure that federal funding from the Department of Education helps advance [K-12] computer science," including via EIR program grants.
Programming

Code.org Sues WhiteHat Jr. For $3 Million

theodp writes: Back in May 2021, tech-backed nonprofit Code.org touted the signing of a licensing agreement with WhiteHat Jr., allowing the edtech company with a controversial past (Whitehat Jr. was bought for $300M in 2020 by Byju's, an edtech firm that received a $50M investment from Mark Zuckerberg's venture firm) to integrate Code.org's free-to-educators-and-organizations content and tools into their online tutoring service. Code.org did not reveal what it was charging Byju's to use its "free curriculum and open source technology" for commercial purposes, but Code.org's 2021 IRS 990 filing reported $1M in royalties from an unspecified source after earlier years reported $0. Coincidentally, Whitehat Jr. is represented by Aaron Kornblum, who once worked at Microsoft for now-President Brad Smith, who left Code.org's Board just before the lawsuit was filed.

Fast forward to 2023 and the bloom is off the rose, as court records show that Code.org earlier this month sued Whitehat Education Technology, LLC (Exhibits A and B) in what the complaint calls "a civil action for breach of contract arising from Whitehat's failure to pay Code.org the agreed-upon charges for its use of Code.org's platform and licensed content and its ongoing, unauthorized use of that platform and content." According to the filing, "Whitehat agreed [in April 2022] to pay to Code.org licensing fees totaling $4,000,000 pursuant to a four-year schedule" and "made its first four scheduled payments, totaling $1,000,000," but "about a year after the Agreement was signed, Whitehat informed Code.org that it would be unable to make the remaining scheduled license payments." While the original agreement was amended to backload Whitehat's license fee payment obligations, "Whitehat has not paid anything at all beyond the $1,000,000 that it paid pursuant to the 2022 invoices before the Agreement was amended" and "has continued to access Code.org's platform and content."
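The headline figure follows directly from the numbers in the filing, as a quick sanity check shows:

```python
total_fees = 4_000_000   # licensing fees agreed in April 2022
paid       = 1_000_000   # the four scheduled payments Whitehat actually made
owed = total_fees - paid
print(f"${owed:,}")  # → $3,000,000, matching the amount in the suit
```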

That Byju's Whitehat Jr. stiffed Code.org is hardly shocking. In June 2023, Reuters reported that Byju's auditor Deloitte cut ties with the troubled Indian Edtech startup that was once an investor darling and valued at $22 billion, adding that a Byju's Board member representing the Chan-Zuckerberg Initiative had resigned with two other Board members. The BBC reported in July that Byju's was guilty of overexpanding during the pandemic (not unlike Zuck's Facebook). Ironically, the lawsuit Exhibits include screenshots showing Mark Zuckerberg teaching Code.org lessons. Zuckerberg and Facebook were once among the biggest backers of Code.org, although it's unclear whether that relationship soured after court documents were released that revealed Code.org's co-founders talking smack about Zuck and Facebook's business practices to lawyers for Six4Three, which was suing Facebook.

Code.org's curriculum is also used by the Amazon Future Engineer (AFE) initiative, but it is unclear what royalties -- if any -- Amazon pays to Code.org for the use of Code.org curriculum. While the AFE site boldly says, "we provide free computer science curriculum," the AFE fine print further explains that "our partners at Code.org and ProjectSTEM offer a wide array of introductory and advanced curriculum options and teacher training." It's unclear what kind of organization Amazon's AFE ("Computer Science Learning Childhood to Career") exactly is -- an IRS Tax Exempt Organization Search failed to find any hits for "Amazon Future Engineer" -- making it hard to guess whether Code.org might consider AFE's use of Code.org software 'commercial use.' Would providing a California school district with free K-12 CS curriculum that Amazon boasts of cultivating into its "vocal champion" count as "commercial use"? How about providing free K-12 CS curriculum to children who live where Amazon is seeking incentives? Or if Amazon CEO Jeff Bezos testifies Amazon "funds computer science coursework" for schools as he attempts to counter a Congressional antitrust inquiry? These seem to be some of the kinds of distinctions Richard Stallman anticipated more than a decade ago as he argued against restricting the commercial use of otherwise free software.
Education

Microsoft President Brad Smith Quietly Leaves Board of Nonprofit Code.org

Longtime Slashdot reader theodp writes: Way back in September 2012, Microsoft President Brad Smith discussed the idea of "producing a crisis" to advance Microsoft's "two-pronged" National Talent Strategy to increase K-12 CS education and the number of H-1B visas. Not long thereafter, the tech-backed nonprofit Code.org (which promotes and provides K-12 CS education and is led by Smith's next-door neighbor) and Mark Zuckerberg's FWD.us PAC (which lobbied for H-1B reform) were born, with Smith on board both. Over the past 10+ years, Smith has played a key role in establishing Code.org's influence in the new K-12 CS education "grassroots" movement, including getting buy-in from three Presidential administrations -- Obama, Trump, and Biden -- as well as the U.S. Dept. of Education and the nation's Governors.

But after recent updates, Code.org's Leadership page now indicates that Smith has quietly left Code.org's Board of Directors and thanks him for his past help and advice. Since November (when archive.org indicates Smith's photo was yanked from Code.org's Leadership page), Smith has been in the news in conjunction with Microsoft's relationship with another Microsoft-bankrolled nonprofit, OpenAI, which has come under scrutiny by the Feds and in the UK. Smith, who noted he and Microsoft helped OpenAI and CEO Sam Altman craft messaging ahead of a White House meeting, announced in a Dec. 8th tweet that Microsoft will be getting a non-voting OpenAI Board seat in connection with Altman's return to power (Microsoft has not yet announced who that non-voting board member will be).

OpenAI, Microsoft, and Code.org teamed up in December to provide K-12 CS+AI tutorials for this December's AI-themed Hour of Code (the trio has also partnered with Amazon and Google on the Code.org-led TeachAI initiative). And while Smith has left Code.org's Board, Microsoft's influence there will live on as Microsoft CTO Kevin Scott -- credited for forging Microsoft's OpenAI partnership -- remains a Code.org Board member together with execs from other Code.org Platinum Supporters ($3+ million in past 2 years) Google and Amazon.
AI

OpenAI Lays Out Plan For Dealing With Dangers of AI (washingtonpost.com)

OpenAI, the AI company behind ChatGPT, laid out its plans for staying ahead of what it thinks could be serious dangers of the tech it develops, such as allowing bad actors to learn how to build chemical and biological weapons. From a report: OpenAI's "Preparedness" team, led by MIT AI professor Aleksander Madry, will hire AI researchers, computer scientists, national security experts and policy professionals to monitor its tech, continually test it and warn the company if it believes any of its AI capabilities are becoming dangerous. The team sits between OpenAI's "Safety Systems" team, which works on existing problems like racist biases being infused into AI, and the company's "Superalignment" team, which researches how to make sure AI doesn't harm humans in an imagined future where the tech has outstripped human intelligence completely.

[...] Madry, a veteran AI researcher who directs MIT's Center for Deployable Machine Learning and co-leads the MIT AI Policy Forum, joined OpenAI earlier this year. He was one of a small group of OpenAI leaders who quit when Altman was fired by the company's board in November. Madry returned to the company when Altman was reinstated five days later. OpenAI, which is governed by a nonprofit board whose mission is to advance AI and make it helpful for all humans, is in the midst of selecting new board members after three of the four board members who fired Altman stepped down as part of his return. Despite the leadership "turbulence," Madry said he believes OpenAI's board takes seriously the risks of AI that he is researching. "I realized if I really want to shape how AI is impacting society, why not go to a company that is actually doing it?"

Education

Amazon, Microsoft, and Google Help Teachers Incorporate AI Into CS Education

Long-time Slashdot reader theodp writes: Earlier this month, Amazon came under fire as the Los Angeles Times reported on a leaked confidential document that "reveals an extensive public relations strategy by Amazon to donate to community groups, school districts, institutions and charities" to advance the company's business objectives. "We will not fund organizations that have positioned themselves antagonistically toward our interests," explained Amazon officials of the decision to cut off donations to the Cheech Marin Center for Chicano Art and Culture after it ran an exhibit ("Burn Them All Down") that the artist called a commentary on how public officials were not listening to community concerns about the growing number of Amazon warehouses in Southern California's Inland Empire neighborhoods...

Interestingly on the same day the Los Angeles Times was sounding the alarm on Amazon philanthropy, the White House and National Science Foundation (NSF) held a White House-hosted event on K-12 AI education. There it was announced that the Amazon-backed nonprofit Computer Science Teachers Association (CSTA) will develop new K-12 computer science standards that incorporate AI into foundational computer science education with support from the NSF, Amazon, Google, and Microsoft. CSTA separately announced it had received a $1.5 million donation from Amazon to "support efforts to update the CSTA K-12 Computer Science Standards to reflect the rapid advancements in technologies like artificial intelligence (AI)," adding that the CSTA standards — which CSTA credited Microsoft Philanthropies for helping to advance — "serve as a model for CS teaching and learning across grades K-12" in 42 states.

The announcements, the White House noted, came during Computer Science Education Week, the signature event of which is Amazon, Google, and Microsoft-backed Code.org's Hour of Code (which was AI-themed this year), for which Amazon, Google, and Microsoft — not teachers — provided the event's signature tutorials used by the nation's K-12 students. Amazon, Google, and Microsoft are also advisors to Code.org's TeachAI initiative, which was launched in May "to provide thought leadership to guide governments and educational leaders in aligning education with the needs of an increasingly AI-driven world and connecting the discussion of teaching with AI to teaching about AI and computer science."
Earth

Low-Frequency Sound Can Reveal That a Tornado Is On Its Way (bbc.com)

Scientists are exploring infrasound, the low-frequency sound waves produced by tornadoes, to develop more accurate early warning systems for these destructive storms. The hope is that eavesdropping on infrasound signals, which travel for hundreds of miles, could provide up to two hours of advance warning. The BBC reports: Scientists have been listening to tornadoes and trying to work out whether they produce a unique sound since the 1970s. Experimental evidence suggests that low-frequency infrasound, in the 1-10 Hz range, is produced while a tornado is taking shape and throughout its life. One recent set of measurements from a tornado near Lakin, Kansas in May 2020 revealed that the twister produced a distinct, elevated signal between 10 Hz and 15 Hz. In some cases, arrays of infrasound-detecting microphones have been shown to pick up the noise produced by tornadoes from more than 100 km (60 miles) away and have also indicated that the infrasound is produced before tornadogenesis even begins. Researchers hope that by eavesdropping on these noises, it may be possible not only to hear a tornado coming but perhaps even to predict one up to two hours before it forms.
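The core detection idea, picking an elevated 1-15 Hz signal out of background noise, can be sketched as a simple spectral check. This is a hypothetical illustration in plain Python (the research arrays use far more sophisticated beamforming and array processing than a single-channel energy ratio):

```python
import cmath
import math
import random

def band_energy_fraction(signal, sample_rate, band=(1.0, 15.0)):
    """Fraction of total spectral energy that falls inside `band` (Hz),
    computed with a direct DFT (fine for short recordings)."""
    n = len(signal)
    total = in_band = 0.0
    for k in range(n // 2 + 1):
        freq = k * sample_rate / n
        coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(signal))
        power = abs(coeff) ** 2
        total += power
        if band[0] <= freq <= band[1]:
            in_band += power
    return in_band / total

# Synthetic 10 s recording at 50 Hz: a 12 Hz "tornado-like" tone buried in noise
random.seed(0)
t = [i / 50.0 for i in range(500)]
sig = [math.sin(2 * math.pi * 12 * ti) + 0.5 * random.gauss(0, 1)
       for ti in t]
print(band_energy_fraction(sig, 50.0) > 0.5)  # → True: the tone dominates
```

A real monitoring station would run a check like this continuously on microphone data and flag sustained, elevated energy in the tornado band; the hard part, as the article notes, is telling a genuine tornado signature apart from wind and other infrasound sources.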

Since 2020, a team from Oklahoma State University has been testing infrasound's predictive powers using equipment installed in tornado-chasing vehicles. Their portable kit, the Ground-based Local Infrasound Data Acquisition ("Glinda") system, is named after a character from The Wizard of Oz. They hope the equipment will help storm chasers better monitor the development of tornadoes in real time, though it must be deployed to the right place at the right time. Some researchers, however, are working on systems that can be left to permanently monitor for tornadoes. One group, led by Roger Waxler, principal scientist at the National Centre for Physical Acoustics (NCPA) at the University of Mississippi, is planning to deploy four permanent arrays of high-tech sensors in south Mississippi to detect infrasound signals. They hope the system will provide a way of consistently monitoring and detecting tornadoes.

[...] Waxler and his team hope their decade-long experiment will lead to an effective early warning system for tornadoes, particularly when combined with other sources such as doppler radar. "It's not unreasonable that we could localize a tornado to half a football field," adds Waxler. "I envision seeing a map on an app with a dot that shows there's a tornado coming up South Lamar [Avenue, for example]." Warnings have improved in recent decades: from 2003 to 2017, 87% of deadly tornadoes were preceded by an advance warning, but people still have an average of just 10-15 minutes to find shelter. A study based on interviews with 23 survivors of two deadly tornadoes found that people tried to evaluate and respond to the risk of a tornado as the situation evolved, but some did not have a place to shelter easily. Experts believe tornado warnings are too often ignored due to "warning fatigue" created by false alarms and hours of televised storm coverage. One study found 37% of people surveyed did not understand the need for taking precautionary measures during a tornado warning. Waxler hopes that a more accurate early warning system could change the way people respond when they hear a storm is approaching. "Rather than going to hide in your bathtub or cellar, it might be a better idea to get in your car and drive if you know where a tornado is. The goal is to save lives."

AI

MIT Group Releases White Papers On Governance of AI (mit.edu)

An anonymous reader quotes a report from MIT News: Providing a resource for U.S. policymakers, a committee of MIT leaders and scholars has released a set of policy briefs that outlines a framework for the governance of artificial intelligence. The approach includes extending current regulatory and liability approaches in pursuit of a practical way to oversee AI. The aim of the papers is to help enhance U.S. leadership in the area of artificial intelligence broadly, while limiting harm that could result from the new technologies and encouraging exploration of how AI deployment could be beneficial to society.

The main policy paper, "A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector," suggests AI tools can often be regulated by existing U.S. government entities that already oversee the relevant domains. The recommendations also underscore the importance of identifying the purpose of AI tools, which would enable regulations to fit those applications. "As a country we're already regulating a lot of relatively high-risk things and providing governance there," says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who helped steer the project, which stemmed from the work of an ad hoc MIT committee. "We're not saying that's sufficient, but let's start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach." [...]

"The framework we put together gives a concrete way of thinking about these things," says Asu Ozdaglar, the deputy dean of academics in the MIT Schwarzman College of Computing and head of MIT's Department of Electrical Engineering and Computer Science (EECS), who also helped oversee the effort. The project includes multiple additional policy papers and comes amid heightened interest in AI over the last year as well as considerable new industry investment in the field. The European Union is currently trying to finalize AI regulations using its own approach, one that assigns broad levels of risk to certain types of applications. In that process, general-purpose AI technologies such as language models have become a new sticking point. Any governance effort faces the challenges of regulating both general and specific AI tools, as well as an array of potential problems including misinformation, deepfakes, surveillance, and more.
These are the key policies and approaches mentioned in the white papers:

Extension of Current Regulatory and Liability Approaches: The framework proposes extending current regulatory and liability approaches to cover AI. It suggests leveraging existing U.S. government entities that oversee relevant domains for regulating AI tools. This is seen as a practical approach, starting with areas where human activity is already being regulated and deemed high risk.

Identification of Purpose and Intent of AI Tools: The framework emphasizes the importance of AI providers defining the purpose and intent of AI applications in advance. This identification process would enable the application of relevant regulations based on the specific purpose of AI tools.

Responsibility and Accountability: The policy brief underscores the responsibility of AI providers to clearly define the purpose and intent of their tools. It also suggests establishing guardrails to prevent misuse and determining the extent of accountability for specific problems. The framework aims to identify situations where end users could reasonably be held responsible for the consequences of misusing AI tools.

Advances in Auditing of AI Tools: The policy brief calls for advances in auditing new AI tools, whether initiated by the government, user-driven, or arising from legal liability proceedings. Public standards for auditing are recommended, potentially established by a nonprofit entity or a federal entity similar to the National Institute of Standards and Technology (NIST).

Consideration of a Self-Regulatory Organization (SRO): The framework suggests considering the creation of a new, government-approved "self-regulatory organization" (SRO) agency for AI. This SRO, similar to FINRA for the financial industry, could accumulate domain-specific knowledge, ensuring responsiveness and flexibility in engaging with a rapidly changing AI industry.

Encouragement of Research for Societal Benefit: The policy papers highlight the importance of encouraging research on how to make AI beneficial to society. For instance, there is a focus on exploring the possibility of AI augmenting and aiding workers rather than replacing them, leading to long-term economic growth distributed throughout society.

Addressing Legal Issues Specific to AI: The framework acknowledges the need to address specific legal matters related to AI, including copyright and intellectual property issues. Special consideration is also mentioned for "human plus" legal issues, where AI capabilities go beyond human capacities, such as mass surveillance tools.

Broadening Perspectives in Policymaking: The ad hoc committee emphasizes the need for a broad range of disciplinary perspectives in policymaking, advocating for academic institutions to play a role in addressing the interplay between technology and society. The goal is to govern AI effectively by considering both technical and social systems.
Privacy

Republican Presidential Candidates Debate Anonymity on Social Media (cnbc.com)

Four Republican candidates for U.S. president debated Wednesday, and moderator Megyn Kelly had a tough question for former South Carolina governor Nikki Haley: "Can you please speak to the requirement that you said that every anonymous internet user needs to out themselves?"

Haley: What I said was that social media companies need to show us their algorithms. I also said there are millions of bots on social media right now. They're foreign, they're Chinese, they're Iranian. I will always fight for freedom of speech for Americans; we do not need freedom of speech for Russians and Iranians and Hamas. We need social media companies to go and fight back on all of these bots that are happening. That's what I said.

As a mom, do I think social media would be more civil if we went and had people's names next to that? Yes, I do think that, because I think we've got too much cyberbullying, I think we've got child pornography and all of those things. But having said that, I never said government should go and require anyone's name.

DeSantis: That's false.

Haley: What I said —

DeSantis: You said I want your name. As president of the United States, her first day in office, she said one of the first things I'm going to do --

Haley: I said we were going to get the millions of bots.

DeSantis: "All social medias? I want your name." A government i.d. to dox every American. That's what she said. You can roll the tape. She said I want your name — and that was going to be one of the first things she did in office. And then she got real serious blowback — and understandably so, because it would be a massive expansion of government. We have anonymous speech. The Federalist Papers were written with anonymous writers — Jay, Madison, and Hamilton, they went under "Publius". It's something that's important — and especially given how conservatives have been attacked and they've lost jobs and they've been cancelled. You know the regime would use that to weaponize that against our own people. It was a bad idea, and she should own up to it.

Haley: This cracks me up, because Ron is so hypocritical, because he actually went and tried to push a law that would stop anonymous people from talking to the press, and went so far to say bloggers should have to register with the state --

DeSantis: That's not true.

Haley: — if they're going to write about elected officials. It was in the — check your newpaper. It was absolutely there.

DeSantis quickly attributed the introduction of that legislation to "some legislator".

The press had already extensively written about Haley's position on anonymity on social media. Three weeks ago Business Insider covered a Fox News interview, and quoted Nikki Haley as saying: "When I get into office, the first thing we have to do, social media companies, they have to show America their algorithms. Let us see why they're pushing what they're pushing. The second thing is every person on social media should be verified by their name." Haley said her proposals were necessary to counter the "national security threat" posed by anonymous social media accounts and social media bots. "When you do that, all of a sudden people have to stand by what they say, and it gets rid of the Russian bots, the Iranian bots, and the Chinese bots," Haley said. "And then you're gonna get some civility when people know their name is next to what they say, and they know their pastor and their family member's gonna see it. It's gonna help our kids and it's gonna help our country," she continued... A representative for the Haley campaign told Business Insider that Haley's proposals were "common sense."

"We all know that America's enemies use anonymous bots to spread anti-American lies and sow chaos and division within our borders. Nikki believes social media companies need to do a better job of verifying users so we can crack down on Chinese, Iranian, and Russian bots," the representative said.

The next day CNBC reported that Haley "appeared to add a caveat... suggesting Wednesday that Americans should still be allowed to post anonymously online." A spokesperson for Haley's campaign added, "Social media companies need to do a better job of verifying users as human in order to crack down on anonymous foreign bots. We can do this while protecting America's right to free speech and Americans who post anonymously."

Privacy issues had also come up just five minutes earlier in the debate. In March America's Treasury Secretary had recommended the country "advance policy and technical work on a potential central bank digital currency, or CBDC, so the U.S. is prepared if CBDC is determined to be in the national interest."

But Florida governor Ron DeSantis spoke out forcefully against the possibility. "They want to get rid of cash, crypto, they want to force you to do that. They'll take away your privacy. They will absolutely regulate your purchases. On Day One as president, we take the idea of Central Bank Digital Currency, and we throw it in the trash can. It'll be dead on arrival." [The audience applauded.]
NASA

SpaceX Plans Key NASA Demonstration For Next Starship Launch (cnbc.com)

SpaceX's next test of its Starship rocket is expected to include "a propellant transfer demonstration." CNBC reports: SpaceX last month launched its second Starship flight, a test which saw the company make progress in development of the monster rocket yet fall short of completing the full mission. The propellant transfer demonstration would require that the rocket reach orbit as one of the demo's goals. A successful attempt would push Starship beyond its benchmarks reached thus far. "NASA and SpaceX are reviewing options for the demonstration to take place during an integrated flight test of Starship and the Super Heavy rocket. However, no final decisions on timing have been made," NASA spokesperson Jimi Russell said in a statement to CNBC.

The "propellant transfer demonstration" falls under a NASA "Tipping Point" contract that the agency awarded SpaceX in 2020 for $53.2 million. As part of the contract, NASA wants SpaceX to develop and test "Cryogenic Fluid Management" (CFM) technology, which the agency notes is essential for future missions to the moon and Mars. [...] Under the NASA contract, SpaceX's first demo will involve transferring 10 metric tons of liquid oxygen between tanks within the Starship rocket. While Starship won't be rendezvousing with another tanker rocket for this demo, NASA considers the test progress in maturing the tech. "The goal is to advance cryogenic fluid transfer and fill level gauging technology through technology risk assessment, design and prototype testing, and in-orbit demonstration. The demonstration will decrease key risks for large-scale propellant transfer in the lead-up to future human spaceflight missions," NASA says.

Businesses

Nvidia Beats TSMC and Intel To Take Top Chip Industry Revenue Crown For the First Time (tomshardware.com)

Nvidia has swung from fourth to first place in an assessment of chip industry revenue published today. From a report: Taipei-based financial analyst Dan Nystedt noted that the green team took the revenue crown from contract chip-making titan TSMC as Q3 financials came into view. Those keeping an eye on the world of investing and finance will have seen our report about Nvidia's earnings explosion, evidenced by the firm's publishing of its Q3 FY24 results.

Nvidia charted an amazing performance, with a headlining $18.12 billion in revenue for the quarter, up 206% year-over-year (YoY). The firm's profits were also through the roof, and Nystedt posted a graph showing Nvidia elbowed past its chip industry rivals on that metric in Q3 2023, too. Nvidia's advance is supported by multiple highly successful operating segments, which have compounded its revenue and income growth. The latest set of financials, shared with investors earlier this week, again showed clear evidence of a seismic shift in revenue.

The Courts

Sarah Silverman Hits Stumbling Block in AI Copyright Infringement Lawsuit Against Meta (hollywoodreporter.com) 93

Winston Cho writes via The Hollywood Reporter: A federal judge has dismissed most of Sarah Silverman's lawsuit against Meta over the unauthorized use of authors' copyrighted books to train its generative artificial intelligence model, marking the second ruling from a court siding with AI firms on novel intellectual property questions presented in the legal battle. U.S. District Judge Vince Chhabria on Monday offered a full-throated denial of one of the authors' core theories that Meta's AI system is itself an infringing derivative work made possible only by information extracted from copyrighted material. "This is nonsensical," he wrote in the order. "There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs' books."

Another of Silverman's arguments, that every result produced by Meta's AI tools constitutes copyright infringement, was dismissed because she didn't offer evidence that any of the outputs "could be understood as recasting, transforming, or adapting the plaintiffs' books." Chhabria gave her lawyers a chance to replead the claim, along with five others that weren't allowed to advance. Notably, Meta didn't move to dismiss the allegation that the copying of books for purposes of training its AI model rises to the level of copyright infringement.

In July, Silverman and two authors filed a class action lawsuit against Meta and OpenAI for allegedly using their content without permission to train AI language models.

Supercomputing

Linux Foundation Announces Intent to Form 'High Performance Software Foundation' (linuxfoundation.org) 5

This week the Linux Foundation "announced the intention to form the High Performance Software Foundation."

"Through a series of technical projects, the High Performance Software Foundation aims to build, promote, and advance a portable software stack for high performance computing by increasing adoption, lowering barriers to contribution, and supporting development efforts." As use of high performance computing becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation intends to leverage investments made by the United States Department of Energy's Exascale Computing Project, the EuroHPC Joint Undertaking, and other international projects in accelerated high performance computing to exploit the performance of this diversifying set of architectures. As an umbrella project under the Linux Foundation, HPSF intends to provide a neutral space for pivotal projects in the high performance software ecosystem, enabling industry, academia, and government entities to collaborate on the scientific software stack.

The High Performance Software Foundation already benefits from strong support across the high performance computing landscape, including leading companies and organizations like Amazon Web Services, Argonne National Laboratory, CEA, CIQ, Hewlett Packard Enterprise, Intel, Kitware, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, NVIDIA, Oak Ridge National Laboratory, Sandia National Laboratory, and the University of Oregon.

Its first open source technical projects include:
  • Spack: the high performance computing package manager.
  • Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.
  • AMReX: a performance-portable software framework designed to accelerate solving partial differential equations on block-structured, adaptively refined meshes.
  • WarpX: a performance-portable Particle-in-Cell code with advanced algorithms that won the 2022 Gordon Bell Prize.
  • Trilinos: a collection of reusable scientific software libraries, known in particular for linear, non-linear, and transient solvers, as well as optimization and uncertainty quantification.
  • Apptainer: a container system and image format specifically designed for secure high performance computing.
  • VTK-m: a toolkit of scientific visualization algorithms for accelerator architectures.
  • HPCToolkit: performance measurement and analysis tools for computers ranging from laptops to the world's largest GPU-accelerated supercomputers.
  • E4S: the Extreme-scale Scientific Software Stack.
  • Charliecloud: a lightweight, fully unprivileged container implementation tailored to high performance computing.

AI

Google DeepMind's Weather AI Can Forecast Extreme Weather Faster and More Accurately 40

In research published in Science today, Google DeepMind's model, GraphCast, was able to predict weather conditions up to 10 days in advance, more accurately and much faster than the current gold standard. From a report: GraphCast outperformed the model from the European Centre for Medium-Range Weather Forecasts (ECMWF) in more than 90% of over 1,300 test areas. And on predictions for Earth's troposphere -- the lowest part of the atmosphere, where most weather happens -- GraphCast outperformed the ECMWF's model on more than 99% of weather variables, such as rain and air temperature. Crucially, GraphCast can also offer meteorologists accurate warnings, much earlier than standard models, of conditions such as extreme temperatures and the paths of cyclones. In September, GraphCast accurately predicted that Hurricane Lee would make landfall in Nova Scotia nine days in advance, says Remi Lam, a staff research scientist at Google DeepMind. Traditional forecasting models pinpointed the hurricane's landfall in Nova Scotia only six days in advance.

[...] Traditionally, meteorologists use massive computer simulations to make weather predictions. These simulations are energy intensive and time consuming to run because they work through many physics-based equations covering weather variables such as temperature, precipitation, pressure, wind, humidity, and cloudiness. GraphCast uses machine learning to do these calculations in under a minute. Instead of using the physics-based equations, it bases its predictions on four decades of historical weather data. GraphCast uses graph neural networks, which map Earth's surface into more than a million grid points. At each grid point, the model predicts the temperature, wind speed and direction, and mean sea-level pressure, as well as other conditions like humidity. The neural network is then able to find patterns and draw conclusions about what will happen next for each of these data points.
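
The rollout pattern described above can be sketched in miniature. The toy below is entirely illustrative: a random linear map stands in for the trained graph neural network, and a tiny grid stands in for the million-point mesh. What it does show accurately is the autoregressive structure such models use, where each prediction is fed back in as the next input:

```python
import numpy as np

# Toy autoregressive forecaster. The real GraphCast is a graph neural
# network over more than a million grid points; here a single linear
# "learned" update on a tiny lat/lon grid stands in for it.
rng = np.random.default_rng(0)

GRID = (4, 8)            # latitude x longitude cells (illustrative)
VARS = 3                 # e.g. temperature, wind speed, pressure
STATE_DIM = GRID[0] * GRID[1] * VARS

# Stand-in for weights that would be fit on decades of historical data.
W = np.eye(STATE_DIM) + 0.01 * rng.standard_normal((STATE_DIM, STATE_DIM))

def step(state: np.ndarray) -> np.ndarray:
    """One 6-hour update: next global state predicted from the current one."""
    return W @ state

def forecast(initial: np.ndarray, steps: int) -> list[np.ndarray]:
    """Roll the model forward autoregressively: each output becomes
    the next input, as in GraphCast-style models."""
    states = [initial]
    for _ in range(steps):
        states.append(step(states[-1]))
    return states

initial = rng.standard_normal(STATE_DIM)
traj = forecast(initial, steps=40)   # 40 steps x 6 h = 10-day horizon
print(len(traj), traj[-1].shape)
```

Each rollout is a chain of cheap matrix operations rather than a physics simulation, which is why inference can run in under a minute on modest hardware.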

United Kingdom

Tech Groups Fear New Powers Will Allow UK To Block Encryption (ft.com) 40

Tech groups have called on ministers to clarify the extent of proposed powers that they fear would allow the UK government to intervene and block the rollout of new privacy features for messaging apps. FT: The Investigatory Powers Amendment Bill, which was set out in the King's Speech on Tuesday, would oblige companies to inform the Home Office in advance about any security or privacy features they want to add to their platforms, including encryption. At present, the government has the power to force telecoms companies and messaging platforms to supply data on national security grounds and to help with criminal investigations.

The new legislation was designed to "recalibrate" those powers to respond to risks posed to public safety by multinational tech companies rolling out new services that "preclude lawful access to data," the government said. But Meredith Whittaker, president of private messaging group Signal, urged ministers to provide more clarity on what she described as a "bellicose" proposal amid fears that, if enacted, the new legislation would allow ministers and officials to veto the introduction of new safety features. "We will need to see the details, but what is being described suggests an astonishing level of technically confused government over-reach that will make it nearly impossible for any service, homegrown or foreign, to operate with integrity in the UK," she told the Financial Times.

AI

Artists May 'Poison' AI Models Before Copyright Office Can Issue Guidance (arstechnica.com) 66

An anonymous reader writes: Artists have spent the past year fighting companies that have been training AI image generators—including popular tools like the impressively photorealistic Midjourney or the ultra-sophisticated DALL-E 3—on their original works without consent or compensation. Now, the United States has promised to finally get serious about addressing the copyright concerns raised by AI, President Joe Biden said in his much-anticipated executive order on AI, which was signed this week. The US Copyright Office had already been seeking public input on AI concerns over the past few months through a comment period ending on November 15. Biden's executive order has clarified that following this comment period, the Copyright Office will publish the results of its study. And then, within 180 days of that publication—or within 270 days of Biden's order, "whichever comes later"—the Copyright Office's director will consult with Biden to "issue recommendations to the President on potential executive actions relating to copyright and AI."

"The recommendations shall address any copyright and related issues discussed in the United States Copyright Office's study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training," Biden's order said. That means that potentially within the next six to nine months (or longer), artists may have answers to some of their biggest legal questions, including a clearer understanding of how to protect their works from being used to train AI models. Currently, artists do not have many options to stop AI image makers—which generate images based on user text prompts—from referencing their works. Even companies like OpenAI, which recently started allowing artists to opt out of having works included in AI training data, only allow artists to opt out of future training data. [...] According to The Atlantic, this opt-out process—which requires artists to submit requests for each artwork and could be too cumbersome for many artists to complete—leaves artists stuck with only the option of protecting new works that "they create from here on out." It seems like it's too late to protect any work "already claimed by the machines" in 2023, The Atlantic warned. And this issue clearly affects a lot of people. A spokesperson told The Atlantic that Stability AI alone has fielded "over 160 million opt-out requests in upcoming training." Until federal regulators figure out what rights artists ought to retain as AI technologies rapidly advance, at least one artist—cartoonist and illustrator Sarah Andersen—is advancing a direct copyright infringement claim against Stability AI, maker of Stable Diffusion, another remarkable AI image synthesis tool.

Andersen, whose proposed class action could impact all artists, has about a month to amend her complaint to "plausibly plead that defendants' AI products allow users to create new works by expressly referencing Andersen's works by name," if she wants "the inferences" in her complaint "about how and how much of Andersen's protected content remains in Stable Diffusion or is used by the AI end-products" to "be stronger," a judge recommended. In other words, under current copyright laws, Andersen will likely struggle to win her legal battle if she fails to show the court which specific copyrighted images were used to train AI models and demonstrate that those models used those specific images to spit out art that looks exactly like hers. Citing specific examples will matter, one legal expert told TechCrunch, because arguing that AI tools mimic styles likely won't work—since "style has proven nearly impossible to shield with copyright." Andersen's lawyers told Ars that her case is "complex," but they remain confident that she can win, possibly because, as other experts told The Atlantic, she might be able to show that "generative-AI programs can retain a startling amount of information about an image in their training data—sometimes enough to reproduce it almost perfectly." But she could fail if the court decides that using data to train AI models is fair use of artists' works, a legal question that remains unclear.

Businesses

Researchers Revolt Against Weekend Conferences (nature.com) 214

In response to studies that relate high rates of female attrition from biomedical research fields to the obligations of motherhood, researchers concerned about inclusivity are now debating the issue of weekend conference duties. Nature: Because published findings are often old news in the rapidly changing biomedical fields, in-person conferences offer a crucial opportunity for scientists to stay current on trends that shape projects and funding outcomes. Yet fields often expect rock-star-like travel schedules on an economy-class budget in addition to long, irregular weekday hours at the laboratory. This is why early-career scientists with children say that they must seek alternative childcare or risk being scooped or excluded from a collaboration simply because they missed a weekend conference.

International meetings are often scheduled over weekends because that's the only time venues have availability. Few cities have both suitable venues and enough hotel space to welcome 21,000 people from around the world, and even meetings for 3,000 researchers must be booked many years in advance. Because local businesses and regional associations tend to book venues during the working week, large meetings that span three to five days often need to start or end over a weekend. Women who continue to break the glass ceiling in biomedicine are now pitching this timing as an example of unnecessary conflict between work and family.

AI

New AWS Service Lets Customers Rent Nvidia GPUs For Quick AI Projects 7

An anonymous reader quotes a report from TechCrunch: More and more companies are running large language models, which require access to GPUs. The most popular of those by far are from Nvidia, making them expensive and often in short supply. Renting a long-term instance from a cloud provider when you only need access to these costly resources for a single job doesn't necessarily make sense. To help solve that problem, AWS launched Amazon Elastic Compute Cloud (EC2) Capacity Blocks for ML today, enabling customers to buy access to these GPUs for a defined amount of time, typically to run some sort of AI-related job such as training a machine learning model or running an experiment with an existing model.

The product gives customers access to NVIDIA H100 Tensor Core GPU instances in cluster sizes of one to 64 instances, with 8 GPUs per instance. They can reserve time for up to 14 days in 1-day increments, up to 8 weeks in advance. When the timeframe is over, the instances shut down automatically. The new product lets users sign up for the number of instances they need for a defined block of time, much like reserving a hotel room for a certain number of days (as the company put it). From the customer's perspective, they know exactly how long the job will run, how many GPUs they'll use and how much it will cost up front, giving them cost certainty. As users sign up for the service, it displays the total cost for the timeframe and resources. Users can dial that up or down, depending on their resource appetite and budget, before agreeing to buy. The new feature is generally available starting today in the AWS US East (Ohio) region.
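
The up-front cost certainty described above is simple arithmetic, sketched below. The hourly rate and the helper function are hypothetical, not actual AWS pricing or API; only the cluster-size and duration limits come from the article:

```python
# Back-of-the-envelope cost check for a GPU capacity reservation, in the
# spirit of EC2 Capacity Blocks: instance count and block length are fixed
# up front, so total cost is known before committing.
GPUS_PER_INSTANCE = 8              # H100s per instance, per the article
HOURLY_RATE_PER_INSTANCE = 98.32   # hypothetical $/instance-hour

def block_cost(instances: int, days: int) -> float:
    """Total up-front cost of reserving `instances` instances for `days` days."""
    assert 1 <= instances <= 64, "cluster sizes of one to 64 instances"
    assert 1 <= days <= 14, "reservable in 1-day increments, up to 14 days"
    return instances * days * 24 * HOURLY_RATE_PER_INSTANCE

cost = block_cost(instances=4, days=2)
print(f"4 instances ({4 * GPUS_PER_INSTANCE} GPUs) for 2 days: ${cost:,.2f}")
```

Because every term is known at reservation time, the displayed total is exact rather than an estimate, which is the contrast with open-ended on-demand billing.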

AI

Biden Signs Executive Order To Oversee and Invest in AI (nbcnews.com) 36

President Joe Biden signed a wide-ranging executive order on artificial intelligence Monday, setting the stage for some industry regulations and funding for the U.S. government to further invest in the technology. From a report: The order is broad, and its focuses range from civil rights and industry regulations to a government hiring spree. In a media call previewing the order Sunday, a senior White House official, who asked to not be named as part of the terms of the call, said AI has so many facets that effective regulations have to cast a wide net. "AI policy is like running into a decathlon, and there's 10 different events here," the official said. "And we don't have the luxury of just picking 'we're just going to do safety' or 'we're just going to do equity' or 'we're just going to do privacy.' You have to do all of these things."

The official also called for "significant bipartisan legislation" to further advance the country's interests with AI. Senate Majority Leader Chuck Schumer, D-N.Y., held a private forum in September with industry leaders but has yet to introduce significant AI legislation. Some of the order builds on a previous nonbinding agreement that seven of the top U.S. tech companies developing AI agreed to in July, like hiring outside experts to probe their systems for weaknesses and sharing their critical findings. The order leverages the Defense Production Act to legally require those companies to share safety test results with the federal government.

China

Huawei's Profit Doubles With Made-in-China Chip Breakthrough (yahoo.com) 148

Bloomberg believes it has identified the source of the advanced chips in Huawei's newest smartphone, citing "people familiar with the matter." In a suggestion that export restrictions on Europe's most valuable tech company may have come too late to stem China's advances in chipmaking, ASML's so-called immersion deep ultraviolet machines were used in combination with tools from other companies to make the Huawei Technologies Co. chip, the people said, asking not to be identified discussing information that's not public. ASML declined to comment.

There is no suggestion that their sales violated export restrictions... ASML has never been able to sell its EUV machines to China because of export restrictions. But less advanced DUV models can be retooled with deposition and etching gear to produce 7-nanometer and possibly even more advanced chips, according to industry analysts. The process is much more expensive than using EUV, making it very difficult to scale production in a competitive market environment. In China, however, the government is willing to shoulder a significant portion of chipmaking costs.
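
The cost gap the analysts describe comes down to exposure counts: a critical layer that EUV prints in one pass takes several DUV exposure-plus-etch passes. A toy calculation (all numbers hypothetical, chosen only to illustrate the shape of the multi-patterning penalty) makes that concrete:

```python
# Illustrative-only arithmetic for why 7 nm via DUV multi-patterning is
# costlier than EUV. Real layer counts and per-pass costs vary widely;
# these constants are placeholders for the structure of the tradeoff.
EUV_PASSES_PER_CRITICAL_LAYER = 1
DUV_PASSES_PER_CRITICAL_LAYER = 4   # e.g. quadruple-patterning-style flows
CRITICAL_LAYERS = 10                # hypothetical count for a 7 nm process
COST_PER_PASS = 1.0                 # normalized cost of one litho/etch pass

def critical_layer_cost(passes_per_layer: int) -> float:
    """Normalized cost contribution of the critical layers for one wafer."""
    return passes_per_layer * CRITICAL_LAYERS * COST_PER_PASS

euv = critical_layer_cost(EUV_PASSES_PER_CRITICAL_LAYER)
duv = critical_layer_cost(DUV_PASSES_PER_CRITICAL_LAYER)
print(f"DUV multi-patterning costs ~{duv / euv:.0f}x EUV on critical layers")
```

Extra passes also mean more tool time and more chances for defects, which is why scaling such a flow is hard in a competitive market unless, as the article notes, a government absorbs much of the cost.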

Chinese companies have been legally stockpiling DUV gear for years — especially after the U.S. introduced its initial export controls last year before getting Japan and the Netherlands on board... According to an investor presentation published by the company last week, ASML experienced a jump in business from China this year as chipmakers there boosted orders ahead of the export controls taking full effect in 2024. China accounted for 46% of ASML's sales in the third quarter, compared with 24% in the previous quarter and 8% in the three months ending in March.

Another article from Bloomberg includes this prediction: The U.S. won't be able to stop Huawei and SMIC from making progress in chip technology, Burn J. Lin, a former Taiwan Semiconductor Manufacturing Co. vice president, told Bloomberg News. Semiconductor Manufacturing International Corp should be able to advance to the next generation at 5 nanometers with machines from ASML Holding NV that it already operates, said Lin, who at TSMC championed the lithography technology that transformed chipmaking.

The end result is that Huawei's profit "more than doubled during the quarter it revealed its biggest achievement in chip technology," the article reports, "adding to signs the Chinese tech leader is steadying a business rocked by US sanctions." The Shenzhen company reported a 118% surge in net profit to 26.4 billion yuan ($3.6 billion) in the September quarter, and a slight rise in sales to 145.7 billion yuan, according to Bloomberg News calculations from nine-month results released Friday. Those numbers included initial sales of the vastly popular Mate 60 Pro, which began shipping in late August... The gadget sold out almost instantly, spurring expectations it could rejuvenate Huawei's fortunes and potentially cut into Apple Inc.'s lead in China, given signs of a disappointing debut for the iPhone 15...

A resurgent Huawei would pose problems not just for Apple but also local brands from Xiaomi Corp. to Oppo and Vivo, all of which are fighting for sales in a shrinking market.
