Open Source

Jim Zemlin, 'Head Janitor of Open Source,' Marks 20 Years At Linux Foundation (zdnet.com) 3

ZDNet's Steven Vaughan-Nichols interviews Jim Zemlin, Executive Director of The Linux Foundation and "head janitor of open source." An anonymous Slashdot reader shares an excerpt from the article: When I first met Zemlin, he was the head of the Free Standards Group (FSG). The FSG's main project was the Linux Standard Base (LSB) project. The LSB's goal was to get everyone in the Linux desktop world to agree on standards to ensure compatibility among distributions and their applications. Oh well, some struggles are never-ending. Another group, the Open Source Development Labs (OSDL), was simultaneously working on standardizing enterprise Linux. The two non-profits had the same goal of making Linux more useful and popular, so they agreed to merge. Zemlin was the natural pick to head this new group, which would be called The Linux Foundation.

At the time, he told me: "The combination of the two groups really enables the Linux platform and all the members of the Linux Foundation to work really effectively. I clearly understand what the organization's charter needs to be: We need to provide services that are useful to the community and industry, as well as protect, promote, and continue to standardize the platform." While initially focused on Linux, the Foundation's scope expanded significantly around 2010. Until then, the organization had hosted about a dozen projects related to the Linux operating system. However, as Linux gained dominance in various sectors, including high-performance computing, automotive, embedded systems, mobile devices, and cloud computing, the Linux Foundation started to broaden its horizons.
Zemlin says there are three words that sum up the Linux Foundation's effort to keep open source safe and open to a new generation of developers: helpful, hopeful, and humble.

"You must be genuinely helpful to developers. We're the janitors of open source. The Linux Foundation takes care of all the boring but important stuff necessary to support software development so developers can focus on code. This work includes events, project marketing, project infrastructure, finances for projects, training and education, legal assistance, standards, facilitation, open source evangelism, and much, much more."

He continued: "The hopeful part is really the optimistic part. When in 2007, people were saying that this would never work. When leaders of huge companies tell everyone that you know all that you're doing is a cancer or terrible, you have to have a sense of optimism that there are better days ahead. You have to always be thinking, 'No, we can do it and stick with it.'"

However, Zemlin concluded that the number one trait that's "important in working in open source is this idea of humility. I work with hundreds of people every day, and none of them work at the Linux Foundation. We must lead through influence, and that really has been the secret for 20 years of working here without going totally insane. If you can check your ego and take criticism, open source actually turns out to be a really fun community to work with."
Privacy

Strava Closes the Gates To Sharing Fitness Data With Other Apps (theverge.com) 6

The Verge's Richard Lawler reports: Strava recently informed its users and partners that new terms for its API restrict the data that third-party apps can display, bar them from replicating Strava's look, and prohibit using data "for any model training related to artificial intelligence, machine learning or similar applications." The policy is effective as of November 11th, even though Strava's own post about the change is dated November 15th.

There are plenty of posts on social media complaining about the sudden shift, but one place where dissent won't be tolerated is Strava's own forums. The company says, "...posts requesting or attempting to have Strava revert business decisions will not be permitted."
Brian Bell, Strava's VP of Communications and Social Impact, said in a statement: "We anticipate that these changes will affect only a small fraction (less than .1 percent) of the applications on the Strava platform -- the overwhelming majority of existing use cases are still allowed, including coaching platforms focused on providing feedback to users and tools that help users understand their data and performance."
AI

'Generative AI Is Still Just a Prediction Machine' (hbr.org) 94

AI tools remain prediction engines despite new capabilities, requiring both quality data and human judgment for successful deployment, according to new analysis. While generative AI can now handle complex tasks like writing and coding, its fundamental nature as a prediction machine means organizations must understand its limitations and provide appropriate oversight, argue Ajay Agrawal (Geoffrey Taber Chair in Entrepreneurship and Innovation at the University of Toronto's Rotman School of Management), Joshua Gans (Jeffrey S. Skoll Chair in Technical Innovation and Entrepreneurship at the Rotman School, and the chief economist at the Creative Destruction Lab), and Avi Goldfarb (Rotman Chair in Artificial Intelligence and Healthcare at the Rotman School) in a piece published in Harvard Business Review. Poor data can lead to errors, while lack of human judgment in deployment can result in strategic failures, particularly in high-stakes situations. An excerpt from the story: Thinking of computers as arithmetic machines is more important than most people intuitively grasp because that understanding is fundamental to using computers effectively, whether for work or entertainment. While video game players and photographers may not think about their computer as an arithmetic machine, successfully using a (pre-AI) computer requires an understanding that it strictly follows instructions. Imprecise instructions lead to incorrect results. Playing and winning at early computer games required an understanding of the underlying logic of the game.

[...] AI's evolution has mirrored this trajectory, with many early applications directly related to well-established prediction tasks and, more recently, AI reframing a wide number of applications as predictions. Thus, the higher value AI applications have moved from predicting loan defaults and machine breakdowns to a reframing of writing, drawing, and other tasks as prediction.
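The authors' framing, that generation is just prediction, can be made concrete with a toy next-word predictor. The sketch below is purely illustrative (a hypothetical bigram model over a made-up corpus, nothing like how real LLMs are built): it simply emits the statistically most likely continuation of the previous word, which is all "generation" amounts to at this scale.

```python
# Toy illustration of "generation as prediction": a bigram model that,
# given the previous word, predicts the most frequent next word seen in
# training text. Corpus is made up; real LLMs predict tokens with neural
# networks, but the framing is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict(prev_word):
    """Return the most frequently observed continuation of prev_word."""
    return next_counts[prev_word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" only once
```

Scaled up from bigram counts to a neural network over billions of tokens, this is the prediction machine the authors describe; the judgment about whether the prediction is fit for purpose still sits outside the model.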

Education

Can Google Scholar Survive the AI Revolution? 44

An anonymous reader quotes a report from Nature: Google Scholar -- the largest and most comprehensive scholarly search engine -- turns 20 this week. Over its two decades, some researchers say, the tool has become one of the most important in science. But in recent years, competitors that use artificial intelligence (AI) to improve the search experience have emerged, as have others that allow users to download their data. The impact that Google Scholar -- which is owned by web giant Google in Mountain View, California -- has had on science is remarkable, says Jevin West, a computational social scientist at the University of Washington in Seattle who uses the database daily. But "if there was ever a moment when Google Scholar could be overthrown as the main search engine, it might be now, because of some of these new tools and some of the innovation that's happening in other places," West says.

Many of Google Scholar's advantages -- free access, breadth of information and sophisticated search options -- "are now being shared by other platforms," says Alberto Martín-Martín, a bibliometrics researcher at the University of Granada in Spain. AI-powered chatbots such as ChatGPT and other tools that use large language models have become go-to applications for some scientists when it comes to searching, reviewing and summarizing the literature. And some researchers have swapped Google Scholar for them. "Up until recently, Google Scholar was my default search," says Aaron Tay, an academic librarian at Singapore Management University. It's still top of his list, but "recently, I started using other AI tools." Still, given Google Scholar's size and how deeply entrenched it is in the scientific community, "it would take a lot to dethrone," adds West. Anurag Acharya, co-founder of Google Scholar, at Google, says he welcomes all efforts to make scholarly information easier to find, understand and build on. "The more we can all do, the better it is for the advancement of science."
Acharya says Google Scholar uses AI to rank articles, suggest further search queries and recommend related articles. What Google Scholar does not yet provide are AI-generated summaries of search query results. According to Acharya, the company has yet to find "an effective solution" for summarizing conclusions from multiple papers in a brief manner that preserves all the important context.
Programming

On 15th Anniversary, Go Programming Language Rises in Popularity (go.dev) 40

The TIOBE index tries to track the popularity of programming languages by counting the number of search results for the language's name followed by the word "programming" (across 25 different search engines). And this month there were some surprises...

By TIOBE's reckoning, compared to a year ago PHP has now fallen from #7 to #12, while Delphi/Object Pascal shot up five spots from #16 to #11. In that same year, Fortran jumped from #12 to #8 — while both Visual Basic and SQL dropped down a single rank. Toward the top of the list, C actually fell from the #2 spot over the last 12 months to the #4 spot.

And Go just reached the #7 rank on the TIOBE's ranking of programming language popularity — "an all time high for Go," according to TIOBE CEO Paul Jansen. In this month's note, he explains what he thinks is unusual about this — starting by saying that Go programs are both fast, and easy in many ways — easy to deploy, easy to learn, and easy to understand. Python for instance is easy to learn but not fast, and deployment for larger Python programs is fragile due to dependencies on all kind of versioned libraries in the environment.

If compared to Rust for instance (another contender for a top position), Go is a tiny bit slower, but the Go programs are much easier to understand. The next hurdle for Go in the TIOBE index is JavaScript at position #6. That will be a tough one to pass. JavaScript is ubiquitous in software development, although for larger JavaScript systems we see a shift to TypeScript nowadays.

"If annual trends continue this way, Go will bypass JavaScript within 3 years," TIOBE's CEO predicts. (Adding "Let's see what the future has in store for Go...") Although the Go team actually has specific plans for the future, according to a blog post this week celebrating Go's 15th anniversary: We're working on making Go better for AI — and AI better for Go — by enhancing Go's capabilities in AI infrastructure, applications, and developer assistance. Go is a great language for building production systems, and we want it to be a great language for building production AI systems, too... For AI applications, we will continue building out first-class support for Go in popular AI SDKs, including LangChainGo and Genkit. And from its very beginning, Go aimed to improve the end-to-end software engineering process, so naturally we're looking at bringing the latest tools and techniques from AI to bear on reducing developer toil, leaving more time for the fun stuff — like actually programming!
TIOBE's top 10 programming language rankings for the month of November:
  1. Python
  2. C++
  3. Java
  4. C
  5. C#
  6. JavaScript
  7. Go
  8. Fortran
  9. Visual Basic
  10. SQL

Linux

Linux Kernel 6.12 Has Been Released (omgubuntu.co.uk) 54

Slashdot unixbhaskar writes: Linus has released a fresh Linux kernel for public consumption. Please give it a try and report any glitches to the maintainers for improvement. Also, please do not forget to express your appreciation to those tireless folks who did all the hard work for you.
The blog OMG Ubuntu calls it "one of the most biggest kernel releases for a while," joking that it's a "really real-time kernel." The headline feature in Linux 6.12 is mainline support for PREEMPT_RT. This patch set dramatically improves the performance of real-time applications by making kernel processes pre-emptible — effectively enabled proper real-time computing... Meanwhile, Linus Torvalds himself contributes a new method for user-space address masking designed to claw back some of the performance lost due to Spectre-v1 mitigations.

You might have heard that kernel devs have been working to add QR error codes to Linux's kernel panic BSOD screen (as a waterfall of error text is often cut off and not easily copied for ad-hoc debugging). Well, Linux 6.12 adds support for those during Direct Rendering Manager panics...

A slew of new RISC-V CPU ISA extensions are supported in Linux 6.12; hybrid CPU scaling in the Intel P-State driver lands ahead of upcoming Intel Core Ultra 2000 chips; and AMD P-State driver improves AMD Boost and AMD Preferred Core features.

More coverage from the blog 9to5Linux highlights a new scheduler called sched_ext, Clang support (including LTO) for nolibc, support for NVIDIA's virtual command queue implementation for SMMUv3, and "an updated cpuidle tool that now displays the residency value of cpuidle states for a clearer and more detailed view of idle state information when using cpuidle-info." Linux kernel 6.12 also introduces SWIG bindings for libcpupower to make it easier for developers to write scripts that use and extend the functionality of libcpupower, support for translating normalized error addresses reported by an AMD memory controller into system physical addresses using a UEFI mechanism called platform runtime mechanism (PRM), as well as simplified loading of microcode patches on AMD Zen and newer CPUs by using the family, model, and stepping encoded in the patch revision number...

Moreover, Linux 6.12 adds support for running as a protected guest on Android as well as perf and support for a bunch of new interconnect PMUs. It also adds the final conversions to the new Intel VFM CPU model matching macros, rewrites the PCM buffer allocation handling and locking optimizations, and improves the USB audio driver...

Government

What Happened When a Washington County Tried a 32-Hour Workweek? (cnn.com) 123

On a small network of islands north of Seattle, Washington, San Juan County just completed its first full year of 32-hour workweeks, reports CNN.

And Tuesday the county released a report touting "a host of positive outcomes — from recruiting to retention to employee happiness — and a cost savings of more than $975,000 compared to what the county would have paid if it met the union's pay increase demands." The county said the 32-hour workweek has attracted a host of new talent: Applications have spiked 85.5% and open positions are being filled 23.75% faster, while more employees are staying in their jobs — separation (employees quitting or retiring) dropped by 48%. And 84% of employees said their work-life balance was better. "This is meeting many of the goals that we set out to do when we implemented it," County Manager Jessica Hudson said. said, noting the county is looking for opportunities to expand the initiative...

Departments across San Juan County have implemented the 32-hour workweek differently, some staggering staffing to maintain their previous availability to the public while others have shortened schedules to be open just four days a week... "I tell people, you're not going to see things change from your perspective," said Joe Ingman, a park manager in the county. "Offices are going to stay open, bathrooms are going to get cleaned, grass is going to get mowed." His department adjusted schedules to stay staffed seven days a week, and while communication across shifts was an initial hurdle, issues were quickly ironed out. "It was probably the smoothest summer I've had, and I've been working in parks for over a decade," he said, crediting the new schedule as a boon for recruiting. While job postings used to languish unfilled for months, last summer the applicant pool was not only bigger but more qualified, and the two staffers he hired both cited coming to the county because of the 32-hour workweek.

"It's no more cost to the public to work 32 hours — but we have better applicants," he said. Ingman also said the four-day workweek has done wonders for his job satisfaction; he'd watched colleagues burn out for years, but now sees a path for his own future in the department... County employees have used their extra time off to spend less money on childcare, volunteer in their kids' schools, and contribute to the community... While San Juan County's motivation in adopting a shortened workweek was financial, the benefits its employees cite speak to a larger trend, as workplaces around the country increasingly explore flexible schedules to combat burnout and attract and retain talent.

A survey of CEOs this spring found nearly one third of large US companies were looking into solutions like four-day or four-and-a-half-day workweeks... Even without a reduction in total hours, a Gallup poll last year found a third day off would be widely embraced: 77% of US workers said a 4-day, 40-hour workweek would have a positive impact on their wellbeing.

One worker shared their thoughts with CNN. "Life shouldn't be about just working yourself into the ground..." And they added that "So far, I feel happy; I feel seen as an employee and as a human, and I feel like it could be a beautiful step forward for other people if we just trust it and try it."

They even had some advice for other employers. "Change happens by somebody actually doing the change. The only way we're going to find out if it works is by doing."
Businesses

Amazon Makes It Harder for Disabled Employees to Work From Home (yahoo.com) 63

"Amazon is making it harder for disabled employees to get permission to work from home," reports Bloomberg, a move they say shows Amazon's "determination" to enforce a five-days-a-week return to the office. The company recently told employees with disabilities that it was implementing a more rigorous vetting process, both for new requests to work from home and applications to extend existing arrangements. Affected workers must submit to a "multilevel leader review" and could be required to return to the office for monthlong trials to determine if accommodations meet their needs... Affected employees are receiving calls from "accommodation consultants" who explain how the new policy works. They review medical documentation and discuss how effective working from home has been for employees who've already received an accommodation as well as any previous attempts to help the person work in the office. If the consultant agrees that the person should be allowed to work from home, another Amazon manager must sign off. If they don't, the request goes to a third manager...

Some workers fear the process was designed to make requests less likely to be approved, two employees said. In internal chat rooms, according to one of them, employees have accused [Chief Executive Officer Andy] Jassy of hypocrisy because the bureaucratic process belies his stated determination to cut through red tape that he says is slowing Amazon down.

"Jassy says the return-to-office requirement will strengthen the company's culture, which he believes has suffered since the pandemic and become overly bureaucratic," the article points out. But it adds that down at the workforce level, the move "is seen by some employees as a way to get people to quit and shrink the workforce."
Transportation

'Automotive Grade Linux' Will Promote Open Source Program Offices for Automakers (prnewswire.com) 28

Automotive Grade Linux is a collaborative open source project developing "an open platform from the ground up that can serve as the de facto industry standard" for fast development of new features. Automakers have joined with tech companies and suppliers to speed up development (and adoption) of "a fully open software stack for the connected car" — hosted at the Linux Foundation, and "with Linux at its core..."

And this week they created a new Open Source Program Office expert group, led by Toyota, to promote the establishment of Open Source Program Offices within the automotive industry, "and encourage the sharing of information and best practices between them." Open source software has become more prevalent across the automotive industry as automakers invest more time and resources into software development. Automakers like Toyota and Subaru are using open source software for infotainment and instrument cluster applications. Other open source applications across the automotive industry include R&D, testing, vehicle-to-cloud and fleet management. "Historically, there has been little code contributed back to the open source community," said Dan Cauchy, Executive Director of Automotive Grade Linux. "Often, this was because the internal procedures or IT infrastructure weren't in place to support open source contributions. The rise of software-defined vehicles has led to a growing trend of automakers not just using, but also contributing, to open source software. Many organizations are also establishing Open Source Program Offices to streamline and organize open source activities to better support business goals."

Automakers including Toyota, Honda, and Volvo have already established Open Source Program Offices. The new AGL OSPO Expert Group provides a neutral space for them to share pain points and collaborate on solutions, exchange information, and develop best practices that can help other automakers build their own OSPOs. "Toyota has been participating in AGL and the broader open source community for over a decade," said Masato Endo, Group Manager of Open Source Program Group, Toyota. "We established an OSPO earlier this year to promote the use of open source software internally and to help guide how and where we contribute. We are looking forward to working with other open source leaders to solve common problems, collaborate on best practices, and invigorate open source activities in the automotive industry."

The AGL OSPO EG is led by Toyota with support from Panasonic and AISIN Corporation.

Patents

Open Source Fights Back: 'We Won't Get Patent-Trolled Again' (zdnet.com) 64

ZDNet's Steven Vaughan-Nichols reports: [...] At KubeCon North America 2024 this week, CNCF executive director Priyanka Sharma said in her keynote, "Patent trolls are not contributors or even adopters in our ecosystem. Instead, they prey on cloud-native adopters by abusing the legal system. We are here to tell the world that these patent trolls don't stand a chance because CNCF is uniting the ecosystem to deter them. Like a herd of musk oxen, we will run them off our pasture." CNCF CTO Chris Aniszczyk added: "The reason trolls can make money is that many companies find it too expensive to fight back, so they pay trolls a settlement fee to avoid the even higher cost of litigation. Now, when a whole herd of companies band together like musk oxen to drive a troll off, it changes the cost structure of fighting back. It disrupts their economic model."

How? Jim Zemlin, the Linux Foundation's executive director, said, "We don't negotiate with trolls. Instead, with United Patents, we go to the PTO and crush those patents. We strive to invalidate them by working with developers who have prior art, bringing this to the attention of the USPTO, and killing patents. No negotiation, no settlement. We destroy the very asset that made patent trolls' business work. Together, since we've started this effort, 90% of the time, we've been able to go in there and destroy these patents." "It's time for us to band together," said Joanna Lee, CNCF's VP of strategic programs and legal. "We encourage all organizations in our ecosystem to get involved. Join the fight, enhance your own company's protection, protect your customers, enhance our community defense, and save money on legal expenses."

While getting your company and its legal department involved in the effort to fend off patent trolls is important, developers can also help. CNCF announced the Cloud Native Heroes Challenge, a patent troll bounty program in which cloud-native developers and technologists can earn swag and win prizes. They're asking you to find evidence of preexisting technology -- referred to by patent lawyers as "prior art" -- that can kill off bad patents. This could be open-source documentation (including release notes), published standards or specifications, product manuals, articles, blogs, books, or any publicly available information. All entrants who submit an entry that conforms to the contest rules will receive a free "Cloud Native Hero" t-shirt that can be picked up at any future KubeCon+CloudNativeCon. The winner will also receive a $3,000 cash prize.

In the inaugural contest, the CNCF is seeking information that can be used to invalidate Claim 1 from US Patent US-11695823-B1. This is the major patent asserted by Edge Networking Systems against Kubernetes users. As is often the case with such patents, it's much too broad. This patent describes a network architecture that facilitates secure and flexible programmability between a user device and across a network with full lifecycle management of services and infrastructure applications. That describes pretty much any modern cloud system. If you can find prior art that describes such a system before June 13, 2013, you could be a winner. Some such materials have already been found. This is already listed in the "known references" tab of the contest information page and doesn't qualify. If you care about keeping open-source software easy and cheap to use -- or you believe trolls shouldn't be allowed to take advantage of companies that make or use programs -- you can help. I'll be doing some digging myself.

Power

Datacenters Line Up For 750MW of Oklo's Nuclear-Waste-Powered Small Reactors (theregister.com) 62

Datacenter operators are increasingly turning to small modular reactors (SMRs) like those developed by Oklo to meet growing energy demands. According to The Register, Oklo has secured commitments from two major datacenter providers for 750 MW of power, pending regulatory approvals. It brings the firm's planned nuclear build-out to 2.1 gigawatts. From the report: Oklo's designs are, from what we understand, inspired by the Experimental Breeder Reactor II (EBR-II) and utilize liquid-metal cooling. They are capable of producing between 15MW and 50MW of power, depending on the configuration. That means Oklo's datacenter customers plan to deploy somewhere between 15 and 50 of the reactors to satisfy their thirst for electricity. However, they may be waiting a while.

According to Oklo's website, the nuclear startup hopes to bring its first plant online before the end of the decade. Before that can happen, though, Oklo will need to obtain approval from the Nuclear Regulatory Commission -- something for which it says it's already submitted applications. In 2022, the watchdog rejected an Oklo plan to build a small atomic reactor in Idaho, citing "significant information gaps" on safety-related measures.

That said, Oklo has lately received support from US government agencies including the Department of Energy (DoE), which has awarded a site use permit, while Idaho National Laboratory -- home of EBR-II -- has provided fuel material to support the efforts. Speaking of fuel, Oklo's designs may not suffer from the challenges other SMR startups, like Terrapower, have encountered. Oklo's designs are intended to run on recycled nuclear waste products from traditional reactors. In fact, the startup is currently working with DoE national labs to develop new fuel recycling technologies. Oklo hopes to bring a commercial-scale recycling plan online by the early 2030s.

Programming

OpenMP 6.0 Released (phoronix.com) 11

Phoronix's Michael Larabel reports: The OpenMP Architecture Review Board announced from SC24 that OpenMP 6.0 is now available as a major upgrade to the OpenMP specification for multi-process programming within C / C++ / Fortran. A big emphasis on OpenMP 6.0 is making it easier for developers to embrace. OpenMP 6.0 aims to make it easier to support parallel programming in new applications, easier to adapt to new use-cases, and more fine-grained developer control.

OpenMP 6.0 simplifies task programming with support for task execution by free-agent threads, allowing for recording of task graphs for efficient replay, and other improvements. OpenMP 6.0 also brings support for array syntax applications, better control over memory allocations and accessibility, easier writing of asynchronous data transfers, and other improvements for enhanced device support / offloading. There is also easier programming of loop transformations, support for induction, support for C23 / Fortran 2023 / C++23, grater user control of storage resources and memory spaces, and other improvements.

Programming

Will We Care About Frameworks in the Future? (kinlan.me) 67

Paul Kinlan, who leads the Chrome and the Open Web Developer Relations team at Google, asks and answers the question (with a no.): Frameworks are abstractions over a platform designed for people and teams to accelerate their teams new work and maintenance while improving the consistency and quality of the projects. They also frequently force a certain type of structure and architecture to your code base. This isn't a bad thing, team productivity is an important aspect of any software.

I'm of the belief that software development is entering a radical shift that is currently driven by agents like Replit's and there is a world where a person never actually has to manipulate code directly anymore. As I was making broad and sweeping changes to the functionality of the applications by throwing the Agent a couple of prompts here and there, the software didn't seem to care that there was repetition in the code across multiple views, it didn't care about shared logic, extensibility or inheritability of components... it just implemented what it needed to do and it did it as vanilla as it could.

I was just left wondering if there will be a need for frameworks in the future? Do the architecture patterns we've learnt over the years matter? Will new patterns for software architecture appear that favour LLM management?

Android

Android 15's Virtual Machine Mandate is Aimed at Improving Security (androidauthority.com) 52

Google will require all new mobile chipsets launching with Android 15 to support its Android Virtualization Framework (AVF), a significant shift in the operating system's security architecture. The mandate, reports AndroidAuthority that got a hold of Android's latest Vendor Software Requirements document, affects major chipmakers including Qualcomm, MediaTek, and Samsung's Exynos division. New processors like the Snapdragon 8 Elite and Dimensity 9400 must implement AVF support to receive Android certification.

AVF, introduced with Android 13, creates isolated environments for security-sensitive operations including code compilation and DRM applications. The framework also enables full operating system virtualization, with Google demonstrating Chrome OS running in a virtual machine on Android devices.
Java

Java Proposals Would Boost Resistance to Quantum Computing Attacks (infoworld.com) 14

"Java application security would be enhanced through two proposals aimed at resisting quantum computing attacks," reports InfoWorld, "one plan involving digital signatures and the other key encapsulation." The two proposals reside in the OpenJDK JEP (JDK Enhancement Proposal) index.

The Quantum-Resistant Module-Lattice-Based Digital Signature Algorithm proposal calls for enhancing the security of Java applications by providing an implementation of the quantum-resistant module-latticed-based digital signature algorithm (ML-DSA). ML-DSA would secure against future quantum computing attacks by using digital signatures to detect unauthorized modifications to data and to authenticate the identity of signatories. ML-DSA was standardized by the United States National Institute of Standards and Technology (NIST) in FIPS 204.

The Quantum-Resistant Module-Lattice-Based Key Encapsulation Mechanism proposal calls for enhancing application security by providing an implementation of the quantum-resistant module-lattice-based key encapsulation mechanism (ML-KEM). KEMs are used to secure symmetric keys over insecure communication channels using public key cryptography. ML-KEM is designed to be secure against future quantum computing attacks and was standardized by NIST in FIPS 203.
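The JDK already ships a generic `javax.crypto.KEM` API (JEP 452, final in JDK 21) that ML-KEM would plug into. The sketch below runs the encapsulate/decapsulate round trip with the built-in DHKEM over X25519 as a stand-in; my assumption, based on the proposal's reuse of this API, is that requesting "ML-KEM" from `KEM.getInstance` would be the main change once the implementation ships.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.KEM;
import javax.crypto.SecretKey;

public class KemDemo {
    /** One encapsulate/decapsulate round trip; true if both sides derive the same secret. */
    public static boolean roundTrip() throws Exception {
        // Receiver's key pair ("X25519"/"DHKEM" stand in for ML-KEM here).
        KeyPair kp = KeyPairGenerator.getInstance("X25519").generateKeyPair();

        // Sender: derive a fresh symmetric key plus a ciphertext ("encapsulation")
        // that can safely travel over an insecure channel.
        KEM kem = KEM.getInstance("DHKEM");
        KEM.Encapsulated enc = kem.newEncapsulator(kp.getPublic()).encapsulate();
        SecretKey senderSecret = enc.key();

        // Receiver: recover the same symmetric key from the ciphertext.
        SecretKey receiverSecret = kem.newDecapsulator(kp.getPrivate())
                                      .decapsulate(enc.encapsulation());

        return Arrays.equals(senderSecret.getEncoded(), receiverSecret.getEncoded());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip()); // prints true
    }
}
```

The shared secret would then typically feed a key-derivation function to produce AES keys for the actual traffic; ML-KEM changes only how the encapsulation is computed, not this surrounding flow.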

AI

How Samsung Fell Behind in the AI Boom - and Lost $126 Billion in Market Value (cnbc.com) 14

After missing a chance to capitalize on the AI boom, "Samsung's profit has plunged," reports CNBC, and "around $126 billion has been wiped off its market value, according to data from S&P Capital IQ."

It's gotten so bad that "an executive issued a rare public apology about the company's recent financial performance." [A]s AI applications such as OpenAI's ChatGPT rose in popularity, the underlying infrastructure required to train the huge models they rely on became a bigger focus. Nvidia has emerged as the top player in this space with its graphics processing units (GPUs) that have become the gold standard used by tech giants for AI training. A crucial part of that semiconductor architecture is high-bandwidth memory, or HBM. This next generation of memory involves stacking multiple dynamic random access memory (DRAM) chips, but it had a small market before the AI boom. That's where Samsung got caught out and failed to invest...

SK Hynix saw this opportunity. The company aggressively launched HBM chips which were approved for use in Nvidia architecture and, in the process, the South Korean firm established a close relationship with the U.S. giant. Nvidia's CEO even asked the company to speed up supply of its next generation chip, underscoring the importance of HBM to its products. SK Hynix posted record quarterly operating profit in the September quarter...

Analysts said that Samsung is lagging behind competitors for a number of reasons, including underinvestment in HBM and the fact that it is not a first-mover. "It is fair to say that Samsung has not been able to close the gap with SK Hynix on the HBM development roadmap," said Kazunori Ito [director of equity research at Morningstar]. Samsung's ability to make a comeback in the short term appears to be closely linked to Nvidia. A company must pass a strict qualification process before Nvidia approves it as a HBM supplier — and Samsung has not yet completed this verification. But a green light from Nvidia could open the door for Samsung to return to growth and compete more effectively with SK Hynix, according to analysts.

China

TSMC Halts Advanced Chip Shipments To Chinese AI Companies 18

Starting November 11, TSMC plans to stop supplying 7 nm and smaller chips to Chinese companies working on AI processors and GPUs. "The move is reportedly to ensure it remains compliant with US export restrictions," reports The Register. From the report: This will not affect Chinese customers wanting 7 nm chips from TSMC for other applications such as mobile and communications, according to Nikkei, which said the overall impact on the chipmaker's revenue is likely to be minimal. TrendForce further cites another China-based source who claims the move was at the behest of the US Department of Commerce, which informed TSMC that any such shipments should not proceed unless approved and licensed by its BIS (Bureau of Industry and Security). We asked the agency for confirmation.

Any move by the silicon supremo is likely made out of caution, to pre-empt accusations from Washington that it isn't doing enough to prevent advanced technology from getting into the hands of Chinese entities that have been sanctioned. As TrendForce notes, it "highlights the foundry giant's delicate position in the global semiconductor supply chain amid the heating chip war between the world's two superpowers."

AI

AI's Huge Power Needs Give Oil Majors Incentive To Invest in Renewables, Says Adnoc Boss 13

Surging AI demand could push major oil companies to reinvest in renewable energy [non-paywalled link], Abu Dhabi National Oil Company CEO Sultan al-Jaber said this week. Al-Jaber's comments came as oil executives from Shell, BP and TotalEnergies met with Microsoft and other tech leaders in Abu Dhabi to discuss AI's growing energy needs and its applications across the sector.

ADNOC announced plans to deploy autonomous AI agents across its operations through EnergyAI, developed with Microsoft and UAE's G42. The system will analyze seismic data and model underground carbon storage potential. The state oil giant committed $23 billion to low-carbon technology development using AI. Tech companies have pledged to power their AI data centers with renewable energy to meet climate targets. "We need a model that integrates all forms of energy," said al-Jaber, citing needs for renewable power, battery storage, natural gas, and nuclear energy in some locations.

Science

Invisible, Super Stretchy Nanofibers Discovered In Natural Spider Silk (phys.org) 8

Long-time Slashdot reader yet-another-lobbyist writes: Phys.org has an article on the recent discovery of super stretchy nanofibers in natural spider silk! The thinnest natural spider silk nanofibrils ever seen are only a few molecular layers thick, about 5 nm. That is too thin to be seen even with a very powerful optical microscope. Researchers used atomic force microscopy (AFM) not only to visualize them, but also to probe their stretchiness and strength.

Even the original article is available without a paywall. Mechanical tests of molecularly thin materials — pretty cool!

The doctoral candidate's advisor thought it would be impossible to perform the measurements, according to the article, which quotes him as saying "It's actually kind of crazy to think that it's even possible.... We humans think we're so great and we can invent things, but if you just take a step outside, you find so many things that are more exciting."

That advisor, long-time spider-silk researcher Hannes Schniepp (also a co-author on the paper), adds that the tip of the microscope's needle was so sharp that its end was only a few atoms thick. "You would not see the end of it in the best optical microscope. It will just disappear because it's so small that you can't even see it. It's probably one of the highest developed technologies on the planet." If humans find a way to replicate the structure of spider silk, it could be manufactured for use in practical applications. "You could make a super bungee cord from it," said Schniepp. "Or a shield around a structure where you have something incoming at high velocity and you need to absorb a lot of energy. Things like that."

AI

AI Bug Bounty Program Finds 34 Flaws in Open-Source Tools (scworld.com) 23

Slashdot reader spatwei shared this report from SC World: Nearly three dozen flaws in open-source AI and machine learning (ML) tools were disclosed Tuesday as part of [AI-security platform] Protect AI's huntr bug bounty program.

The discoveries include three critical vulnerabilities: two in the Lunary AI developer toolkit [both with a CVSS score of 9.1] and one in a graphical user interface for ChatGPT called Chuanhu Chat. The October vulnerability report also includes 18 high-severity flaws ranging from denial-of-service to remote code execution... Protect AI's report also highlights vulnerabilities in LocalAI, a platform for running AI models locally on consumer-grade hardware, LoLLMs, a web UI for various AI systems, LangChain.js, a framework for developing language model applications, and more.

In the article, Protect AI's security researchers point out that these open-source tools are "downloaded thousands of times a month to build enterprise AI Systems."

The three critical vulnerabilities have already been addressed by their respective companies, according to the article.
