Books

697-Page Book Publishes a Poet's 2,000 Amazon Reviews Posthumously (clereviewofbooks.com) 16

The Cleveland Review of Books ponders a new 697-page hardcover collection of American poet/author Kevin Killian's.... reviews from Amazon. (Over 2,000 of 'em — written over the course of 16 years.)
In 2012, he wrote three substantial paragraphs about the culinary perfection that can be found in a German Potato Salad Can (15 oz., Pack of 12). Often, he'd open with something like "as an American boy growing up in rural France." Killian grew up on Long Island, New York. He didn't take himself (or much else) too seriously....

[Killian] was also a member of the New Narrative Movement... Writers acknowledge the subjectivity of, and the author's active presence in, the text... Amazon reviews are a near-perfect vehicle for New Narrative's tenets... Killian camouflaged his reviews in the cadence of the Amazon everyman. He embraced all the stylistic quirks, choppy sentence fragments and run-ons, either darting from point to point like a distracted squirrel or leaning heavily into declarative statements.... About the biographer of Elia Kazan, he tells us, "Schickel is in love with the sound of his voice, and somewhere in the shredded coleslaw of his prose, a decent book lies unavailable to us, about the real Elia Kazan...."

[T]he writing can move from very funny to strangely poignant. One of my favorites, his review of MacKenzie Smelling Salts, begins with a tragically tongue-in-cheek anecdote about his Irish grandfather:

"My Irish grandfather used to keep a bottle of MacKenzie's smelling salts next to his desk. He was the principal at Bushwick High School (in Brooklyn, NY) in the 1930s and 1940s, before it became a dangerous place to live in, and way before Bushwick regained its current state of desirable area for new gentrification. And he kept one at home as well, in case of a sudden shock. At school, he would press the saturated cotton under the nostrils of poor girls who realized they were pregnant in health class, before he expelled them."

He ends with his own reasons for using smelling salts, citing wildly diverging examples: his grief upon learning of the death of Paul Walker from the Fast & Furious film franchise abuts Killian's disappointment at not being selected for the 2014 Whitney Biennial. Apparently, both were deeply traumatic experiences for Kevin... ["it took my wife a minute or two to locate the MacKenzie's, but passing it under my nose, as though she were my grandfather ministering to the pregnant girls of yore..."]

No one wants to be forgotten. I do not think it's a coincidence Killian started writing the reviews after his heart attack. Why did he keep going? Most likely, it was because he enjoyed the writing and got something out of it — pleasure, practice, and a bit of notoriety. But mainly, I think the project grew out of habit and compulsion. In a similar way, the graffiti art of Keith Haring, Jean-Michel Basquiat, and Banksy began in subway tunnels, one tag and mural at a time, until it grew into bodies of work collected and coveted by museums worldwide. In Killian's case, the global commerce platform was his ugly brick wall, his subway platform, and his train car. Coming away, I like to imagine him gleefully typing, manipulating the Amazon review forums into something that had little to do with the consumerism they had been created to support: Killian tagging a digital wall to remind everyone KEVIN WAS HERE.

The book reviewer points out that the collection's final review, for the memoir Never Mind the Moon: My Time at the Royal Opera House, is dated a month before Killian died.

"Unfortunately, the editors of this volume did not preserve the Helpful/Not Helpful ratings, only the stars."

Putting it all in perspective, the book critic notes that "In 2023, Amazon reported that one hundred million customers submitted one or more product reviews to the site. The content of most is dross, median." Though the critic then also acknowledges that "I haven't read any of Killian's other work."
KDE

KDE Plasma 6.3 Released 33

Today, the KDE Project announced the release of KDE Plasma 6.3, featuring improved fractional scaling, enhanced Night Light color accuracy, better CPU usage monitoring, and various UI and security refinements.

Some of the key features of Plasma 6.3 include:
- Improved fractional scaling in KWin, delivering an all-around better desktop experience at fractional scale factors and when using KWin's zoom effect.
- Screen colors are more accurate with the KDE Night Light feature.
- CPU usage monitoring within the KDE System Monitor is now more accurate and consumes fewer CPU resources.
- KDE will now present a notification when the kernel terminates an app because the system ran out of memory.
- Various improvements to the Discover app, including a security enhancement around sandboxed apps.
- The drawing tablet area of KDE System Settings has been overhauled with new features and refinements.
- Many other enhancements and fixes throughout KDE Plasma 6.3.

You can read the announcement here.
Security

Ransomware Payments Dropped 35% In 2024 (therecord.media) 44

An anonymous reader quotes a report from CyberScoop: Ransomware payments saw a dramatic 35% drop last year compared to 2023, even as the overall frequency of ransomware attacks increased, according to a new report released by blockchain analysis firm Chainalysis. The considerable decline in extortion payments is somewhat surprising, given that other cybersecurity firms have claimed that 2024 saw the most ransomware activity to date. Chainalysis itself warned in its mid-year report that 2024's activity was on pace to reach new heights, but attacks in the second half of the year tailed off. The total amount in payments that Chainalysis tracked in 2024 was $812.55 million, down from 2023's mark of $1.25 billion.
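The headline figure is consistent with the dollar amounts quoted above; a quick check in Python (figures as quoted from the report):

```python
# Chainalysis figures as quoted: total tracked ransomware payments (USD)
payments_2023 = 1_250_000_000   # $1.25 billion
payments_2024 = 812_550_000     # $812.55 million

drop = (payments_2023 - payments_2024) / payments_2023
print(f"Year-over-year decline: {drop:.1%}")  # → 35.0%
```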

The disruption of major ransomware groups, such as LockBit and ALPHV/BlackCat, was key to the reduction in ransomware payments. Operations spearheaded by agencies like the United Kingdom's National Crime Agency (NCA) and the Federal Bureau of Investigation (FBI) caused significant declines in LockBit activity, while ALPHV/BlackCat essentially rug-pulled its affiliates and disappeared after its attack on Change Healthcare. [...] Additionally, [Chainalysis] says more organizations have become stronger against attacks, with many choosing not to pay a ransom and instead using better cybersecurity practices and backups to recover from these incidents. [...]
Chainalysis also says ransomware operators are letting funds sit in wallets, refraining from moving any money out of fear they are being watched by law enforcement.

You can read the full report here.
Books

Cory Doctorow's Prescient Novella About Health Insurance and Murder (theguardian.com) 175

Five years ago, journalist and sci-fi author Cory Doctorow published a short story that explored the radicalization of individuals denied healthcare coverage. As The Guardian notes in a recent article, the story "might seem eerily similar" to the recent shooting of UnitedHealthcare's CEO. While it appears that the alleged shooter never read the story, Doctorow said: "I feel like the most important thing about that is that it tells you that this is not a unique insight." Doctorow continued: "that the question that I had is a question other people have had." An activist in favor of liberalizing copyright laws and a proponent of the Creative Commons organization, Doctorow advocates for systemic reform through collective action rather than violence. Here's an excerpt from The Guardian's article: In Radicalized, one of four novellas comprising a science fiction novel of the same name, Doctorow charts the journey of a man who joins an online forum for fathers whose partners or children have been denied healthcare coverage by their insurers after his wife is diagnosed with breast cancer and denied coverage for an experimental treatment. Slowly, over the course of the story, the men of the forum become radicalized by their grief and begin plotting -- and executing -- murders of health insurance executives and politicians who vote against universal healthcare.

In the wake of the December 4 shooting of UnitedHealthcare CEO Brian Thompson, which unleashed a wave of outrage at the U.S. health system, Doctorow's novella has been called prescient. When the American Prospect magazine republished the story last week, it wrote: "It is being republished with permission for reasons that will become clear if you read it." But Doctorow doesn't think he was on to something that no one else in the U.S. understood. [...]

In one part of the story, a man whose young daughter died after an insurance company refused to pay for brain surgery bombs the insurer's headquarters. "It's not vengeance. I don't have a vengeful bone in my body. Nothing I do will bring Lisa back, so why would I want revenge? This is a public service. There's another dad just like me," he shares in a video message on the forum. "And right now, that dad is talking to someone at Cigna, or Humana, or BlueCross BlueShield, and the person on the phone is telling that dad that his little girl has. To. Die. Someone in that building made the decision to kill my little girl, and everyone else in that building went along with it. Not one of them is innocent, and not one of them is afraid. They're going to be afraid, after this."

"Because they must know in their hearts," he goes on. "Them, their lobbyists, the men in Congress who enabled them. They're parents. They know. Anyone who hurt their precious children, they'd hunt that person down like a dog. The only amazing thing about any of this is that no one has done it yet. I'm going to make a prediction right now, that even though I'm the first, I sure as hell will not be the last. There's more to come."

The Internet

Cloudflare 2024: Global Traffic Up, Google Still King, US Churning Out Bots (theregister.com) 11

Cloudflare's 2024 internet traffic report highlights a 17.2% global increase in traffic, with Google maintaining its position as the most visited service and the U.S. responsible for 34.6% of bot traffic. The Register reports: One surprise (or perhaps not) is that IPv6 traffic is actually down as a percentage of the packets that passed through Cloudflare's network. It says that 28.5 percent of global traffic was IPv6 during 2024, whereas last year's report put this figure at 33.75 percent. The company also reveals that a fifth of all TCP connections (20.7 percent) are unexpectedly terminated before any useful data can be exchanged. Causes range from DoS attacks and quirky client behavior to a network interrupting a connection to filter content.

Cloudflare says about half of these incidents were connections closed "Post SYN" -- after its server has received a client's SYN packet, but before a subsequent acknowledgement (ACK) or any useful data. These can be attributed to DoS attacks or internet scanning, while Post-ACK or Post-PSH anomalies are more often associated with connection tampering activity such as filtering, especially if they occur at high rates in specific networks. Mobile device traffic accounted for about 41.3 percent of the total, which is roughly the same as last year. This is largely split between the Apple and Android ecosystems, with iOS on almost a third and Android accounting for two-thirds. [...]
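As a rough illustration of those termination stages, a closed connection can be bucketed by the last handshake step the server observed before the close. The function below is a sketch: the bucket names follow the article, but the classification logic is illustrative and not Cloudflare's actual methodology.

```python
# Illustrative classifier for where a TCP connection died, based on which
# handshake/data packets the server saw before the close. Bucket names
# follow The Register's description; the logic is a sketch, not Cloudflare's.
def termination_stage(saw_syn, saw_ack, saw_payload):
    if saw_payload:
        return "post-PSH"      # data started flowing (possible tampering/filtering)
    if saw_ack:
        return "post-ACK"      # handshake completed, no useful data exchanged
    if saw_syn:
        return "post-SYN"      # SYN received, handshake never completed (DoS/scanning)
    return "no-connection"

print(termination_stage(saw_syn=True, saw_ack=False, saw_payload=False))  # → post-SYN
```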

Google's Chrome appears to be the most popular browser by far, accounting for 65.8 percent of all requests during 2024. Just 15.5 percent came from Apple's Safari browser, which leads the way on iOS devices, naturally. Microsoft's Edge accounted for 6.9 percent of browsing, while Mozilla Firefox stood at 4 percent. For search engines, Google also claimed the top spot, with a greater than 88 percent share of all search traffic that passed through Cloudflare. Yandex and Baidu were next with 3.1 percent and 2.7 percent, respectively, while Bing trailed with 2.6 percent. DuckDuckGo accounted for 0.9 percent of searches.
You can read Cloudflare's full Year in Review here.
Slashdot.org

25 Years Ago Today: Slashdot Parodied by Suck.com (archive.org) 22

25 years ago today, the late, great Suck.com played a prank on Slashdot. Their daily column of pop-culture criticism was replaced by... Suckdot, a parody site satirically filled with Slashdot-style headlines like "Linux Possibly Defamed Somewhere." RabidZelot was one of a bunch to report: "In Richmond, California, this afternoon, this dude said something bad about Linux at the Hilltop Mall near the fountains right after the first showing of Phantom Menace let out. He was last seen heading towards Sears and has a 'Where Do You Want to Go Today?' T-shirt and brown hair. Let us know when you spot him."

( Read More... | 0 of 72873 comments)

There are more Slashdot-style news blurbs like "Red Hat Reports Income". (In which Red Hat founder Bob Young finds a quarter on the way to the conference room, and adds it to the company's balance sheet...) Its list of user-submitted "Ask Suckdot" questions includes geek-mocking topics like "Is Overclocking Worth That Burning Smell?" and "HOW DO I TURN OFF SHIFT_LOCK?" And somewhere there's even a parody of Jon Katz (an early contributor to Slashdot's content) — though clicking "Read More" on the essay leads to a surprising message from the parodist admitting defeat. "Slashdot has roughly 60 million links on its front page. I'm simply not going to waste any more of my life making fun of each and every one of them. Half the time you can't tell the real Slashdot from the parody anyway."

Suck.com was a fixture in the early days of the web, launched in 1995 (and pre-dating the launch of Slashdot by two years). It normally published link-heavy commentary every weekday for nearly six years. Contributing writer Greg Knauss was apparently behind much of the Suckdot parody — even taking a jab at Slashdot's early online podcast, "Geeks in Space" (1999-2001). [Suckdot informs its readers in 1999 that "The latest installment of Geeks Jabbering at a Mic is up..."] Other Suckdot headlines?
  • Minneapolis-St. Paul Star-Tribune Uses Words "Red" and "Hat" in Article
  • BSD Repeatedly Ignored
  • DVD Encryption Cracked: Godzilla for Everybody!
  • Linus Ascends Bodily Into Heaven
  • iMac: Ha Ha Ha Ha Wimp

There were no hard feelings. Seven months later Slashdot was even linking to Greg Knauss's Suck.com essay proclaiming that "Mozilla is dead, or might as well be..."

So whatever happened to Suck.com? Though it stopped publishing in 2001, an outpouring of nostalgia in 2005 apparently prompted its owners at Lycos.com to continue hosting its content through 2018. (This unofficial history notes that one fan scrambling to archive the site was Aaron Swartz.) Though it's not clear what happened next, here in 2024 its original domain is now up for sale — at an asking price of $1 million.

But all of Suck.com's original content is still available online — including its Suckdot parody — at archive.org. Which, mercifully, is still here a full 28 years after launching in 1996...


Open Source

Slashdot's Interview with Bruce Perens: How He Hopes to Help 'Post Open' Developers Get Paid (slashdot.org) 61

Bruce Perens, original co-founder of the Open Source Initiative, has responded to questions from Slashdot readers about a new alternative he's developing that he hopes will help "Post Open" developers get paid.

But first, "One of the things that's clear from the Slashdot patter is that people are not aware of what I've been doing, in general," Perens says. "So, let's start by filling that in..."

Read on for the rest of his wide-ranging answers....
Red Hat Software

Red Hat is Becoming an Official Microsoft 'Windows Subsystem for Linux' Distro (microsoft.com) 48

"You can use any Linux distribution inside of the Windows Subsystem for Linux" Microsoft recently reminded Windows users, "even if it is not available in the Microsoft Store, by importing it with a tar file."

But being an official distro "makes it easier for Windows Subsystem for Linux users to install and discover it with actions like wsl --list --online and wsl --install," Microsoft pointed out this week. And "We're excited to announce that Red Hat will soon be delivering a Red Hat Enterprise Linux WSL distro image in the coming months..."

Thank you to the Red Hat team as their feedback has been invaluable as we built out this new architecture, and we're looking forward to the release...! Ron Pacheco, senior director of the Red Hat Enterprise Linux Ecosystem at Red Hat, says:

"Developers have their preferred platforms for developing applications for multiple operating systems, and WSL is an important platform for many of them. Red Hat is committed to driving greater choice and flexibility for developers, which is why we're working closely with the Microsoft team to bring Red Hat Enterprise Linux, the largest commercially available open source Linux distribution, to all WSL users."

Read Pacheco's own blog post here.

Microsoft is also releasing "a new way to make WSL distros," they announced this week, "with a new architecture that backs how WSL distros are packaged and installed." Up until now, you could make a WSL distro by either creating an appx package and distributing it via the Microsoft Store, or by importing a .tar file with wsl -import. We wanted to improve this by making it possible to create a WSL distro without needing to write Windows code, and for users to more easily install their distros from a file or network share which is common in enterprise scenarios... With the tar based architecture, you can start with the same .tar file (which can be an exported Linux container!) and just edit it to add details to make it a WSL distro... These options will describe key distro attributes, like the name of the distro, its icon in Windows, and its out of box experience (OOBE) which is what happens when you run WSL for the first time. You'll notice that the oobe_command option points to a file which is a Linux executable, meaning you can set up your full experience just in Linux if you wish.
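Based on Microsoft's description, the distro metadata in the new architecture lives in a plain config file inside the tar image itself. A hypothetical sketch of such a file follows; the path, section, and key names here are illustrative guesses drawn from the blog's wording (e.g. its "oobe_command" option), not a verified spec.

```ini
# /etc/wsl-distribution.conf -- illustrative sketch of the tar-based distro
# metadata Microsoft describes; exact file name and keys may differ.
[oobe]
# First-run experience: points to a Linux executable, per the blog's
# "oobe_command" description
command = /etc/oobe.sh
defaultUid = 1000

[shortcut]
# Icon shown for the distro in Windows
icon = /usr/lib/wsl/mydistro.ico
```

With the metadata in the tar, installing could stay entirely on the documented `wsl --import <Distro> <InstallLocation> <FileName>` path, with no appx packaging or Windows code involved.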
Desktops (Apple)

ChatGPT For macOS Now Works With Third-Party Apps, Including Apple's Xcode 6

An update to OpenAI's ChatGPT app for macOS adds integration with third-party apps, including developer tools such as VS Code, Terminal, iTerm2 and Apple's Xcode. 9to5Mac reports: In a demo seen by 9to5Mac, ChatGPT was able to understand code from an Xcode project and then provide code suggestions without the user having to manually copy and paste content into the ChatGPT app. It can even read content from more than one app at the same time, which is very useful for working with developer tools. According to OpenAI, the idea is to expand integration to more apps in the future. For now, integration with third-party apps is coming exclusively to the Mac version of ChatGPT, but there's another catch. The feature requires a paid ChatGPT subscription, at least for now.

ChatGPT Plus and Team subscribers will receive access to integration with third-party apps on macOS starting today, while access for Enterprise and Education users will be rolled out "in the next few weeks." OpenAI told 9to5Mac that it wants to make the feature available to everyone in the future, although there's no estimate of when this will happen. For privacy reasons, users can control at any time when and which apps ChatGPT can read.
The app can be downloaded here.
The Gimp

GIMP 3.0 Enters RC Testing After 20 Years (tomshardware.com) 55

GIMP 3.0, the long-awaited upgrade to the popular open-source image editor, has entered the release candidate phase, signaling that a stable version may be available by the end of this year or early 2025. Tom's Hardware reports: So, what has changed with the debut of GIMP 3? The new interface is still quite recognizable to classic GIMP users but has been considerably smoothed out and is far more scalable to high-resolution displays than it used to be. Several familiar icons have been carefully converted to SVGs, or Scalable Vector Graphics, enabling supremely high-quality, scalable assets.

While PNGs, or Portable Network Graphics, are also known to be high-quality thanks to their lossless compression, as fixed-resolution raster images they are still suboptimal compared to SVGs when SVGs are applicable. The work of converting GIMP's tool icons to SVG is still in progress per the original blog post, but it's good that developer Denis Rangelov has already started on the work.

Many aspects of the GIMP 3.0 update are almost wholly on the backend for ensuring project and plugin compatibility with past projects made with previous versions of GIMP. To summarize: a public GIMP API is being stabilized to make it easier to port GIMP 2.10-based plugins and scripts to GIMP 3.0. Several bugs related to color accuracy have been fixed to improve color management while still maintaining compatibility with past GIMP projects.
You can read the GIMP team's blog post here.
Open Source

New 'Open Source AI Definition' Criticized for Not Opening Training Data (slashdot.org) 38

Long-time Slashdot reader samj — also a long-time Debian developer — tells us there's some opposition to the newly-released Open Source AI definition. He calls it a "fork" that undermines the original Open Source definition (which was originally derived from Debian's Free Software Guidelines, written primarily by Bruce Perens), and points us to a new domain with a petition declaring that instead Open Source shall be defined "solely by the Open Source Definition version 1.9. Any amendments or new definitions shall only be recognized with clear community consensus via an open and transparent process."

This move follows some discussion on the Debian mailing list: Allowing "Open Source AI" to hide their training data is nothing but setting up a "data barrier" protecting the monopoly, disabling anybody other than the first party to reproduce or replicate an AI. Once passed, OSI is making a historical mistake towards the FOSS ecosystem.
They're not the only ones worried about data. This week TechCrunch noted an August study which "found that many 'open source' models are basically open source in name only. The data required to train the models is kept secret, the compute power needed to run them is beyond the reach of many developers, and the techniques to fine-tune them are intimidatingly complex. Instead of democratizing AI, these 'open source' projects tend to entrench and expand centralized power, the study's authors concluded."

samj shares the concern about training data, arguing that training data is the source code and that this new definition has real-world consequences. (On a personal note, he says it "poses an existential threat to our pAI-OS project at the non-profit Kwaai Open Source Lab I volunteer at, so we've been very active in pushing back past few weeks.")

And he also came up with a detailed response by asking ChatGPT. What would be the implications of a Debian disavowing the OSI's Open Source AI definition? ChatGPT composed a 7-point, 14-paragraph response, concluding that this level of opposition would "create challenges for AI developers regarding licensing. It might also lead to a fragmentation of the open-source community into factions with differing views on how AI should be governed under open-source rules." But "Ultimately, it could spur the creation of alternative definitions or movements aimed at maintaining stricter adherence to the traditional tenets of software freedom in the AI age."

However the official FAQ for the new Open Source AI definition argues that training data "does not equate to a software source code." Training data is important to study modern machine learning systems. But it is not what AI researchers and practitioners necessarily use as part of the preferred form for making modifications to a trained model.... [F]orks could include removing non-public or non-open data from the training dataset, in order to train a new Open Source AI system on fully public or open data...

[W]e want Open Source AI to exist also in fields where data cannot be legally shared, for example medical AI. Laws that permit training on data often limit the resharing of that same data to protect copyright or other interests. Privacy rules also give a person the rightful ability to control their most sensitive information — like decisions about their health. Similarly, much of the world's Indigenous knowledge is protected through mechanisms that are not compatible with later-developed frameworks for rights exclusivity and sharing.

Read on for the rest of their response...
Wireless Networking

West Virginia Town of Green Bank Has Become a Refuge For Electrosensitive People (washingtonpost.com) 183

An anonymous reader quotes a report from the Washington Post: Brandon Barrett arrived here two weeks ago, sick but hopeful, like dozens before him. Just a few years back, he could dead lift 660 pounds. After an injury while training to be a professional dirt-bike rider, he opened a motorcycle shop just north of Buffalo. When he wasn't working, he would cleanse his mind through rigorous meditation. In 2019, he began getting sick. And then sicker. Brain fog. Memory issues. Difficulty focusing. Depression. Anxiety. Fatigue. Brandon was pretty sure he knew why: the cell tower a quarter-mile behind his shop and all the electromagnetic radiation it produces, that cellphones produce, that WiFi routers produce, that Bluetooth produces, that the whole damn world produces. He thought about the invisible waves that zip through our airspace -- maybe they pollute our bodies, somehow? [...]

Then Brandon read about Green Bank, an unincorporated speck on the West Virginia map, hidden in the Allegheny Mountains, about a four-hour drive southwest of D.C. There are no cell towers there, by design. He read that other sick people had moved here and gotten better, that the area's electromagnetic quietude is protected by the federal government. Perhaps it could protect Brandon. It's quiet here so that scientists can listen to corners of the universe, billions of light-years away. In the 1950s, the federal government snatched up farmland to build the Green Bank Observatory. It's now home to the Robert C. Byrd Green Bank Radio Telescope, the largest steerable telescope in the world at 7,600 metric tons and a height of 485 feet. Its 2.3-acre dish can study quasars and pulsars, map asteroids and planets, and search for evidence of extraterrestrial life.

The observatory's machines are so sensitive that terrestrial radio waves would interfere with their astronomical exploration, like a shout (a bunch of WiFi signals) drowning out a whisper (signals from the clouds of hydrogen hanging out between galaxies). So in 1958, the Federal Communications Commission created the National Radio Quiet Zone, a 13,000-square-mile area encompassing wedges of both Virginia and West Virginia, where radio transmissions are restricted to varying degrees. At its center is a 10-mile zone around the observatory where WiFi, cellphones and cordless phones -- among many other types of wave-emitting equipment -- are outlawed. Wired internet is okay, as are televisions -- though you must have a cable or satellite provider. It's not a place out of 100 years ago. More like 30. If you want to make plans to meet someone, you make them in person. Some people move here to work at the observatory. Others come because they feel like they have to. These are the 'electrosensitives,' as they often refer to themselves. They are ill, and Green Bank is their Lourdes. The electrosensitives guess that they number at least 75 in Pocahontas County, which has a population of roughly 7,500.
Literary Hub, the BBC, Slate, and the Washingtonian have non-paywalled articles about Green Bank and the "wi-fi refugees" that shelter there.
Earth

US Scientists Identify Cause of Massive Crab Die-Off (cnn.com) 85

A long-time Slashdot reader writes: Recent reports have indicated a near-complete collapse in the population of Snow Crabs in the Bering Sea. Scientists with the US Government's National Oceanographic and Atmospheric Administration have concluded that warming in the environment has led to vast numbers of snow crabs starving to death.

There has been a lot of back-and-forth, a lot of argument on whether or how much humanity has had an effect on the fundamental ecology of our planet... Here is a fine example of anthropogenic change to the planet's weather, ecosystems and even the planet's very ability to feed us.

From the government's findings on the NOAA web site: "What is particularly noteworthy is these boreal conditions associated with the snow crab collapse are more than 200 times more likely to occur in the present climate (1.0 to 1.5 degrees of warming) than in the preindustrial era," said Mike Litzow, lead author and director of the Alaska Fisheries Science Center's Kodiak Lab. "Even more concerning is that Arctic conditions conducive for snow crabs to retain their dominant role in the southeastern Bering Sea are expected to continue to decline in the future." [...] Litzow and his team expect to see Arctic conditions in only 8 percent of future years in the southeastern Bering Sea.
The warmer temperatures brought existential threats including a fatal disease and more crab-eating predators, their study found. CNN reports that the crabs' "horrific demise appears to be just one impact of the massive transition unfolding in the region, scientists reported... Parts of the Bering Sea are literally becoming less Arctic." Billions of crabs ultimately starved to death, devastating Alaska's fishing industry in the years that followed... The decline of the Alaskan snow crab signals a wider ecosystem change in the Arctic, as oceans warm and sea ice disappears. The ocean around Alaska is now becoming inhospitable for several marine species, including red king crab and sea lions, experts say...

The Arctic region has warmed four times faster than the rest of the planet, scientists have reported. Litzow called what's happening in the Bering Sea a "bellwether" of what's to come. "All of us need to recognize the impacts of climate change," he said.

Programming

'GitHub Actions' Artifacts Leak Tokens, Expose Cloud Services and Repositories (securityweek.com) 19

Security Week brings news about CI/CD workflows using GitHub Actions in build processes. Some workflows can generate artifacts that "may inadvertently leak tokens for third party cloud services and GitHub, exposing repositories and services to compromise, Palo Alto Networks warns." [The artifacts] function as a mechanism for persisting and sharing data across jobs within the workflow and ensure that data is available even after the workflow finishes. [The artifacts] are stored for up to 90 days and, in open source projects, are publicly available... The identified issue, a combination of misconfigurations and security defects, allows anyone with read access to a repository to consume the leaked tokens, and threat actors could exploit it to push malicious code or steal secrets from the repository. "It's important to note that these tokens weren't part of the repository code but were only found in repository-produced artifacts," Palo Alto Networks' Yaron Avital explains...

"The Super-Linter log file is often uploaded as a build artifact for reasons like debuggability and maintenance. But this practice exposed sensitive tokens of the repository." Super-Linter has been updated and no longer prints environment variables to log files.

Avital was able to identify a leaked token that, unlike the GitHub token, would not expire as soon as the workflow job ends, and automated the process that downloads an artifact, extracts the token, and uses it to replace the artifact with a malicious one. Because subsequent workflow jobs would often use previously uploaded artifacts, an attacker could use this process to achieve remote code execution (RCE) on the job runner that uses the malicious artifact, potentially compromising workstations, Avital notes.

Avital's blog post notes other variations on the attack — and "The research laid out here allowed me to compromise dozens of projects maintained by well-known organizations, including firebase-js-sdk by Google, a JavaScript package directly referenced by 1.6 million public projects, according to GitHub. Another high-profile project involved adsys, a tool included in the Ubuntu distribution used by corporations for integration with Active Directory." (Avital says the issue even impacted projects from Microsoft, Red Hat, and AWS.) "All open-source projects I approached with this issue cooperated swiftly and patched their code. Some offered bounties and cool swag."

"This research was reported to GitHub's bug bounty program. They categorized the issue as informational, placing the onus on users to secure their uploaded artifacts." My aim in this article is to highlight the potential for unintentionally exposing sensitive information through artifacts in GitHub Actions workflows. To address the concern, I developed a proof of concept (PoC) custom action that safeguards against such leaks. The action uses the @actions/artifact package, which is also used by the upload-artifact GitHub action, adding a crucial security layer by using an open-source scanner to audit the source directory for secrets and blocking the artifact upload when risk of accidental secret exposure exists. This approach promotes a more secure workflow environment...

As this research shows, we have a gap in the current security conversation regarding artifact scanning. GitHub's deprecation of Artifacts V3 should prompt organizations using the artifacts mechanism to reevaluate the way they use it. Security defenders must adopt a holistic approach, meticulously scrutinizing every stage — from code to production — for potential vulnerabilities. Overlooked elements like build artifacts often become prime targets for attackers. Reduce workflow permissions of runner tokens according to least privilege and review artifact creation in your CI/CD pipelines. By implementing a proactive and vigilant approach to security, defenders can significantly strengthen their project's security posture.

The blog post also notes protection and mitigation features from Palo Alto Networks....
Music

Suno & Udio To RIAA: Your Music Is Copyrighted, You Can't Copyright Styles (torrentfreak.com) 85

AI music generators Suno and Udio responded to the lawsuits filed by the major recording labels, arguing that their platforms are tools for making new, original music that "didn't and often couldn't previously exist."

"Those genres and styles -- the recognizable sounds of opera, or jazz, or rap music -- are not something that anyone owns," the companies said. "Our intellectual property laws have always been carefully calibrated to avoid allowing anyone to monopolize a form of artistic expression, whether a sonnet or a pop song. IP rights can attach to a particular recorded rendition of a song in one of those genres or styles. But not to the genre or style itself." TorrentFreak reports: "[The labels] frame their concern as one about 'copies' of their recordings made in the process of developing the technology -- that is, copies never heard or seen by anyone, made solely to analyze the sonic and stylistic patterns of the universe of pre-existing musical expression. But what the major record labels really don't want is competition." The labels' position is that any competition must be legal, and the AI companies state quite clearly that the law permits the use of copyrighted works in these circumstances. Suno and Udio also make it clear that snippets of copyrighted music aren't stored as a library of pre-existing content in the neural networks of their AI models, "outputting a collage of 'samples' stitched together from existing recordings" when prompted by users.

"[The neural networks were] constructed by showing the program tens of millions of instances of different kinds of recordings," Suno explains. "From analyzing their constitutive elements, the model derived a staggeringly complex collection of statistical insights about the auditory characteristics of those recordings -- what types of sounds tend to appear in which kinds of music; what the shape of a pop song tends to look like; how the drum beat typically varies from country to rock to hip-hop; what the guitar tone tends to sound like in those different genres; and so on." These models, the defendants say, are vast stores not of copyrighted music but of information about what musical styles consist of; it is from that information that new music is made.

Most copyright lawsuits in the music industry are about reproduction and public distribution of identified copyrighted works, but that's certainly not the case here. "The Complaint explicitly disavows any contention that any output ever generated by Udio has infringed their rights. While it includes a variety of examples of outputs that allegedly resemble certain pre-existing songs, the Complaint goes out of its way to say that it is not alleging that those outputs constitute actionable copyright infringement." With Udio declaring that, as a matter of law, "that key point makes all the difference," Suno's conclusion is served raw. "That concession will ultimately prove fatal to Plaintiffs' claims. It is fair use under copyright law to make a copy of a protected work as part of a back-end technological process, invisible to the public, in the service of creating an ultimately non-infringing new product." Noting that Congress enacted the first copyright law in 1791, Suno says that in the 233 years since, not a single case has ever reached a contrary conclusion.

In addition to addressing allegations unique to their individual cases, the AI companies accuse the labels of various types of anti-competitive behavior: imposing conditions that prevent streaming services from obtaining licensed music from smaller labels at lower rates, seeking to impose a "no AI" policy on licensees, and claims that they "may have responded to outreach from potential commercial counterparties by engaging in one or more concerted refusals to deal." The defendants say this type of behavior is fueled by the labels' dominant control of copyrighted works and, by extension, the overall market. Here, however, ownership of copyrighted music is trumped by the existence and knowledge of musical styles, over which nobody can claim ownership or seek control. "No one owns musical styles. Developing a tool to empower many more people to create music, by scrupulously analyzing what the building blocks of different styles consist of, is a quintessential fair use under longstanding and unbroken copyright doctrine. Plaintiffs' contrary vision is fundamentally inconsistent with the law and its underlying values."
You can read Suno and Udio's answers to the RIAA's lawsuits here (PDF) and here (PDF).
Networking

Is Modern Software Development Mostly 'Junky Overhead'? (tailscale.com) 117

Long-time Slashdot reader theodp says this "provocative" blog post by former Google engineer Avery Pennarun — now the CEO/founder of Tailscale — is "a call to take back the Internet from its centralized rent-collecting cloud computing gatekeepers."

Pennarun writes: I read a post recently where someone bragged about using Kubernetes to scale all the way up to 500,000 page views per month. But that's 0.2 requests per second. I could serve that from my phone, on battery power, and it would spend most of its time asleep. In modern computing, we tolerate long builds, and then Docker builds, and uploading to container stores, and multi-minute deploy times before the program runs, and even longer times before the log output gets uploaded to somewhere you can see it, all because we've been tricked into this idea that everything has to scale. People get excited about deploying to the latest upstart container hosting service because it only takes tens of seconds to roll out, instead of minutes. But on my slow computer in the 1990s, I could run a perl or python program that started in milliseconds and served way more than 0.2 requests per second, and printed logs to stderr right away so I could edit-run-debug over and over again, multiple times per minute.
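Pennarun's back-of-the-envelope figure checks out. Assuming a 30-day month and traffic spread evenly:

```python
# Sanity-check the arithmetic: 500,000 page views per month,
# spread evenly, is well under one request per second.
views_per_month = 500_000
seconds_per_month = 30 * 24 * 60 * 60   # 2,592,000 seconds

requests_per_second = views_per_month / seconds_per_month
print(f"{requests_per_second:.2f} requests/second")  # → 0.19 requests/second
```

Even allowing for a 10x traffic spike at peak hours, the load stays around 2 requests per second, which a single small process handles trivially.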

How did we get here?

We got here because sometimes, someone really does need to write a program that has to scale to thousands or millions of backends, so it needs all that stuff. And wishful thinking makes people imagine even the lowliest dashboard could be that popular one day. The truth is, most things don't scale, and never need to. We made Tailscale for those things, so you can spend your time scaling the things that really need it. The long tail of jobs that are 90% of what every developer spends their time on. Even developers at companies that make stuff that scales to billions of users, spend most of their time on stuff that doesn't, like dashboards and meme generators.

As an industry, we've spent all our time making the hard things possible, and none of our time making the easy things easy. Programmers are all stuck in the mud. Just listen to any professional developer, and ask what percentage of their time is spent actually solving the problem they set out to work on, and how much is spent on junky overhead.

Tailscale offers a "zero-config" mesh VPN — built on top of WireGuard — for a secure network that's software-defined (and infrastructure-agnostic). "The problem is developers keep scaling things they don't need to scale," Pennarun writes, "and their lives suck as a result...."

"The tech industry has evolved into an absolute mess..." Pennarun adds at one point. "Our tower of complexity is now so tall that we seriously consider slathering LLMs on top to write the incomprehensible code in the incomprehensible frameworks so we don't have to."

Their conclusion? "Modern software development is mostly junky overhead."
Privacy

Data From Deleted GitHub Repos May Not Actually Be Deleted, Researchers Claim (theregister.com) 23

Thomas Claburn reports via The Register: Researchers at Truffle Security have found, or arguably rediscovered, that data from deleted GitHub repositories (public or private) and from deleted copies (forks) of repositories isn't necessarily deleted. Joe Leon, a security researcher with the outfit, said in an advisory on Wednesday that being able to access deleted repo data -- such as API keys -- represents a security risk. And he proposed a new term to describe the alleged vulnerability: Cross Fork Object Reference (CFOR). "A CFOR vulnerability occurs when one repository fork can access sensitive data from another fork (including data from private and deleted forks)," Leon explained.

For example, the firm showed how one can fork a repository, commit data to it, delete the fork, and then access the supposedly deleted commit data via the original repository. The researchers also created a repo, forked it, and showed how data not synced with the fork continues to be accessible through the fork after the original repo is deleted. You can watch that particular demo [here].
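The underlying mechanism is that commits from every fork live in one shared object network, so a commit's SHA can remain reachable through the upstream repository after the fork is gone. A rough sketch of how one might probe for this, using GitHub's real "get a commit" REST endpoint (`GET /repos/{owner}/{repo}/commits/{sha}`); the function names are my own, and this is an illustration, not Truffle Security's tooling:

```python
import urllib.error
import urllib.request

API = "https://api.github.com"

def commit_url(owner: str, repo: str, sha: str) -> str:
    """Build the GitHub REST endpoint for a single commit in a repo."""
    return f"{API}/repos/{owner}/{repo}/commits/{sha}"

def commit_reachable(owner: str, repo: str, sha: str) -> bool:
    """True if `sha` is still served through this repository --
    including a commit originally pushed to a now-deleted fork."""
    req = urllib.request.Request(
        commit_url(owner, repo, sha),
        headers={"Accept": "application/vnd.github+json"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

If a secret was ever committed to any fork in the network, knowing (or brute-forcing a short prefix of) its SHA is enough to retrieve it through the surviving upstream repo.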

According to Leon, this scenario came up last week with the submission of a critical vulnerability report to a major technology company involving a private key for an employee GitHub account that had broad access across the organization. The key had been publicly committed to a GitHub repository. Upon learning of the blunder, the tech biz nuked the repo thinking that would take care of the leak. "They immediately deleted the repository, but since it had been forked, I could still access the commit containing the sensitive data via a fork, despite the fork never syncing with the original 'upstream' repository," Leon explained. Leon added that after reviewing three widely forked public repos from large AI companies, Truffle Security researchers found 40 valid API keys from deleted forks.
GitHub said it considers this situation a feature, not a bug: "GitHub is committed to investigating reported security issues. We are aware of this report and have validated that this is expected and documented behavior inherent to how fork networks work. You can read more about how deleting or changing visibility affects repository forks in our [documentation]."

Truffle Security argues that GitHub should reconsider its position "because the average user expects there to be a distinction between public and private repos in terms of data security, which isn't always true," reports The Register. "And there's also the expectation that the act of deletion should remove commit data, which again has been shown to not always be the case."
AI

Microsoft Unveils a Large Language Model That Excels At Encoding Spreadsheets 38

Microsoft has quietly announced the first details of its new "SpreadsheetLLM," claiming it has the "potential to transform spreadsheet data management and analysis, paving the way for more intelligent and efficient user interactions." You can read more details about the model in a pre-print paper available here. Jasper Hamill reports via The Stack: One of the problems with using LLMs in spreadsheets is that they get bogged down by too many tokens (basic units of information the model processes). To tackle this, Microsoft developed SheetCompressor, an "innovative encoding framework that compresses spreadsheets effectively for LLMs." "It significantly improves performance in spreadsheet table detection tasks, outperforming the vanilla approach by 25.6% in GPT4's in-context learning setting," Microsoft added. SheetCompressor is made up of three modules: structural-anchor-based compression, inverse index translation, and data-format-aware aggregation.

The first of these modules involves placing "structural anchors" throughout the spreadsheet to help the LLM understand what's going on better. It then removes "distant, homogeneous rows and columns" to produce a condensed "skeleton" version of the table. Index translation addresses the challenge caused by spreadsheets with numerous empty cells and repetitive values, which use up too many tokens. "To improve efficiency, we depart from traditional row-by-row and column-by-column serialization and employ a lossless inverted index translation in JSON format," Microsoft wrote. "This method creates a dictionary that indexes non-empty cell texts and merges addresses with identical text, optimizing token usage while preserving data integrity." [...]
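The inverse index translation described above is easy to picture with a toy example. The sketch below is a simplified illustration of the idea, not Microsoft's implementation: instead of serializing a sheet cell by cell (spending tokens on empty cells and repeated values), it maps each distinct non-empty value to the list of addresses holding it.

```python
# Toy "inverse index translation": drop empty cells and merge
# addresses that share identical text under a single dictionary key,
# so repeated values cost tokens only once.
def invert_sheet(cells: dict[str, str]) -> dict[str, list[str]]:
    """cells maps addresses like 'A1' to their text content."""
    index: dict[str, list[str]] = {}
    for address, value in cells.items():
        if value == "":
            continue  # empty cells contribute no tokens at all
        index.setdefault(value, []).append(address)
    return index

sheet = {"A1": "Total", "A2": "", "B1": "100", "B2": "100", "C1": ""}
print(invert_sheet(sheet))
# → {'Total': ['A1'], '100': ['B1', 'B2']}
```

On real spreadsheets, where empty cells and repeated values dominate, this kind of dictionary is dramatically smaller than a row-by-row serialization while remaining lossless.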

After conducting a "comprehensive evaluation of our method on a variety of LLMs" Microsoft found that SheetCompressor significantly reduces token usage for spreadsheet encoding by 96%. Moreover, SpreadsheetLLM shows "exceptional performance in spreadsheet table detection," which is the "foundational task of spreadsheet understanding." The new LLM builds on the Chain of Thought methodology to introduce a framework called "Chain of Spreadsheet" (CoS), which can "decompose" spreadsheet reasoning into a table detection-match-reasoning pipeline.
Books

Appeals Court Seems Lost On How Internet Archive Harms Publishers (arstechnica.com) 26

An anonymous reader quotes a report from Ars Technica: The Internet Archive (IA) went before a three-judge panel Friday to defend its open library's controlled digital lending (CDL) practices after book publishers last year won a lawsuit claiming that the archive's lending violated copyright law. In the weeks ahead of IA's efforts to appeal that ruling, IA was forced to remove 500,000 books from its collection, shocking users. In an open letter to publishers, more than 30,000 readers, researchers, and authors begged for access to the books to be restored in the open library, claiming the takedowns dealt "a serious blow to lower-income families, people with disabilities, rural communities, and LGBTQ+ people, among many others," who may not have access to a local library or feel "safe accessing the information they need in public."

During a press briefing following arguments in court Friday, IA founder Brewster Kahle said that "those voices weren't being heard." Judges appeared primarily focused on understanding how IA's digital lending potentially hurts publishers' profits in the ebook licensing market, rather than on how publishers' costly ebook licensing potentially harms readers. However, lawyers representing IA -- Joseph C. Gratz, from the law firm Morrison Foerster, and Corynne McSherry, from the nonprofit Electronic Frontier Foundation -- confirmed that judges were highly engaged by IA's defense. Arguments that were initially scheduled to last only 20 minutes stretched on instead for an hour and a half. Ultimately, judges decided not to rule from the bench, with a decision expected in the coming months or potentially next year. McSherry said the judges' engagement showed that the judges "get it" and won't make the decision without careful consideration of both sides.

"They understand this is an important decision," McSherry said. "They understand that there are real consequences here for real people. And they are taking their job very, very seriously. And I think that's the best that we can hope for, really." On the other side, the Association of American Publishers (AAP), the trade organization behind the lawsuit, provided little insight into how the day went. When reached for comment, AAP simply said, "We thought it was a strong day in court, and we look forward to the opinion." [...] "There is no deadline for them to make a decision," Gratz said, but it "probably won't happen until early fall" at the earliest. After that, whichever side loses will have an opportunity to appeal the case, which has already stretched on for four years, to the Supreme Court. Since neither side seems prepared to back down, the Supreme Court eventually weighing in seems inevitable.

Chrome

Chromebooks Will Get Gemini and New Google AI Features (wired.com) 9

Google is introducing the Gemini AI chatbot to Chromebook Plus models, enhancing features like text rewriting, image editing, and hands-free control. Here are a few of the top new features coming to ChromeOS, as summarized by Wired: The first notable feature is Help Me Write, which works in any text box. Select text in any text box and right-click -- you'll see a box next to the standard right-click context menu. You can ask Google's AI to rewrite the selected text, rephrase it in a specific way, or change the tone. I tried to use it on a few sentences in this story but did not like any of the suggestions it gave me, so your mileage may vary. Or maybe I'm a better writer than Google's AI. Who knows?

Google's bringing the same generative AI wallpaper system you'll find in Android to ChromeOS. You can access this feature in ChromeOS's wallpaper settings and generate images based on specific parameters. Weirdly, you can create these when you're in a video-calling app too. You'll see a menu option next to the system tray whenever the microphone and video camera are being accessed -- tap on it and click "Create with AI" and you can generate an image for your video call's background. I'm not sure why I'd want a background of a "surreal bicycle made of flowers in pink and purple," but there you go. AI!

Here's something a little more useful: Magic Editor in Google Photos. Yep, the same feature that debuted in Google's Pixel 8 smartphones is now available on Chromebook Plus laptops. In the Google Photos app, you can press Edit on a photo and you'll see the option for Magic Editor. (You'll need to download more editing tools to get started.) This feature lets you erase unwanted objects in your photos, move a subject to another area of the frame, and fill in the backgrounds of photos. I successfully erased a paint can in the background of a photo of my dog, and it worked pretty quickly.

Then there's Gemini. It's available as a stand-alone app, and you can ask it to do pretty much anything. Write a cover letter, break down complex topics, ask for travel tips for a specific country. Just, you know, double-check the results and make sure there aren't any hallucinations. If you want to tap into Google's Gemini Advanced model, the company says it is offering 12 months free for new Chromebook Plus owners through the end of the year, so you have some time to redeem that offer. This is technically an upgrade from Google One, and it nets you Gemini for Workspace, 2 terabytes of storage, and a few other perks.
New features coming to all Chromebooks include easy setup with Android phones via QR code for sharing Wi-Fi credentials, integration of Google Tasks into the system tray, a Game Dashboard for mapping controls and recording gameplay as GIFs, and a built-in screen recorder tool. Upcoming enhancements also include Hands-Free Control using face gestures, the Help Me Read feature with Gemini for summarizing websites and PDFs, and an Overview screen to manage open browser windows, tabs, and apps.

You can check if your Chromebook is compatible with the Chromebook Plus OS update here.
