Operating Systems

'Something Has Gone Seriously Wrong,' Dual-Boot Systems Warn After Microsoft Update (arstechnica.com) 144

Ars Technica's Dan Goodin writes: Last Tuesday, loads of Linux users -- many running packages released as recently as this year -- started reporting their devices were failing to boot. Instead, they received a cryptic error message that included the phrase: "Something has gone seriously wrong." The cause: an update Microsoft issued as part of its monthly patch release. It was intended to close a 2-year-old vulnerability in GRUB, an open source boot loader used to start up many Linux devices. The vulnerability, with a severity rating of 8.6 out of 10, made it possible for hackers to bypass Secure Boot, the industry standard for ensuring that devices running Windows or other operating systems don't load malicious firmware or software during the bootup process. CVE-2022-2601 was discovered in 2022, but for unclear reasons, Microsoft patched it only last Tuesday. [...]

With Microsoft maintaining radio silence, those affected by the glitch have been forced to find their own remedies. One option is to access their EFI panel and turn off Secure Boot. Depending on the security needs of the user, that option may not be acceptable. A better short-term option is to delete the SBAT (Secure Boot Advanced Targeting) policy Microsoft pushed out last Tuesday. This means users will still receive some of the benefits of Secure Boot even if they remain vulnerable to attacks that exploit CVE-2022-2601. The steps for this remedy are outlined here (thanks to manutheeng for the reference).

The Courts

Ticketmaster's Nontransferable 'SafeTix' Are Anticompetitive, DOJ Suit Claims (theverge.com) 43

The Department of Justice has amended its antitrust lawsuit against Ticketmaster and Live Nation, alleging that Ticketmaster's introduction of nontransferable tickets and the SafeTix system was primarily intended to stifle competition from rival platforms like StubHub and SeatGeek, rather than merely to reduce ticket fraud. "The complaint, which was amended on Monday after 10 states joined the DOJ's lawsuit, cites internal Ticketmaster documents obtained during the legal process," notes The Verge. From the report: In 2019, Ticketmaster rolled out SafeTix, which replaced static barcodes on electronic tickets with encrypted barcodes that refresh every 15 seconds. Ticketmaster marketed SafeTix as a way of reducing ticket fraud, but the complaint claims reducing competition was "a primary motivation" for the new ticketing system. [...] The amended complaint includes new information about Ticketmaster's dominance of the events market. One internal Live Nation document cited in the complaint notes that Ticketmaster is the primary ticketer for approximately 80 percent of arenas across the country that host NBA or NHL teams. As of 2022, Live Nation-promoted events accounted for 70 percent of all amphitheater shows across the country, according to internal Live Nation documents mentioned in the complaint.
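Rotating-barcode schemes of this kind are typically built on the same idea as time-based one-time passwords: the platform and the app share a per-ticket secret, and the displayed code is a keyed hash of the current time window, so a screenshot goes stale as soon as the window rolls over. A minimal sketch of the concept (the 15-second period mirrors SafeTix's refresh interval, but the secret, hash choice, and encoding here are illustrative assumptions, not Ticketmaster's actual design):

```python
import hashlib, hmac, struct, time

def rotating_code(secret, period=15, at=None):
    """Derive a short-lived code from a shared secret and the current
    time window -- the same construction as TOTP (RFC 6238), with a
    15-second step standing in for SafeTix's refresh interval."""
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    return digest[:10].hex()  # a real app would render this as a barcode payload

secret = b"per-ticket shared secret"      # hypothetical; the real scheme is not public
a = rotating_code(secret, at=1_000_005.0)
b = rotating_code(secret, at=1_000_019.0)  # same 15-second window
c = rotating_code(secret, at=1_000_020.0)  # next window
print(a == b, a == c)  # True False
```

Because a valid code can only be derived from the secret, only the issuing platform can validate tickets, which is also what makes them hard to transfer or resell through rival platforms.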

The DOJ alleges that because of Ticketmaster's conduct, consumers have "paid more and continue to pay more for fees relating to tickets to live events than they would have paid in a free and open competitive market." The exact amount of monetary harm is still unknown, the complaint claims, and will require discovery from Ticketmaster and Live Nation's books, as well as from their third-party competitors.

Programming

'GitHub Actions' Artifacts Leak Tokens, Expose Cloud Services and Repositories (securityweek.com) 19

SecurityWeek brings news about CI/CD workflows that use GitHub Actions in their build processes. Some workflows can generate artifacts that "may inadvertently leak tokens for third party cloud services and GitHub, exposing repositories and services to compromise," Palo Alto Networks warns. [The artifacts] function as a mechanism for persisting and sharing data across jobs within the workflow and ensure that data is available even after the workflow finishes. [The artifacts] are stored for up to 90 days and, in open source projects, are publicly available... The identified issue, a combination of misconfigurations and security defects, allows anyone with read access to a repository to consume the leaked tokens, and threat actors could exploit it to push malicious code or steal secrets from the repository. "It's important to note that these tokens weren't part of the repository code but were only found in repository-produced artifacts," Palo Alto Networks' Yaron Avital explains...

"The Super-Linter log file is often uploaded as a build artifact for reasons like debuggability and maintenance. But this practice exposed sensitive tokens of the repository." Super-Linter has been updated and no longer prints environment variables to log files.

Avital was able to identify a leaked token that, unlike the GitHub token, would not expire as soon as the workflow job ends, and automated the process that downloads an artifact, extracts the token, and uses it to replace the artifact with a malicious one. Because subsequent workflow jobs would often use previously uploaded artifacts, an attacker could use this process to achieve remote code execution (RCE) on the job runner that uses the malicious artifact, potentially compromising workstations, Avital notes.

Avital's blog post notes other variations on the attack — and "The research laid out here allowed me to compromise dozens of projects maintained by well-known organizations, including firebase-js-sdk by Google, a JavaScript package directly referenced by 1.6 million public projects, according to GitHub. Another high-profile project involved adsys, a tool included in the Ubuntu distribution used by corporations for integration with Active Directory." (Avital says the issue even impacted projects from Microsoft, Red Hat, and AWS.) "All open-source projects I approached with this issue cooperated swiftly and patched their code. Some offered bounties and cool swag."

"This research was reported to GitHub's bug bounty program. They categorized the issue as informational, placing the onus on users to secure their uploaded artifacts." My aim in this article is to highlight the potential for unintentionally exposing sensitive information through artifacts in GitHub Actions workflows. To address the concern, I developed a proof of concept (PoC) custom action that safeguards against such leaks. The action uses the @actions/artifact package, which is also used by the upload-artifact GitHub action, adding a crucial security layer by using an open-source scanner to audit the source directory for secrets and blocking the artifact upload when risk of accidental secret exposure exists. This approach promotes a more secure workflow environment...

As this research shows, we have a gap in the current security conversation regarding artifact scanning. GitHub's deprecation of Artifacts V3 should prompt organizations using the artifacts mechanism to reevaluate the way they use it. Security defenders must adopt a holistic approach, meticulously scrutinizing every stage — from code to production — for potential vulnerabilities. Overlooked elements like build artifacts often become prime targets for attackers. Reduce workflow permissions of runner tokens according to least privilege and review artifact creation in your CI/CD pipelines. By implementing a proactive and vigilant approach to security, defenders can significantly strengthen their project's security posture.
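The guard Avital describes can be approximated in a few lines: scan the directory that is about to become an artifact for strings that look like credentials, and refuse to upload if anything matches. A simplified sketch (the patterns cover a couple of well-known token formats; this illustrates the approach, not the actual PoC action or the `@actions/artifact` API):

```python
import re
from pathlib import Path

# Patterns for a few well-known token formats; a real scanner covers many more.
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token
    re.compile(r"ghs_[A-Za-z0-9]{36}"),   # GitHub Actions installation token
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
]

def scan_for_secrets(directory):
    """Return (file, matched-token) pairs found anywhere under `directory`."""
    hits = []
    for path in Path(directory).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for pattern in SECRET_PATTERNS:
            for match in pattern.findall(text):
                hits.append((str(path), match))
    return hits

def safe_upload(directory):
    """Block the artifact upload when the directory contains secrets."""
    hits = scan_for_secrets(directory)
    if hits:
        raise RuntimeError(f"refusing to upload artifact; found secrets: {hits}")
    # ...hand off to the real upload mechanism here...
```

A guard like this would have caught the Super-Linter case, where a log file full of environment variables was uploaded as a debugging artifact.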

The blog post also notes protection and mitigation features from Palo Alto Networks...

Social Networks

India's Influencers Fear a New Law Could Make them Register with the Government (restofworld.org) 25

It's the largest country on Earth, home to 1.4 billion people. But "The Indian government has plans to classify social media creators as 'digital news broadcasters,'" according to the nonprofit site RestofWorld.org.

While there's "no clarity" on the government's next move, the proposed legislation would require social media creators "to register with the government, set up a content evaluation committee that checks all content before it is published, and appoint complaint handlers — all at their own expense. Any failures in compliance could lead to criminal charges, including jail terms." On July 26, the Hindustan Times reported that the government plans to tweak the proposed Broadcasting Services (Regulation) Bill, which aims to combine all regulations for broadcasters under one law. As per a new version of the bill, which has been reviewed by Rest of World, the government defines "digital news broadcaster" as "any person who broadcasts news and current affairs programs through an online paper, news portal, website, social media intermediary, or other similar medium as part of a systematic business, professional or commercial activity."

Creators and digital rights activists believe the potential legislation will tighten the government's grip over online content and threaten the last bastion of press freedom for independent journalists in the country. Over 785 Indian creators have sent a letter to the government seeking more transparency in the process of drafting the bill. Creators have also stormed social media with hashtags like #KillTheBill, and made videos to educate their followers about the proposal.

One YouTube creator told the site that if the government requires them to appoint a "grievance redressal officer," they might simply film themselves responding to grievances, to "make content out of it."

Data Storage

Ask Slashdot: What Network-Attached Storage Setup Do You Use? 135

"I've been somewhat okay about backing up our home data," writes long-time Slashdot reader 93 Escort Wagon.

But they could use some good advice: We've got a couple separate disks available as local backup storage, and my own data also gets occasionally copied to encrypted storage at Backblaze. My daughter has her own "cloud" backups, which seem to be a manual push every once in a while of random files/folders she thinks are important. Including our media library, between my stuff, my daughter's, and my wife's... we're probably talking in the neighborhood of 10 TB for everything at present. The whole setup is obviously cobbled together, and the process is very manual. Plus it's annoying since I'm handling Mac, Linux, and Windows backups completely differently (and sub-optimally). Also, unsurprisingly, the amount of data we possess does seem to be increasing with time.

I've been considering biting the bullet and buying a NAS [network-attached storage device], and redesigning the entire process — both local and remote. I'm familiar with Synology and DSM from work, and the DS1522+ looks appealing. I've also come across a lot of recommendations for QNAP's devices, though. I'm comfortable tackling this on my own, but I'd like to throw this out to the Slashdot community.
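For sizing a unit like that, the usual back-of-envelope rule for single-parity arrays (RAID 5, or Synology's SHR with equal-size disks) is that one disk's worth of capacity goes to redundancy. A quick sketch, assuming a hypothetical 5-bay enclosure filled with 8 TB disks:

```python
def usable_tb(bays, disk_tb, parity_disks=1):
    """Approximate usable capacity of a RAID 5/6-style array:
    total raw capacity minus the disks consumed by parity."""
    return (bays - parity_disks) * disk_tb

# 5 bays of 8 TB disks with single parity (RAID 5 / default SHR):
print(usable_tb(5, 8))                   # 32.0
# Dual parity (RAID 6 / SHR-2) trades capacity for surviving two disk failures:
print(usable_tb(5, 8, parity_disks=2))   # 24.0
```

Against the roughly 10 TB library described above, even the dual-parity configuration leaves ample headroom for growth and local backup versions.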

What NAS do you like for home use? And what disks did you put in it? What have your experiences been?

Long-time Slashdot reader AmiMoJo asks "Have you considered just building one?" while suggesting the cheapest option is low-powered Chinese motherboards with soldered-in CPUs. And in the comments on the original submission, other Slashdot readers shared their examples:
  • destined2fail1990 used an AMD Threadripper to build their own NAS with 10Gbps network connectivity.
  • DesertNomad is using "an ancient D-Link" to connect two Synology DS220 DiskStations.
  • Darth Technoid attached six Seagate drives to two MacBooks. "Basically, I found a way to make my older Mac useful by simply leaving it on all the time, with the external drives attached."

But what's your suggestion? Share your own thoughts and experiences. What NAS do you like for home use? What disks would you put in it? And what have your experiences been?

Programming

GitHub Promises 'Additional Guardrails' After Wednesday's Update Triggers Short Outage (githubstatus.com) 12

Wednesday GitHub "broke itself," reports The Register, writing that "the Microsoft-owned code-hosting outfit says it made a change involving its database infrastructure, which sparked a global outage of its various services."

Or, as the Verge puts it, GitHub experienced "some major issues" which apparently lasted for 36 minutes: When we first published this story, navigating to the main GitHub website showed an error message that said "no server is currently available to service your request," but the website was working again soon after. (The error message also featured an image of an angry unicorn.) GitHub's report of the incident also listed problems with things like pull requests, GitHub Pages, Copilot, and the GitHub API.

GitHub attributed the downtime to "an erroneous configuration change rolled out to all GitHub.com databases that impacted the ability of the database to respond to health check pings from the routing service. As a result, the routing service could not detect healthy databases to route application traffic to. This led to widespread impact on GitHub.com starting at 23:02 UTC." (Downdetector showed "more than 10,000 user reports of problems," according to the Verge, "and that the problems were reported quite suddenly.")
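The failure mode is worth spelling out: the routing layer forwards traffic only to databases that answer health-check pings, so a configuration change that breaks the ping on every database takes the whole site down at once, even though the databases themselves are intact. A toy illustration of that dependency (hypothetical classes, not GitHub's actual routing service):

```python
class Database:
    def __init__(self, name, answers_health_pings=True):
        self.name = name
        self.answers_health_pings = answers_health_pings

    def ping(self):
        return self.answers_health_pings

def route(databases):
    """Return a healthy database to send traffic to, or raise if none respond."""
    healthy = [db for db in databases if db.ping()]
    if not healthy:
        raise RuntimeError("no server is currently available to service your request")
    return healthy[0]

dbs = [Database("db1"), Database("db2")]
print(route(dbs).name)  # db1

# An erroneous config change that breaks health checks on *all* databases
# takes the whole site down, even though the data is fine:
for db in dbs:
    db.answers_health_pings = False
# route(dbs) would now raise RuntimeError
```

The fix GitHub describes, faster rollback and more resilience to dependency failures, targets exactly this: limiting how long the "no healthy backends" state can persist.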

GitHub's incident report adds that "Given the severity of this incident, follow-up items are the highest priority work for teams at this time." To prevent recurrence we are implementing additional guardrails in our database change management process. We are also prioritizing several repair items such as faster rollback functionality and more resilience to dependency failures.

Transportation

US Presses the 'Reset Button' On Technology That Lets Cars Talk To Each Other (npr.org) 95

An anonymous reader quotes a report from NPR: Safety advocates have been touting the potential of technology that allows vehicles to communicate wirelessly for years. So far, the rollout has been slow and uneven. Now the U.S. Department of Transportation is releasing a roadmap it hopes will speed up deployment of that technology -- and save thousands of lives in the process. "This is proven technology that works," Shailen Bhatt, head of the Federal Highway Administration, said at an event Friday to mark the release of the deployment plan (PDF) for vehicle-to-everything, or V2X, technology across U.S. roads and highways. V2X allows cars and trucks to exchange location information with each other, and potentially cyclists and pedestrians, as well as with the roadway infrastructure itself. Users could send and receive frequent messages to and from each other, continuously sharing information about speed, position, and road conditions -- even in situations with poor visibility, including around corners or in dense fog or heavy rain. [...]

Despite enthusiasm from safety advocates and federal regulators, the technology has faced a bumpy rollout. During the Obama administration, the National Highway Traffic Safety Administration proposed making the technology mandatory on cars and light trucks. But the agency later dropped that idea during the Trump administration. The deployment of V2X has been "hampered by regulatory uncertainty," said John Bozzella, president and CEO of the Alliance for Automotive Innovation, a trade group that represents automakers. But he's optimistic that the new plan will help. "This is the reset button," Bozzella said at Friday's announcement. "This deployment plan is a big deal. It is a crucial piece of this V2X puzzle." The plan lays out some goals and targets for the new technology. In the short-term, the plan aims to have V2X infrastructure in place on 20% of the National Highway System by 2028, and for 25% of the nation's largest metro areas to have V2X enabled at signalized intersections. V2X technology still faces some daunting questions, including how to pay for the rollout of critical infrastructure and how to protect connected vehicles from cyberattack. But safety advocates say it's past time to find the answers.

Programming

'The Best, Worst Codebase' 29

Jimmy Miller, programmer and co-host of the Future of Coding podcast, writes in a blog: When I started programming as a kid, I didn't know people were paid to program. Even as I graduated high school, I assumed that the world of "professional development" looked quite different from the code I wrote in my spare time. When I lucked my way into my first software job, I quickly learned just how wrong and how right I had been. My first job was a trial by fire; to this day, that codebase remains the worst and the best codebase I ever had the pleasure of working in. While the codebase will forever remain locked behind the proprietary walls of that particular company, I hope I can share with you some of its most fun and scary stories.

[...] Every morning at 7:15 the employees table was dropped. All the data completely gone. Then a CSV from ADP was uploaded into the table. During this time you couldn't log in to the system. Sometimes this process failed. But this wasn't the end of the process. The data needed to be replicated to headquarters. So an email was sent to a man, who every day would push a button to copy the data.
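The drop-and-reload pattern described above is easy to reproduce, and reproducing it makes the failure window obvious: between the DROP and the completed load, the employees table simply doesn't exist, which is why logins failed during the sync. A sketch with SQLite standing in for the real database:

```python
import csv, io, sqlite3

conn = sqlite3.connect(":memory:")

def nightly_sync(conn, csv_text):
    """The 7:15 AM job: drop everything, then reload from the payroll export.
    Between the DROP and the COMMIT, the employees table doesn't exist --
    any login attempt during that window finds no user records at all."""
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS employees")
    cur.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
    for row in csv.reader(io.StringIO(csv_text)):
        cur.execute("INSERT INTO employees VALUES (?, ?)", row)
    conn.commit()

nightly_sync(conn, "1,Ada\n2,Grace\n")
print(conn.execute("SELECT COUNT(*) FROM employees").fetchone()[0])  # 2
```

A safer variant loads into a staging table and renames it into place in a single transaction, so readers never observe an empty or missing table.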

[...] But what is a database without a codebase? And what a magnificent codebase it was. When I joined, everything was in Team Foundation Server. If you aren't familiar, this was a Microsoft-made centralized source control system. The main codebase I worked in was half VB, half C#. It ran on IIS and used session state for everything. What did this mean in practice? If you navigated to a page via Path A or Path B, you'd see very different things on that page. But to describe this codebase as merely half VB, half C# would be to do it a disservice. Every JavaScript framework that existed at the time was checked into this repository. Typically, with some custom changes the author believed needed to be made. Most notably, Knockout, Backbone, and Marionette. But of course, there was a smattering of jQuery and jQuery plugins.

Businesses

Sonos Lays Off 100 Employees as Its App Crisis Continues (theverge.com) 52

An anonymous reader shares a report: Sonos laid off approximately 100 employees this morning, a source familiar with the situation tells The Verge. Those affected -- I'm told the marketing division took a significant hit -- abruptly lost access to the company's internal network. Sonos is also in the process of winding down some of its customer support offices, including one in Amsterdam that will close later this year.

Sonos confirmed the layoffs to The Verge on Wednesday afternoon, providing a statement from CEO Patrick Spence. [...] These latest cuts come as Sonos continues to grapple with the fallout from its disastrous mobile app redesign. On Sonos' earnings call last week, CEO Patrick Spence stressed that fixing the app is the company's number one priority -- so much so that two hardware launches planned for later this year have now been delayed to keep all focus on the app.
Further reading: Sonos' $30M App Fail is Cautionary Tale Against Rushing Unnecessary Updates.

AI

New Research Reveals AI Lacks Independent Learning, Poses No Existential Threat (neurosciencenews.com) 129

ZipNada writes: New research reveals that large language models (LLMs) like ChatGPT cannot learn independently or acquire new skills without explicit instructions, making them predictable and controllable. The study dispels fears of these models developing complex reasoning abilities, emphasizing that while LLMs can generate sophisticated language, they are unlikely to pose existential threats. However, the potential misuse of AI, such as generating fake news, still requires attention. The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) -- the premier international conference in natural language processing -- reveals that LLMs have a superficial ability to follow instructions and excel at proficiency in language; however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe. "The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr Harish Tayyar Madabushi, computer scientist at the University of Bath and co-author of the new study on the 'emergent abilities' of LLMs.

Professor Iryna Gurevych added: "... our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news."

Science

WHO To Scrap Weak PFAS Drinking Water Guidelines After Alleged Corruption (theguardian.com) 20

An anonymous reader quotes a report from The Guardian: The World Health Organization (WHO) is poised to scrap controversial drinking water guidelines proposed for two toxic PFAS "forever chemicals." The move follows allegations that the process of developing the figures was corrupted by industry-linked researchers aiming to undercut strict new US PFAS limits and weaken standards in the developing world. Many independent scientists charged that the proposed WHO drinking water guidelines for perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS) were weak, did not fully protect human health, ignored credible research, and were far above limits set by regulators in the US and EU. The guidelines would have allowed far more PFAS in drinking water than what is allowed by the US Environmental Protection Agency.

Though the earlier guidelines were drafts, and proposed rules all go through a revision process, the WHO is conducting an entirely new review of scientific literature and disbanded the panel of scientists who developed the draft guidelines. It established a new panel with fewer industry-linked scientists and more regulatory officials, moves that have not happened in other revisions, said Betsy Southerland, a former EPA manager in the agency's water division. "This is unprecedented, but the WHO got unprecedented criticism," Southerland said. The WHO told the Guardian in a statement that the moves are part of "an ongoing process" and will include guidelines for other PFAS compounds.

Scientists critical of the limits charged that the WHO ignored high-quality research to create a sense of doubt about the science around PFAS. EPA and EU regulators carried out an exhaustive literature review to find all human and animal studies, and used the best of those papers to establish their limits, Southerland said. The WHO, however, ignored all human studies and determined most animal studies were "too flawed" to use, Southerland said. The organization concluded there was not enough research to set health-based guidelines, which she called a "shocking decision." "There is far more health data for these chemicals than has ever been available for any pollutant in the history of the WHO," Southerland said. Instead, the WHO largely based its guidelines on its review of technological research, but ignored most of those studies as well, Southerland said. The body concluded filtration systems could reliably remove PFOA and PFOS at 100 ppt, even though US water utilities remove them to below 4 ppt. The decisions bear the industry's fingerprints, researchers say.

Social Networks

Deep-Live-Cam Goes Viral, Allowing Anyone To Become a Digital Doppelganger (arstechnica.com) 17

An anonymous reader quotes a report from Ars Technica: Over the past few days, a software package called Deep-Live-Cam has been going viral on social media because it can take the face of a person extracted from a single photo and apply it to a live webcam video source while following pose, lighting, and expressions performed by the person on the webcam. While the results aren't perfect, the software shows how quickly the tech is developing -- and how the capability to deceive others remotely is getting dramatically easier over time. The Deep-Live-Cam software project has been in the works since late last year, but example videos that show a person imitating Elon Musk and Republican Vice Presidential candidate J.D. Vance (among others) in real time have been making the rounds online. The avalanche of attention briefly made the open source project leap to No. 1 on GitHub's trending repositories list (it's currently at No. 4 as of this writing), where it is available for download for free. [...]

Like many open source GitHub projects, Deep-Live-Cam wraps together several existing software packages under a new interface (and is itself a fork of an earlier project called "roop"). It first detects faces in both the source and target images (such as a frame of live video). It then uses a pre-trained AI model called "inswapper" to perform the actual face swap and another model called GFPGAN to improve the quality of the swapped faces by enhancing details and correcting artifacts that occur during the face-swapping process. The inswapper model, developed by a project called InsightFace, can guess what a person (in a provided photo) might look like using different expressions and from different angles because it was trained on a vast dataset containing millions of facial images of thousands of individuals captured from various angles, under different lighting conditions, and with diverse expressions.

During training, the neural network underlying the inswapper model developed an "understanding" of facial structures and their dynamics under various conditions, including learning the ability to infer the three-dimensional structure of a face from a two-dimensional image. It also became capable of separating identity-specific features, which remain constant across different images of the same person, from pose-specific features that change with angle and expression. This separation allows the model to generate new face images that combine the identity of one face with the pose, expression, and lighting of another.
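Structurally, the pipeline described runs three stages per video frame: detect faces, swap identity, then enhance the result. The sketch below stubs out the three models to show only the data flow; in Deep-Live-Cam those roles are played by a face detector, inswapper, and GFPGAN, and the stand-in functions here are purely illustrative:

```python
def detect_faces(image):
    """Stand-in for the face detector: return the faces found in an image."""
    return [{"bbox": (0, 0, 64, 64), "image": image}]

def swap_identity(source_face, target_face):
    """Stand-in for the inswapper model: keep the target's pose (bbox here),
    apply the source's identity."""
    return {"bbox": target_face["bbox"], "image": source_face["image"]}

def enhance(face):
    """Stand-in for GFPGAN: clean up artifacts from the swap."""
    return face

def process_frame(source_photo, frame):
    """Apply the source identity to every face found in one video frame."""
    source_face = detect_faces(source_photo)[0]
    out = frame
    for face in detect_faces(frame):
        swapped = enhance(swap_identity(source_face, face))
        # a real implementation composites `swapped` back into `out` here
    return out
```

Run per webcam frame, this loop is what turns a single still photo into a live imitation.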

United States

The Nation's Best Hackers Found Vulnerabilities in Voting Machines - But No Time To Fix Them (politico.com) 189

Hackers at the DEF CON conference in Las Vegas identified vulnerabilities in voting machines slated for use in the 2024 U.S. election, but fixes are unlikely to be implemented before November 5, organizers said. The annual "Voting Village" event, held away from the main conference floor due to security concerns, drew election officials and cybersecurity experts. Organizers plan to release a detailed report on the vulnerabilities found.

Catherine Terranova, an event organizer, said major systemic changes are difficult to make 90 days before an election, particularly given heightened scrutiny of election security in 2024. The process of addressing vulnerabilities involves manufacturer approval, recertification by authorities, and updating individual devices. This typically takes longer than the time remaining before the election, according to Scott Algeier, executive director of the Information Technology-Information Sharing and Analysis Center. The event comes amid ongoing concerns about foreign targeting of U.S. elections, including a recent hack of former President Donald Trump's campaign, reportedly by Iran.

Space

Milky Way May Escape Fated Collision With Andromeda Galaxy (science.org) 33

sciencehabit shares a report from Science.org: For years, astronomers thought it was the Milky Way's destiny to collide with its near neighbor the Andromeda galaxy a few billion years from now. But a new simulation finds a 50% chance the impending crunch will end up a near-miss, at least for the next 10 billion years. Astronomers have known since 1912 that Andromeda is heading pretty much straight at the Milky Way, at a speed of 110 kilometers per second. Such galaxy mergers, which can be seen in progress elsewhere in the universe, are spectacularly messy affairs. Although most stars survive unscathed, the galaxies' spiral structures are obliterated, sending streams of stars spinning off into space. After billions of years, the merged galaxies typically settle into a single elliptical galaxy: a giant featureless blob of stars. A study from 2008 suggested a Milky Way-Andromeda merger was inevitable within the next 5 billion years, and that in the process the Sun and Earth would get gravitationally grabbed by Andromeda for a time before ending up in the distant outer suburbs of the resulting elliptical, which the researchers dub "Milkomeda."

In the new simulation, researchers made use of the most recent and best estimates of the motion and mass of the four largest galaxies in the Local Group. They then plugged those into simulations developed by the Institute for Computational Cosmology at Durham University. First, they ran the simulation including just the Milky Way and Andromeda and found that they merged in slightly less than half of the cases -- lower odds than other recent estimates. When they included the effect of the Triangulum galaxy, the Local Group's third largest, the merger probability increased to about two-thirds. But with the inclusion of the Large Magellanic Cloud, a satellite galaxy of the Milky Way that is the fourth largest in the Local Group, those chances dropped back down to a coin flip. And if the cosmic smashup does happen, it won't be for about 8 billion years. "As it stands, proclamations of the impending demise of our Galaxy appear greatly exaggerated," the researchers write. Meanwhile, if the accelerating expansion of the universe continues unabated, all other galaxies will disappear beyond our cosmic event horizon, leaving Milkomeda as the sole occupant of the visible universe.
The study is available as a preprint on arXiv.
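The quoted figures allow a rough sanity check: at 110 km/s over Andromeda's distance of roughly 2.5 million light-years (a standard figure, not stated in the article), a constant-velocity approach takes several billion years, the same order as the simulation's roughly 8 billion, which also accounts for gravitational acceleration:

```python
KM_PER_LIGHT_YEAR = 9.461e12
SECONDS_PER_YEAR = 3.156e7

distance_km = 2.5e6 * KM_PER_LIGHT_YEAR  # ~2.5 million light-years to Andromeda
speed_km_s = 110                         # approach speed from the article

years = distance_km / speed_km_s / SECONDS_PER_YEAR
print(f"{years / 1e9:.1f} billion years")  # ~6.8 billion years
```

Gravity speeds up the approach as the galaxies close in, so naive straight-line estimates like this one only bound the timescale; the detailed simulations are what turn it into a coin flip.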

Apple

Apple Threatens To Remove Patreon From App Store Over Billing Dispute (techcrunch.com) 83

Apple has threatened to remove crowdfunding app Patreon from the App Store if creators continue to use unsupported third-party billing options or disable transactions on iOS, instead of using Apple's own in-app purchasing system. From a report: In a blog post and email to Patreon creators about upcoming changes to membership in the iOS app, the company says it's begun a 16-month-long migration process to move all creators to Apple's subscription billing by November 2025. Patreon also informed creators it will switch them over to subscription billing as of November 2024, but they will be able to decide whether to price their memberships at a higher fee to cover Apple's commission or decide if they want to absorb the fee themselves. In addition, creators can opt to delay the migration in their Patreon settings to November 2025, the company said. However, if creators choose the latter option, they won't be able to offer memberships in the iOS app until they adopt Apple's subscription billing, as Apple rules will apply as of this November.
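The pricing choice creators face is simple arithmetic: to net the same payout after a commission, the in-app price must be grossed up by dividing by one minus the commission rate. A quick sketch, assuming a flat 30% cut (the standard App Store rate for larger developers; actual rates vary):

```python
def grossed_up_price(net_price, commission=0.30):
    """Price to charge in-app so the creator nets `net_price` after the cut."""
    return net_price / (1 - commission)

# A $5 membership must be priced at ~$7.14 in-app to net the same $5:
price = grossed_up_price(5.00)
print(round(price, 2))          # 7.14
print(round(price * 0.70, 2))   # 5.0 -- the creator's take after the commission
```

The alternative Patreon describes, absorbing the fee, simply means accepting `net_price * 0.70` instead of raising the sticker price.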

IT

Co-Founder of DDoSecrets Was Dark Web Drug Kingpin (404media.co) 25

A co-founder of transparency activism organization Distributed Denial of Secrets (DDoSecrets) was a dark web drug kingpin who ran the successor to the infamous Silk Road marketplace and was later convicted of child abuse imagery crimes. From a report: The co-founder was Thomas White, who was prosecuted for administering the Silk Road 2.0 drug marketplace and for possessing images of child sexual abuse material. He decided to reveal his involvement in DDoSecrets to 404 Media after serving a five-year prison sentence. "I was told, in no uncertain terms, that if I spoke out publicly against Ross Ulbricht's excessive sentence, [DDoSecrets] or anything similar, that I would spend much more time in prison," he said. "Now I can freely speak again, it is important to use it or lose it. So #FreeRoss."

The news provides more insight into the origins of DDoSecrets, which has filled the void left by WikiLeaks to become the most significant site publishing massive data dumps at this time. The other co-founder is Emma Best, who for years has archived, cataloged, and distributed large amounts of hacked information online. "Emma and I have been communicating for many years, and both know the difficulty in finding and verifying leaked material. It was a shared vision to make this process easier for people better placed than ourselves, to use the data to counteract the veil of secrecy protecting many bad actors in society," White told 404 Media in an email in July.

Biotech

Can Food Scientists Re-Invent Sugar? (msn.com) 102

The Wall Street Journal visits scientists at Harvard University's Wyss Institute for Biologically Inspired Engineering who are researching a "sugar-to-fiber" enzyme (normally used by plants to create stalks). They're testing a version they've "encased in spherical nanoparticles — tiny mesh-like cages made of pectin that allow the enzyme to be added to food without being activated until it reaches the intestine.

"Once there, a change in pH causes the cage to expand, freeing the enzyme to float through its holes and start converting sugar to fiber." The Wyss Institute's goal for its enzyme product was to reduce the sugar absorbed from food by 30%, though it has the potential to remove even more than that, says Sam Inverso, director of business development partnerships at the Wyss Institute. The enzyme's ability to turn sugar into fiber is also key, as most Americans don't get nearly enough fiber in their diet, says Adama Sesay, a senior engineer at the Wyss Institute who worked on the project...

The Wyss Institute is now licensing the technology to a company to help bring its enzyme product to market, a process that entails additional testing and work to secure regulatory approval. Inverso says that the aim is for the product to be available to U.S. food manufacturers within the next two years, and that other encapsulated enzymes could follow: products that reduce lactose absorption after drinking milk, or cut gluten after eating bread. For now the enzyme works better in solid food than in a liquid. Producing it in large quantities and at low cost is still a ways off — currently it's 100 times more expensive than raw sugar, Inverso says.

And the Journal notes they're not the only ones working on the problem: San Francisco-based startup Biolumen recently launched a product called Monch Monch, a drink mix made of fibrous, microscopic sponges designed to soak up sugar and prevent it from reaching the bloodstream. At mealtime consumers can blend a teaspoon of Monch Monch, which has no taste, smell or color, into drinks from water to wine. Once it has reached the stomach, the sponges start to swell and sequester sugar, reducing its burden on the body, says Dr. Robert Lustig, Biolumen's co-founder and chief medical officer... One gram of Monch Monch can sequester six grams of sugar, says Lustig... The product, introduced as a dietary supplement, can also be used as a food ingredient under a Food and Drug Administration principle known as "generally recognized as safe." Packets of Monch Monch are available for purchase online, and Biolumen says it is in talks with U.S. food manufacturers it declined to name about its use in other products...

Food companies are betting on other solutions for now. Cereal startup Magic Spoon uses allulose, a natural sugar found in figs and raisins that is growing in popularity, helped by FDA guidance that allows it to be excluded from sugar or added-sugar totals on nutrition labels. Ingredient company Tate & Lyle, which makes allulose from corn kernels, says the sweetener tastes like sugar and adds bulk and caramel color, but passes through the body without being metabolized... Chicago-based Blommer Chocolate recently launched a line of reduced-sugar chocolate and confectionery products made with Incredo, a sugar that has been physically altered to taste sweeter using a mineral carrier that dissolves faster in saliva and targets the sweet-taste receptors on the tongue. Incredo's use enables manufacturers to use up to 50% less sugar, the company says.

The article even notes that "researchers still working to reduce sugar are peddling new technologies, like individual sugar crystals modified to dissolve more quickly in the mouth, making food taste sweeter."
Ubuntu

Ubuntu Will Start Shipping With the Latest Upstream Linux Kernel - Even Release Candidates (omgubuntu.co.uk) 31

Here's a question from the blog OMG Ubuntu. "Ever get miffed reading about a major new Ubuntu release only to learn it doesn't come with the newest Linux kernel?

"Well, that'll soon be a thing of the past." Canonical has announced a big shift in its kernel selection process for future Ubuntu releases, an "aggressive kernel version commitment policy" pivot that means it will ship the latest upstream kernel code in development at the time of a new Ubuntu release.

Yes, even if that upstream kernel hasn't yet seen a formal stable release (and received the requisite newspaper-graphic-topped rundown on this blog). Which is a huge change. Currently, new Ubuntu releases include the most recent stable Linux kernel release at the time of the kernel freeze milestone in the Ubuntu development cycle.

Here's the official announcement by Canonical's Brett Grandbois. "Ubuntu will now ship the absolute latest available version of the upstream Linux kernel at the specified Ubuntu release freeze date, even if upstream is still in Release Candidate status..." It is actually expected that Late Releases will be the exception rather than the norm and in most releases these guidelines will not be necessary as the upstream kernel will release with enough time for the Ubuntu kernel to stabilize. However, adopting a more aggressive kernel version commitment policy does require us to be prepared for a possible Late Release situation and therefore informing the community on what they can expect.
Republicans

Trump's Campaign 'Says It Has Been Hacked', Reports CNN (cnn.com) 210

CNN reports: Former President Donald Trump's campaign said Saturday in a statement that it had been hacked.

Politico reported earlier Saturday that it had received emails from an anonymous account with documents from inside Trump's campaign operation. "These documents were obtained illegally from foreign sources hostile to the United States, intended to interfere with the 2024 election and sow chaos throughout our Democratic process," Trump campaign spokesperson Steven Cheung said in a statement to CNN.

Cheung pointed to a recent report published by Microsoft that said Iranian operatives had ramped up their attempts to influence and monitor the US presidential election by creating fake news outlets targeting liberal and conservative voters and by trying to hack an unnamed presidential campaign... Still, it's not clear whether Iran was responsible for the hack. CNN has reached out to the Iranian mission to the United Nations for comment...

Politico reported it had received emails that contained internal communications from a senior Trump campaign official and a [271-page] research dossier the campaign had put together on Trump's running mate, Ohio Sen. JD Vance. The dossier included what the Trump campaign identified as Vance's potential vulnerabilities...

In 2016, days before the Democratic National Convention, WikiLeaks published nearly 20,000 emails from the Democratic National Committee server.

Security

USPS Text Scammers Duped His Wife, So He Hacked Their Operation (wired.com) 61

Security researcher Grant Smith uncovered a large-scale smishing scam where scammers posing as the USPS tricked victims into providing their credit card details through fake websites. Smith hacked into the scammers' systems, gathered evidence, and collaborated with the USPS and a US bank to protect over 438,000 unique credit cards from fraudulent activity. Wired reports: The flood of text messages started arriving early this year. They carried a similar thrust: The United States Postal Service is trying to deliver a parcel but needs more details, including your credit card number. All the messages pointed to websites where the information could be entered. Like thousands of others, security researcher Grant Smith got a USPS package message. Many of his friends had received similar texts. A couple of days earlier, he says, his wife called him and said she'd inadvertently entered her credit card details. With little going on after the holidays, Smith began a mission: Hunt down the scammers. Over the course of a few weeks, Smith tracked down the Chinese-language group behind the mass-smishing campaign, hacked into their systems, collected evidence of their activities, and started a months-long process of gathering victim data and handing it to USPS investigators and a US bank, allowing people's cards to be protected from fraudulent activity.

In total, people entered 438,669 unique credit cards into 1,133 domains used by the scammers, says Smith, a red team engineer and the founder of offensive cybersecurity firm Phantom Security. Many people entered multiple cards each, he says. More than 50,000 email addresses were logged, including hundreds of university email addresses and 20 military or government email domains. The victims were spread across the United States -- California, the state with the most, had 141,000 entries -- with more than 1.2 million pieces of information being entered in total. "This shows the mass scale of the problem," says Smith, who is presenting his findings at the Defcon security conference this weekend and previously published some details of the work. But the scale of the scamming is likely to be much larger, Smith says, as he didn't manage to track down all of the fraudulent USPS websites, and the group behind the efforts have been linked to similar scams in at least half a dozen other countries.
