United States

US Invests $20 Billion More to Finance Clean-Energy Projects (msn.com) 86

Thursday America's Environmental Protection Agency "awarded $20 billion to help finance clean-energy projects across the country," reports the Washington Post. The money comes from the Greenhouse Gas Reduction Fund established by President Biden's signature climate law, the Inflation Reduction Act. The fund seeks to leverage public and private dollars to invest in clean-energy technologies such as solar panels, heat pumps and more.

The program is potentially one of the most consequential — yet least understood — parts of the climate law...

Simply put, the program allows people to access low-interest loans for clean-energy projects that they might not otherwise have received. Imagine a community group that wants to install electric vehicle charging stations at its neighborhood recreation center but can't get a loan from a bank or a lender. As is often the case, potential lenders say they're hesitant to support a novel green technology or a business without a track record of success. Low-income and minority communities have long encountered such obstacles in trying to attract private capital. The program aims to overcome this problem by providing a huge influx of federal cash — $27 billion in total — for nonprofit organizations to dole out to clean-energy projects nationwide. Each nonprofit will serve as a "green bank" that offers more favorable lending rates than commercial banks. "It's just really hard to get banks to bring capital into low-income communities, especially for these new projects that they're not used to financing," said Adrian Deveny, the founder of the firm Climate Vision and the former director of energy and environmental policy for Senate Majority Leader Charles E. Schumer (D-N.Y.), a key architect of the Inflation Reduction Act....

The EPA is awarding money to eight nonprofits, which have committed to leverage nearly $7 in private capital for every $1 of federal investment. The nonprofits have also pledged to ensure that at least 70 percent of the funds will benefit disadvantaged communities, and that the financed projects will cut carbon dioxide emissions by up to 40 million metric tons a year — equivalent to the annual emissions of nearly 9 million gasoline-powered cars... [The nonprofit] Coalition for Green Capital will use a $5 billion award to establish a "national green bank," co-founder and CEO Reed Hundt said. "We're going to be able to cause about $100 billion of total additional investment over a seven-year time period with that number, because we can leverage it," Hundt said.
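
The leverage arithmetic above is worth making explicit. A minimal sketch in Python (the nearly 7:1 private-to-federal ratio is from the EPA awards described above; everything else is illustrative):

```python
def total_mobilized(federal: float, private_per_federal: float) -> float:
    """Total investment when each federal dollar attracts private co-investment."""
    return federal * (1 + private_per_federal)

# The nearly 7:1 ratio comes from the commitments described above.
print(f"${total_mobilized(20e9, 7) / 1e9:.0f}B from the $20B tranche")  # $160B
```

Hundt's projection of $100 billion from a $5 billion award is a 20x multiple, larger than one-shot 7:1 leverage; it presumably assumes that, as loans are repaid over the seven years, the same seed capital is lent out and leveraged again.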

The Media

Mock 'News' Sites With Russian Ties Pop Up in U.S. (rawstory.com) 199

An anonymous reader shared this story from the New York Times: Into the depleted field of journalism in America, a handful of websites have appeared in recent weeks with names suggesting a focus on news close to home: D.C. Weekly, the New York News Daily, the Chicago Chronicle and a newer sister publication, the Miami Chronicle. In fact, they are not local news organizations at all. They are Russian creations, researchers and government officials say, meant to mimic actual news organizations to push Kremlin propaganda by interspersing it among an at-times odd mix of stories about crime, politics and culture.

While Russia has long sought ways to influence public discourse in the United States, the fake news organizations — at least five, so far — represent a technological leap in its efforts to find new platforms to dupe unsuspecting American readers. The sites, the researchers and officials said, could well be the foundations of an online network primed to surface disinformation ahead of the American presidential election in November...

The Miami Chronicle's website first appeared on Feb. 26. Its tagline falsely claims to have delivered "the Florida News since 1937."

Amid some true reports, the site published a story last week about a "leaked audio recording" of Victoria Nuland, the U.S. under secretary of state for political affairs, discussing a shift in American support for Russia's beleaguered opposition after the death of the Russian dissident Aleksei A. Navalny. The recording is a crude fake, according to administration officials who would speak only anonymously to discuss intelligence matters.

From the Raw Story: The network was discovered by researchers Patrick Warren and Darren Linvill of Clemson University's Media Forensics Hub, who tell the Times that its websites are designed to lend journalistic credibility to slickly produced propaganda. "The page is just there to look realistic enough to fool a casual reader into thinking they're reading a genuine, U.S.-branded article," Linvill told the Times.

Data Storage

Study Finds That We Could Lose Science If Publishers Go Bankrupt (arstechnica.com) 66

A recent survey found that academic organizations are failing to preserve digital material -- "including science paid for with taxpayer money," reports Ars Technica, highlighting the need for improved archiving standards and responsibilities in the digital age. From the report: The work was done by Martin Eve, a developer at Crossref. That's the organization that runs the DOI system, which provides a permanent pointer to digital documents, including almost every scientific publication. If updates are done properly, a DOI will always resolve to a document, even if that document gets shifted to a new URL. The system also has a way of handling documents that disappear from their expected location, as might happen if a publisher went bankrupt. There is a set of what are called "dark archives" that the public doesn't have access to, but that should contain copies of anything that's had a DOI assigned. If anything goes wrong with a DOI, the relevant dark archive should open access to its copy, and the DOI should be updated to point there. For that to work, however, copies of everything published have to be in the archives. So Eve decided to check whether that's the case.

Using the Crossref database, Eve got a list of over 7 million DOIs and then checked whether the documents could be found in archives. He included well-known ones, like the Internet Archive at archive.org, as well as some dedicated to academic works, like LOCKSS (Lots of Copies Keeps Stuff Safe) and CLOCKSS (Controlled Lots of Copies Keeps Stuff Safe). The results were... not great. When Eve broke down the results by publisher, less than 1 percent of the 204 publishers had put the majority of their content into multiple archives. (The cutoff was 75 percent of their content in three or more archives.) Fewer than 10 percent had put more than half their content in at least two archives. And a full third seemed to be doing no organized archiving at all. At the individual publication level, under 60 percent were present in at least one archive, and over a quarter didn't appear to be in any of the archives at all. (Another 14 percent were published too recently to have been archived or had incomplete records.)
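
Eve's per-publisher breakdown is easy to reproduce on any dataset of this shape. A minimal sketch, assuming each record is just a DOI, its publisher, and the set of archives the document was found in (the record layout and names are invented for illustration):

```python
from collections import defaultdict

def publisher_coverage(records, min_archives=3):
    """For each publisher, the fraction of its DOIs found in at least
    `min_archives` archives -- the study's 'three or more archives' test."""
    totals = defaultdict(int)
    well_archived = defaultdict(int)
    for doi, publisher, archives in records:
        totals[publisher] += 1
        if len(archives) >= min_archives:
            well_archived[publisher] += 1
    return {p: well_archived[p] / totals[p] for p in totals}

records = [
    ("10.1000/a1", "BigPub", {"LOCKSS", "CLOCKSS", "archive.org"}),
    ("10.1000/a2", "BigPub", {"LOCKSS", "CLOCKSS", "archive.org"}),
    ("10.1000/b1", "SmallPub", set()),  # no organized archiving at all
]
shares = publisher_coverage(records)
meets_cutoff = {p for p, s in shares.items() if s >= 0.75}
print(meets_cutoff)  # {'BigPub'}
```

The same kind of tally, run over the 7 million DOIs from the Crossref database, is what produces the percentages reported above; the hard part is scale, not logic.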

The good news is that large academic publishers appear to be reasonably good about getting things into archives; most of the unarchived issues stem from smaller publishers. Eve acknowledges that the study has limits, primarily in that there may be additional archives he hasn't checked. There are some prominent dark archives that he didn't have access to, as well as things like Sci-hub, which violates copyright in order to make material from for-profit publishers available to the public. Finally, individual publishers may have their own archiving system in place that could keep publications from disappearing. The risk here is that, ultimately, we may lose access to some academic research.

Transportation

California Approves Waymo Robotaxi Services In LA, SF Neighboring Cities (reuters.com) 12

The California Public Utilities Commission (CPUC) approved Alphabet's Waymo robotaxi service to operate in Los Angeles and some cities near San Francisco. Reuters reports: Waymo, which already operates in San Francisco and Phoenix, applied on Jan 19 to expand its driverless services, saying it would work with policymakers, first responders and community organizations. Last month, the CPUC suspended the application "for further staff review." "Waymo may begin fared driverless passenger service operations in the specified areas of Los Angeles and the San Francisco Peninsula, effective today," the regulator said in a notice posted to its website Friday.

Transportation

Waymo's Application To Expand California Robotaxi Operations Paused By Regulators (techcrunch.com) 15

The California Public Utilities Commission's Consumer Protection and Enforcement Division (CPED) has suspended Waymo's application to expand its robotaxi service in Los Angeles and San Mateo counties, putting "an abrupt halt to the company's aspirations to expand where it can operate -- at least until June 2024," reports TechCrunch. It does not, however, change the autonomous car company's ability to commercially operate its fleet in San Francisco. From the report: The CPED said on its website that the application has been suspended for further staff review. The "suspension" of an advice letter is a procedural part of the CPUC's standard and robust review process, according to Waymo. San Mateo County Board of Supervisors vice president David J. Canepa took a different stance, however.

"Since Waymo has stalled any meaningful discussions on its expansion plans into Silicon Valley, the CPUC has put the brakes on its application to test robotaxi service virtually unfettered both in San Mateo and Los Angeles counties," Canepa said. "This will provide the opportunity to fully engage the autonomous vehicle maker on our very real public safety concerns that have caused all kinds of dangerous situations for firefighters and police in neighboring San Francisco."

Waymo noted that it has reached out to two dozen government and business organizations as part of its outreach effort, including officials in cities throughout San Mateo County such as Burlingame, Daly City and Foster City, the San Mateo County Sheriff's Office and local chambers of commerce. [...] The city of South San Francisco, Los Angeles County Department of Transportation, San Francisco County Transportation Authority, San Mateo County Office of the County Attorney and the San Francisco Taxi Workers Alliance have sent letters opposing the expansion.

Cloud

Why Companies Are Leaving the Cloud (infoworld.com) 176

InfoWorld reports: Don't look now, but 25% of organizations surveyed in the United Kingdom have already moved half or more of their cloud-based workloads back to on-premises infrastructures. This is according to a recent study by Citrix, a Cloud Software Group business unit. The survey questioned 350 IT leaders on their current approaches to cloud computing. The survey also showed that 93% of respondents had been involved with a cloud repatriation project in the past three years. That is a lot of repatriation. Why?

Security issues and high project expectations were reported as the top motivators (33%) for relocating some cloud-based workloads back to on-premises infrastructures such as enterprise data centers, colocation providers, and managed service providers (MSPs). Another significant driver was the failure to meet internal expectations, at 24%... Those surveyed also cited unexpected costs, performance issues, compatibility problems, and service downtime. The most common motivator for repatriation I've been seeing is cost. In the survey, more than 43% of IT leaders found that moving applications and data from on-premises to the cloud was more expensive than expected.

Although not a part of the survey, the cost of operating applications and storing data on the cloud has also been significantly more expensive than most enterprises expected. The cost-benefit analysis of cloud versus on-premises infrastructure varies greatly depending on the organization... The cloud is a good fit for modern applications that leverage a group of services, such as serverless, containers, or clustering. However, that doesn't describe most enterprise applications.
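
The underlying comparison is a simple break-even calculation. A sketch with entirely illustrative numbers (the rates and workload below are assumptions, not figures from the survey):

```python
HOURS_PER_MONTH = 730

def monthly_cloud_cost(vcpus, storage_gb, egress_gb,
                       vcpu_hr=0.05, gb_month=0.023, egress_gb_rate=0.09):
    """Pay-as-you-go: per-hour compute plus per-GB storage and egress."""
    return (vcpus * HOURS_PER_MONTH * vcpu_hr
            + storage_gb * gb_month
            + egress_gb * egress_gb_rate)

def monthly_onprem_cost(hardware_capex, amortize_months, ops_per_month):
    """Owned hardware: capex amortized over its service life plus fixed ops."""
    return hardware_capex / amortize_months + ops_per_month

# A steady, predictable workload: 40 vCPUs around the clock,
# 100 TB stored, 20 TB/month of egress.
cloud = monthly_cloud_cost(vcpus=40, storage_gb=100_000, egress_gb=20_000)
onprem = monthly_onprem_cost(hardware_capex=150_000, amortize_months=48,
                             ops_per_month=1_500)
print(f"cloud ${cloud:,.0f}/mo vs on-prem ${onprem:,.0f}/mo")
```

For bursty or rapidly changing workloads the elasticity of the cloud flips this result, which is why the calculus has to be run per application rather than once per company.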

The article cautions, "Don't feel sorry for the public cloud providers."

"Any losses from repatriation will be quickly replaced by the vast amounts of infrastructure needed to build and run AI-based systems... As I've said a few times here, cloud conferences have become genAI conferences, which will continue for several years."

Electronic Frontier Foundation

EFF Challenges 'Legal Bullying' of Sites Reporting on Alleged Appin 'Hacking-for-Hire' (eff.org) 16

Long-time Slashdot reader v3rgEz shared this report from MuckRock: Founded in 2003, Appin has been described as a cybersecurity company and an educational consulting firm. Appin was also, according to Reuters reporting and extensive marketing materials, a prolific "hacking for hire" service, stealing information from politicians and militaries as well as businesses and even unfaithful spouses.

Legal letters, being sent to newsrooms and organizations around the world, are trying to remove that story from the internet — and are often succeeding.

The Reuters investigation, published in November, was based in part on corroborated marketing materials detailing a range of "hacking for hire" services Appin provided. After publication, Reuters was targeted by a legal campaign to shut down critical reporting, an effort which expanded to target news organizations around the world, including MuckRock. With the help of the Electronic Frontier Foundation, MuckRock is now sharing more details on this effort while continuing to host materials the Association of Appin Training Centers has gone to great lengths to remove from the web.

The original story, by Reuters' staff writers Raphael Satter, Zeba Siddiqui and Chris Bing, is no longer available on the Reuters website. Following a preliminary court ruling issued in New Delhi, the story has been replaced with an editor's note, stating that Reuters "stands by its reporting and plans to appeal the decision." The story has since been reposted on Distributed Denial of Secrets, while the primary source materials that Reuters reporters and editors used in their reporting are available on MuckRock's DocumentCloud service.

Representatives of the company's founders denied the assertions in the Reuters story, insisting instead that rogue actors "were misusing the Appin name."

TechDirt titled their article "Sorry Appin, We're Not Taking Down Our Article About Your Attempts To Silence Reporters."

And Thursday the EFF wrote its own take on "a campaign of bullying and censorship seeking to wipe out stories about the mercenary hacking campaigns of a less well-known company, Appin Technology, in general, and the company's cofounder, Rajat Khare, in particular." These efforts follow a familiar pattern: obtain a court order in a friendly international jurisdiction and then misrepresent the force and substance of that order to bully publishers around the world to remove their stories. We are helping to push back on that effort, which seeks to transform a very limited and preliminary Indian court ruling into a global takedown order. We are representing Techdirt and MuckRock Foundation, two of the news entities asked to remove Appin-related content from their sites... On their behalf, we challenged the assertions that the Indian court either found the Reuters reporting to be inaccurate or that the order requires any entities other than Reuters and Google to do anything. We requested a response — so far, we have received nothing...

At the time of this writing, more than 20 of those stories have been taken down by their respective publications, many at the request of an entity called the "Association of Appin Training Centers" (AOATC)... It is not clear who is behind the Association of Appin Training Centers, but according to documents surfaced by Reuters, the organization didn't exist until after the lawsuit was filed against Reuters in Indian court...

If a relatively obscure company like AOATC or an oligarch like Rajat Khare can succeed in keeping their name out of the public discourse with strategic lawsuits, it sets a dangerous precedent for other larger, better-resourced, and more well-known companies such as Dark Matter or NSO Group to do the same. This would be a disaster for civil society, a disaster for security research, and a disaster for freedom of expression.

AI

Ask Slashdot: Could a Form of Watermarking Prevent AI Deep Faking? (msn.com) 67

An opinion piece in the Los Angeles Times imagines a world after "the largest coordinated deepfake attack in history... a steady flow of new deepfakes, mostly manufactured in Russia, North Korea, China and Iran." The breakthrough actually came in early 2026 from a working group of digital journalists at U.S. and international news organizations. Their goal was to find a way to keep deepfakes out of news reports... Journalism organizations formed the FAC Alliance — "Fact Authenticated Content" — based on a simple insight: There was already far too much AI fakery loose in the world to try to enforce a watermarking system for dis- and misinformation. And even the strictest labeling rules would simply be ignored by bad actors. But it would be possible to watermark pieces of content that aren't deepfakes.

And so was born the voluntary FACStamp on May 1, 2026...

The newest phones, tablets, cameras, recorders and desktop computers all include software that automatically inserts the FACStamp code into every piece of visual or audio content as it's captured, before any AI modification can be applied. This proves that the image, sound or video was not generated by AI. You can also download the FAC app, which does the same for older equipment... [T]o retain the FACStamp, your computer must be connected to the non-profit FAC Verification Center. The center's computers detect if the editing is minor — such as cropping or even cosmetic face-tuning — and the stamp remains. Any larger manipulation, from swapping faces to faking backgrounds, and the FACStamp vanishes.
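
The FACStamp is fictional, but the mechanism described resembles standard capture-time content authentication: hash the raw bytes on the device, sign the hash before any editing software touches it, and let a verification service decide whether later edits preserve the stamp. A minimal sketch, using an HMAC with a device key as a stand-in for a real hardware-backed signature (all names here are illustrative, not an actual FACStamp design):

```python
import hashlib
import hmac

DEVICE_KEY = b"key-provisioned-into-the-camera"  # illustrative stand-in

def stamp(content: bytes) -> str:
    """Sign the content's hash at capture time, before any AI tooling runs."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """A verification service recomputes the tag; any byte change breaks it."""
    return hmac.compare_digest(stamp(content), tag)

photo = b"raw sensor bytes"
tag = stamp(photo)
print(verify(photo, tag))            # True
print(verify(photo + b"edit", tag))  # False: any modification fails
```

The hard part is exactly what the article glosses over: deciding that a crop is "minor" while a face swap is not requires the verification center to inspect the edit and re-issue the stamp, since a raw signature like the one above fails on any change at all.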

It turned out that plenty of people could use the FACStamp. Internet retailers embraced FACStamps for videos and images of their products. Individuals soon followed, using FACStamps to sell goods online — when potential buyers are judging a used pickup truck or secondhand sofa, it's reassuring to know that the image wasn't spun out or scrubbed up by AI.

The article envisions the world of 2028, with the authentication stamp appearing on everything from social media posts to dating app profiles: Even the AI industry supports the use of FACStamps. During training runs on the internet, if an AI program absorbs excessive amounts of AI-generated rather than authentic data, it may undergo "model collapse" and become wildly inaccurate. So the FACStamp helps AI companies train their models solely on reality. A bipartisan group of senators and House members plans to introduce the Right to Reality Act when the next Congress opens in January 2029. It will mandate the use of FACStamps in multiple sectors, including local government, shopping sites and investment and real estate offerings. Counterfeiting a FACStamp would become a criminal offense. Polling indicates widespread public support for the act, and the FAC Alliance has already begun a branding campaign.

But all this leaves Slashdot reader Bruce66423 with a question. "Is it really technically possible to achieve such a clear distinction, or would, in practice, AI be able to replicate the necessary authentication?"

The Media

Did a US Hedge Fund Help Destroy Local Journalism? (editorandpublisher.com) 125

"What is lost when billionaires with no background nor interest in a civic mission, who are only concerned with profiteering, take over our most influential news organizations? What new models of news gathering, and dissemination show promise for our increasingly digital age? What can the public do to preserve and support vibrant journalism?"

That's a synopsis posted about the documentary Stripped for Parts: American Journalism on the Brink, cited by the long-standing news industry magazine Editor and Publisher (which dates back to 1901). This week its podcast interviewed filmmaker Rick Goldsmith about his 90-minute documentary, which they say "tells the tale" of how hedge fund Alden Global Capital clandestinely entered into the news publishing industry in a big way — and then "dismantled local newspapers 'piece by piece,' creating a crisis within the communities they serve, leaving 'news deserts' and 'ghost papers' in their wake." [Goldsmith] spent more than five years creating his latest work... a film that tells the tale of how the newspaper business model is faltering, not just because of the loss of advertising and digital disruption, but also because of capitalist greed, as hedge funds and corporate America buy papers, sell their assets and leave the communities they serve without their local "voice" and a final check on power.
On the podcast, Goldsmith notes that in many cases a paper's assets "were the newspaper buildings and the printing presses... These were worth in many cases more than the newspapers themselves." After laying off staff, the hedge fund could also downsize out of those buildings.

By 2021 Alden owned 100 newspapers and 200 more publications — and then acquired Tribune Publishing to become America's second-largest newspaper publisher.

The hedge fund currently owns several newspapers in the San Francisco Bay Area, according to SFGate: At first, Goldsmith's documentary might seem like it's delivering more bad news. But it avoids despair, offering hope on the horizon for news deserts where aggressive reporting is needed. It introduces the notion that the traditional capitalist business model is failing the news industry, and that nonprofit organizations must be providers of local coverage.

Earth

AI and Satellite Imagery Used To Create Clearest Map Yet of Human Activity At Sea (theverge.com) 5

An anonymous reader quotes a report from The Verge: Using satellite imagery and AI, researchers have mapped human activity at sea with more precision than ever before. The effort exposed a huge amount of industrial activity that previously flew under the radar, from suspicious fishing operations to an explosion of offshore energy development. The maps were published today in the journal Nature. The research led by Google-backed nonprofit Global Fishing Watch revealed that a whopping three-quarters of the world's industrial fishing vessels are not publicly tracked. Up to 30 percent of transport and energy vessels also escape public tracking. Those blind spots could hamper global conservation efforts, the researchers say. To better protect the world's oceans and fisheries, policymakers need a more accurate picture of where people are exploiting resources at sea.

Until now, Global Fishing Watch and other organizations relied primarily on the maritime Automatic Identification System (AIS) to see what was happening at sea. The system tracks vessels that carry a box that sends out radio signals, and the data has been used in the past to document overfishing and forced labor on vessels. Even so, there are major limitations with the system. Requirements to carry AIS vary by country and vessel type. And it's pretty easy for someone to turn the box off when they want to avoid detection, or cruise through locations where signal strength is spotty. To fill in the blanks, Global Fishing Watch's David Kroodsma and his colleagues analyzed 2,000 terabytes of imagery from the European Space Agency's Sentinel-1 satellite constellation. Instead of taking traditional optical imagery, which is like snapping photos with a camera, Sentinel-1 uses advanced radar instruments to observe the surface of the Earth. Radar can penetrate clouds and "see" in the dark -- and it was able to spot offshore activity that AIS missed.

Since 2,000 terabytes is an enormous amount of data to crunch, the researchers developed three deep-learning models to classify each detected vessel, estimate their size, and sort out different kinds of offshore infrastructure. They monitored some 15 percent of the world's oceans where 75 percent of industrial activity takes place, paying attention to both vessel movements and the development of stationary offshore structures like oil rigs and wind turbines between 2017 and 2021. While fishing activity dipped at the onset of the covid-19 pandemic in 2020, they found dense vessel traffic in areas that "previously showed little to no vessel activity" in public tracking systems -- particularly around South and Southeast Asia, and the northern and western coasts of Africa.
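
The heart of such an analysis is matching radar detections against AIS broadcasts: a detection with no AIS ping close by in space and time is a candidate "dark" vessel. A minimal sketch (the thresholds and record layout are illustrative, not those used in the Nature study):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def dark_vessels(detections, ais_pings, max_km=2.0, max_minutes=10):
    """Flag radar detections with no AIS broadcast inside the spatial and
    temporal window. Each record is (lat, lon, minutes_since_epoch)."""
    return [
        det for det in detections
        if not any(
            haversine_km(det[0], det[1], p[0], p[1]) <= max_km
            and abs(det[2] - p[2]) <= max_minutes
            for p in ais_pings
        )
    ]

detections = [(1.0, 103.8, 0), (1.5, 104.5, 0)]
ais_pings = [(1.001, 103.801, 3)]           # only the first vessel broadcasts
print(dark_vessels(detections, ais_pings))  # [(1.5, 104.5, 0)]
```

A real pipeline would replace the brute-force scan with spatial indexing and add the deep-learning classifiers described above, but this join between imagery-derived detections and AIS tracks is the step that makes untracked activity countable.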

A boom in offshore energy development was also visible in the data. Wind turbines outnumbered oil structures by the end of 2020. Turbines made up 48 percent of all ocean infrastructure by the following year, while oil structures accounted for 38 percent. Nearly all of the offshore wind development took place off the coasts of northern Europe and China. In the Northeast US, clean energy opponents have tried to falsely link whale deaths to upcoming offshore wind development even though evidence points to vessel strikes being the problem. Oil structures have a lot more vessels swarming around them than wind turbines. Tank vessels are used at times to transport oil to shore as an alternative to pipelines. The number of oil structures grew 16 percent over the five years studied. And offshore oil development was linked to five times as much vessel traffic globally as wind turbines in 2021. "The actual amount of vessel traffic globally from wind turbines is tiny, compared to the rest of traffic," Kroodsma says.

Education

Amazon, Microsoft, and Google Help Teachers Incorporate AI Into CS Education 16

Long-time Slashdot reader theodp writes: Earlier this month, Amazon came under fire as the Los Angeles Times reported on a leaked confidential document that "reveals an extensive public relations strategy by Amazon to donate to community groups, school districts, institutions and charities" to advance the company's business objectives. "We will not fund organizations that have positioned themselves antagonistically toward our interests," explained Amazon officials of the decision to cut off donations to the Cheech Marin Center for Chicano Art and Culture after it ran an exhibit ("Burn Them All Down") that the artist called a commentary on how public officials were not listening to community concerns about the growing number of Amazon warehouses in Southern California's Inland Empire neighborhoods...

Interestingly on the same day the Los Angeles Times was sounding the alarm on Amazon philanthropy, the White House and National Science Foundation (NSF) held a White House-hosted event on K-12 AI education. There it was announced that the Amazon-backed nonprofit Computer Science Teachers Association (CSTA) will develop new K-12 computer science standards that incorporate AI into foundational computer science education with support from the NSF, Amazon, Google, and Microsoft. CSTA separately announced it had received a $1.5 million donation from Amazon to "support efforts to update the CSTA K-12 Computer Science Standards to reflect the rapid advancements in technologies like artificial intelligence (AI)," adding that the CSTA standards — which CSTA credited Microsoft Philanthropies for helping to advance — "serve as a model for CS teaching and learning across grades K-12" in 42 states.

The announcements, the White House noted, came during Computer Science Education Week, the signature event of which is Amazon, Google, and Microsoft-backed Code.org's Hour of Code (which was AI-themed this year), for which Amazon, Google, and Microsoft — not teachers — provided the event's signature tutorials used by the nation's K-12 students. Amazon, Google, and Microsoft are also advisors to Code.org's TeachAI initiative, which was launched in May "to provide thought leadership to guide governments and educational leaders in aligning education with the needs of an increasingly AI-driven world and connecting the discussion of teaching with AI to teaching about AI and computer science."

Businesses

OpenAI's Nonprofit Arm Showed Revenue of $45,000 Last Year (cnbc.com) 20

Despite OpenAI being valued at $86 billion by private investors, its nonprofit parent reported just $44,485 in revenue in 2022, almost entirely from investment income. CNBC reports: That's from the nonprofit parent's 990 filing with the Internal Revenue Service, a form that has to be filled out by organizations wishing to maintain their tax-exempt status. Federal standards don't require audited financial statements from nonprofits. In its home state of California, OpenAI was able to avoid submitting audited financials for 2022 because the foundation's stated revenue was below the $2 million reporting threshold. The last time OpenAI filed with the state was 2017, when revenue was $33.2 million, or more than 700 times what the foundation reported for 2022.

For all its talk of openness, OpenAI's financials remain a black box. Created as a nonprofit in 2015, OpenAI launched a so-called capped-profit entity in 2019, enabling it to raise billions of dollars in outside funding and attain attributes of a tech startup, such as the ability to hand out equity to employees. The for-profit side of the house went on to develop ChatGPT, the chatbot that took the world by storm late last year and kicked off the generative AI boom. [...]

Thad Calabrese, a professor of public and nonprofit financial management at New York University, said OpenAI's current status is confusing, and is unlike anything he has seen in the nonprofit world. He said OpenAI could give up its nonprofit status, and he cited the Blue Cross Blue Shield Association, which in 1994 allowed associated nonprofit medical insurance plans to switch into for-profit entities. "There's no real need to have the nonprofit," Calabrese said. "If you want to be a startup, be a startup." Regarding OpenAI's reporting with the IRS, he said "fundamentally you can't really get a holistic sense of these organizations when you don't have consolidated financial statements."

AI

Meta, IBM Create Industrywide AI Alliance To Share Technology (bloomberg.com) 6

Meta and IBM are joining more than 40 companies and organizations to create an industry group dedicated to open source artificial intelligence work, aiming to share technology and reduce risks. From a report: The coalition, called the AI Alliance, will focus on the responsible development of AI technology, including safety and security tools, according to a statement Tuesday. The group also will look to increase the number of open source AI models -- rather than the proprietary systems favored by some companies -- develop new hardware and team up with academic researchers.

Proponents of open source AI technology, which is made public by developers for others to use, see the approach as a more efficient way to cultivate the highly complex systems. Over the past few months, Meta has been releasing open source versions of its large language models, which are the foundation of AI chatbots.

Databases

Online Atrocity Database Exposed Thousands of Vulnerable People In Congo (theintercept.com) 6

An anonymous reader quotes a report from The Intercept: A joint project of Human Rights Watch and New York University to document human rights abuses in the Democratic Republic of the Congo has been taken offline after exposing the identities of thousands of vulnerable people, including survivors of mass killings and sexual assaults. The Kivu Security Tracker is a "data-centric crisis map" of atrocities in eastern Congo that has been used by policymakers, academics, journalists, and activists to "better understand trends, causes of insecurity and serious violations of international human rights and humanitarian law," according to the deactivated site. This includes massacres, murders, rapes, and violence against activists and medical personnel by state security forces and armed groups, the site said. But the KST's lax security protocols appear to have accidentally doxxed up to 8,000 people, including activists, sexual assault survivors, United Nations staff, Congolese government officials, local journalists, and victims of attacks, an Intercept analysis found. Hundreds of documents -- including 165 spreadsheets -- that were on a public server contained the names, locations, phone numbers, and organizational affiliations of those sources, as well as sensitive information about some 17,000 "security incidents," such as mass killings, torture, and attacks on peaceful protesters.

The data was available via KST's main website, and anyone with an internet connection could access it. The information appears to have been publicly available on the internet for more than four years. [...] The spreadsheets, along with the main KST website, were taken offline on October 28, after investigative journalist Robert Flummerfelt, one of the authors of this story, discovered the leak and informed Human Rights Watch and New York University's Center on International Cooperation. HRW subsequently assembled what one source close to the project described as a "crisis team." Last week, HRW and NYU's Congo Research Group, the entity within the Center on International Cooperation that maintains the KST website, issued a statement that announced the takedown and referred in vague terms to "a security vulnerability in its database," adding, "Our organizations are reviewing the security and privacy of our data and website, including how we gather and store information and our research methodology." The statement made no mention of publicly exposing the identities of sources who provided information on a confidential basis. [...] The Intercept has not found any instances of individuals affected by the security failures, but it's currently unknown if any of the thousands of people involved were harmed.
"We deeply regret the security vulnerability in the KST database and share concerns about the wider security implications," Human Rights Watch's chief communications officer, Mei Fong, told The Intercept. Fong said in an email that the organization is "treating the data vulnerability in the KST database, and concerns around research methodology on the KST project, with the utmost seriousness." Fong added, "Human Rights Watch did not set up or manage the KST website. We are working with our partners to support an investigation to establish how many people -- other than the limited number we are so far aware of -- may have accessed the KST data, what risks this may pose to others, and next steps. The security and confidentiality of those affected is our primary concern."
United States

One-Third of US Newspapers As of 2005 Will Be Gone By 2024 (axios.com) 109

Sara Fischer reports via Axios: The decline of local newspapers accelerated so rapidly in 2023 that analysts now believe the U.S. will have lost one-third of the newspapers it had as of 2005 by the end of next year -- rather than in 2025, as originally predicted. There are roughly 6,000 newspapers left in America, down from 8,891 in 2005, according to a new report from Northwestern's Medill School of Journalism, Media, Integrated Marketing Communications. "We're almost at a one-third loss now and we'll certainly hit that pace next year," said the report's co-authors -- Penelope Muse Abernathy, a visiting professor at Medill, and Sarah Stonbely, director of Medill's State of Local News Project. Of the papers that still survive, a majority (4,790) publish weekly, not daily.

Over the past two years, newspapers continued to vanish at an average rate of more than two per week, leaving 204 U.S. counties, or 6.4%, without any local news outlet. Roughly half of all U.S. counties (1,562) are now served by only one remaining local news source -- typically a weekly newspaper. Abernathy and Stonbely estimate that 228 of those 1,562 counties, or roughly 7% of all U.S. counties, are at high risk of losing their last remaining local news outlet.
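As a quick sanity check, the report's headline figures hang together; a back-of-the-envelope sketch (the ~3,143 total-county figure is an outside assumption, not taken from the report):

```python
# Back-of-the-envelope check of the figures quoted above.
papers_2005, papers_now = 8891, 6000
total_counties = 3143  # approximate US counties and county equivalents (assumption)

loss = 1 - papers_now / papers_2005
print(f"{loss:.1%} of 2005's newspapers are gone")          # ~32.5%, "almost" one-third
print(f"{204 / total_counties:.1%} of counties lack any local outlet")
print(f"{1562 / total_counties:.1%} rely on a single source")    # "roughly half"
print(f"{228 / total_counties:.1%} risk losing their last outlet")  # "roughly 7%"
```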

There isn't enough investment in digital news replacements to stop the spread of news deserts in America. The footprint for alternative local news outlets is tiny and they are mostly clustered around metro areas that already have some local coverage. The report estimates that -- for outlets focused on state and local news -- there are roughly 550 digital-only news sites, 720 ethnic media organizations and 215 public broadcasting stations in America, compared to 6,000 newspapers.
The authors argue that the divide between those who have access to quality local news and those who don't "poses a far-reaching crisis for our democracy as it simultaneously struggles with political polarization, a lack of civic engagement and the proliferation of misinformation and disinformation online."
Red Hat Software

CIQ, Oracle and SUSE Unite Behind OpenELA To Take on Red Hat Enterprise Linux (zdnet.com) 18

An anonymous reader shares a report: When Mike McGrath, Red Hat's vice president of Core Platforms, announced that Red Hat was putting new restrictions on who could access Red Hat Enterprise Linux (RHEL)'s code, other Linux companies that depended on RHEL's code for their own distro releases were, in a word, unhappy. Three of them, CIQ, Oracle, and SUSE, came together to form the Open Enterprise Linux Association (OpenELA). Their united goal was to foster "the development of distributions compatible with Red Hat Enterprise Linux (RHEL) by providing open and free enterprise Linux source code." Now, the first OpenELA code release is available.

As Thomas Di Giacomo, SUSE's chief technology and product officer, said in a statement, "We're pleased to deliver on our promise of making source code available and to continue our work together to provide choice to our customers while we ensure that Enterprise Linux source code remains freely accessible to the public." Why are they doing this? Gregory Kurtzer, CIQ's CEO, and Rocky Linux's founder, explained: "Organizations worldwide standardized on CentOS because it was freely available, followed the Enterprise Linux standard, and was well supported. After CentOS was discontinued, it left not only a gaping hole in the ecosystem but also clearly showed how the community needs to come together and do better. OpenELA is exactly that -- the community's answer to ensuring a collaborative and stable future for all professional IT departments and enterprise use cases."

AI

G7 Nations Will Announce an 'AI Code of Conduct' for Companies Building AI (reuters.com) 42

The seven industrial countries known as the "G7" — America, Canada, Japan, Germany, France, Italy, and Britain — will agree on a code of conduct Monday for companies developing advanced AI systems, reports Reuters.

The news comes "as governments seek to mitigate the risks and potential misuse of the technology," Reuters reports — citing a G7 document. The 11-point code "aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems", the G7 document said. It "is meant to help seize the benefits and address the risks and challenges brought by these technologies".

The code urges companies to take appropriate measures to identify, evaluate and mitigate risks across the AI lifecycle, as well as tackle incidents and patterns of misuse after AI products have been placed on the market. Companies should post public reports on the capabilities, limitations and the use and misuse of AI systems, and also invest in robust security controls.

Microsoft

To 'Evolve' Windows Authentication, Microsoft Wants to Eventually Disable NTLM in Windows 11 (neowin.net) 68

An anonymous reader shared this report from Neowin: Windows has used Kerberos as its main authentication protocol for over 20 years. However, in certain circumstances, the OS has to fall back to another method, NTLM (NT LAN Manager). Today, Microsoft announced that it is expanding the use of Kerberos, with a plan to eventually ditch NTLM altogether.

In a blog post, Microsoft stated that NTLM continues to be used by some businesses and organizations for Windows authentication because it "doesn't require local network connection to a Domain Controller." It also is "the only protocol supported when using local accounts" and it "works when you don't know who the target server is." Microsoft states:

These benefits have led to some applications and services hardcoding the use of NTLM instead of trying to use other, more modern authentication protocols like Kerberos. Kerberos provides better security guarantees and is more extensible than NTLM, which is why it is now a preferred default protocol in Windows. The problem is that while businesses can turn off NTLM for authentication, those hardwired apps and services could experience issues. That's why Microsoft has added two new authentication features to Kerberos.

Microsoft's blog post calls it "the evolution of Windows authentication," arguing that "As Windows evolves to meet the needs of our ever-changing world, the way we protect users must also evolve to address modern security challenges..." So, "our team is building new features for Windows 11."
  • Initial and Pass Through Authentication Using Kerberos, or IAKerb, "a public extension to the industry standard Kerberos protocol that allows a client without line-of-sight to a Domain Controller to authenticate through a server that does have line-of-sight."
  • A local Key Distribution Center (KDC) for Kerberos, "built on top of the local machine's Security Account Manager so remote authentication of local user accounts can be done using Kerberos."
  • "We are also fixing hard-coded instances of NTLM built into existing Windows components... shifting these components to use the Negotiate protocol so that Kerberos can be used instead of NTLM... NTLM will continue to be available as a fallback to maintain existing compatibility."
  • "We are also introducing improved NTLM auditing and management functionality to give your organization more insight into your NTLM usage and better control for removing it."

"Reducing the use of NTLM will ultimately culminate in it being disabled in Windows 11. We are taking a data-driven approach and monitoring reductions in NTLM usage to determine when it will be safe to disable."
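The Negotiate mechanism Microsoft describes is essentially a preference-ordered handshake: try Kerberos first, and fall back to NTLM only when Kerberos isn't available. A toy sketch of that selection logic (purely illustrative; the `negotiate` function and its argument are made up for this example and are not the real SSPI/SPNEGO API):

```python
# Toy model of Negotiate-style fallback: prefer Kerberos, fall back to
# NTLM only when Kerberos is unavailable. Illustrates the selection
# logic only -- not Windows SSPI or the SPNEGO wire protocol.

def negotiate(supported_by_server):
    """Pick the strongest mutually supported protocol, preferring Kerberos."""
    for proto in ("Kerberos", "NTLM"):  # preference order
        if proto in supported_by_server:
            return proto
    raise RuntimeError("no mutually supported authentication protocol")

print(negotiate({"Kerberos", "NTLM"}))  # picks Kerberos when both are offered
print(negotiate({"NTLM"}))              # falls back to NTLM
```

The point of routing hard-coded NTLM callers through a layer like this is that once Kerberos works everywhere (via IAKerb and the local KDC), the fallback branch simply stops being taken and NTLM can be disabled without breaking callers.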


Encryption

Mathematician Warns US Spies May Be Weakening Next-Gen Encryption (newscientist.com) 78

Matthew Sparkes reports via NewScientist: A prominent cryptography expert has told New Scientist that a US spy agency could be weakening a new generation of algorithms designed to protect against hackers equipped with quantum computers. Daniel Bernstein at the University of Illinois Chicago says that the US National Institute of Standards and Technology (NIST) is deliberately obscuring the level of involvement the US National Security Agency (NSA) has in developing new encryption standards for "post-quantum cryptography" (PQC). He also believes that NIST has made errors -- either accidental or deliberate -- in calculations describing the security of the new standards. NIST denies the claims.

Bernstein alleges that NIST's calculations for one of the upcoming PQC standards, Kyber512, are "glaringly wrong," making it appear more secure than it really is. He says that NIST multiplied two numbers together when it would have been more correct to add them, resulting in an artificially high assessment of Kyber512's robustness to attack. "We disagree with his analysis," says Dustin Moody at NIST. "It's a question for which there isn't scientific certainty and intelligent people can have different views. We respect Dan's opinion, but don't agree with what he says." Moody says that Kyber512 meets NIST's "level one" security criteria, which makes it at least as hard to break as a commonly used existing algorithm, AES-128. That said, NIST recommends that, in practice, people should use a stronger version, Kyber768, which Moody says was a suggestion from the algorithm's developers.
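The arithmetic behind the dispute is easy to illustrate. Using made-up exponents (these are not NIST's or Bernstein's actual Kyber512 figures), multiplying an operation count by a per-operation memory-cost factor adds the exponents, while adding the two costs leaves the larger exponent essentially unchanged:

```python
import math

# Illustrative exponents only -- not the actual Kyber512 estimates.
log_ops = 137   # suppose an attack needs 2**137 bit operations
log_mem = 25    # and memory access is modeled as a 2**25 cost factor

# Multiplying the counts adds the exponents: 2**137 * 2**25 = 2**162.
multiplied_bits = log_ops + log_mem

# Adding them is dominated by the larger term: 2**137 + 2**25 ~ 2**137.
added_bits = math.log2(2**log_ops + 2**log_mem)

print(multiplied_bits)    # 162 -- looks far harder to break
print(round(added_bits))  # 137 -- barely moved
```

Under these toy numbers, the choice between multiplying and adding is worth 25 bits of claimed security, which is why the modeling question matters for whether Kyber512 clears the AES-128 bar.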

NIST is currently in a period of public consultation and hopes to reveal the final standards for PQC algorithms next year so that organizations can begin to adopt them. The Kyber algorithm seems likely to make the cut as it has already progressed through several layers of selection. Given its secretive nature, it is difficult to say for sure whether or not the NSA has influenced the PQC standards, but there have long been suggestions and rumors that the agency deliberately weakens encryption algorithms. In 2013, The New York Times reported that the agency had a budget of $250 million for the task, and intelligence agency documents leaked by Edward Snowden in the same year contained references to the NSA deliberately placing a backdoor in a cryptography algorithm, although that algorithm was later dropped from official standards.

Open Source

Europe Mulls Open Sourcing TETRA Emergency Services' Encryption Algorithms (theregister.com) 18

Jessica Lyons Hardcastle reports via The Register: The European Telecommunications Standards Institute (ETSI) may open source the proprietary encryption algorithms used to secure emergency radio communications after a public backlash over security flaws found this summer. "The ETSI Technical Committee in charge of TETRA algorithms is discussing whether to make them public," Claire Boyer, a spokesperson for the European standards body, told The Register. The committee will discuss the issue at its next meeting on October 26, she said, adding: "If the consensus is not reached, it will go to a vote."

TETRA is the Terrestrial Trunked Radio protocol, which is used in Europe, the UK, and other countries to secure radio communications used by government agencies, law enforcement, military and emergency services organizations. In July, a Netherlands security biz uncovered five vulnerabilities in TETRA, two deemed critical, that could allow criminals to decrypt communications, including in real-time, to inject messages, deanonymize users, or set the session key to zero for uplink interception. At the time ETSI downplayed the flaws, which it said had been fixed last October, and noted that "it's not aware of any active exploitation of operational networks."

It did, however, face criticism from the security community over its response to the vulnerabilities -- and over the proprietary nature of the encryption algorithms, which makes proper pentesting of the emergency network system more difficult.
"This whole idea of secret encryption algorithms is crazy, old-fashioned stuff," said security author Kim Zetter, who first reported the story. "It's very 1960s and 1970s and quaint. If you're not publishing [intentionally] weak algorithms, I don't know why you would keep the algorithms secret."
