Privacy

Hackers Are Sending Fraudulent Police Data Requests To Tech Giants To Steal People's Private Information (gizmodo.com) 14

An anonymous reader quotes a report from TechCrunch: The FBI is warning that hackers are obtaining private user information — including emails and phone numbers — from U.S.-based tech companies by compromising government and police email addresses to submit "emergency" data requests. The FBI's public notice filed this week is a rare admission from the federal government about the threat from fraudulent emergency data requests, a legal process designed to help police and federal authorities obtain information from companies to respond to immediate threats affecting someone's life or property. The abuse of emergency data requests is not new, and has been widely reported in recent years. Now, the FBI warns that it saw an "uptick" around August in criminal posts online advertising access to or conducting fraudulent emergency data requests, and that it was going public for awareness.

"Cyber-criminals are likely gaining access to compromised US and foreign government email addresses and using them to conduct fraudulent emergency data requests to US based companies, exposing the personal information of customers to further use for criminal purposes," reads the FBI's advisory. [...] The FBI said in its advisory that it had seen several public posts made by known cybercriminals over 2023 and 2024, claiming access to email addresses used by U.S. law enforcement and some foreign governments. The FBI says this access was ultimately used to send fraudulent subpoenas and other legal demands to U.S. companies seeking private user data stored on their systems. The advisory said that the cybercriminals were successful in masquerading as law enforcement by using compromised police accounts to send emails to companies requesting user data. In some cases, the requests cited false threats, like claims of human trafficking and, in one case, that an individual would "suffer greatly or die" unless the company in question returns the requested information.

The FBI said the compromised access to law enforcement accounts allowed the hackers to generate legitimate-looking subpoenas that resulted in companies turning over usernames, emails, phone numbers, and other private information about their users. But not all fraudulent attempts to file emergency data requests were successful, the FBI said. The FBI said in its advisory that law enforcement organizations should take steps to improve their cybersecurity posture to prevent intrusions, including stronger passwords and multi-factor authentication. The FBI said that private companies "should apply critical thinking to any emergency data requests received," given that cybercriminals "understand the need for exigency."

Crime

Criminal Charges Announced Over Multi-Year Fraud Scheme in a Carbon Credits Market (marketwatch.com) 52

This week the U.S. Attorney's Office for the Southern District of New York unsealed charges over a "scheme to commit fraud" in carbon markets, which they say fraudulently netted one company "tens of millions of dollars" worth of credits — and, in turn, led to "securing an investment of over $100 million."

MarketWatch reports: Ken Newcombe had spent years building a program to distribute more environmentally friendly cookstoves for free to rural communities in Africa and Southeast Asia. The benefit for his company, C-Quest Capital, would be the carbon credits it would receive in exchange for reducing the amount of fuel people burned in order to cook food — credits the company could then sell for a profit to big oil companies like BP.

But when Newcombe tried to ramp up the program, federal prosecutors said in an indictment made public Wednesday, he quickly realized that the stoves wouldn't deliver the emissions savings he had promised investors. Rather than admit his mistake, he and his partners cooked the books instead, prosecutors said... That allowed them to obtain carbon credits worth tens of millions of dollars that they didn't deserve, prosecutors said. On the basis of the fraudulently gained credits, prosecutors said, C-Quest was able to secure $250 million in funding from an outside investor.

From the announcement by the U.S. Attorney's Office: U.S. Attorney Damian Williams said... "The alleged actions of the defendants and their co-conspirators risked undermining the integrity of that market [the global market for carbon credits], which is an important part of the fight against climate change. Protecting the sanctity and integrity of the financial markets continues to be a cornerstone initiative for this Office, and we will continue to be vigilant in rooting out fraud in the market for carbon credits...."

While most carbon credits are created through, and trade in compliance markets, there is also a voluntary carbon market. Voluntary markets revolve around companies and entities that voluntarily set goals to reduce or offset their carbon emissions, often to align with goals from employees or shareholders. In voluntary markets, the credits are issued by non-governmental organizations, using standards for measuring emission reductions that they develop based on input from market participants, rather than on mandates from governments. The non-governmental organizations issue voluntary carbon credits to project developers that run projects that reduce emissions or remove greenhouse gases from the atmosphere.

CQC was a for-profit company that ran projects to generate carbon credits — including a type of credit known as a voluntary carbon unit ("VCU") — by reducing emissions of greenhouse gases. CQC profited by selling VCUs it obtained, often to companies seeking to offset the impact of greenhouse gases they emit in the course of operating their businesses.

The company itself was not charged due to "voluntary and timely self-disclosure of misconduct," according to the announcement, along with "full and proactive cooperation, timely and appropriate remediation, and agreement to cancel or void certain voluntary carbon units."
Security

Attackers Exploit Critical Zimbra Vulnerability Using CC'd Email Addresses (arstechnica.com) 6

An anonymous reader quotes a report from Ars Technica: Attackers are actively exploiting a critical vulnerability in mail servers sold by Zimbra in an attempt to remotely execute malicious commands that install a backdoor, researchers warn. The vulnerability, tracked as CVE-2024-45519, resides in the Zimbra email and collaboration server used by medium and large organizations. When an admin manually changes default settings to enable the postjournal service, attackers can execute commands by sending maliciously formed emails to an address hosted on the server. Zimbra recently patched the vulnerability. All Zimbra users should install the patch or, at a minimum, ensure that postjournal is disabled.

On Tuesday, security researcher Ivan Kwiatkowski first reported the in-the-wild attacks, which he described as "mass exploitation." He said the malicious emails were sent by the IP address 79.124.49[.]86 and, when successful, attempted to run a file hosted there using the tool known as curl. Researchers from security firm Proofpoint took to social media later that day to confirm the report. On Wednesday, security researchers provided additional details that suggested the damage from ongoing exploitation was likely to be contained. As already noted, they said, a default setting must be changed, likely lowering the number of servers that are vulnerable. [...]

Proofpoint has explained that some of the malicious emails used multiple email addresses that, when pasted into the CC field, attempted to install a webshell-based backdoor on vulnerable Zimbra servers. The full cc list was wrapped as a single string and encoded using the base64 algorithm. When combined and converted back into plaintext, they created a webshell at the path: /jetty/webapps/zimbraAdmin/public/jsp/zimbraConfig.jsp. Proofpoint went on to say: "Once installed, the webshell listens for inbound connection with a pre-determined JSESSIONID Cookie field; if present, the webshell will then parse the JACTION cookie for base64 commands. The webshell has support for command execution via exec or download and execute a file over a socket connection."
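To make the mechanics concrete, here is a hypothetical Python sketch of the reassembly technique Proofpoint describes: a base64 payload split across the local parts of CC addresses, then concatenated and decoded on the receiving end. Every name, address, and the "webshell stub" below is invented for illustration; this is not Proofpoint's code or the actual exploit payload.

```python
import base64

# Hypothetical illustration: a base64 payload is split across the local
# parts of CC addresses, then concatenated and decoded server-side.
# All names and addresses below are invented for demonstration.
webshell_stub = b"<% /* webshell stub */ %>"
payload = base64.b64encode(webshell_stub).decode()

# Attacker side: slice the encoded payload into CC-address local parts
# (quoted, since base64 characters like "/" need quoting in a local part).
chunk = 12
cc_addresses = [
    f'"{payload[i:i + chunk]}"@example.invalid'
    for i in range(0, len(payload), chunk)
]

# Reassembly: strip the domains, concatenate the local parts, and decode.
encoded = "".join(a.split("@")[0].strip('"') for a in cc_addresses)
recovered = base64.b64decode(encoded)
assert recovered == webshell_stub
```

The defensive takeaway is the same one Proofpoint implies: a CC list whose local parts collectively form valid base64 is a strong anomaly signal worth alerting on.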

Privacy

NIST Proposes Barring Some of the Most Nonsensical Password Rules (arstechnica.com) 180

Ars Technica's Dan Goodin reports: Last week, NIST released its second public draft of SP 800-63-4, the latest version of its Digital Identity Guidelines. At roughly 35,000 words and filled with jargon and bureaucratic terms, the document is nearly impossible to read all the way through and just as hard to understand fully. It sets both the technical requirements and recommended best practices for determining the validity of methods used to authenticate digital identities online. Organizations that interact with the federal government online are required to be in compliance. A section devoted to passwords injects a large helping of badly needed common sense practices that challenge common policies. An example: The new rules bar the requirement that end users periodically change their passwords. This requirement came into being decades ago when password security was poorly understood, and it was common for people to choose common names, dictionary words, and other secrets that were easily guessed.

Since then, most services require the use of stronger passwords made up of randomly generated characters or phrases. When passwords are chosen properly, the requirement to periodically change them, typically every one to three months, can actually diminish security because the added burden incentivizes weaker passwords that are easier for people to set and remember. Another requirement that often does more harm than good is the required use of certain characters, such as at least one number, one special character, and one upper- and lowercase letter. When passwords are sufficiently long and random, there's no benefit from requiring or restricting the use of certain characters. And again, rules governing composition can actually lead to people choosing weaker passcodes.

The latest NIST guidelines now state that:
- Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords and
- Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator. ("Verifiers" is bureaucrat speak for the entity that verifies an account holder's identity by corroborating the holder's authentication credentials. Short for credential service provider, "CSPs" are a trusted entity that assigns or registers authenticators to the account holder.) In previous versions of the guidelines, some of the rules used the words "should not," which means the practice is not recommended as a best practice. "Shall not," by contrast, means the practice must be barred for an organization to be in compliance.
Several other common sense practices mentioned in the document include:
1. Verifiers and CSPs SHALL require passwords to be a minimum of eight characters in length and SHOULD require passwords to be a minimum of 15 characters in length.
2. Verifiers and CSPs SHOULD permit a maximum password length of at least 64 characters.
3. Verifiers and CSPs SHOULD accept all printing ASCII [RFC20] characters and the space character in passwords.
4. Verifiers and CSPs SHOULD accept Unicode [ISO/ISC 10646] characters in passwords. Each Unicode code point SHALL be counted as a single character when evaluating password length.
5. Verifiers and CSPs SHALL NOT impose other composition rules (e.g., requiring mixtures of different character types) for passwords.
6. Verifiers and CSPs SHALL NOT require users to change passwords periodically. However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
7. Verifiers and CSPs SHALL NOT permit the subscriber to store a hint that is accessible to an unauthenticated claimant.
8. Verifiers and CSPs SHALL NOT prompt subscribers to use knowledge-based authentication (KBA) (e.g., "What was the name of your first pet?") or security questions when choosing passwords.
9. Verifiers SHALL verify the entire submitted password (i.e., not truncate it).
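Taken together, the SHALL/SHOULD items above reduce to a very small acceptance check. Here is a minimal sketch (constant and function names are illustrative choices, not identifiers defined by NIST) of what a compliant verifier's password-acceptance logic could look like:

```python
# A minimal sketch of a password-acceptance check following the quoted
# draft SP 800-63-4 rules. Names here are illustrative, not NIST's.
MIN_LENGTH = 8    # rule 1 (SHALL); 15 is the SHOULD-level recommendation
MAX_LENGTH = 64   # rule 2 (SHOULD permit at least 64 characters)

def accept_password(password: str) -> bool:
    # Rule 4: length is counted in Unicode code points, which is exactly
    # what Python's len() returns for a str.
    n = len(password)
    if n < MIN_LENGTH or n > MAX_LENGTH:
        return False
    # Deliberately absent, per rules 5 and 6: no composition requirements
    # (mixed character classes) and no periodic-expiry logic.
    return True

assert accept_password("correct horse battery staple")
assert not accept_password("short")
assert accept_password("пароль-passphrase")  # Unicode accepted, per rule 4
```

Note what the function does not do: no special-character checks, no forced rotation, no truncation (rule 9 is satisfied because the whole string is evaluated). The only hard rejections are length bounds.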

Government

AI Smackdown: How a New FTC Rule Also Fights Fake Product Reviews (salon.com) 29

Salon looks closer at a new $51,744-per-violation AI regulation officially approved one month ago by America's FTC — calling it a financial blow "If you're a digital media company whose revenue comes from publishing AI-generated articles and fake product reviews."

But they point out the rules also ban "product review suppression." Per the ruling, that means it's a violation for "anyone to use an unfounded or groundless legal threat, a physical threat, intimidation, or a public false accusation in response to a consumer review... to (1) prevent a review or any portion thereof from being written or created, or (2) cause a review or any portion thereof to be removed, whether or not that review or a portion thereof is replaced with other content."

Finally... The rule makes it a violation for a business to "provide compensation or other incentives in exchange for, or conditioned expressly or by implication on, the writing or creation of consumer reviews expressing a particular sentiment, whether positive or negative, regarding the product, service or business...." [T]he new rule also prevents secretly advertising for yourself while pretending to be an independent outlet or company. It bars "the creation or operation of websites, organizations or entities that purportedly provide independent reviews or opinions of products or services but are, in fact, created and controlled by the companies offering the products or services."

In an earlier statement, FTC Consumer Protection Bureau head Sam Levine said the new rule "should help level the playing field for honest companies. We're using all available means to attack deceptive advertising in the digital age."

Thanks to long-time Slashdot reader mspohr for sharing the article.
Privacy

Chinese Spies Spent Months Inside Aerospace Engineering Firm's Network Via Legacy IT (theregister.com) 16

The Register's Jessica Lyons reports: Chinese state-sponsored spies have been spotted inside a global engineering firm's network, having gained initial entry using an admin portal's default credentials on an IBM AIX server. In an exclusive interview with The Register, Binary Defense's Director of Security Research John Dwyer said the cyber snoops first compromised one of the victim's three unmanaged AIX servers in March, and remained inside the US-headquartered manufacturer's IT environment for four months while poking around for more boxes to commandeer. It's a tale that should be a warning to those with long- or almost-forgotten machines connected to their networks; those with shadow IT deployments; and those with unmanaged equipment. While the rest of your environment is protected by whatever threat detection you have in place, these legacy services are perfect starting points for miscreants.

This particular company, which Dwyer declined to name, makes components for public and private aerospace organizations and other critical sectors, including oil and gas. The intrusion has been attributed to an unnamed People's Republic of China team, whose motivation appears to be espionage and blueprint theft. It's worth noting the Feds have issued multiple security alerts this year about Beijing's spy crews including APT40 and Volt Typhoon, which has been accused of burrowing into American networks in preparation for destructive cyberattacks.

After discovering China's agents within its network in August, the manufacturer alerted local and federal law enforcement agencies and worked with government cybersecurity officials on attribution and mitigation, we're told. Binary Defense was also called in to investigate. Before being caught and subsequently booted off the network, the Chinese intruders uploaded a web shell and established persistent access, thus giving them full, remote access to the IT network -- putting the spies in a prime position for potential intellectual property theft and supply-chain manipulation. If a compromised component makes it out of the supply chain and into machinery in production, whoever is using that equipment or vehicle will end up feeling the brunt when that component fails, goes rogue, or goes awry.

"The scary side of it is: With our supply chain, we have an assumed risk chain, where whoever is consuming the final product -- whether it is the government, the US Department of Defense, school systems -- assumes all of the risks of all the interconnected pieces of the supply chain," Dwyer told The Register. Plus, he added, adversarial nations are well aware of this, "and the attacks continually seem to be shifting left." That is to say, attempts to meddle with products are happening earlier and earlier in the supply-chain pipeline, thus affecting more and more victims and being more deep-rooted in systems. Breaking into a classified network to steal designs or cause trouble is not super easy. "But can I get into a piece of the supply chain at a manufacturing center that isn't beholden to the same standards and accomplish my goals and objectives?" Dwyer asked. The answer, of course, is yes. [...]

Programming

GitHub Actions Typosquatting: a High-Impact Supply Chain Attack-in-Waiting? (csoonline.com) 4

GitHub Actions let developers "automate software builds and tests," writes CSO Online, "by setting up workflows that trigger when specific events are detected, such as when new code is committed to the repository."

They also "can be reused and shared with others on the GitHub Marketplace, which currently lists thousands of public Actions that developers can use instead of coding their own. Actions can also be included as dependencies inside other Actions, creating an ecosystem similar to other open-source component registries." Researchers from Orca Security recently investigated the impact typosquatting can have in the GitHub Actions ecosystem by registering 14 GitHub organizations with names that are misspellings of popular Actions owners — for example, circelci instead of circleci, actons instead of actions, google-github-actons instead of google-github-actions... One might think that developers making typos is not very common, but given the scale of GitHub — over 100 million developers with over 420 million repositories — even a statistically rare occurrence can mean thousands of potential victims. For example, the researchers found 194 workflow files calling the "action" organization instead of "actions"; moreover, 12 public repositories started referencing the researchers' fake "actons" organization within two months of setting it up.

"Although the number may not seem that high, these are only the public repositories we can search for and there could be multiple more private ones, with numbers increasing over time," the researchers wrote... Ultimately this is a low-cost high-impact attack. Having the ability to execute malicious actions against someone else's code is very powerful and can result in software supply chain attacks, with organizations and users that then consume the backdoored code being impacted as well...

Out of the 14 typosquatted organizations that Orca set up for their proof-of-concept, GitHub only suspended one over a three-month period — circelci — and that's likely because someone reported it. CircleCI is one of the most popular CI/CD platforms.
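A simple defensive countermeasure is to lint workflow files for `uses:` references that point at known misspellings of popular Actions owners. The sketch below is illustrative only; its typo map just mirrors the examples from Orca's research, where a real tool would maintain a much larger list or compare against a pinned allowlist.

```python
import re

# Illustrative lint for typosquatted action owners in workflow YAML.
# The typo map mirrors examples from Orca's research; extend as needed.
TYPO_OWNERS = {
    "circelci": "circleci",
    "actons": "actions",
    "action": "actions",
    "google-github-actons": "google-github-actions",
}

# Matches lines like "- uses: owner/repo@ref" and captures owner and repo.
USES_RE = re.compile(r"^\s*-?\s*uses:\s*([\w.-]+)/([\w.-]+)@", re.MULTILINE)

def find_typosquats(workflow_yaml: str) -> list[str]:
    hits = []
    for owner, repo in USES_RE.findall(workflow_yaml):
        if owner.lower() in TYPO_OWNERS:
            suggestion = TYPO_OWNERS[owner.lower()]
            hits.append(f"{owner}/{repo} (did you mean {suggestion}/{repo}?)")
    return hits

sample = """
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: actons/checkout@v4
"""
assert find_typosquats(sample) == ["actons/checkout (did you mean actions/checkout?)"]
```

Running such a check in pre-commit or CI turns a silent dependency on an attacker-controlled organization into a loud build failure.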

Thanks to Slashdot reader snydeq for sharing the article.
Social Networks

Washington Post Calls Telegram 'a Haven for Free Speech - and Child Predators' (yahoo.com) 82

The Washington Post writes that Telegram's "anything-goes approach" to its 950 million users "has also made it one of the internet's largest havens for child predators, experts say...."

"Durov's critics say his public idealism masks an opportunistic business model that allows Telegram to profit from the worst the internet has to offer, including child sexual abuse material, or CSAM..." [Telegram is] an app of choice for political organizing, including by dissidents under repressive regimes. But it is equally appealing for terrorist groups, criminal organizations and sexual predators, who use it as a hub to share and consume nonconsensual pornography, AI "deepfake" nudes, and illegal sexual images and videos of exploited minors, said Alex Stamos, chief information security officer at the cybersecurity firm SentinelOne. "Due to their advertised policy of not cooperating with law enforcement, and the fact that they are known not to scan for CSAM, Telegram has attracted large groups of pedophiles trading and selling child abuse materials," Stamos said.

That reach comes even though many Telegram exchanges don't actually use the strong forms of encryption available on true private messaging apps, he added. Telegram is used for private messaging, public posts and group chats. Only one-to-one conversations can be encrypted in a way that even Telegram can't access them. And that occurs only if users choose the option, meaning the company could turn over everything else to governments if it wanted to... French prosecutors argue that Durov is in fact responsible for Telegram's emergence as a global haven for illegal content, including CSAM, because of his reluctance to moderate it and his refusal to help authorities police it, among other allegations...

David Kaye, a professor at University of California, Irvine School of Law and former U.N. special rapporteur on freedom of expression... said that while Telegram has at times banned groups and taken down [CSAM] content in response to law enforcement, its refusal to share data with investigators sets it apart from most other major tech companies. Unlike U.S.-based platforms, Telegram is not required by U.S. law to report instances of CSAM to the National Center for Missing and Exploited Children, or NCMEC. Many online platforms based overseas do so anyway — but not Telegram. "NCMEC has tried to get them to report, but they have no interest and are known for not wanting to work with [law enforcement agencies] or anyone in this space," a NCMEC spokesperson said.

The Post also writes that Telegram "has repeatedly been revealed to serve as a tool to store, distribute and share child sexual imagery." (They cite several examples, including two different men sentenced to at least 10 years for using the service to purchase CSAM and solicit explicit photos from minors.)
Programming

'GitHub Actions' Artifacts Leak Tokens, Expose Cloud Services and Repositories (securityweek.com) 19

Security Week brings news about CI/CD workflows using GitHub Actions in build processes. Some workflows can generate artifacts that "may inadvertently leak tokens for third party cloud services and GitHub, exposing repositories and services to compromise, Palo Alto Networks warns." [The artifacts] function as a mechanism for persisting and sharing data across jobs within the workflow and ensure that data is available even after the workflow finishes. [The artifacts] are stored for up to 90 days and, in open source projects, are publicly available... The identified issue, a combination of misconfigurations and security defects, allows anyone with read access to a repository to consume the leaked tokens, and threat actors could exploit it to push malicious code or steal secrets from the repository. "It's important to note that these tokens weren't part of the repository code but were only found in repository-produced artifacts," Palo Alto Networks' Yaron Avital explains...

"The Super-Linter log file is often uploaded as a build artifact for reasons like debuggability and maintenance. But this practice exposed sensitive tokens of the repository." Super-Linter has been updated and no longer prints environment variables to log files.

Avital was able to identify a leaked token that, unlike the GitHub token, would not expire as soon as the workflow job ends, and automated the process that downloads an artifact, extracts the token, and uses it to replace the artifact with a malicious one. Because subsequent workflow jobs would often use previously uploaded artifacts, an attacker could use this process to achieve remote code execution (RCE) on the job runner that uses the malicious artifact, potentially compromising workstations, Avital notes.

Avital's blog post notes other variations on the attack — and "The research laid out here allowed me to compromise dozens of projects maintained by well-known organizations, including firebase-js-sdk by Google, a JavaScript package directly referenced by 1.6 million public projects, according to GitHub. Another high-profile project involved adsys, a tool included in the Ubuntu distribution used by corporations for integration with Active Directory." (Avital says the issue even impacted projects from Microsoft, Red Hat, and AWS.) "All open-source projects I approached with this issue cooperated swiftly and patched their code. Some offered bounties and cool swag."

"This research was reported to GitHub's bug bounty program. They categorized the issue as informational, placing the onus on users to secure their uploaded artifacts." My aim in this article is to highlight the potential for unintentionally exposing sensitive information through artifacts in GitHub Actions workflows. To address the concern, I developed a proof of concept (PoC) custom action that safeguards against such leaks. The action uses the @actions/artifact package, which is also used by the upload-artifact GitHub action, adding a crucial security layer by using an open-source scanner to audit the source directory for secrets and blocking the artifact upload when risk of accidental secret exposure exists. This approach promotes a more secure workflow environment...

As this research shows, we have a gap in the current security conversation regarding artifact scanning. GitHub's deprecation of Artifacts V3 should prompt organizations using the artifacts mechanism to reevaluate the way they use it. Security defenders must adopt a holistic approach, meticulously scrutinizing every stage — from code to production — for potential vulnerabilities. Overlooked elements like build artifacts often become prime targets for attackers. Reduce workflow permissions of runner tokens according to least privilege and review artifact creation in your CI/CD pipelines. By implementing a proactive and vigilant approach to security, defenders can significantly strengthen their project's security posture.
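The "scan before upload" idea can be sketched in a few lines: audit every file destined for an artifact against token-shaped patterns and refuse the upload on any hit. This is an illustrative Python sketch, not Avital's actual action (which wraps the @actions/artifact package); the patterns are a tiny sample of what real scanners such as gitleaks or trufflehog cover.

```python
import re
from pathlib import Path

# Illustrative secret patterns; real scanners ship hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # classic GitHub personal access token
    re.compile(r"ghs_[A-Za-z0-9]{36}"),   # GitHub App installation token
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
]

def scan_for_secrets(root: Path) -> list[tuple[str, str]]:
    """Return (path, redacted match) pairs for anything token-shaped."""
    findings = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for pattern in SECRET_PATTERNS:
            for match in pattern.findall(text):
                # Never log the full value, only a redacted prefix.
                findings.append((str(path), match[:8] + "..."))
    return findings

def safe_to_upload(artifact_dir: Path) -> bool:
    findings = scan_for_secrets(artifact_dir)
    for where, redacted in findings:
        print(f"possible secret in {where}: {redacted}")
    return not findings
```

Gating the upload step on a check like `safe_to_upload(Path("dist"))` means a leaked token fails the build rather than sitting in a publicly downloadable artifact for 90 days.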

The blog post also notes protection and mitigation features from Palo Alto Networks....
The Courts

The Nation's Oldest Nonprofit Newsroom Is Suing OpenAI and Microsoft (engadget.com) 16

The Center for Investigative Reporting (CIR), the nation's oldest nonprofit newsroom, sued OpenAI and Microsoft in federal court on Thursday for allegedly using its content to train AI models without consent or compensation. CIR, founded in 1977 in San Francisco, evolved into a multi-platform newsroom with its flagship distribution platform Reveal. In February, it merged with Mother Jones.

"OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material," said Monika Bauerlein, CEO of the Center for Investigative Reporting, in a statement. "This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it." Bauerlein said that OpenAI and Microsoft treat the work of nonprofit and independent publishers "as free raw material for their products," and added that such moves by generative AI companies hurt the public's access to truthful information in a "disappearing news landscape." Engadget reports: The CIR's lawsuit, which was filed in Manhattan's federal court, accuses OpenAI and Microsoft, which owns nearly half of the company, of violating the Copyright Act and the Digital Millennium Copyright Act multiple times.

News organizations find themselves at an inflection point with generative AI. While the CIR is joining publishers like The New York Times, New York Daily News, The Intercept, AlterNet and Chicago Tribune in suing OpenAI, other publishers have chosen to strike licensing deals with the company. These deals will allow OpenAI to train its models on archives and ongoing content published by these publishers and cite information from them in responses offered by ChatGPT.

Electronic Frontier Foundation

EFF: New License Plate Reader Vulnerabilities Prove The Tech Itself is a Public Safety Threat (eff.org) 97

Automated license plate readers "pose risks to public safety," argues the EFF, "that may outweigh the crimes they are attempting to address in the first place." When law enforcement uses automated license plate readers (ALPRs) to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats. The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and "fingerprinting" their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials...

Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent and there is very little a person can do to have this information purged from these systems... Because drivers don't have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology. It's a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote in Police Chief magazine this month that public safety agencies "are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel," even though "the potential for harm from external factors is substantial." That partially explains why more than 125 law enforcement agencies reported a data breach or cyberattacks between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks "targeting U.S. public safety organizations increased by 142 percent" in 2023.

Yet, the temptation to "collect it all" continues to overshadow the responsibility to "protect it all." What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs... If there's one positive thing we can say about the latest Vigilant vulnerability disclosures, it's that for once a government agency identified and reported the vulnerabilities before they could do damage... The Michigan Cyber Command Center found a total of seven vulnerabilities in Vigilant devices: two of medium severity and five of high severity...

But a data breach isn't the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife.

The article concludes that public safety agencies should "collect only the data they need for actual criminal investigations.

"They must never store more data than they adequately protect within their limited resources — or they must keep the public safe from data breaches by not collecting the data at all."
Facebook

Meta Accused of Trying To Discredit Ad Researchers (theregister.com) 18

Thomas Claburn reports via The Register: Meta allegedly tried to discredit university researchers in Brazil who had flagged fraudulent adverts on the social network's ad platform. Nucleo, a Brazil-based news organization, said it has obtained government documents showing that attorneys representing Meta questioned the credibility of researchers from NetLab, which is part of the Federal University of Rio de Janeiro (UFRJ). NetLab's research into Meta's ads contributed to the 2023 decision by Brazil's National Consumer Secretariat (Senacon) to fine Meta $1.7 million (9.3 million BRL), which is still being appealed. Meta (then Facebook) was separately fined $1.2 million (6.6 million BRL) related to Cambridge Analytica.

As noted by Nucleo, NetLab's report showed that Facebook, despite being notified about the issues, had failed to remove more than 1,800 scam ads that fraudulently used the name of a government program that was supposed to assist those in debt. In response to the fine, attorneys representing Meta from law firm TozziniFreire allegedly accused the NetLab team of bias and of failing to involve Meta in the research process. Nucleo says that it obtained the administrative filing through freedom of information requests to Senacon. The documents are said to date from December 26 last year and to be part of the ongoing case against Meta. A spokesperson for NetLab, who asked not to be identified by name due to online harassment directed at the organization's members, told The Register that the research group was aware of the Nucleo report. "We were kind of surprised to see the account of our work in this law firm document," the spokesperson said. "We expected to be treated with more fairness for our work. Honestly, it comes at a very bad moment because NetLab particularly, but also Brazilian science in general, is being attacked by far-right groups."

On Thursday, more than 70 civil society groups including NetLab published an open letter decrying Meta's legal tactics. "This is an attack on scientific research work, and attempts at intimidation of researchers who are performing excellent work in the production of knowledge from empirical analysis that have been fundamental to qualify the public debate on the accountability of social media platforms operating in the country, especially with regard to paid content that causes harm to consumers of these platforms and that threaten the future of our democracy," the letter says. "This kind of attack and intimidation is made even more dangerous by aligning with arguments that, without any evidence, have been used by the far right to discredit the most diverse scientific productions, including NetLab itself." The claim, allegedly made by Meta's attorneys, is that the ad biz was "not given the opportunity to appoint a technical assistant and present questions" in the preparation of the NetLabs report. This is particularly striking given Meta's efforts to limit research into its ad platform.
A Meta spokesperson told The Register: "We value input from civil society organizations and academic institutions for the context they provide as we constantly work toward improving our services. Meta's defense filed with the Brazilian Consumer Regulator questioned the use of the NetLab report as legal evidence, since it was produced without giving us prior opportunity to contribute meaningfully, in violation of local legal requirements."
Crime

British Duo Arrested For SMS Phishing Via Homemade Cell Tower (theregister.com) 25

British police have arrested two individuals involved in an SMS-based phishing campaign using a unique device police described as a "homemade mobile antenna," "an illegitimate telephone mast," and a "text message blaster." This first-of-its-kind device in the UK was designed to send fraudulent texts impersonating banks and other official organizations, "all while allegedly bypassing network operators' anti-SMS-based phishing, or smishing, defenses," reports The Register. From the report: Thousands of messages were sent using this setup, City of London Police claimed on Friday, with those suspected to be behind the operation misrepresenting themselves as banks "and other official organizations" in their texts. [...] Huayong Xu, 32, of Alton Road in Croydon, was arrested on May 23 and remains the only individual identified by police in this investigation at this stage. He has been charged with possession of articles for use in fraud and will appear at Inner London Crown Court on June 26. The other individual, who wasn't identified and did not have their charges disclosed by police, was arrested on May 9 in Manchester and was bailed. [...]

Without any additional information to go on, it's difficult to make any kind of assumption about what these "text message blaster" devices might be. However, one possibility, judging from the police's messaging, is that the plod are referring to an IMSI catcher, aka a Stingray, which acts as a cellphone tower to communicate with people's handhelds. But those are intended primarily for surveillance. More likely, the suspected UK device is some kind of SIM bank or collection of phones programmed to spam out shedloads of SMSes at a time.

Privacy

Bangladeshi Police Agents Accused of Selling Citizens' Personal Information on Telegram (techcrunch.com) 5

An anonymous reader shares a report: Two senior officials working for anti-terror police in Bangladesh allegedly collected and sold classified and personal information of citizens to criminals on Telegram, TechCrunch has learned. The data allegedly sold included national identity details of citizens, cell phone call records and other "classified secret information," according to a letter signed by a senior Bangladeshi intelligence official, seen by TechCrunch.

The letter, dated April 28, was written by Brigadier General Mohammad Baker, who serves as a director of Bangladesh's National Telecommunications Monitoring Center, or NTMC, the country's electronic eavesdropping agency. Baker confirmed the legitimacy of the letter and its contents in an interview with TechCrunch. "Departmental investigation is ongoing for both the cases," Baker said in an online chat, adding that the Bangladeshi Ministry of Home Affairs ordered the affected police organizations to take "necessary action against those officers." The letter, which was originally written in Bengali and addressed to the senior secretary of the Ministry of Home Affairs Public Security Division, alleges the two police agents accessed and passed "extremely sensitive information" of private citizens on Telegram in exchange for money.

AI

OpenAI Disrupts Five Attempts To Misuse Its AI For 'Deceptive Activity' (reuters.com) 16

An anonymous reader quotes a report from Reuters: Sam Altman-led OpenAI said on Thursday it had disrupted five covert influence operations that sought to use its artificial intelligence models for "deceptive activity" across the internet. The artificial intelligence firm said that over the last three months, the threat actors used its AI models to generate short comments and longer articles in a range of languages, and to make up names and bios for social media accounts. These campaigns, which included threat actors from Russia, China, Iran and Israel, also focused on issues including Russia's invasion of Ukraine, the conflict in Gaza, the Indian elections, and politics in Europe and the United States, among others.

The deceptive operations were an "attempt to manipulate public opinion or influence political outcomes," OpenAI said in a statement. [...] The deceptive campaigns did not gain increased audience engagement or reach through their use of the firm's services, OpenAI said in the statement. OpenAI said these operations did not solely use AI-generated material but included manually written texts or memes copied from across the internet.
In a separate announcement on Wednesday, Meta said it had found "likely AI-generated" content used deceptively across its platforms, "including comments praising Israel's handling of the war in Gaza published below posts from global news organizations and U.S. lawmakers," reports Reuters.
Privacy

When a Politician Sues a Blog to Unmask Its Anonymous Commenter 79

Markos Moulitsas is the poll-watching founder of the political blog Daily Kos. On Thursday he wrote that in 2021, future third-party presidential candidate RFK Jr. had sued the site.

"Things are not going well for him." Back in 2021, Robert F. Kennedy Jr. sued Daily Kos to unmask the identity of a community member who posted a critical story about his dalliance with neo-Nazis at a Berlin rally. I updated the story here, here, here, here, and here.

To briefly summarize, Kennedy wanted us to doxx our community member, and we stridently refused.

The site and the politician then continued fighting for more than three years. "Daily Kos lost the first legal round in court," Moulitsas posted in 2021, "thanks to a judge who is apparently unconcerned with First Amendment ramifications given the chilling effect of her ruling."

But even then, Moulitsas was clear on his rights: Because of Section 230 of the Communications Decency Act, [Kennedy] cannot sue Daily Kos — the site itself — for defamation. We are protected by the so-called safe harbor. That's why he's demanding we reveal what we know about "DowneastDem" so they can sue her or him directly.
Moulitsas also stressed that his own 2021 blog post was "reiterating everything that community member wrote, and expanding on it. And so instead of going after a pseudonymous community writer/diarist on this site, maybe Kennedy will drop that pointless lawsuit and go after me... consider this an escalation." (Among other things, the post cited a German-language news account saying Kennedy "sounded the alarm concerning the 5G mobile network and Microsoft founder Bill Gates..." Moulitsas also noted an Irish Times article which confirmed that at the rally Kennedy spoke at, "Noticeable numbers of neo-Nazis, kitted out with historic Reich flags and other extremist accessories, mixed in with the crowd.")

So what happened? Moulitsas posted an update Thursday: Shockingly, Kennedy got a trial court judge in New York to agree with him, and a subpoena was issued to Daily Kos to turn over any information we might have on the account. However, we are based in California, not New York, so once I received the subpoena at home, we had a California court not just quash the subpoena, but essentially signal that if New York didn't do the right thing on appeal, California could very well take care of it.

It's been a while since I updated, and given a favorable court ruling Thursday, it's way past time to catch everyone up.

New York is one of the U.S. states that doesn't have a strict "Dendrite standard" law protecting anonymous speech. But soon the blog founder discovered he had allies: The issues at hand are so important that The New York Times, the E.W. Scripps Company, the First Amendment Coalition, New York Public Radio, and seven other New York media companies joined the appeals effort with their own joint amicus brief. What started as a dispute over a Daily Kos diarist has become a meaningful First Amendment battle, with major repercussions given New York's role as a major news media and distribution center.

After reportedly spending over $1 million on legal fees, Kennedy somehow discovered the identity of our community member sometime last year and promptly filed a defamation suit in New Hampshire in what seemed a clumsy attempt at forum shopping, or the practice of choosing where to file suit based on the belief you'll be granted a favorable outcome. The community member lives in Maine, Kennedy lives in California, and Daily Kos doesn't publish specifically in New Hampshire. A perplexed court threw out the case this past February on those obvious jurisdictional grounds....

Then, last week, the judge threw out the appeal of that decision because Kennedy's lawyer didn't file in time — and blamed the delay on bad Wi-Fi...

Kennedy tried to dismiss the original case, the one awaiting an appellate decision in New York, claiming it was now moot. His legal team had sued to get the community member's identity, and now that they had it, they argued that there was no reason for the case to continue. We disagreed, arguing that there were important issues to resolve (i.e., Dendrite), and we also wanted lawyer fees for their unconstitutional assault on our First Amendment rights...

On Thursday, in a unanimous decision, a four-judge New York Supreme Court appellate panel ordered the case to continue, keeping the Dendrite issue alive and also allowing us to proceed in seeking damages based on New York's anti-SLAPP law, which prohibits "strategic lawsuits against public participation."

Thursday's blog post concludes with this summation. "Kennedy opened up a can of worms and has spent millions fighting this stupid battle. Despite his losses, we aren't letting him weasel out of this."
IBM

HashiCorp Reportedly Being Acquired By IBM [UPDATE] (cnbc.com) 36

According to the Wall Street Journal, a deal for IBM to acquire HashiCorp could materialize in the next few days. Shares of HashiCorp jumped almost 20% on the news.

UPDATE 4/24/24: IBM has confirmed the deal valued at $6.4 billion. "IBM will pay $35 per share for HashiCorp, a 42.6% premium to Monday's closing price," reports Reuters. "The acquisition will be funded by cash on hand and will add to adjusted core profit within the first full year of closing, expected by the end of 2024." HashiCorp's shares continued to surge Tuesday on the news. CNBC reports: Developers use HashiCorp's software to set up and manage infrastructure in public clouds that companies such as Amazon and Microsoft operate. Organizations also pay HashiCorp for managing security credentials. Founded in 2012, HashiCorp went public on Nasdaq in 2021. The company generated a net loss of nearly $191 million on $583 million in revenue in the fiscal year ending Jan. 31, according to its annual report. In December, Mitchell Hashimoto, co-founder of HashiCorp, whose family name is reflected in the company name, announced that he was leaving.

Revenue jumped almost 23% during that period, compared with 2% for IBM in 2023. IBM executives pointed to a difficult economic climate during a conference call with analysts in January. The hardware, software and consulting provider reports earnings on Wednesday. Cisco held $9 million in HashiCorp shares at the end of March, according to a regulatory filing. Cisco held early acquisition talks with HashiCorp, according to a 2019 report.

Security

Microsoft Employees Exposed Internal Passwords In Security Lapse (techcrunch.com) 24

Zack Whittaker and Carly Page report via TechCrunch: Microsoft has resolved a security lapse that exposed internal company files and credentials to the open internet. Security researchers Can Yoleri, Murat Ozfidan and Egemen Kochisarli with SOCRadar, a cybersecurity company that helps organizations find security weaknesses, discovered an open and public storage server hosted on Microsoft's Azure cloud service that was storing internal information relating to Microsoft's Bing search engine. The Azure storage server housed code, scripts and configuration files containing passwords, keys and credentials used by Microsoft employees for accessing other internal databases and systems. But the storage server itself was not protected with a password and could be accessed by anyone on the internet.

Yoleri told TechCrunch that the exposed data could potentially help malicious actors identify or access other places where Microsoft stores its internal files. Identifying those storage locations "could result in more significant data leaks and possibly compromise the services in use," Yoleri said. The researchers notified Microsoft of the security lapse on February 6, and Microsoft secured the spilling files on March 5. It's not known for how long the cloud server was exposed to the internet, or if anyone other than SOCRadar discovered the exposed data inside.
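Misconfigurations like this are typically found by probing for anonymous access: an Azure blob container whose public-access level is set to "container" will answer an unauthenticated list request from anyone. A hedged sketch of such a probe (the `contoso` account and `internal-configs` container names are hypothetical; the URL shape is Azure's standard blob endpoint):

```python
def anon_list_url(account: str, container: str) -> str:
    """Build the unauthenticated 'list blobs' URL for an Azure storage container.

    If the container allows public access at the 'container' level, this URL
    returns an XML listing of every blob with no credentials at all.
    """
    return (f"https://{account}.blob.core.windows.net/"
            f"{container}?restype=container&comp=list")

def is_publicly_listable(status_code: int) -> bool:
    """Interpret the HTTP status of an anonymous list request: 200 means anyone
    on the internet can enumerate the container; 403/404 means it is blocked
    or hidden."""
    return status_code == 200

# Hypothetical names for illustration only:
print(anon_list_url("contoso", "internal-configs"))
```

An actual probe would fetch that URL (e.g. with `requests.get`) and check the status code; scanners like SOCRadar's automate exactly this kind of enumeration across known storage-account naming patterns.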

Security

NIST Blames 'Growing Backlog of Vulnerabilities' Requiring Analysis on Lack of Support (infosecurity-magazine.com) 22

It's the world's most widely used vulnerability database, reports SC Magazine, offering standards-based data on CVSS severity scores, impacted software and platforms, contributing weaknesses, and links to patches and additional resources.

But "there is a growing backlog of vulnerabilities" submitted to America's National Vulnerability Database and "requiring analysis", according to a new announcement from the U.S. Commerce Department's National Institute of Standards and Technology (NIST). "This is based on a variety of factors, including an increase in software and, therefore, vulnerabilities, as well as a change in interagency support." From SC Magazine: According to NIST's website, the institute analyzed only 199 of 3370 CVEs it received last month. [And this month another 677 came in — of which 24 have been analyzed.]
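The scale of the backlog is easy to quantify from the figures above: an analysis rate of roughly 6% last month, falling further this month. A quick check of the arithmetic, using only the numbers quoted from NIST's website:

```python
# CVE counts quoted from NIST's website via SC Magazine
monthly_figures = [
    ("last month", 199, 3370),   # analyzed, received
    ("this month", 24, 677),
]

for label, analyzed, received in monthly_figures:
    backlog = received - analyzed
    rate = 100 * analyzed / received
    print(f"{label}: {analyzed}/{received} analyzed "
          f"({rate:.1f}%), backlog of {backlog}")
# last month: 199/3370 analyzed (5.9%), backlog of 3171
# this month: 24/677 analyzed (3.5%), backlog of 653
```

In other words, the two months alone left more than 3,800 CVEs awaiting the CVSS scores, platform data, and patch links that downstream tools rely on.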

Other than a short notice advising it was working to establish a new consortium to improve the NVD, NIST had not provided a public explanation for the problems prior to a statement published [April 2]... "Currently, we are prioritizing analysis of the most significant vulnerabilities. In addition, we are working with our agency partners to bring on more support for analyzing vulnerabilities and have reassigned additional NIST staff to this task as well."

NIST, which had its budget cut by almost 12% this year by lawmakers, said it was committed to continuing to support and manage the NVD, which it described as "a key piece of the nation's cybersecurity infrastructure... We are also looking into longer-term solutions to this challenge, including the establishment of a consortium of industry, government and other stakeholder organizations that can collaborate on research to improve the NVD," the statement said. "We will provide more information as these plans develop..."

A group of cybersecurity professionals have signed an open letter to Congress and Commerce Secretary Gina Raimondo in which they say the enrichment issue is the result of a recent 20% cut in NVD funding.

The article also cites remarks from NVD program manager Tanya Brewer (reported by Infosecurity Magazine) from last week's VulnCon conference on plans to establish a NVD consortium. "We're not going to shut down the NVD; we're in the process of fixing the current problem. And then, we're going to make the NVD robust again and we'll make it grow."

Thanks to Slashdot reader spatwei for sharing the article.
United States

US Energy Department Announces 'Blueprint' for Slashing Emissions From Buildings and Reducing Energy Use (energy.gov) 76

This week America's Department of Energy announced "a comprehensive plan to reduce greenhouse-gas emissions from buildings by 65% by 2035 and 90% by 2050." The U.S. Department of Energy (DOE) led the Blueprint's development in collaboration with the Department of Housing and Urban Development, the Environmental Protection Agency, and other federal agencies. The Blueprint is the first sector-wide strategy for building decarbonization developed by the federal government... "America's building sector accounts for more than a third of the harmful emissions jeopardizing our air and health..." said U.S. Secretary of Energy Jennifer M. Granholm. "As part of a whole-of-government approach, the Department of Energy is outlining for the first time ever a comprehensive federal plan to reduce energy in our homes, schools, and workplaces — lowering utility bills and creating healthier communities while combating the climate crisis."

Buildings account for more than one third of domestic climate pollution and $370 billion in annual energy costs... The Blueprint projects reductions of 90% of total greenhouse gas emissions from the buildings sector, which will save consumers more than $100 billion in annual energy costs and avoid $17 billion in annual health costs.

Just for example, the Department of Energy's Affordable Home Energy Shot program "aims to reduce the upfront cost of upgrading a home by at least 50% and reduce energy bills by 20% within a decade." (Meanwhile, the federal government's role in making more change happen faster includes financing, funding R&D on lower-cost technologies, expanding markets, and "supporting the development and implementation of emissions-reducing building codes and appliance standards.")

Besides the national blueprint, the Department also announced an expansion of its Better Buildings Commercial Building Heat Pump Accelerator initiative. In this program, "manufacturers will produce higher efficiency and life cycle cost-effective heat pump rooftop units and commercial organizations will evaluate and adopt next-generation heat pump technology."

U.S. Secretary of Energy Jennifer M. Granholm said the program "builds on more than a decade of public-private partnerships to get cutting edge clean technologies from lab to market, helping to slash harmful carbon emissions throughout our economy." On average, between 20% and 30% of the nation's energy is wasted, presenting a significant opportunity to increase energy efficiency. Through the Better Buildings Initiative, DOE partners with public and private sector stakeholders to pursue ambitious portfolio-wide energy, waste, water, and/or emissions reduction goals and publicly share solutions. By improving building design, materials, equipment, and operations, energy efficiency gains can be achieved across broad segments of the nation's economy.

The Accelerator initiative was developed with commercial end users like Amazon, IKEA, and Target, and already includes manufacturers AAON, Carrier Global Corp., Lennox International, Rheem Manufacturing Co., Trane Technologies, and York International Corp. The Accelerator aims to bring more efficient, affordable next-generation heat pump rooftop units to market as soon as 2027 — which will slash both emissions and energy costs in half compared to natural gas-fueled heat pumps. If deployed at scale, they could save American businesses and commercial entities $5 billion on utility bills every year.
