Privacy

Meta Fined $102 Million For Storing 600 Million Passwords In Plain Text (appleinsider.com) 28

Meta has been fined $101.5 million by the Irish Data Protection Commission (DPC) for storing over half a billion user passwords in plain text for years, with some engineers having access to this data for over a decade. The issue, discovered in 2019, predominantly affected non-US users, especially those using Facebook Lite. AppleInsider reports: Meta Ireland was found guilty of infringing four parts of GDPR, including how it "failed to notify the DPC of a personal data breach concerning storage of user passwords in plain text." Meta Ireland did report the failure, but only some months after it was discovered. "It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data," said Graham Doyle, Deputy Commissioner at the DPC, in a statement about the fine. "It must be borne in mind, that the passwords the subject of consideration in this case, are particularly sensitive, as they would enable access to users' social media accounts."

Other than the fine and an official reprimand, the full extent of the DPC's ruling is yet to be released publicly. The details published so far do not reveal whether the passwords included any belonging to US users as well as to users in Ireland or across the rest of the European Union. It's most likely that the issue concerns only non-US users, however. That's because in 2019, Facebook told CNN that the majority of the plain text passwords were for a service called Facebook Lite, which it described as a cut-down service for areas of the world with slower connectivity.
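The DPC's point is that plaintext storage is never acceptable: the standard practice is to store only a salted, deliberately slow hash. A minimal sketch using Python's standard library (parameter choices here are illustrative, not Meta's actual configuration):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only the (salt, digest) pair is stored,
    never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the login attempt and compare in
    constant time; the plaintext is never persisted or logged."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

Under this scheme an engineer with database access sees only random-looking salt/digest pairs, which is precisely what the plaintext logging at Meta bypassed.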

Businesses

If 23andMe Is Up for Sale, So Is All That DNA (msn.com) 56

23andMe is not doing well. Its stock is on the verge of being delisted. It shut down its in-house drug-development unit last month, only the latest in several rounds of layoffs. Last week, the entire board of directors quit, save for Anne Wojcicki, a co-founder and the company's CEO. Amid this downward spiral, Wojcicki has said she'll consider selling 23andMe -- which means the DNA of 23andMe's 15 million customers would be up for sale, too. The Atlantic: 23andMe's trove of genetic data might be its most valuable asset. For about two decades now, since human-genome analysis became quick and common, the A's, C's, G's, and T's of DNA have allowed long-lost relatives to connect, revealed family secrets, and helped police catch serial killers. Some people's genomes contain clues to what's making them sick, or even, occasionally, how their disease should be treated. For most of us, though, consumer tests don't have much to offer beyond a snapshot of our ancestors' roots and confirmation of the traits we already know about. 23andMe is floundering in part because it hasn't managed to prove the value of collecting all that sensitive, personal information. And potential buyers may have very different ideas about how to use the company's DNA data to raise its bottom line. This should concern anyone who has used the service.
China

China-Linked Hackers Breach US Internet Providers in New 'Salt Typhoon' Cyberattack (msn.com) 16

Hackers linked to the Chinese government have broken into a handful of U.S. internet-service providers in recent months in pursuit of sensitive information, WSJ reported Wednesday, citing people familiar with the matter. From the report: The hacking campaign, called Salt Typhoon by investigators, hasn't previously been publicly disclosed and is the latest in a series of incursions that U.S. investigators have linked to China in recent years. The intrusion is a sign of the stealthy success Beijing's massive digital army of cyberspies has had breaking into valuable computer networks in the U.S. and around the globe.

In Salt Typhoon, the actors linked to China burrowed into America's broadband networks. In this type of intrusion, bad actors aim to establish a foothold within the infrastructure of cable and broadband providers that would allow them to access data stored by telecommunications companies or launch a damaging cyberattack. Last week, U.S. officials said they had disrupted a network of more than 200,000 routers, cameras and other internet-connected consumer devices that served as an entry point into U.S. networks for a China-based hacking group called Flax Typhoon. And in January, federal officials disrupted Volt Typhoon, yet another China-linked campaign that has sought to quietly infiltrate a swath of U.S. critical infrastructure.

"The cyber threat posed by the Chinese government is massive," said Christopher Wray, the Federal Bureau of Investigation's director, speaking earlier this year at a security conference in Germany. "China's hacking program is larger than that of every other major nation, combined." U.S. security officials allege that Beijing has tried and at times succeeded in burrowing deep into U.S. critical infrastructure networks ranging from water-treatment systems to airports and oil and gas pipelines. Top Biden administration officials have issued public warnings over the past year that China's actions could threaten American lives and are intended to cause societal panic. The hackers could also disrupt the U.S.'s ability to mobilize support for Taiwan in the event that Chinese leader Xi Jinping orders his military to invade the island.

Government

California Governor Vetoes Bill Requiring Opt-Out Signals For Sale of User Data (arstechnica.com) 51

An anonymous reader quotes a report from Ars Technica: California Gov. Gavin Newsom vetoed a bill that would have required makers of web browsers and mobile operating systems to let consumers send opt-out preference signals that could limit businesses' use of personal information. The bill approved by the State Legislature last month would have required an opt-out signal "that communicates the consumer's choice to opt out of the sale and sharing of the consumer's personal information or to limit the use of the consumer's sensitive personal information." It would have made it illegal for a business to offer a web browser or mobile operating system without a setting that lets consumers "send an opt-out preference signal to businesses with which the consumer interacts."

In a veto message (PDF) sent to the Legislature Friday, Newsom said he would not sign the bill. Newsom wrote that he shares the "desire to enhance consumer privacy," noting that he previously signed a bill "requir[ing] the California Privacy Protection Agency to establish an accessible deletion mechanism allowing consumers to request that data brokers delete all of their personal information." But Newsom said he is opposed to the new bill's mandate on operating systems. "I am concerned, however, about placing a mandate on operating system (OS) developers at this time," the governor wrote. "No major mobile OS incorporates an option for an opt-out signal. By contrast, most Internet browsers either include such an option or, if users choose, they can download a plug-in with the same functionality. To ensure the ongoing usability of mobile devices, it's best if design questions are first addressed by developers, rather than by regulators. For this reason, I cannot sign this bill." Vetoes can be overridden with a two-thirds vote in each chamber. The bill was approved 59-12 in the Assembly and 31-7 in the Senate. But the State Legislature hasn't overridden a veto in decades.
"It's troubling the power that companies such as Google appear to have over the governor's office," said Justin Kloczko, tech and privacy advocate for Consumer Watchdog, a nonprofit group in California. "What the governor didn't mention is that Google Chrome, Apple Safari and Microsoft Edge don't offer a global opt-out and they make up for nearly 90 percent of the browser market share. That's what matters. And people don't want to install plug-ins. Safari, which is the default browsers on iPhones, doesn't even accept a plug-in."
Privacy

Apple Vision Pro's Eye Tracking Exposed What People Type 7

An anonymous reader quotes a report from Wired: You can tell a lot about someone from their eyes. They can indicate how tired you are, the type of mood you're in, and potentially provide clues about health problems. But your eyes could also leak more secretive information: your passwords, PINs, and messages you type. Today, a group of six computer scientists are revealing a new attack against Apple's Vision Pro mixed reality headset where exposed eye-tracking data allowed them to decipher what people entered on the device's virtual keyboard. The attack, dubbed GAZEploit and shared exclusively with WIRED, allowed the researchers to successfully reconstruct passwords, PINs, and messages people typed with their eyes. "Based on the direction of the eye movement, the hacker can determine which key the victim is now typing," says Hanqiu Wang, one of the leading researchers involved in the work. They identified the correct letters people typed in passwords 77 percent of the time within five guesses and 92 percent of the time in messages.

To be clear, the researchers did not gain access to Apple's headset to see what they were viewing. Instead, they worked out what people were typing by remotely analyzing the eye movements of a virtual avatar created by the Vision Pro. This avatar can be used in Zoom calls, Teams, Slack, Reddit, Tinder, Twitter, Skype, and FaceTime. The researchers alerted Apple to the vulnerability in April, and the company issued a patch to stop the potential for data to leak at the end of July. It is the first attack to exploit people's "gaze" data in this way, the researchers say. The findings underline how people's biometric data -- information and measurements about your body -- can expose sensitive information and be used as part of the burgeoning surveillance industry.

The GAZEploit attack consists of two parts, says Zhan, one of the lead researchers. First, the researchers created a way to identify when someone wearing the Vision Pro is typing by analyzing the 3D avatar they are sharing. For this, they trained a recurrent neural network, a type of deep learning model, with recordings of 30 people's avatars while they completed a variety of typing tasks. When someone is typing using the Vision Pro, their gaze fixates on the key they are likely to press, the researchers say, before quickly moving to the next key. "When we are typing our gaze will show some regular patterns," Zhan says. Wang says these patterns are more common during typing than if someone is browsing a website or watching a video while wearing the headset. "During tasks like gaze typing, the frequency of your eye blinking decreases because you are more focused," Wang says. In short: Looking at a QWERTY keyboard and moving between the letters is a pretty distinct behavior.

The second part of the research, Zhan explains, uses geometric calculations to work out where someone has positioned the keyboard and the size they've made it. "The only requirement is that as long as we get enough gaze information that can accurately recover the keyboard, then all following keystrokes can be detected." Combining these two elements, they were able to predict the keys someone was likely to be typing. In a series of lab tests, they had no knowledge of the victim's typing habits or speed, and did not know where the keyboard was placed. Even so, the researchers could predict the correct letters typed, in a maximum of five guesses, with 92.1 percent accuracy in messages, 77 percent of the time for passwords, 73 percent of the time for PINs, and 86.1 percent of occasions for emails, URLs, and webpages. (On the first guess, the letters would be right between 35 and 59 percent of the time, depending on what kind of information they were trying to work out.) Duplicate letters and typos add extra challenges.
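Once the keyboard's position and scale are recovered, the keystroke-inference step reduces to snapping each gaze fixation to the closest key center. A toy sketch of that geometric core (the key coordinates below are invented for illustration; the paper recovers the real layout from gaze data):

```python
import math

# Hypothetical key-center coordinates for one QWERTY row, in arbitrary
# keyboard-plane units. GAZEploit estimates the true layout geometrically;
# these positions are simply assumed here.
KEY_CENTERS = {
    "q": (0.0, 0.0), "w": (1.0, 0.0), "e": (2.0, 0.0),
    "r": (3.0, 0.0), "t": (4.0, 0.0), "y": (5.0, 0.0),
}

def nearest_key(gaze_point: tuple[float, float]) -> str:
    """Map a gaze fixation (projected onto the keyboard plane) to the
    nearest key center -- the essence of the keystroke-inference step."""
    gx, gy = gaze_point
    return min(
        KEY_CENTERS,
        key=lambda k: math.hypot(KEY_CENTERS[k][0] - gx, KEY_CENTERS[k][1] - gy),
    )
```

A sequence of fixations classified as "typing" by the first-stage model would be fed through a mapping like this, key by key, to reconstruct the text.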
Security

SpyAgent Android Malware Steals Your Crypto Recovery Phrases From Images 32

SpyAgent is a new Android malware that uses optical character recognition (OCR) to steal cryptocurrency wallet recovery phrases from screenshots stored on mobile devices, allowing attackers to hijack wallets and steal funds. The malware primarily targets South Korea but poses a growing threat as it expands to other regions and possibly iOS. BleepingComputer reports: A malware operation discovered by McAfee was traced back to at least 280 APKs distributed outside of Google Play using SMS or malicious social media posts. This malware can use OCR to recover cryptocurrency recovery phrases from images stored on an Android device, making it a significant threat. [...] Once it infects a new device, SpyAgent begins sending the following sensitive information to its command and control (C2) server:

- Victim's contact list, likely for distributing the malware via SMS originating from trusted contacts.
- Incoming SMS messages, including those containing one-time passwords (OTPs).
- Images stored on the device to use for OCR scanning.
- Generic device information, likely for optimizing the attacks.

SpyAgent can also receive commands from the C2 to change the sound settings or send SMS messages, likely used to send phishing texts to distribute the malware. McAfee found that the operators of the SpyAgent campaign did not follow proper security practices in configuring their servers, allowing the researchers to gain access to them. Admin panel pages, as well as files and data stolen from victims, were easily accessible, allowing McAfee to confirm that the malware had claimed multiple victims. The stolen images are processed and OCR-scanned on the server side and then organized on the admin panel accordingly to allow easy management and immediate utilization in wallet hijack attacks.
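What makes screenshots of recovery phrases so machine-recognizable is that BIP39 seed phrases are runs of 12 or 24 words drawn from a fixed 2048-word list. A sketch of the wordlist matching that turns raw OCR text into candidate phrases (the same check a defensive scanner could use to warn users; the tiny wordlist sample here stands in for the full BIP39 list):

```python
import re

# Small stand-in for the 2048-word BIP39 English wordlist; a real scanner
# would load the full list. These entries are genuine BIP39 words.
BIP39_SAMPLE = {
    "abandon", "ability", "able", "about", "above", "absent",
    "absorb", "abstract", "absurd", "abuse", "access", "accident",
}

def looks_like_recovery_phrase(ocr_text: str, min_words: int = 12) -> bool:
    """Flag OCR output containing a run of >= min_words consecutive
    wordlist words -- the pattern a screenshot of a seed phrase produces."""
    tokens = re.findall(r"[a-z]+", ocr_text.lower())
    run = 0
    for token in tokens:
        run = run + 1 if token in BIP39_SAMPLE else 0
        if run >= min_words:
            return True
    return False
```

Ordinary text almost never contains a dozen consecutive wordlist words, which is why server-side OCR triage of stolen screenshots is so effective for the attackers.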
The Courts

City of Columbus Sues Man After He Discloses Severity of Ransomware Attack (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica, written by Dan Goodin: A judge in Ohio has issued a temporary restraining order against a security researcher who presented evidence that a recent ransomware attack on the city of Columbus scooped up reams of sensitive personal information, contradicting claims made by city officials. The order, issued by a judge in Ohio's Franklin County, came after the city of Columbus fell victim to a ransomware attack on July 18 that siphoned 6.5 terabytes of the city's data. A ransomware group known as Rhysida took credit for the attack and offered to auction off the data with a starting bid of about $1.7 million in bitcoin. On August 8, after the auction failed to find a bidder, Rhysida released what it said was about 45 percent of the stolen data on the group's dark web site, which is accessible to anyone with a Tor browser.

Columbus Mayor Andrew Ginther said on August 13 that a "breakthrough" in the city's forensic investigation of the breach found that the sensitive files Rhysida obtained were either encrypted or corrupted, making them "unusable" to the thieves. Ginther went on to say the data's lack of integrity was likely the reason the ransomware group had been unable to auction off the data. Shortly after Ginther made his remarks, security researcher David Leroy Ross contacted local news outlets and presented evidence that showed the data Rhysida published was fully intact and contained highly sensitive information regarding city employees and residents. Ross, who uses the alias Connor Goodwolf, presented screenshots and other data that showed the files Rhysida had posted included names from domestic violence cases and Social Security numbers for police officers and crime victims. Some of the data spanned years.

On Thursday, the city of Columbus sued Ross (PDF) for alleged damages for criminal acts, invasion of privacy, negligence, and civil conversion. The lawsuit claimed that downloading documents from a dark web site run by ransomware attackers amounted to him "interacting" with them and required special expertise and tools. The suit went on to challenge Ross alerting reporters to the information, which it claimed would not be easily obtained by others. "Only individuals willing to navigate and interact with the criminal element on the dark web, who also have the computer expertise and tools necessary to download data from the dark web, would be able to do so," city attorneys wrote. "The dark web-posted data is not readily available for public consumption. Defendant is making it so." The same day, a Franklin County judge granted the city's motion for a temporary restraining order (PDF) against Ross. It bars the researcher "from accessing, and/or downloading, and/or disseminating" any city files that were posted to the dark web. The motion was made and granted "ex parte," meaning in secret before Ross was informed of it or had an opportunity to present his case.

Data Storage

FBI Is Sloppy On Secure Data Storage and Destruction, Warns Watchdog (theregister.com) 11

The Register's Iain Thomson reports: The FBI has made serious slip-ups in how it processes and destroys electronic storage media seized as part of investigations, according to an audit by the Department of Justice Office of the Inspector General. Drives containing national security data, Foreign Intelligence Surveillance Act information, and documents classified as Secret were routinely unlabeled, opening the potential for them to be either lost or stolen, the report [PDF] addressed to FBI Director Christopher Wray states. Ironically, this lack of identification might be considered a benefit, given the lax security at the FBI's facility used to destroy such media after they have been finished with.

The OIG report notes that it found boxes of hard drives and removable storage sitting open and unattended for "days or even weeks" because they were only sealed once the boxes were full. This potentially allows any of the 395 staff and contractors with access to the facility to have a rummage around. To deal with this, the FBI is installing wire cages to lock away storage media. In December, the bureau said it would install a video surveillance system at the evidence destruction storage facility to tighten security. As of June this year, it was still processing the paperwork to do so. The OIG also found that FBI agents aren't tracking hard drives and removable storage sent into the central office and the destruction facility. Typically, seized computers are tagged for tracking, but as a cost-saving measure, agents are advised to send in media storage devices containing national security information without the chassis. While there is a requirement to tag removable storage, there isn't the same requirement for internal hard drives. [...]

The FBI has assured the regulator that it has the problem in hand and has drafted a Physical Control and Destruction of Classified and Sensitive Electronic Devices and Material Policy Directive, which will require data to be marked up and destroyed safely. The agency says this policy is in the final editing stage and will be issued as soon as possible.

Space

The Wow! Signal Deciphered. It Was Hydrogen All Along. (universetoday.com) 32

The Wow! signal, detected on August 15, 1977, was an intense radio transmission that appeared artificial and raised the possibility of extraterrestrial contact. However, recent research suggests it may have been caused by a natural astrophysical event involving a magnetar flare striking a hydrogen cloud. Universe Today reports: New research shows that the Wow! Signal has an entirely natural explanation. The research is "Arecibo Wow! I: An Astrophysical Explanation for the Wow! Signal." The lead author is Abel Mendez from the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo. It's available at the pre-print server arxiv.org. Arecibo Wow! is a new effort based on an archival study of data from the now-defunct Arecibo Radio Telescope from 2017 to 2020. The observations from Arecibo are similar to those from Big Ear but "are more sensitive, have better temporal resolution, and include polarization measurements," according to the authors. "Our latest observations, made between February and May 2020, have revealed similar narrowband signals near the hydrogen line, though less intense than the original Wow! Signal," said Mendez.

Arecibo detected signals similar to the Wow! signal but with some differences. They're far less intense and come from multiple locations. The authors say these signals are easily explained by an astrophysical phenomenon and that the original Wow! signal is, too. "We hypothesize that the Wow! Signal was caused by sudden brightening from stimulated emission of the hydrogen line due to a strong transient radiation source, such as a magnetar flare or a soft gamma repeater (SGR)," the researchers write. Those events are rare and rely on precise conditions and alignments. They can cause clouds of hydrogen to brighten considerably for seconds or even minutes.

The researchers say that what Big Ear saw in 1977 was the transient brightening of one of several H1 (neutral hydrogen) clouds in the telescope's line of sight. The 1977 signal was similar to what Arecibo saw in many respects. "The only difference between the signals observed in Arecibo and the Wow! Signal is their brightness. It is precisely the similarity between these spectra that suggests a mechanism for the origin of the mysterious signal," the authors write. Detections are so rare because the precise spatial alignment between source, cloud, and observer is rare. The researchers were able to identify the clouds responsible for the signal but not the source. Their results suggest that the source is much more distant than the clouds that produce the hydrogen signal. "Given the detectability of the clouds as demonstrated in our data, this insight could enable precise location of the signal's origin and permit continuous monitoring for subsequent events," the researchers explain.

IT

110K Domains Targeted in 'Sophisticated' AWS Cloud Extortion Campaign (theregister.com) 33

A sophisticated extortion campaign has targeted 110,000 domains by exploiting misconfigured AWS environment files, security firm Cyble reports. The attackers scanned for exposed .env files containing cloud access keys and other sensitive data. Organizations that failed to secure their AWS environments found their S3-stored data replaced with ransom notes.

The attackers used a series of API calls to verify data, enumerate IAM users, and locate S3 buckets. Though initial access lacked admin privileges, they created new IAM roles to escalate permissions. Cyble researchers noted the attackers' use of AWS Lambda functions for automated scanning operations.
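The campaign started with something mundane: publicly reachable `.env` files containing credentials. The same pattern matching the attackers automated also works defensively as a pre-deploy check. A sketch (the access-key prefix pattern is documented by AWS; the secret-key pattern below is a heuristic, not an official format):

```python
import re

# AWS access key IDs carry documented prefixes ("AKIA" for long-term keys,
# "ASIA" for temporary ones) followed by 16 uppercase alphanumerics.
ACCESS_KEY_RE = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")
# Heuristic: any long opaque value assigned to the secret-key variable.
SECRET_KEY_RE = re.compile(r"(?i)aws_secret_access_key\s*=\s*\S{30,}")

def scan_env_text(text: str) -> list[str]:
    """Report credential types found in .env contents -- the same
    check an exposure scanner (attacker's or defender's) performs."""
    findings = []
    if ACCESS_KEY_RE.search(text):
        findings.append("aws_access_key_id")
    if SECRET_KEY_RE.search(text):
        findings.append("aws_secret_access_key")
    return findings
```

Any hit on a file that a web server would serve publicly is exactly the foothold Cyble describes; the remedy is keeping `.env` files out of document roots and rotating any keys that were ever exposed.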
Privacy

Microsoft Copilot Studio Exploit Leaks Sensitive Cloud Data (darkreading.com) 8

An anonymous reader quotes a report from Dark Reading: Researchers have exploited a vulnerability in Microsoft's Copilot Studio tool allowing them to make external HTTP requests that can access sensitive information regarding internal services within a cloud environment -- with potential impact across multiple tenants. Tenable researchers discovered the server-side request forgery (SSRF) flaw in the chatbot creation tool, which they exploited to access Microsoft's internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances, they revealed in a blog post this week. Tracked by Microsoft as CVE-2024-38206, the flaw allows an authenticated attacker to bypass SSRF protection in Microsoft Copilot Studio to leak sensitive cloud-based information over a network, according to a security advisory associated with the vulnerability. The flaw exists when combining an HTTP request that can be created using the tool with an SSRF protection bypass, according to Tenable.

"An SSRF vulnerability occurs when an attacker is able to influence the application into making server-side HTTP requests to unexpected targets or in an unexpected way," Tenable security researcher Evan Grant explained in the post. The researchers tested their exploit to create HTTP requests to access cloud data and services from multiple tenants. They discovered that "while no cross-tenant information appeared immediately accessible, the infrastructure used for this Copilot Studio service was shared among tenants," Grant wrote. Any impact on that infrastructure, then, could affect multiple customers, he explained. "While we don't know the extent of the impact that having read/write access to this infrastructure could have, it's clear that because it's shared among tenants, the risk is magnified," Grant wrote. The researchers also found that they could use their exploit to access other internal hosts unrestricted on the local subnet to which their instance belonged. Microsoft responded quickly to Tenable's notification of the flaw, and it has since been fully mitigated, with no action required on the part of Copilot Studio users, the company said in its security advisory.
Further reading: Slack AI Can Be Tricked Into Leaking Data From Private Channels
Programming

'GitHub Actions' Artifacts Leak Tokens, Expose Cloud Services and Repositories (securityweek.com) 19

Security Week brings news about CI/CD workflows using GitHub Actions in build processes. Some workflows can generate artifacts that "may inadvertently leak tokens for third party cloud services and GitHub, exposing repositories and services to compromise, Palo Alto Networks warns." [The artifacts] function as a mechanism for persisting and sharing data across jobs within the workflow and ensure that data is available even after the workflow finishes. [The artifacts] are stored for up to 90 days and, in open source projects, are publicly available... The identified issue, a combination of misconfigurations and security defects, allows anyone with read access to a repository to consume the leaked tokens, and threat actors could exploit it to push malicious code or steal secrets from the repository. "It's important to note that these tokens weren't part of the repository code but were only found in repository-produced artifacts," Palo Alto Networks' Yaron Avital explains...

"The Super-Linter log file is often uploaded as a build artifact for reasons like debuggability and maintenance. But this practice exposed sensitive tokens of the repository." Super-Linter has been updated and no longer prints environment variables to log files.

Avital was able to identify a leaked token that, unlike the GitHub token, would not expire as soon as the workflow job ends, and automated the process that downloads an artifact, extracts the token, and uses it to replace the artifact with a malicious one. Because subsequent workflow jobs would often use previously uploaded artifacts, an attacker could use this process to achieve remote code execution (RCE) on the job runner that uses the malicious artifact, potentially compromising workstations, Avital notes.

Avital's blog post notes other variations on the attack — and "The research laid out here allowed me to compromise dozens of projects maintained by well-known organizations, including firebase-js-sdk by Google, a JavaScript package directly referenced by 1.6 million public projects, according to GitHub. Another high-profile project involved adsys, a tool included in the Ubuntu distribution used by corporations for integration with Active Directory." (Avital says the issue even impacted projects from Microsoft, Red Hat, and AWS.) "All open-source projects I approached with this issue cooperated swiftly and patched their code. Some offered bounties and cool swag."

"This research was reported to GitHub's bug bounty program. They categorized the issue as informational, placing the onus on users to secure their uploaded artifacts." My aim in this article is to highlight the potential for unintentionally exposing sensitive information through artifacts in GitHub Actions workflows. To address the concern, I developed a proof of concept (PoC) custom action that safeguards against such leaks. The action uses the @actions/artifact package, which is also used by the upload-artifact GitHub action, adding a crucial security layer by using an open-source scanner to audit the source directory for secrets and blocking the artifact upload when risk of accidental secret exposure exists. This approach promotes a more secure workflow environment...

As this research shows, we have a gap in the current security conversation regarding artifact scanning. GitHub's deprecation of Artifacts V3 should prompt organizations using the artifacts mechanism to reevaluate the way they use it. Security defenders must adopt a holistic approach, meticulously scrutinizing every stage — from code to production — for potential vulnerabilities. Overlooked elements like build artifacts often become prime targets for attackers. Reduce workflow permissions of runner tokens according to least privilege and review artifact creation in your CI/CD pipelines. By implementing a proactive and vigilant approach to security, defenders can significantly strengthen their project's security posture.
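The pre-upload audit Avital describes can be approximated with a recursive scan of the artifact directory for token-shaped strings. A sketch (GitHub's token prefixes such as `ghp_`, `ghs_`, and `github_pat_` are documented; the length bounds here are approximate, and this is not the PoC action itself):

```python
import re
from pathlib import Path

# Documented GitHub token prefixes: ghp_ (personal), ghs_ (app/server),
# gho_ (OAuth), github_pat_ (fine-grained). Length bounds are approximate.
TOKEN_RE = re.compile(
    r"\b(?:ghp|ghs|gho)_[A-Za-z0-9]{36,}|github_pat_[A-Za-z0-9_]{36,}"
)

def audit_artifact_dir(root: str) -> list[tuple[str, str]]:
    """Scan every file under `root` for token-like strings before upload;
    a workflow would block the artifact upload if any are returned."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for match in TOKEN_RE.finditer(text):
            findings.append((str(path), match.group(0)))
    return findings
```

Running a check like this on linter logs and build output before they leave the runner would have caught the Super-Linter leak described above.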

The blog post also notes protection and mitigation features from Palo Alto Networks....
Privacy

National Public Data Confirms Breach Exposing Social Security Numbers (bleepingcomputer.com) 56

BleepingComputer's Ionut Ilascu reports: Background check service National Public Data confirms that hackers breached its systems after threat actors leaked a stolen database with millions of social security numbers and other sensitive personal information. The company states that the breached data may include names, email addresses, phone numbers, social security numbers (SSNs), and postal addresses.

In the statement disclosing the security incident, National Public Data says that "the information that was suspected of being breached contained name, email address, phone number, social security number, and mailing address(es)." The company acknowledges the "leaks of certain data in April 2024 and summer 2024" and believes the breach is associated with a threat actor "that was trying to hack into data in late December 2023." NPD says they investigated the incident, cooperated with law enforcement, and reviewed the potentially affected records. If significant developments occur, the company "will try to notify" the impacted individuals.

The Almighty Buck

US Fines T-Mobile $60 Million, Its Largest Penalty Ever, Over Unauthorized Data Access (reuters.com) 12

The U.S. Committee on Foreign Investment (CFIUS) fined T-Mobile $60 million, its largest penalty ever, for failing to prevent and report unauthorized access to sensitive data tied to violations of a mitigation agreement from its 2020 merger with Sprint. "The size of the fine, and CFIUS's unprecedented decision to make it public, show the committee is taking a more muscular approach to enforcement as it seeks to deter future violations," reports Reuters. From the report: T-Mobile said in a statement that it experienced technical issues during its post-merger integration with Sprint that affected "information shared from a small number of law enforcement information requests." It stressed that the data never left the law enforcement community, was reported "in a timely manner" and was "quickly addressed." The failure of T-Mobile to report the incidents promptly delayed CFIUS' efforts to investigate and mitigate any potential harm to U.S. national security, they added, without providing further details. "The $60 million penalty announcement highlights the committee's commitment to ramping up CFIUS enforcement by holding companies accountable when they fail to comply with their obligations," one of the U.S. officials said, adding that transparency around enforcement actions incentivizes other companies to comply with their obligations.

Privacy

Federal Appeals Court Finds Geofence Warrants Are 'Categorically' Unconstitutional (eff.org) 41

An anonymous reader quotes a report from the Electronic Frontier Foundation (EFF): In a major decision on Friday, the federal Fifth Circuit Court of Appeals held (PDF) that geofence warrants are "categorically prohibited by the Fourth Amendment." Closely following arguments EFF has made in a number of cases, the court found that geofence warrants constitute the sort of "general, exploratory rummaging" that the drafters of the Fourth Amendment intended to outlaw. EFF applauds this decision because it is essential that every person feels like they can simply take their cell phone out into the world without the fear that they might end up a criminal suspect because their location data was swept up in an open-ended digital dragnet. The new Fifth Circuit case, United States v. Smith, involved an armed robbery and assault of a US Postal Service worker at a post office in Mississippi in 2018. After several months of investigation, police had no identifiable suspects, so they obtained a geofence warrant covering a large geographic area around the post office for the hour surrounding the crime. Google responded to the warrant with information on several devices, ultimately leading police to the two defendants.

On appeal, the Fifth Circuit reached several important holdings. First, it determined that under the Supreme Court's landmark ruling in Carpenter v. United States, individuals have a reasonable expectation of privacy in the location data implicated by geofence warrants. As a result, the court broke from the Fourth Circuit's deeply flawed decision last month in United States v. Chatrie, noting that although geofence warrants can be more "limited temporally" than the data sought in Carpenter, geofence location data is still highly invasive because it can expose sensitive information about a person's associations and allow police to "follow" them into private spaces. Second, the court found that even though investigators seek warrants for geofence location data, these searches are inherently unconstitutional. As the court noted, geofence warrants require a provider, almost always Google, to search "the entirety" of its reserve of location data "while law enforcement officials have no idea who they are looking for, or whether the search will even turn up a result." Therefore, "the quintessential problem with these warrants is that they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search. That is constitutionally insufficient."

Unsurprisingly, however, the court found that in 2018, police could have relied on such a warrant in "good faith," because geofence technology was novel, and police reached out to other agencies with more experience for guidance. This means that the evidence they obtained will not be suppressed in this case.

Crime

Cyber-Heist of 2.9 Billion Personal Records Leads to Class Action Lawsuit (theregister.com) 18

"A lawsuit has accused a Florida data broker of carelessly failing to secure billions of records of people's private information," reports the Register, "which was subsequently stolen from the biz and sold on an online criminal marketplace." California resident Christopher Hofmann filed the potential class-action complaint against Jerico Pictures, doing business as National Public Data, a Coral Springs-based firm that provides APIs so that companies can perform things like background checks on people and look up folks' criminal records. As such National Public Data holds a lot of highly personal information, which ended up being stolen in a cyberattack. According to the suit, filed in a southern Florida federal district court, Hofmann is one of the individuals whose sensitive information was pilfered by crooks and then put up for sale for $3.5 million on an underworld forum in April.

If the thieves are to be believed, the database included 2.9 billion records on all US, Canadian, and British citizens, and included their full names, addresses, and address history going back at least three decades, social security numbers, and the names of their parents, siblings, and relatives, some of whom have been dead for nearly 20 years.

Hofmann's lawsuit says he "believes that his personally identifiable information was scraped from non-public sources," according to the article — which adds that Hofmann "claims he never provided this sensitive info to National Public Data..."

"The Florida firm stands accused of negligently storing the database in a way that was accessible to the thieves, without encrypting its contents nor redacting any of the individuals' sensitive information." Hofmann, on behalf of potentially millions of other plaintiffs, has asked the court to require National Public Data to destroy all personal information belonging to the class-action members and use encryption, among other data protection methods in the future... Additionally, it seeks unspecified monetary relief for the data theft victims, including "actual, statutory, nominal, and consequential damages."

The Courts

US Sues TikTok Over 'Massive-Scale' Privacy Violations of Kids Under 13 (reuters.com) 10

An anonymous reader quotes a report from Reuters: The U.S. Justice Department filed a lawsuit Friday against TikTok and parent company ByteDance for failing to protect children's privacy on the social media app as the Biden administration continues its crackdown on the company. The government said TikTok violated the Children's Online Privacy Protection Act that requires services aimed at children to obtain parental consent to collect personal information from users under age 13. The suit (PDF), which was joined by the Federal Trade Commission, said it was aimed at putting an end "to TikTok's unlawful massive-scale invasions of children's privacy." Representative Frank Pallone, the top Democrat on the Energy and Commerce Committee, said the suit "underscores the importance of divesting TikTok from Chinese Communist Party control. We simply cannot continue to allow our adversaries to harvest vast troves of Americans' sensitive data."

The DOJ said TikTok knowingly permitted children to create regular TikTok accounts, and then create and share short-form videos and messages with adults and others on the regular TikTok platform. TikTok collected personal information from these children without obtaining consent from their parents. The U.S. alleges that for years millions of American children under 13 have been using TikTok and the site "has been collecting and retaining children's personal information." The FTC is seeking penalties of up to $51,744 per violation per day from TikTok for improperly collecting data, which could theoretically total billions of dollars if TikTok were found liable.

TikTok said Friday it disagrees "with these allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed. We are proud of our efforts to protect children, and we will continue to update and improve the platform."

AI

Meta's AI Safety System Defeated By the Space Bar (theregister.com) 22

Thomas Claburn reports via The Register: Meta's machine-learning model for detecting prompt injection attacks -- special prompts to make neural networks behave inappropriately -- is itself vulnerable to, you guessed it, prompt injection attacks. Prompt-Guard-86M, introduced by Meta last week in conjunction with its Llama 3.1 generative model, is intended "to help developers detect and respond to prompt injection and jailbreak inputs," the social network giant said. Large language models (LLMs) are trained with massive amounts of text and other data, and may parrot it on demand, which isn't ideal if the material is dangerous, dubious, or includes personal info. So makers of AI models build filtering mechanisms called "guardrails" to catch queries and responses that may cause harm, such as those revealing sensitive training data on demand, for example. Those using AI models have made it a sport to circumvent guardrails using prompt injection -- inputs designed to make an LLM ignore its internal system prompts that guide its output -- or jailbreaks -- input designed to make a model ignore safeguards. [...]

It turns out Meta's Prompt-Guard-86M classifier model can be asked to "Ignore previous instructions" if you just add spaces between the letters and omit punctuation. Aman Priyanshu, a bug hunter with enterprise AI application security shop Robust Intelligence, recently found the safety bypass when analyzing the embedding weight differences between Meta's Prompt-Guard-86M model and Redmond's base model, microsoft/mdeberta-v3-base. "The bypass involves inserting character-wise spaces between all English alphabet characters in a given prompt," explained Priyanshu in a GitHub Issues post submitted to the Prompt-Guard repo on Thursday. "This simple transformation effectively renders the classifier unable to detect potentially harmful content."

"Whatever nasty question you'd like to ask right, all you have to do is remove punctuation and add spaces between every letter," Hyrum Anderson, CTO at Robust Intelligence, told The Register. "It's very simple and it works. And not just a little bit. It went from something like less than 3 percent to nearly a 100 percent attack success rate."
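The transformation described is trivial to reproduce. Here is a minimal sketch of the reported bypass; the `space_out` helper and its regex are illustrative assumptions, not code from the report, and the classifier itself is of course not included:

```python
import re

def space_out(prompt: str) -> str:
    # Drop punctuation, keeping only letters and whitespace.
    stripped = re.sub(r"[^A-Za-z\s]", "", prompt)
    # Insert a space between every letter of every word, so a classifier
    # matching on whole tokens no longer recognizes the original phrase.
    return "  ".join(" ".join(word) for word in stripped.split())

print(space_out("Ignore previous instructions."))
# I g n o r e  p r e v i o u s  i n s t r u c t i o n s
```

The spaced-out text still reads naturally to a human (and, evidently, to the downstream LLM), while the guard model's learned token patterns no longer fire on it.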

The Courts

Courts Close the Loophole Letting the Feds Search Your Phone At the Border (reason.com) 46

On Wednesday, Judge Nina Morrison ruled that cellphone searches at the border are "nonroutine" and require probable cause and a warrant, likening them to more invasive searches due to their heavy privacy impact. As reported by Reason, this decision closes the loophole in the Fourth Amendment's protection against unreasonable searches and seizures, which Customs and Border Protection (CBP) agents have exploited. Courts have previously ruled that the government has the right to conduct routine warrantless searches for contraband at the border. From the report: Although the interests of stopping contraband are "undoubtedly served when the government searches the luggage or pockets of a person crossing the border carrying objects that can only be introduced to this country by being physically moved across its borders, the extent to which those interests are served when the government searches data stored on a person's cell phone is far less clear," the judge declared. Morrison noted that "reviewing the information in a person's cell phone is the best approximation government officials have for mindreading," so searching through cellphone data has an even heavier privacy impact than rummaging through physical possessions. Therefore, the court ruled, a cellphone search at the border requires both probable cause and a warrant. Morrison did not distinguish between scanning a phone's contents with special software and manually flipping through it.

And in a victory for journalists, the judge specifically acknowledged the First Amendment implications of cellphone searches too. She cited reporting by The Intercept and VICE about CBP searching journalists' cellphones "based on these journalists' ongoing coverage of politically sensitive issues" and warned that those phone searches could put confidential sources at risk. Wednesday's ruling adds to a stream of cases restricting the feds' ability to search travelers' electronics. The 4th and 9th Circuits, which cover the mid-Atlantic and Western states, have ruled that border police need at least "reasonable suspicion" of a crime to search cellphones. Last year, a judge in the Southern District of New York also ruled (PDF) that the government "may not copy and search an American citizen's cell phone at the border without a warrant absent exigent circumstances."

Privacy

Data From Deleted GitHub Repos May Not Actually Be Deleted, Researchers Claim (theregister.com) 23

Thomas Claburn reports via The Register: Researchers at Truffle Security have found, or arguably rediscovered, that data from deleted GitHub repositories (public or private) and from deleted copies (forks) of repositories isn't necessarily deleted. Joe Leon, a security researcher with the outfit, said in an advisory on Wednesday that being able to access deleted repo data -- such as API keys -- represents a security risk. And he proposed a new term to describe the alleged vulnerability: Cross Fork Object Reference (CFOR). "A CFOR vulnerability occurs when one repository fork can access sensitive data from another fork (including data from private and deleted forks)," Leon explained.

For example, the firm showed how one can fork a repository, commit data to it, delete the fork, and then access the supposedly deleted commit data via the original repository. The researchers also created a repo, forked it, and showed how data not synced with the fork continues to be accessible through the fork after the original repo is deleted. You can watch that particular demo [here].
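The access pattern behind these demos can be sketched in a few lines. Under GitHub's documented fork-network behavior, a commit pushed to any repository in the network remains viewable through the upstream repository's URL if its SHA is known; the owner, repository, and SHA below are hypothetical placeholders, not real leaked data:

```python
def cross_fork_commit_url(upstream_owner: str, upstream_repo: str, sha: str) -> str:
    """Build the URL at which a commit from any repo in a fork network --
    including a since-deleted fork -- can be viewed via the upstream repo."""
    return f"https://github.com/{upstream_owner}/{upstream_repo}/commit/{sha}"

# Hypothetical example: a secret committed only in a deleted fork stays
# reachable through the original repository as long as the SHA is known.
print(cross_fork_commit_url("example-org", "example-repo", "abc1234"))
# https://github.com/example-org/example-repo/commit/abc1234
```

Because fork networks share one object store server-side, deleting the fork removes the pointer but not the commit objects, which is why the supposedly deleted data keeps resolving at the upstream URL.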

According to Leon, this scenario came up last week with the submission of a critical vulnerability report to a major technology company involving a private key for an employee GitHub account that had broad access across the organization. The key had been publicly committed to a GitHub repository. Upon learning of the blunder, the tech biz nuked the repo thinking that would take care of the leak. "They immediately deleted the repository, but since it had been forked, I could still access the commit containing the sensitive data via a fork, despite the fork never syncing with the original 'upstream' repository," Leon explained. Leon added that after reviewing three widely forked public repos from large AI companies, Truffle Security researchers found 40 valid API keys from deleted forks.

GitHub said it considers this situation a feature, not a bug: "GitHub is committed to investigating reported security issues. We are aware of this report and have validated that this is expected and documented behavior inherent to how fork networks work. You can read more about how deleting or changing visibility affects repository forks in our [documentation]."

Truffle Security argues that GitHub should reconsider its position "because the average user expects there to be a distinction between public and private repos in terms of data security, which isn't always true," reports The Register. "And there's also the expectation that the act of deletion should remove commit data, which again has been shown to not always be the case."

Slashdot Top Deals