Privacy

Meta Is Warned That Facial Recognition Glasses Will Arm Sexual Predators (wired.com) 82

An anonymous reader quotes a report from Wired: More than 70 civil liberties, domestic violence, reproductive rights, LGBTQ+, labor, and immigrant advocacy organizations are demanding that Meta abandon plans to deploy face recognition on its Ray-Ban and Oakley smart glasses, warning that the feature -- reportedly known inside the company as "Name Tag" -- would hand stalkers, abusers, and federal agents the ability to silently identify strangers in public. The coalition, which includes the ACLU, the Electronic Privacy Information Center, Fight for the Future, Access Now, and the Leadership Conference on Civil and Human Rights, is demanding Meta kill the feature before launch, after internal documents surfaced showing the company hoped to use the current "dynamic political environment" as cover for the rollout, betting that civil society groups would have their resources "focused on other concerns."

Name Tag, as revealed in February by The New York Times, would work through the artificial intelligence assistant built into Meta's smart glasses, allowing wearers to pull up information about people in their field of view. Engineers have reportedly been weighing two versions of the feature: one that would only identify people the wearer is already connected to on a Meta platform, and a broader version that could recognize anyone with a public account on a Meta service such as Instagram. The coalition wants Meta to scrap the feature entirely. In a letter to CEO Mark Zuckerberg on Monday, it argues that face recognition in inconspicuous consumer eyewear "cannot be resolved through product design changes, opt-out mechanisms, or incremental safeguards." Bystanders in public have no meaningful way to consent to being identified, it says.

Meta is also urged to disclose any known instances of its wearables being used in stalking, harassment, or domestic violence cases; disclose any past or ongoing discussions with federal law enforcement agencies, including Immigration and Customs Enforcement and Customs and Border Protection, about the use of Meta wearables or data from them; and commit to consulting civil society and independent privacy experts before integrating biometric identification into any consumer device. "People should be able to move through their daily lives without fear that stalkers, scammers, abusers, federal agents, and activists across the political spectrum are silently and invisibly verifying their identities and potentially matching their names to a wealth of readily available data about their habits, hobbies, relationships, health, and behaviors," write the groups, which also include Common Cause, Jane Doe Inc., UltraViolet, the National Organization for Women, the New York State Coalition Against Domestic Violence, the Library Freedom Project, and Old Dykes Against Billionaire Tech Bros, among others.

Privacy

New Company Hopes to Build Age-Verification Tech into Vape Cartridges (wired.com) 103

A new joint venture aims to use biometric data and blockchain to build age-verification measures directly into disposable vape cartridges.

Wired reports on a partnership, named "Ike Tech," between vape/cartridge manufacturer Ispire Technology and Chemular, a regulatory consulting company that specializes in the nicotine market: [Using blockchain-based security, the e-cig cartridge] would use a camera to scan some form of ID and then also take a video of the user's face. Once it verifies your identity and determines you're old enough to vape, it translates that information into anonymized tokens. That info goes to an identity service like ID.me or Clear. If approved, it bounces back to the app, which then uses a Bluetooth signal to give the vape the OK to turn on.

"Everything is tokenized," [says Ispire CEO Michael Wang]. "As a result of this process, we don't communicate consumer personal private information." He says the process takes about a minute and a half... After that onetime check, the Bluetooth connection on the phone will recognize when the vape cartridge is nearby and keep it unlocked. Move the vape too far away from the phone, and it shuts off again. Based on testing, the companies behind Ike Tech claim this process has a 100 percent success rate in age verification, more or less calling the tech infallible. "The FDA told us it's the holy grail technology they were looking for," Wang says. "That's word-for-word what they said when we met with them...."

Wang says the goal is to implement additional features in the verification process, like geo-fencing, which would force the vape to shut off while near a school or on an airplane. In the future, the plan is to license this biometric verification tech to other e-cig companies. The tech may also grow to include fingerprint readers and expand to other product categories; Wang suggests guns, which have a long history of age-verification features not quite working.

Privacy

Reddit Takes On Bots With 'Human Verification' Requirements (techcrunch.com) 75

Reddit is rolling out human-verification checks for accounts that show signs of bot-like behavior, while also labeling approved automated accounts that provide useful services. The social media company stressed that these checks will only happen if something appears "fishy," and that it is "not conducting sitewide human verification." TechCrunch reports: To identify potential bots, Reddit is using specialized tooling that looks at account-level signals and other factors -- like how quickly the account is attempting to write or post content. Using AI to write posts or comments, however, is not against its policies (though community moderators may set their own rules).

To verify an account is human, Reddit will leverage third-party tools such as passkeys from Apple, Google, and YubiKey, biometric services like Face ID, or even Sam Altman's World ID -- or, in some countries, government IDs. Reddit notes this last category may be required in some countries like the U.K. and Australia and some U.S. states, because of local regulations on age verification, but it's not the company's preferred method.
"If we need to verify an account is human, we'll do it in a privacy-first way," Reddit co-founder and CEO Steve Huffman wrote in the announcement Wednesday. "Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn't have to sacrifice one for the other."
IOS

iPhone and iPad Are First Consumer Devices Cleared for NATO Classified Data (macrumors.com) 27

Apple's iPhone and iPad running iOS 26 and iPadOS 26 have become the first consumer mobile devices cleared for NATO-restricted classified data. No special software or settings are required. MacRumors reports: Apple's devices are the first and only consumer mobile products that have reached this government certification level after security testing and evaluation by the German government. iPhones and iPads running iOS 26 and iPadOS 26 are now certified for use with classified data in all NATO nations.

In an announcement of the security clearance, Apple touted its security features: "Apple designs security into all of its products from the start, ensuring the most sophisticated protections are built in across hardware, software, and Apple silicon. This unique approach allows Apple users to benefit from industry-leading security protections such as best-in-class encryption, biometric authentication with Face ID, and groundbreaking features like Memory Integrity Enforcement. These same protections are now recognized as meeting stringent government and international security requirements, even for restricted data."

Electronic Frontier Foundation

Congress Wants To Hand Your Parenting To Big Tech 53

An anonymous reader quotes a report from the Electronic Frontier Foundation (EFF): Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing [Friday] on "examining the effect of technology on America's youth." Witnesses warned about "addictive" online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and "empower parents."

That's a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill's press release contains soothing language, KOSMA doesn't actually give parents more control. Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That's right -- this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem. [...] This bill doesn't just set an age rule. It creates a legal duty for platforms to police families. Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it "shall terminate any existing account or profile" belonging to that user. And "knows" doesn't just mean someone admits their age. The bill defines knowledge to include what is "fairly implied on the basis of objective circumstances" -- in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13.

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won't be kids sneaking around -- it will be minors who are following their parents' guidance, and the parents themselves. Imagine a child using their parent's YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, "Cool video -- I'll show this to my 6th grade teacher!" and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn't matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a "family" account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That's more than enough legal risk to make platforms err on the side of cutting people off. Platforms have no way to remove "just the kid" from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child's use, KOSMA forces Big Tech to override that family decision. [...] These companies don't know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems.
Medicine

The Swedish Start-Up Aiming To Conquer America's Full-Body-Scan Craze (nytimes.com) 23

An anonymous reader quotes a report from DealBook: Fifteen years ago, Daniel Ek broke into America's digital-content wars with his streaming music start-up, Spotify, which has turned into a publicly traded company with a $110 billion market value. Now he and his business partner, the Swedish entrepreneur Hjalmar Nilsonne, aim to crack a higher-stakes consumer market: American health care. The pair plan to bring Neko Health, the health tech start-up they founded in 2018, to New York this spring, DealBook is first to report.

Mr. Ek and Mr. Nilsonne hope to capitalize on the growing number of prevention-minded Americans who are hungry to track their biometric data. Whether through wearables like Oura rings or more intensive screenings, consumers are turning to technology to improve their health and help spot the early onset of some big killers, including cardiovascular and metabolic diseases. The United States will be the third market, after Sweden and Britain, for Neko Health, which offers full-body diagnostic scans and is valued at roughly $1.7 billion.

[...] Mr. Nilsonne and Mr. Ek said Neko Health's big aim was to change the health care model, in which spending across much of the developed world has skyrocketed while longevity gains have stalled. They want to make their noninvasive scans as routine as an annual checkup. The company, which advertises its service as "a health check for your future self," did not say what the U.S. scans would cost. But in Stockholm, an hourlong visit at one of its clinics costs 2,750 Swedish krona (about $300). Prenuvo's and Ezra's most comprehensive scans can cost $3,999.

[...] Neko Health's technology differs from that of many of its U.S. rivals. It does not use M.R.I. or X-rays, instead relying on scores of sensors and cameras and a mix of proprietary and off-the-shelf technologies to measure heart function and circulation, and to photograph and map every inch of a patient's body looking for cancerous lesions. At the moment, the company's biggest challenge is scaling.

[...] Mr. Nilsonne said Neko Health scans have detected the early onset of diseases or serious medical conditions for thousands of its patients. But the medical community is divided on the need for proactive screening technologies. The fear is that mass adoption could spur a wave of false positives and send healthy people to seek follow-up medical advice, overwhelming an already swamped health care system. Mr. Ek and Mr. Nilsonne believe they have built a better solution. And now they're ready to test it in the U.S. market.

United Kingdom

UK Scraps Mandatory Digital ID Enrollment for Workers After Public Backlash (bbc.com) 33

The UK government has abandoned its controversial plan to require workers to sign up for a mandatory digital ID system to prove their eligibility to work in the country, opting instead to move existing document-based checks -- such as biometric passports -- fully online by 2029.

The reversal follows a dramatic collapse in public support; polling showed approval falling from just over half the population in June to less than a third after Prime Minister Keir Starmer's announcement. Nearly 3 million people signed a parliamentary petition opposing the scheme. The government says it remains committed to mandatory digital right-to-work checks but will no longer require enrollment in a new ID system.
Privacy

NYC Wegmans Is Storing Biometric Data On Shoppers' Eyes, Voices and Faces (gothamist.com) 26

schwit1 shares a report from Gothamist: Wegmans in New York City has begun collecting biometric data from anyone who enters its supermarkets, according to new signage posted at the chain's Manhattan and Brooklyn locations earlier this month. Anyone entering the store could have data on their face, eyes and voices collected and stored by the Rochester-headquartered supermarket chain. The information is used to "protect the safety and security of our patrons and employees," according to the signage. The new scanning policy is an expansion of a 2024 pilot.

The chain had initially said that the scanning system was only for a small group of employees and promised to delete any biometric data it collected from shoppers during the pilot rollout. The new notice makes no such assurances. Wegmans representatives did not reply to questions about how the data would be stored, why it changed its policy or if it would share the data with law enforcement.

United States

You Can't Refuse To Be Scanned by ICE's Facial Recognition App, DHS Document Says (404media.co) 202

An anonymous reader shares a report: Immigration and Customs Enforcement (ICE) does not let people decline to be scanned by its new facial recognition app, which the agency uses to verify a person's identity and their immigration status, according to an internal Department of Homeland Security (DHS) document obtained by 404 Media. The document also says any face photos taken by the app, called Mobile Fortify, will be stored for 15 years, including those of U.S. citizens.

The document provides new details about the technology behind Mobile Fortify, how the data it collects is processed and stored, and DHS's rationale for using it. On Wednesday 404 Media reported that both ICE and Customs and Border Protection (CBP) are scanning people's faces in the streets to verify citizenship.

"ICE does not provide the opportunity for individuals to decline or consent to the collection and use of biometric data/photograph collection," the document, called a Privacy Threshold Analysis (PTA), says. A PTA is a document that DHS creates in the process of deploying new technology or updating existing capabilities. It is supposed to be used by DHS's internal privacy offices to determine and describe the privacy risks of a certain piece of tech. "CBP and ICE Privacy are jointly submitting this new mobile app PTA for the ICE Mobile Fortify Mobile App (Mobile Fortify app), a mobile application developed by CBP and made accessible to ICE agents and officers operating in the field," the document, dated February, reads. 404 Media obtained the document (which you can see here) via a Freedom of Information Act (FOIA) request with CBP.

Privacy

Amazon's Ring Plans to Scan Everyone's Face at the Door (msn.com) 106

Amazon will be adding facial recognition to its camera-equipped Ring doorbells for the first time in December, according to the Washington Post.

"While the feature will be optional for Ring device owners, privacy advocates say it's unfair that wherever the technology is in use, anyone within sight will have their faces scanned to determine who's a friend or stranger." The Ring feature is "invasive for anyone who walks within range of your Ring doorbell," said Calli Schroeder, senior counsel at the consumer advocacy and policy group Electronic Privacy Information Center. "They are not consenting to this." Ring spokeswoman Emma Daniels said that Ring's features empower device owners to be responsible users of facial recognition and to comply with relevant laws that "may require obtaining consent prior to identifying people..."

Other companies, including Google, already offer facial recognition for connected doorbells and cameras. You might use similar technology to unlock your iPhone or tag relatives in digital photo albums. But privacy watchdogs said that Ring's use of facial recognition poses added risks, because the company's products are embedded in our neighborhoods and have a history of raising social, privacy and legal questions... It's typically legal to film in public places, including your doorway. And in most of the United States, your permission is not legally required to collect or use your faceprint. Privacy experts said that Ring's use of the technology risks crossing ethical boundaries because of its potential for widespread use in residential areas without people's knowledge or consent.

You choose to unlock your iPhone by scanning your face. A food delivery courier, a child selling candy or someone walking by on the sidewalk is not consenting to have their face captured, stored and compared against Ring's database, said Adam Schwartz, privacy litigation director for the consumer advocacy group Electronic Frontier Foundation. "It's troubling that companies are making a product that by design is taking biometric information from people who are doing the innocent act of walking onto a porch," he said.

Ring's spokesperson said facial recognition won't be available in some locations, according to the article, including Texas and Illinois, which passed laws fining companies for collecting face information without permission. But the Washington Post heard another possible worst-case scenario from Calli Schroeder, senior counsel at the consumer advocacy and policy group Electronic Privacy Information Center: databases of identified faces being stolen by cyberthieves, misused by Ring employees, or shared with outsiders such as law enforcement.

Amazon says they're "reuniting lost dogs through the power of AI," in their announcement this week, thanks to "an AI-powered community feature that enables your outdoor Ring cameras to help reunite lost dogs with their families... When a neighbor reports a lost dog in the Ring app, nearby outdoor Ring cameras automatically begin scanning for potential matches."

Amazon calls it an example of their vision for "tools that make it easier for neighbors to look out for each other, and create safer, more connected communities." There's also 10x zoom, enhanced low-light performance, 2K and 4K resolutions, and "advanced AI tuning" for video...
EU

Switzerland Approves Digital ID In Narrow Vote, UK Proposes One Too (theguardian.com) 63

"Swiss voters have backed plans for electronic identity cards by a wafer-thin margin," reports the Guardian, "in the second nationwide vote on the issue." In a referendum on Sunday, 50.4% of voters supported an electronic ID card, while 49.6% were against, confounding pollsters who had forecast stronger support for the "yes" vote. Turnout was 49.55%, higher than expected... [V]oters rejected an earlier version of the e-ID in 2021, largely over objections to the role of private companies in the system. In response to these concerns, the Swiss state will now provide the e-ID, which will be optional and free of charge... To ensure security the e-ID is linked to a single smartphone, users will have to get a new e-ID if they change their device... An ID card containing biometric data — fingerprints — will be available from the end of next year.

Critics of the e-ID scheme raised data protection concerns and said it opened the door to mass surveillance. They also fear the voluntary scheme will become mandatory and disadvantage people without smartphones. The referendum was called after a coalition of rightwing and data-privacy parties collected more than 50,000 signatures against e-ID cards, triggering the vote.

"To further ease privacy concerns, a particular authority seeking information on a person — such as proof of age or nationality, for example — will only be able to check for those specific details," notes the BBC: Supporters of the Swiss system say it will make life much easier for everyone, allowing a range of bureaucratic procedures — from getting a telephone contract to proving you are old enough to buy a bottle of wine — to happen quickly online. Opponents of digital ID cards, who gathered enough signatures to force another referendum on the issue, argue that the measure could still undermine individual privacy. They also fear that, despite the new restrictions on how data is collected and stored, it could still be used to track people and for marketing purposes.
The BBC adds that the UK government also announced plans earlier this week to introduce its own digital ID, "which would be mandatory for employment. The proposed British digital ID would have fewer intended uses than the Swiss version, but has still raised concerns about privacy and data security."

The Guardian reports: The referendum came soon after the UK government announced plans for a digital ID card, which would sit in the digital wallets of smartphones, using state-of-the-art encryption. More than 1.6 million people have signed a petition opposing e-ID cards, which would be mandatory for people working in the UK by 2029.
Thanks to long-time Slashdot reader schwit1 for sharing the news.
The Almighty Buck

Vietnam Shuts Down Millions of Bank Accounts Over Biometric Rules (icobench.com) 23

Longtime Slashdot reader schwit1 shares a report from ICO Bench: As of September 1, 2025, banks across Vietnam are closing accounts deemed inactive or non-compliant with new biometric rules. Authorities estimate that more than 86 million accounts out of roughly 200 million are at risk if users fail to update their identity verification.

The State Bank of Vietnam has also introduced stricter thresholds for transactions:
- Facial authentication is mandatory for online transfers above 10 million VND (about $379).
- Cumulative daily transfers over 20 million VND ($758) also require biometric approval.
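The two thresholds above combine into a simple rule. The sketch below is my own illustration of that logic, with invented names; the actual banking systems obviously implement this differently.

```python
# A minimal sketch of the State Bank of Vietnam's transfer thresholds as
# described above. Function and variable names are invented for illustration.

PER_TRANSFER_LIMIT_VND = 10_000_000      # ~$379: facial auth above this per transfer
DAILY_CUMULATIVE_LIMIT_VND = 20_000_000  # ~$758: facial auth once the daily total exceeds this

def requires_facial_auth(amount_vnd: int, total_today_vnd: int) -> bool:
    """Return True if this transfer must be approved with facial authentication.

    amount_vnd      -- size of the transfer being attempted
    total_today_vnd -- sum of transfers already made today, before this one
    """
    if amount_vnd > PER_TRANSFER_LIMIT_VND:
        return True
    if total_today_vnd + amount_vnd > DAILY_CUMULATIVE_LIMIT_VND:
        return True
    return False
```

For example, a single 5 million VND transfer on a fresh day needs no biometric check, but a 12 million VND transfer does, as does a small transfer that pushes the day's cumulative total past 20 million VND.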

The policy is part of the central bank's broader "cashless" strategy, aimed at combating fraud, identity theft, and deepfake-enabled scams. [...] While many Vietnamese citizens have updated their biometric data without issue, the measure has disproportionately affected foreign residents and expatriates who cannot easily return to local branches and dormant accounts that had been left inactive for years.
schwit1 highlights a post on X from Bitcoin expert and TFTC.io founder Marty Bent: "If users don't comply by the 30th they'll lose their money. This is why we bitcoin."
Security

Thieves Busted After Stealing a Cellphone from a Security Expert's Wife (elpais.com) 41

They stole a woman's phone in Barcelona. Unfortunately, her husband was security consultant/penetration tester Martin Vigo, reports Spain's newspaper El Pais.

"His weeks-long investigation coincided with a massive two-year police operation between 2022 and 2024 in six countries where 17 people were arrested: Spain, Argentina, Colombia, Chile, Ecuador, and Peru...." In Vigo's case, the phone was locked and the "Find my iPhone" feature was activated... Once stolen, the phones are likely wrapped in aluminum foil to prevent the GPS from tracking their movements. "Then they go to a safe house where they are gathered together and shipped on pallets outside of Spain, to Morocco or China." This international step is vital to prevent the phone from being blocked if the thieves try to use it again. Carriers in several European countries share lists of the IMEIs (unique numbers for each device) of stolen devices so they can't be used. But Morocco, for example, doesn't share these lists. There, the phone can be reconnected...

With hundreds or thousands of stored phones, another path begins: "They try to get the PIN," says Vigo. Why the PIN? Because with the PIN, you can change the Apple password and access the device's content. The gang had created a system to send thousands of text messages like the one Vigo received. To know who to target with the bait message, the police say, "the organization performed social profiling of the victims, since, in many cases, in addition to the phone, they also had the victim's personal belongings, such as their ID." This is how they obtained the phone numbers to send the malicious SMS...

Each victim received a unique link, and the server knew which victim clicked it... With the first click, the attackers would redirect the user to a website they believed was credible, such as Apple's real iCloud site... [T]he next day you receive another text message, and you click on it, more confidently. However, that link no longer redirects you to the real Apple website, but to a flawless copy created by the criminals: that's where they ask for your PIN, and without thinking, full of hope, you enter it... "The PIN is more powerful than your fingerprint or face. With it, you can delete the victim's biometric information and add your own to access banking apps that are validated this way," says Vigo. Apple Wallet asks you to re-authenticate, and then everything is accessible...

In the press release on the case, the police explained that the gang allegedly used a total of 5,300 fake websites and illegally unlocked around 1.3 million high-end devices, about 30,000 of them in Spain.

Vigo tells El Pais that if the PIN doesn't unlock the device, the criminal gang then sends it to China to be "dismantled and then sent back to Europe for resale. The devices are increasingly valuable because they have more advanced chips, better cameras, and more expensive materials."

To render the phone untraceable in China, "they change certain components and the IMEI. It requires a certain level of sophistication: opening the phone, changing the chip..."
United States

Three US Agencies Get Failing Grades For Not Following IT Best Practices (theregister.com) 19

The Government Accountability Office has issued reports criticizing the Department of Homeland Security, Environmental Protection Agency, and General Services Administration for failing to implement critical IT and cybersecurity recommendations.

DHS leads with 43 unresolved recommendations dating to 2018, including seven priority matters. The EPA has 11 outstanding items, including failures to submit FedRAMP documentation and conduct organization-wide cybersecurity risk assessments. GSA has four pending recommendations.

All three agencies failed to properly log cybersecurity events and conduct required annual IT portfolio reviews. DHS's HART biometric program remains behind schedule without proper cost accounting or privacy controls, with all nine 2023 recommendations still open.
EU

Google Confirms It Will Sign the EU AI Code of Practice (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: In a rare move, Google has confirmed it will sign the European Union's AI Code of Practice, a framework it initially opposed for being too harsh. However, Google isn't totally on board with Europe's efforts to rein in the AI explosion. The company's head of global affairs, Kent Walker, noted that the code could stifle innovation if it's not applied carefully, and that's something Google hopes to prevent. While Google was initially opposed to the Code of Practice, Walker says the input it has provided to the European Commission has been well-received, and the result is a legal framework it believes can provide Europe with access to "secure, first-rate AI tools." The company claims that the expansion of such tools on the continent could boost the economy by 8 percent (about 1.8 trillion euros) annually by 2034.

These supposed economic gains are being dangled like bait to entice business interests in the EU to align with Google on the Code of Practice. While the company is signing the agreement, it appears interested in influencing the way it is implemented. Walker says Google remains concerned that tightening copyright guidelines and forced disclosure of possible trade secrets could slow innovation. Having a seat at the table could make it easier to bend the needle of regulation than if it followed some of its competitors in eschewing voluntary compliance. [...] The AI Code of Practice aims to provide AI firms with a bit more certainty in the face of a shifting landscape. It was developed with the input of more than 1,000 citizen groups, academics, and industry experts. The EU Commission says companies that adopt the voluntary code will enjoy a lower bureaucratic burden, easing compliance with the bloc's AI Act, which came into force last year.

Under the terms of the code, Google will have to publish summaries of its model training data and disclose additional model features to regulators. The code also includes guidance on how firms should manage safety and security in compliance with the AI Act. Likewise, it includes paths to align a company's model development with EU copyright law as it pertains to AI, a sore spot for Google and others. Companies like Meta that don't sign the code will not escape regulation. All AI companies operating in Europe will have to abide by the AI Act, which includes the most detailed regulatory framework for generative AI systems in the world. The law bans high-risk uses of AI like intentional deception or manipulation of users, social scoring systems, and real-time biometric scanning in public spaces. Companies that violate the rules in the AI Act could be hit with fines as high as 35 million euros ($40.1 million) or up to 7 percent of the offender's global revenue.

Wireless Networking

Humans Can Be Tracked With Unique 'Fingerprint' Based On How Their Bodies Block Wi-Fi Signals (theregister.com) 38

Researchers from La Sapienza University in Rome have developed "WhoFi," a system that uses the way a person's body distorts Wi-Fi signals to re-identify them across different locations -- even if they're not carrying a phone. By training a deep neural network on these subtle signal distortions, the researchers claim WhoFi is able to achieve up to 95.5% accuracy. The Register reports: "The core insight is that as a Wi-Fi signal propagates through an environment, its waveform is altered by the presence and physical characteristics of objects and people along its path," the authors state in their paper. "These alterations, captured in the form of Channel State Information (CSI), contain rich biometric information." CSI in the context of Wi-Fi devices refers to information about the amplitude and phase of electromagnetic transmissions. These measurements, the researchers say, interact with the human body in a way that results in person-specific distortions. When processed by a deep neural network, the result is a unique data signature.

Researchers proposed a similar technique, dubbed EyeFi, in 2020, and asserted it was accurate about 75 percent of the time. The Rome-based researchers who proposed WhoFi claim their technique makes accurate matches on the public NTU-Fi dataset up to 95.5 percent of the time when the deep neural network uses the transformer encoding architecture. "The encouraging results achieved confirm the viability of Wi-Fi signals as a robust and privacy-preserving biometric modality, and position this study as a meaningful step forward in the development of signal-based Re-ID systems," the authors say.
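The pipeline the paper describes -- CSI amplitude and phase distorted by a body, encoded into a fixed-length signature, then matched by similarity -- can be sketched at toy scale. Everything below (the simulated CSI, the averaging "encoder," the match threshold) is illustrative and merely stands in for the authors' transformer-based model:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_csi(person_bias, subcarriers=64, frames=100):
    """Simulate CSI: per-subcarrier amplitude and phase, with a
    person-specific bias standing in for body-induced distortion."""
    amp = 1.0 + 0.1 * rng.standard_normal((frames, subcarriers)) + person_bias
    phase = 0.1 * rng.standard_normal((frames, subcarriers)) + person_bias
    return np.concatenate([amp, phase], axis=1)  # shape: (frames, 2*subcarriers)

def encode(csi):
    """Stand-in for the deep encoder: average over frames and
    L2-normalize, yielding a fixed-length signature vector."""
    v = csi.mean(axis=0)
    return v / np.linalg.norm(v)

def same_person(sig_a, sig_b, threshold=0.99):
    """Re-identification by cosine similarity of signatures."""
    return float(sig_a @ sig_b) >= threshold

# Enroll person A at location 1, then observe A and B at location 2.
sig_a1 = encode(toy_csi(person_bias=0.5))
sig_a2 = encode(toy_csi(person_bias=0.5))   # same body, new capture
sig_b = encode(toy_csi(person_bias=-0.5))   # different body

print(same_person(sig_a1, sig_a2))  # same body: signatures match
print(same_person(sig_a1, sig_b))   # different body: no match
```

The real system replaces the averaging step with a trained transformer encoder, which is what lifts matching accuracy to the reported 95.5 percent on NTU-Fi.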

Privacy

Tinder To Require Facial Recognition Check For New Users In California (axios.com) 42

An anonymous reader quotes a report from Axios: Tinder is mandating that new users in California verify their profiles using facial recognition technology starting Monday, executives exclusively tell Axios. The move aims to reduce impersonation and is part of Tinder parent Match Group's broader effort to improve trust and safety amid ongoing user frustration. The Face Check feature prompts users to take a short video selfie during onboarding. The biometric face scan, powered by FaceTec, confirms that the person is real and present and that their face matches their profile photos; it also checks whether the same face is used across multiple accounts. If the criteria are met, the user receives a photo-verified badge on their profile. The selfie video is then deleted, and Tinder stores a non-reversible, encrypted face map to detect duplicate profiles in the future.

Face Check is separate from Tinder's ID Check, which uses a government-issued ID to verify age and identity. "We see this as one part of a set of identity assurance options that are available to users," Match Group's head of trust and safety Yoel Roth says. "Face Check ... is really meant to be about confirming that this person is a real, live person and not a bot or a spoofed account." "Even if in the short term, it has the effect of potentially reducing some top-line user metrics, we think it's the right thing to do for the business," Match Group CEO Spencer Rascoff said.
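Tinder hasn't detailed how its face map works, but duplicate detection over non-reversible templates is commonly done by comparing embedding vectors by similarity. The sketch below is purely illustrative -- the embedding size, the threshold, and the `FaceMapStore` API are assumptions, not FaceTec's or Tinder's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def normalize(v):
    return v / np.linalg.norm(v)

class FaceMapStore:
    """Stores only L2-normalized embeddings ("face maps"); the original
    selfie video is discarded after enrollment."""
    def __init__(self, threshold=0.8):
        self.maps = {}  # account_id -> embedding
        self.threshold = threshold

    def find_duplicates(self, embedding):
        """Return accounts whose stored map exceeds the similarity threshold."""
        return [acct for acct, m in self.maps.items()
                if float(embedding @ m) >= self.threshold]

    def enroll(self, account_id, embedding):
        dupes = self.find_duplicates(embedding)  # flag before storing
        self.maps[account_id] = embedding
        return dupes

store = FaceMapStore()
face = normalize(rng.standard_normal(128))  # stand-in for a face embedding
store.enroll("alice", face)

# Same face, slightly different capture: flagged as a duplicate.
recapture = normalize(face + 0.02 * rng.standard_normal(128))
print(store.find_duplicates(recapture))

# A different face is not flagged.
other = normalize(rng.standard_normal(128))
print(store.find_duplicates(other))
```

The "non-reversible" property comes from the embedding itself: it supports similarity comparison, but the original face image cannot be reconstructed from it.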

Microsoft

Microsoft Authenticator Will Stop Supporting Passwords (cnet.com) 67

Avantare writes: Microsoft Authenticator houses your passwords and lets you sign into all of your Microsoft accounts using a PIN, facial recognition such as Windows Hello, or other biometric data, like a fingerprint. Authenticator can be used in other ways, such as verifying you're logging in if you forgot your password, or using two-factor authentication as an extra layer of security for your Microsoft accounts.
In June, Microsoft stopped letting users add passwords to Authenticator, but here's a timeline of other changes you can expect, according to Microsoft:

July 2025: You won't be able to use the autofill password function.
August 2025: You'll no longer be able to use saved passwords.

Microsoft

Windows Hello Face Unlock No Longer Works in the Dark and Microsoft Says It's Not a Bug (windowscentral.com) 23

Microsoft has disabled Windows Hello's ability to authenticate users in low-light environments through a recent security update that now requires both infrared sensors and color cameras to verify faces. The change forces the system to see a visible face through the webcam before completing authentication with IR sensors.

Windows Hello previously relied solely on infrared sensors to create 3D facial scans, allowing the feature to work in complete darkness, similar to the iPhone's Face ID. Microsoft introduced the dual-camera requirement to address a spoofing vulnerability in the biometric system.
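In effect, the update turns unlock into a two-gate check. This minimal sketch (the function names and flags are illustrative, not Microsoft's actual API) shows why IR-only dark-room unlocks now fail:

```python
def hello_unlock_old(ir_scan_matches: bool) -> bool:
    # Pre-update: the IR 3D scan alone was sufficient,
    # so authentication worked in complete darkness.
    return ir_scan_matches

def hello_unlock_new(ir_scan_matches: bool, rgb_face_visible: bool) -> bool:
    # Post-update: the color camera must also see a visible face,
    # which fails in low light even when the IR match succeeds.
    return ir_scan_matches and rgb_face_visible

# Dark room: the IR scan still matches, but the color camera sees nothing.
print(hello_unlock_old(ir_scan_matches=True))                          # unlocked before the update
print(hello_unlock_new(ir_scan_matches=True, rgb_face_visible=False))  # denied after the update
```

The tradeoff is deliberate: requiring both sensors defeats spoofing attacks that fooled the IR path alone, at the cost of losing dark-room unlocks.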

AI

Facial Recognition Error Sees Woman Wrongly Accused of Theft (bbc.com) 60

A chain of stores called Home Bargains installed facial recognition software to spot returning shoplifters. Unfortunately, "Facewatch" made a mistake.

"We acknowledge and understand how distressing this experience must have been," an anonymous Facewatch spokesperson tells the BBC, adding that the store using their technology "has since undertaken additional staff training."

A woman was accused by a store manager of stealing about £10 (about $13) worth of items ("Everyone was looking at me"). And then it happened again at another store when she was shopping with her 81-year-old mother on June 4th: "As soon as I stepped my foot over the threshold of the door, they were radioing each other and they all surrounded me and were like 'you need to leave the store'," she said. "My heart sunk and I was anxious and bothered for my mum as well because she was stressed...."

It was only after repeated emails to both Facewatch and Home Bargains that she eventually found there had been an allegation of theft of about £10 worth of toilet rolls on 8 May. Her picture had somehow been circulated to local stores alerting them that they should not allow her entry. Ms Horan said she checked her bank account to confirm she had indeed paid for the items before Facewatch eventually responded to say a review of the incident showed she had not stolen anything. "Because I was persistent I finally got somewhere but it wasn't easy, it was really stressful," she said. "My anxiety was really bad — it really played with my mind, questioning what I've done for days. I felt anxious and sick. My stomach was turning for a week."

In one email from Facewatch seen by the BBC, the firm told Ms Horan it "relies on information submitted by stores" and the Home Bargains branches involved had since been "suspended from using the Facewatch system". Madeleine Stone, senior advocacy officer at the civil liberties campaign group Big Brother Watch, said they had been contacted by more than 35 people who have complained of being wrongly placed on facial recognition watchlists.

"They're being wrongly flagged as criminals," Ms Stone said.

"They've given no due process, kicked out of stores," adds the senior advocacy officer. "This is having a really serious impact." The group is now calling for the technology to be banned. "Historically in Britain, we have a history that you are innocent until proven guilty but when an algorithm, a camera and a facial recognition system gets involved, you are guilty. The Department for Science, Innovation and Technology said: "While commercial facial recognition technology is legal in the UK, its use must comply with strict data protection laws. Organisations must process biometric data fairly, lawfully and transparently, ensuring usage is necessary and proportionate.

"No one should find themselves in this situation."

Thanks to alanw (Slashdot reader #1,822) for sharing the article.
