Australia

New South Wales Education Department Caught Unaware After Microsoft Teams Began Collecting Students' Biometric Data (theguardian.com) 47

New submitter optical_phiber writes: In March 2025, the New South Wales (NSW) Department of Education discovered that Microsoft Teams had begun collecting students' voice and facial biometric data without their prior knowledge. This occurred after Microsoft enabled a Teams feature called 'voice and face enrollment' by default, which creates biometric profiles to enhance meeting experiences and transcriptions via its Copilot AI tool.

The NSW department learned of the data collection a month after it began and promptly disabled the feature and deleted the data within 24 hours. However, the department did not disclose how many individuals were affected or whether they were notified. Despite Microsoft's policy of retaining data only while the user is enrolled and deleting it within 90 days of account deletion, privacy experts have raised serious concerns. Rys Farthing of Reset Tech Australia criticized the unnecessary collection of children's data, warning of the long-term risks and calling for stronger protections.

United States

Montana Becomes First State To Close the Law Enforcement Data Broker Loophole (eff.org) 31

Montana has enacted SB 282, becoming the first state to prohibit law enforcement from purchasing personal data they would otherwise need a warrant to obtain. The landmark legislation closes what privacy advocates call the "data broker loophole," which previously allowed police to buy geolocation data, electronic communications, and other sensitive information from third-party vendors without judicial oversight.

The new law specifically restricts government access to precise geolocation data, communications content, electronic funds transfers, and "sensitive data" including health status, religious affiliation, and biometric information. Police can still access this information through traditional means: warrants, investigative subpoenas, or device owner consent.
Google

Google Will Pay $1.4 Billion to Texas to Settle Claims It Collected User Data Without Permission (apnews.com) 30

Google will pay $1.4 billion to the state of Texas, reports the Associated Press, "to settle claims the company collected users' data without permission, the state's attorney general announced Friday." Attorney General Ken Paxton described the settlement as sending a message to tech companies that he will not allow them to make money off of "selling away our rights and freedoms."

"In Texas, Big Tech is not above the law." Paxton said in a statement. "For years, Google secretly tracked people's movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won...."

The state argued Google was "unlawfully tracking and collecting users' private data." Paxton claimed, for example, that Google collected millions of biometric identifiers, including voiceprints and records of face geometry, through such products and services as Google Photos and Google Assistant. Google spokesperson José Castañeda said the agreement settles an array of "old claims," some of which relate to product policies the company has already changed. "We are pleased to put them behind us, and we will continue to build robust privacy controls into our services," he said in a statement. The company also clarified that the settlement does not require any new product changes.

Google's settlement with Texas "far surpasses any other state's claims for similar violations," according to a statement from the Texas attorney general's office. "To date, no state has attained a settlement against Google for similar data-privacy violations greater than $93 million. Even a multistate coalition that included forty states secured just $391 million — almost a billion dollars less than Texas's recovery."

The statement calls the $1.375 billion settlement "a major win for Texans' privacy" that "tells companies that they will pay for abusing our trust."
Bitcoin

Sam Altman's Eye-Scanning ID Project Launches In US 71

Sam Altman's eye-scanning identity project, now called World, officially launched in the U.S. with six in-person registration sites. CNBC reports: Here's how it works: You go up to an Orb, a spherical biometric device, and it spends about 30 seconds scanning your face and iris, then creates and stores a unique "IrisCode" for you verifying that you're a human and that you've never signed up before. Then you get some of the project's cryptocurrency, WLD, for free, and you can use your World ID as a sign-in with integrated platforms, which currently include an open API integration with Minecraft, Reddit, Telegram, Shopify and Discord.
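The "you've never signed up before" guarantee described above boils down to a uniqueness check at enrollment time. Here is a minimal sketch of that idea; the function name, the exact-match comparison, and the opaque string codes are all simplifying assumptions (a real Orb derives an iris code from a scan and matches it approximately against every enrolled code):

```python
# Hypothetical sketch of enrollment-time deduplication, not World's API.
enrolled = set()  # previously registered iris codes

def enroll(iris_code):
    """Enroll a person unless their iris code is already registered."""
    if iris_code in enrolled:
        return False  # duplicate: this person has signed up before
    enrolled.add(iris_code)
    return True

print(enroll("code-42"))  # True: first enrollment succeeds
print(enroll("code-42"))  # False: second attempt is rejected
```

The hard part in practice is that two scans of the same iris never match bit-for-bit, so the real system must do fuzzy matching across millions of codes rather than the exact set lookup shown here.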

Starting Thursday, the company is opening six flagship U.S. retail locations where people can sign up to have their eyeball scanned: Austin, Atlanta, Los Angeles, Nashville, Miami and San Francisco. At an event in San Francisco on Wednesday, the venture announced two high-profile partnerships: Visa will introduce the "World Visa card" this summer, available only to people who have had their irises scanned by World, and the online dating giant Match Group will begin a pilot program testing out World ID and some age verification tools with Tinder in Japan.
AI

Discord Begins Testing Facial Recognition Scans For Age Verification 21

Discord has begun testing age verification via facial scans or ID uploads for users in the UK and Australia seeking access to sensitive content. "The chat app's new process has been described as an 'experiment,' and comes in response to laws passed in those countries that place guardrails on youth access to online platforms," reports Gizmodo. From the report: Users may be asked to verify their age when encountering content that has been flagged by Discord's systems as being sensitive in nature, or when they change their settings to enable access to sensitive content. The app will ask users to scan their face through a computer or smartphone webcam; alternatively, they can scan a driver's license or other form of ID. "We're currently running tests in select regions to age-gate access to certain spaces or user settings," a spokesperson for Discord said in a statement. "The information shared to power the age verification method is only used for the one-time age verification process and is not stored by Discord or our vendor. For Face Scan, the solution our vendor uses operates on-device, which means there is no collection of any biometric information when you scan your face. For ID verification, the scan of your ID is deleted upon verification."
Transportation

Air Travel Set for Biggest Overhaul in 50 Years With UN-Backed Digital Credentials (theguardian.com) 103

The International Civil Aviation Organization plans to eliminate boarding passes and check-ins within three years through a new "digital travel credential" system. Passengers will store passport data on their phones and use facial recognition to move through airports, while airlines will automatically detect arrivals via biometric scanning.

The system will dynamically update "journey passes" for flight changes and delays, potentially streamlining connections. "The last upgrade of great scale was the adoption of e-ticketing in the early 2000s," said Valerie Viale from travel technology company Amadeus, who noted passenger data will be deleted within 15 seconds at each checkpoint to address privacy concerns.
Social Networks

Arkansas Social Media Age Verification Law Blocked By Federal Judge (engadget.com) 15

A federal judge struck down Arkansas' Social Media Safety Act, ruling it unconstitutional for broadly restricting both adult and minor speech and imposing vague requirements on platforms. Engadget reports: In a ruling (PDF), Judge Timothy Brooks said that the law, known as Act 689 (PDF), was overly broad. "Act 689 is a content-based restriction on speech, and it is not targeted to address the harms the State has identified," Brooks wrote in his decision. "Arkansas takes a hatchet to adults' and minors' protected speech alike though the Constitution demands it use a scalpel." Brooks also highlighted the "unconstitutionally vague" applicability of the law, which seemingly created obligations for some online services, but may have exempted services which had the "predominant or exclusive function [of]... direct messaging" like Snapchat.

"The court confirms what we have been arguing from the start: laws restricting access to protected speech violate the First Amendment," NetChoice's Chris Marchese said in a statement. "This ruling protects Americans from having to hand over their IDs or biometric data just to access constitutionally protected speech online." It's not clear if state officials in Arkansas will appeal the ruling. "I respect the court's decision, and we are evaluating our options," Arkansas Attorney general Tim Griffin said in a statement.

China

China Bans Compulsory Facial Recognition and Its Use in Private Spaces Like Hotel Rooms (theregister.com) 28

China's Cyberspace Administration and Ministry of Public Security have outlawed the use of facial recognition without consent. From a report: The two orgs last Friday published new rules on facial recognition and an explainer that spell out how orgs that want to use facial recognition must first conduct a "personal information protection impact assessment" that considers whether using the tech is necessary, its impact on individuals' privacy, and the risk of data leakage. Organizations that decide to use facial recognition must encrypt biometric data and audit the information security techniques and practices they use to protect facial scans. Chinese organizations that go through that process and decide they want to use facial recognition can only do so after securing individuals' consent. The rules also ban the use of facial recognition equipment in spaces where people expect privacy, such as hotel rooms, public bathrooms, public dressing rooms, and public toilets. The measures don't apply to researchers or to what machine translation of the rules describes as "algorithm training activities" -- suggesting images of citizens' faces are fair game when used to train AI models.
AI

Spain To Impose Massive Fines For Not Labeling AI-Generated Content 27

Spain's government has approved legislation imposing substantial fines of up to 35 million euros or 7% of global turnover on companies that fail to clearly label AI-generated content. Reuters reports: The bill adopts guidelines from the European Union's landmark AI Act imposing strict transparency obligations on AI systems deemed to be high-risk, Digital Transformation Minister Oscar Lopez told reporters. "AI is a very powerful tool that can be used to improve our lives ... or to spread misinformation and attack democracy," he said. Spain is among the first EU countries to implement the bloc's rules, considered more comprehensive than the United States' system that largely relies on voluntary compliance and a patchwork of state regulations. Lopez added that everyone was susceptible to "deepfake" attacks - a term for videos, photographs or audios that have been edited or generated through AI algorithms but are presented as real. [...]

The bill also bans other practices, such as the use of subliminal techniques - sounds and images that are imperceptible - to manipulate vulnerable groups. Lopez cited chatbots inciting people with addictions to gamble or toys encouraging children to perform dangerous challenges as examples. It would also prevent organizations from classifying people through their biometric data using AI, rating them based on their behavior or personal traits to grant them access to benefits or assess their risk of committing a crime. However, authorities would still be allowed to use real-time biometric surveillance in public spaces for national security reasons.
EU

AI Systems With 'Unacceptable Risk' Are Now Banned In the EU 72

AI systems that pose "unacceptable risk" or harm can now be banned in the European Union. Some of the unacceptable AI activities include social scoring, deceptive manipulation, exploiting personal vulnerabilities, predictive policing based on appearance, biometric-based profiling, real-time biometric surveillance, emotion inference in workplaces or schools, and unauthorized facial recognition database expansion. TechCrunch reports: Under the bloc's approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk -- AI for healthcare recommendations is one example -- will face heavy regulatory oversight; and (4) unacceptable risk applications -- the focus of this month's compliance requirements -- will be prohibited entirely.

Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to ~$36 million, or 7% of their annual revenue from the prior fiscal year, whichever is greater. The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch. "Organizations are expected to be fully compliant by February 2, but ... the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect."
Technology

Biometrics, Windmills, and VHS tapes: The Winners of 'Rest of World' International Tech Photo Contest (restofworld.org) 5

Since launching in 2020, the nonprofit site RestofWorld.org has been covering tech news from 100 countries. And they've just announced the winners in their 2024 international photography contest.

"From Cape Verde to Bhutan, we received 227 entries from over 45 countries around the world, featuring everything from sprawling mines to biometric facial scans." Like last year, the majority of the entries in our 2024 photography contest captured on-the-ground realities of how technology is transforming lives in every corner of the world. We received submissions from over 45 countries, showcasing a stunning variety of perspectives on the intersection of technology and daily life.

Beyond striking visuals, the photographs tell us stories of how tech plays a role in local communities, from iris-scanning payment systems inside refugee camps to EV battery-powered music gatherings. The 227 entries we received from contestants — including from Mongolia, the Philippines, Argentina, and Jordan — not only celebrate these stories but reaffirm our commitment at Rest of World to challenge stereotypes about how people use technology in their daily lives.

An "honorable mention" photo shows immigrants from Africa arriving on the Italian island of Lampedusa after a perilous boat journey. ("Upon their arrival, these refugees borrowed a smartphone from a bystander and started a video call to let their relatives know they survived the journey.") And the top photo shows a U.S. Customs and Border Protection agent using a cellphone to collect facial scans from migrants entering the country from Mexico. ("After they make the crossing into the U.S., migrants are subjected to further data collection, including DNA samples.")

Biometric data collection was a recurring theme. A photo from Jordan shows a Syrian boy paying for groceries with an iris scanner at a supermarket "run jointly by the World Food Programme and the U.N. High Commissioner for Refugees." Eye-scanning technology is being used there "to ensure people use only their own credit and not borrowed or stolen cards. After having their iris scanned, Syrian refugees living in the camp can make use of services such as health care and shopping, using just their eyes."

Another recurring theme was energy. There's a lovely "honorable mention" photo from the Philippines showing two young people on a beach playing basketball "under the towering blades of the windmills in Bangu... Renewable energy has transformed this community, cutting household expenses and powering opportunities once thought to be out of reach." The third-place photo shows six children in a distant tent in "a mountainous, subarctic forest" in Mongolia — all gathered around a laptop "to watch a documentary about a Norwegian reindeer herder" who had visited their region. ("Modern technology such as solar panels, car batteries, and the occasional Wi-Fi connection allows these families to stay connected with the world.") One photo shows a young boy carrying a solar panel down from the roof in a remote village in Jharkhand, India.

Another photo documents the largest salt flat in Argentina, part of the so-called "lithium triangle" with parts of Chile and Bolivia. A salt miner says "They started looking for lithium there in 2010. We made them stop; it was hurting the environment and affecting the water. But now they are back and I am afraid. Everything we have could be lost."

And a photo from Nigeria shows two people wearing traditional African attire but adorned with "goggles crafted from repurposed VHS tapes". RestofWorld says the goggles "represent how individuals and communities reclaim and reinterpret technology for art, commentary, and resilience. This practice reflects a community's ability to find new life in what others might discard, highlighting a deep relationship with both old and new technologies."
AI

'Yes, I am a Human': Bot Detection Is No Longer Working 91

The rise of AI has rendered traditional CAPTCHA tests increasingly ineffective, as bots can now "[solve] these puzzles in milliseconds using artificial intelligence (AI)," reports The Conversation. "How ironic. The tools designed to prove we're human are now obstructing us more than the machines they're supposed to be keeping at bay." The report warns that the imminent arrival of AI agents -- software programs designed to autonomously interact with websites on our behalf -- will further complicate matters. From the report: Developers are continually coming up with new ways to verify humans. Some systems, like Google's ReCaptcha v3 (introduced in 2018), don't ask you to solve puzzles anymore. Instead, they watch how you interact with a website. Do you move your cursor naturally? Do you type like a person? Humans have subtle, imperfect behaviors that bots still struggle to mimic. Not everyone likes ReCaptcha v3 because it raises privacy issues -- plus the web company needs to assess user scores to determine who is a bot, and the bots can beat the system anyway. There are alternatives that use similar logic, such as "slider" puzzles that ask users to move jigsaw pieces around, but these too can be overcome.
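The behavioral signals described above (natural cursor movement, human-like typing) can be illustrated with a toy heuristic. This is not ReCaptcha's actual algorithm, just a sketch of the underlying idea: human cursor paths jitter and change direction, while a naive bot moves in a straight line at constant heading.

```python
import math

def looks_automated(points):
    """Crude behavioral check: flag trajectories with near-zero
    turning as robotic. `points` is a list of (x, y) cursor samples."""
    if len(points) < 3:
        return True  # too little data to look human
    # Heading of each step, then how much each step turns from the last.
    headings = [math.atan2(y2 - y1, x2 - x1)
                for (x1, y1), (x2, y2) in zip(points, points[1:])]
    turns = [abs(b - a) for a, b in zip(headings, headings[1:])]
    jitter = sum(turns) / len(turns)
    return jitter < 0.01  # almost no turning: suspiciously straight

bot_path = [(i, 2 * i) for i in range(20)]              # perfect line
human_path = [(0, 0), (3, 1), (5, 4), (9, 5), (12, 9)]  # wobbly drift
print(looks_automated(bot_path), looks_automated(human_path))
```

Of course, this is exactly the kind of check modern bots defeat by replaying recorded human trajectories with added noise, which is why the article argues behavioral scoring alone is no longer enough.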

Some websites are now turning to biometrics to verify humans, such as fingerprint scans or voice recognition, while face ID is also a possibility. Biometrics are harder for bots to fake, but they come with their own problems -- privacy concerns, expensive tech and limited access for some users, say because they can't afford the relevant smartphone or can't speak because of a disability. The imminent arrival of AI agents will add another layer of complexity. It will mean we increasingly want bots to visit sites and do things on our behalf, so web companies will need to start distinguishing between "good" bots and "bad" bots. This area still needs a lot more consideration, but digital authentication certificates are proposed as one possible solution.

In sum, Captcha is no longer the simple, reliable tool it once was. AI has forced us to rethink how we verify people online, and it's only going to get more challenging as these systems get smarter. Whatever becomes the next technological standard, it's going to have to be easy to use for humans, but one step ahead of the bad actors. So the next time you find yourself clicking on blurry traffic lights and getting infuriated, remember you're part of a bigger fight. The future of proving humanity is still being written, and the bots won't be giving up any time soon.
AI

Photobucket Sued Over Plans To Sell User Photos, Biometric Identifiers To AI Companies (arstechnica.com) 22

Photobucket was sued Wednesday after a recent privacy policy update revealed plans to sell users' photos -- including biometric identifiers like face and iris scans -- to companies training generative AI models. From a report: The proposed class action seeks to stop Photobucket from selling users' data without first obtaining written consent, alleging that Photobucket either intentionally or negligently failed to comply with strict privacy laws in states like Illinois, New York, and California by claiming it can't reliably determine users' geolocation.

Two separate classes could be protected by the litigation. The first includes anyone who ever uploaded a photo between 2003 -- when Photobucket was founded -- and May 1, 2024. Another potentially even larger class includes any non-users depicted in photographs uploaded to Photobucket, whose biometric data has also allegedly been sold without consent.

Photobucket risks huge fines if a jury agrees with Photobucket users that the photo-storing site unjustly enriched itself by breaching its user contracts and illegally seizing biometric data without consent. As many as 100 million users could be awarded untold punitive damages, as well as up to $5,000 per "willful or reckless violation" of various statutes.

The Almighty Buck

Bill Gates Applauds Open Source Tools for 'Digital Public Infrastructure' (gatesnotes.com) 49

"It connects people, data, and money," Bill Gates wrote this week on his personal blog. But digital public infrastructure is also "revolutionizing the way entire nations serve their people, respond to crises, and grow their economies" — and the Gates Foundation sees it "as an important part of our efforts to help save lives and fight poverty in poor countries." Digital public infrastructure [or "DPI"]: digital ID systems that securely prove who you are, payment systems that move money instantly and cheaply, and data exchange platforms that allow different services to work together seamlessly... [W]ith the right investments, countries can use DPI to bypass outdated and inefficient systems, immediately adopt cutting-edge digital solutions, and leapfrog traditional development trajectories — potentially accelerating their progress by more than a decade. Countries without extensive branch banking can move straight to mobile banking, reaching far more people at a fraction of the cost. Similarly, digital ID systems can provide legal identity to millions who previously lacked official documentation, giving them access to a wide range of services — from buying a SIM card to opening a bank account to receiving social benefits like pensions.

I've heard concerns about DPI — here's how I think about them. Many people worry digital systems are a tool for government surveillance. But properly designed DPI includes safeguards against misuse and even enhances privacy... These systems also reduce the need for physical document copies that can be lost or stolen, and even create audit trails that make it easier to detect and prevent unauthorized access. The goal is to empower people, not restrict them. Then there's the fear that DPI will disenfranchise vulnerable populations like rural communities, the elderly, or those with limited digital literacy. But when it's properly designed and thoughtfully implemented, DPI actually increases inclusion — like in India, where millions of previously unbanked people now have access to financial services, and where biometric exceptions or assisted enrollment exist for people with physical disabilities or no fixed address.

Meanwhile, countries can use open-source tools — like MOSIP for digital identity and Mojaloop for payments — to build DPI that fosters competition and promotes innovation locally. By providing a common digital framework, they allow smaller companies and start-ups to build services without requiring them to create the underlying systems from scratch. Even more important, they empower countries to seek out services that address their own unique needs and challenges without forcing them to rely on proprietary systems.

"Digital public infrastructure is key to making progress on many of the issues we work on at the Gates Foundation," Bill writes, "including protecting children from preventable diseases, strengthening healthcare systems, improving the lives and livelihoods of farmers, and empowering women to control their financial futures.

"That's why we're so committed to DPI — and why we've committed $200 million over five years to supporting DPI initiatives around the world... The future is digital. Let's make sure it's a future that benefits everyone."
Bitcoin

Sam Altman's Worldcoin Rebrands As 'World,' Unveils Next Generation Orb (cointelegraph.com) 32

The blockchain-based identity verification company founded by Sam Altman is now called "World." It also unveiled a new version of the "Orb" biometric devices the company uses to scan users' eyes. CoinTelegraph reports: World, as it's now known, also revealed a slew of other updates including a new version of its Orb biometric scanning devices, new options for identity verification and partnership integrations with popular apps including FaceTime, WhatsApp, and Zoom. [...] The new Orb, powered by Nvidia hardware, will be more efficient and "five times" more powerful than its predecessor with a smaller footprint and fewer parts. The company also said the new Orb would eventually be available in self-service kiosks in some markets.

World also announced that users will soon be able to verify their identity through methods other than the firm's Orb hardware. Through a program called World ID Credentials, the company says NFC-enabled, government-issued passports will allow users to verify their identity on the World app. Another major announcement came in the form of World ID Deep Face, a service the company claims has "solved deepfakes." According to the company, its software can be implemented into just about any app where video can be uploaded or streamed to determine whether videos featuring verified persons are real or have been faked using AI. Finally, the company also announced that so far 15 million users have signed up for its World app service; among them, seven million are verified.

EU

EU Delays New Biometric Travel Checks as IT Systems Not Up To Speed (usnews.com) 18

The European Union has delayed the introduction of a new biometric entry-check system for non-EU citizens, which was due to be introduced on Nov. 10, after Germany, France and the Netherlands said border computer systems were not yet ready. From a report: "Nov. 10 is no longer on the table," EU Home Affairs Commissioner Ylva Johansson told reporters. She said there was no new timetable, but that the possibility of a phased introduction was being looked at. The Entry/Exit System (EES) is supposed to create a digital record linking a travel document to biometric readings confirming a person's identity, removing the need to manually stamp passports at the EU's external border. It would require non-EU citizens arriving in the Schengen free-travel area to register their fingerprints, provide a facial scan and answer questions about their stay.
Government

California Passes Law To Protect Consumer 'Brain Data' (govtech.com) 28

On September 28, California amended the California Consumer Privacy Act of 2018 to recognize the importance of mental privacy. "The law marks the second such legal protection for data produced from invasive neurotechnology, following Colorado, which incorporated neural data into its state data privacy statute, the Colorado Privacy Act (CPA) in April," notes Law.com. GovTech reports: The new bill amends the California Consumer Privacy Act of 2018, which grants consumers rights over personal information that is collected by businesses. The term "personal information" already included biometric data (such as your face, voice, or fingerprints). Now it also explicitly includes neural data. The bill defines neural data as "information that is generated by measuring the activity of a consumer's central or peripheral nervous system, and that is not inferred from nonneural information." In other words, data collected from a person's brain or nerves.

The law prevents companies from selling or sharing a person's data and requires them to make efforts to deidentify the data. It also gives consumers the right to know what information is collected and the right to delete it. "This new law in California will make the lives of consumers safer while sending a clear signal to the fast-growing neurotechnology industry there are high expectations that companies will provide robust protections for mental privacy of consumers," Jared Genser, general counsel to the Neurorights Foundation, which cosponsored the bill, said in a statement. "That said, there is much more work ahead."

Privacy

Apple Vision Pro's Eye Tracking Exposed What People Type 7

An anonymous reader quotes a report from Wired: You can tell a lot about someone from their eyes. They can indicate how tired you are, the type of mood you're in, and potentially provide clues about health problems. But your eyes could also leak more secretive information: your passwords, PINs, and messages you type. Today, a group of six computer scientists are revealing a new attack against Apple's Vision Pro mixed reality headset where exposed eye-tracking data allowed them to decipher what people entered on the device's virtual keyboard. The attack, dubbed GAZEploit and shared exclusively with WIRED, allowed the researchers to successfully reconstruct passwords, PINs, and messages people typed with their eyes. "Based on the direction of the eye movement, the hacker can determine which key the victim is now typing," says Hanqiu Wang, one of the leading researchers involved in the work. They identified the correct letters people typed in passwords 77 percent of the time within five guesses and 92 percent of the time in messages.

To be clear, the researchers did not gain access to Apple's headset to see what they were viewing. Instead, they worked out what people were typing by remotely analyzing the eye movements of a virtual avatar created by the Vision Pro. This avatar can be used in Zoom calls, Teams, Slack, Reddit, Tinder, Twitter, Skype, and FaceTime. The researchers alerted Apple to the vulnerability in April, and the company issued a patch to stop the potential for data to leak at the end of July. It is the first attack to exploit people's "gaze" data in this way, the researchers say. The findings underline how people's biometric data -- information and measurements about your body -- can expose sensitive information and be used as part of the burgeoning surveillance industry.

The GAZEploit attack consists of two parts, says Zhan, one of the lead researchers. First, the researchers created a way to identify when someone wearing the Vision Pro is typing by analyzing the 3D avatar they are sharing. For this, they trained a recurrent neural network, a type of deep learning model, with recordings of 30 people's avatars while they completed a variety of typing tasks. When someone is typing using the Vision Pro, their gaze fixates on the key they are likely to press, the researchers say, before quickly moving to the next key. "When we are typing our gaze will show some regular patterns," Zhan says. Wang says these patterns are more common during typing than if someone is browsing a website or watching a video while wearing the headset. "During tasks like gaze typing, the frequency of your eye blinking decreases because you are more focused," Wang says. In short: Looking at a QWERTY keyboard and moving between the letters is a pretty distinct behavior.

The second part of the research, Zhan explains, uses geometric calculations to work out where someone has positioned the keyboard and how large they've made it. "The only requirement is that as long as we get enough gaze information that can accurately recover the keyboard, then all following keystrokes can be detected." Combining these two elements, the researchers were able to predict the keys someone was likely to be typing. In a series of lab tests, they had no knowledge of the victim's typing habits or speed, or of where the keyboard was placed. Even so, within a maximum of five guesses, they could predict the correct letters typed 92.1 percent of the time in messages, 77 percent of the time for passwords, 73 percent of the time for PINs, and 86.1 percent of the time for emails, URLs, and webpages. (On the first guess, the letters would be right between 35 and 59 percent of the time, depending on what kind of information they were trying to work out.) Duplicate letters and typos add extra challenges.
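Once the keyboard's position and size have been recovered geometrically, the remaining step is mapping each gaze fixation onto the nearest key. As a hypothetical sketch only, here is that final snapping step on a simplified flat QWERTY grid (the layout coordinates and row stagger are made up for illustration; the real recovery works in 3D):

```python
# Hypothetical sketch of the final step of keystroke inference: snap a
# recovered gaze fixation point onto the closest key of a reconstructed
# keyboard. The layout is a simplified staggered QWERTY grid; coordinates
# are in units of key width/height from the keyboard's top-left corner.

ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
ROW_OFFSET = [0.0, 0.25, 0.75]  # horizontal stagger per row, in key widths

def key_centers(origin=(0.0, 0.0), key_w=1.0, key_h=1.0):
    """Return {char: (x, y)} centers for a staggered QWERTY grid placed
    at `origin` with the given key width and height."""
    centers = {}
    for r, row in enumerate(ROWS):
        for c, ch in enumerate(row):
            x = origin[0] + (c + ROW_OFFSET[r] + 0.5) * key_w
            y = origin[1] + (r + 0.5) * key_h
            centers[ch] = (x, y)
    return centers

def nearest_key(gaze_xy, centers):
    """Snap a recovered fixation point to the closest key center."""
    return min(centers, key=lambda ch: (centers[ch][0] - gaze_xy[0]) ** 2
                                     + (centers[ch][1] - gaze_xy[1]) ** 2)

centers = key_centers()
# A fixation slightly off the center of 'g' still resolves to 'g'.
print(nearest_key((4.9, 1.4), centers))  # g
```

Ranking keys by distance rather than taking only the closest one is what yields the "within five guesses" accuracy figures reported above.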
Privacy

Illinois Governor Approves Business-Friendly Overhaul of Biometric Privacy Law (reuters.com) 38

Illinois Governor J.B. Pritzker has signed a bill into law that will significantly curb the penalties companies could face for improperly collecting and using fingerprints and other biometric data from workers and consumers. From a report: The bill passed by the legislature in May and signed by Pritzker, a Democrat, on Friday amends the state's Biometric Information Privacy Act (BIPA) so that companies can be held liable only for a single violation per person, rather than for each time biometric data is allegedly misused.

The amendments will dramatically limit companies' exposure in BIPA cases and could discourage plaintiffs' lawyers from filing many lawsuits in the first place, management-side lawyers said. "By limiting statutory damages to a single recovery per individual ... companies in most instances will no longer face the prospect of potentially annihilative damages awards that greatly outpace any privacy harms," David Oberly, of counsel at Baker Donelson in Washington, D.C., said before the bill was signed. BIPA, a 2008 law, requires companies to obtain permission before collecting fingerprints, retinal scans and other biometric information from workers and consumers. The law imposes penalties of $1,000 per violation and $5,000 for reckless or intentional violations.

Privacy

Meta To Pay Record $1.4 Billion To Settle Texas Facial Recognition Suit (texastribune.org) 43

Meta will pay Texas $1.4 billion to settle a lawsuit alleging the company used personal biometric data without user consent, marking the largest privacy-related settlement ever obtained by a state. The Texas Tribune reports: The 2022 lawsuit, filed by Texas Attorney General Ken Paxton in state court, alleged that Meta had been using facial recognition software on photos uploaded to Facebook without Texans' consent. The settlement will be paid over five years. The attorney general's office did not say whether the money from the settlement would go into the state's general fund or if it would be distributed in some other way. The settlement, announced Tuesday, is not an admission of guilt, and Meta maintains it did nothing wrong. This was the first lawsuit Paxton's office argued under a 2009 state law that protects Texans' biometric data, like fingerprints and facial scans. The law requires businesses to inform and get consent from individuals before collecting such data. It also limits sharing this data, except in certain cases like helping law enforcement or completing financial transactions. Businesses must protect this data and destroy it within a year after it's no longer needed.

In 2011, Meta introduced a feature known as Tag Suggestions to make it easier for users to tag people in their photos. According to Paxton's office, the feature was turned on by default and ran facial recognition on users' photos, automatically capturing data protected by the 2009 law. That system was discontinued in 2021, with Meta saying it deleted over 1 billion people's individual facial recognition data. As part of the settlement, Meta must notify the attorney general's office of anticipated or ongoing activities that may fall under the state's biometric data laws. If Texas objects, the parties have 60 days to attempt to resolve the issue. Meta officials said the settlement will make it easier for the company to discuss the implications and requirements of the state's biometric data laws with the attorney general's office, adding that data protection and privacy are core priorities for the firm.
