AI

OpenAI and Arianna Huffington Are Working Together On an 'AI Health Coach' 25

OpenAI CEO Sam Altman and businesswoman Arianna Huffington have announced they're working on an "AI health coach" via Thrive AI Health. According to a Time magazine op-ed, the two executives said that the bot will be trained on "the best peer-reviewed science" alongside "the personal biometric, lab, and other medical data you've chosen to share with it." The Verge reports: The company tapped DeCarlos Love, a former Google executive who previously worked on Fitbit and other wearables, to be CEO. Thrive AI Health also established research partnerships with several academic institutions and medical centers like Stanford Medicine, the Rockefeller Neuroscience Institute at West Virginia University, and the Alice L. Walton School of Medicine. (The Alice L. Walton Foundation is also a strategic investor in Thrive AI Health.) Thrive AI Health's goal is to provide powerful insights to those who otherwise wouldn't have access -- like a single mother looking for quick meal ideas for her gluten-free child or an immunocompromised person in need of instant advice in between doctor's appointments. [...]

The bot is still in its early stages, adopting an Atomic Habits approach. Its goal is to gently encourage small changes in five key areas of your life: sleep, nutrition, fitness, stress management, and social connection. By making minor adjustments, such as suggesting a 10-minute walk after picking up your child from school, Thrive AI Health aims to positively impact people with chronic conditions like heart disease. It doesn't claim to be ready to provide a real diagnosis as a doctor would, but instead aims to guide users toward a healthier lifestyle. "AI is already greatly accelerating the rate of scientific progress in medicine -- offering breakthroughs in drug development, diagnoses, and increasing the rate of scientific progress around diseases like cancer," the op-ed read.
United Kingdom

How Facial Recognition Tech Is Being Used In London By Shops - and Police (bbc.co.uk) 98

"Within less than a minute, I'm approached by a store worker who comes up to me and says, 'You're a thief, you need to leave the store'."

That's a quote from the BBC by a wrongly accused customer who was flagged by a facial-recognition system called Facewatch. "She says after her bag was searched she was led out of the shop, and told she was banned from all stores using the technology."

Facewatch later wrote to her and acknowledged it had made an error — but declined to comment on the incident in the BBC's report: [Facewatch] did say its technology helped to prevent crime and protect frontline workers. Home Bargains, too, declined to comment. It's not just retailers who are turning to the technology... [I]n east London, we joined the police as they positioned a modified white van on the high street. Cameras attached to its roof captured thousands of images of people's faces. If they matched people on a police watchlist, officers would speak to them and potentially arrest them...

On the day we were filming, the Metropolitan Police said they made six arrests with the assistance of the tech... The BBC spoke to several people approached by the police who confirmed that they had been correctly identified by the system — 192 arrests have been made so far this year as a result of it.

Lindsey Chiswick, director of intelligence for the Met, told the BBC that "It takes less than a second for the technology to create a biometric image of a person's face, assess it against the bespoke watchlist and automatically delete it when there is no match."

"That is the correct and acceptable way to do it," writes long-time Slashdot reader Baron_Yam, "without infringing unnecessarily on the freedoms of the average citizen. Just tell me they have appropriate rules, effective oversight, and a penalty system with teeth to catch and punish the inevitable violators."

But one critic of the tech complains to the BBC that everyone scanned automatically joins "a digital police line-up," while the article adds that others "liken the process to a supermarket checkout — where your face becomes a bar code." And "The error count is much higher once someone is actually flagged. One in 40 alerts so far this year has been a false positive..."
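The jump from a tiny per-scan error rate to 1-in-40 false alerts is a base-rate effect: when almost nobody scanned is actually on the watchlist, even a very accurate matcher produces a meaningful share of false alerts. A rough back-of-the-envelope sketch (all numbers below are illustrative assumptions, not Met Police figures) shows the mechanism:

```python
# Base-rate arithmetic for face-recognition alerts.
# Every number here is an illustrative assumption, not a Met Police figure.

scans = 100_000            # faces scanned during a deployment
on_watchlist_rate = 0.0005 # assume 1 in 2,000 passers-by is actually wanted
false_match_rate = 0.0001  # per-scan false positive rate (0.01% FAR)
true_match_rate = 0.90     # chance a wanted person is correctly flagged

wanted = scans * on_watchlist_rate       # 50 wanted people walk past
innocent = scans - wanted                # 99,950 who aren't on the list

true_alerts = wanted * true_match_rate   # 45 correct alerts
false_alerts = innocent * false_match_rate  # ~10 false alerts

alerts = true_alerts + false_alerts
print(f"alerts: {alerts:.0f}, share false: {false_alerts / alerts:.1%}")
```

With these assumed numbers, roughly one alert in five is false even though 99.99% of individual scans are handled correctly, which is why per-scan accuracy and per-alert accuracy can diverge so sharply.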

Thanks to Slashdot reader Bruce66423 for sharing the article.
AI

Apple's AI Plans Include 'Black Box' For Cloud Data (appleinsider.com) 14

How will Apple protect user data while their requests are being processed by AI in applications like Siri?

Long-time Slashdot reader AmiMoJo shared this report from Apple Insider: According to sources of The Information [four different former Apple employees who worked on the project], Apple intends to process data from AI applications inside a virtual black box.

The concept, known internally as "Apple Chips in Data Centers," would involve only Apple's hardware being used to perform AI processing in the cloud. The idea is that Apple will control both the hardware and software on its servers, enabling it to design more secure systems. While on-device AI processing is highly private, the initiative could make cloud processing for Apple customers similarly secure... By taking control over how data is processed in the cloud, Apple would find it easier to implement safeguards that make a breach much harder to pull off.

Furthermore, the black box approach would also prevent Apple itself from being able to see the data. As a byproduct, it would also be difficult for Apple to hand over any personal data in response to government or law enforcement data requests.

Processed data from the servers would be stored in Apple's "Secure Enclave" (where the iPhone stores biometric data, encryption keys and passwords), according to the article.

"Doing so means the data can't be seen by other elements of the system, nor Apple itself."
EU

EU Sets Benchmark For Rest of the World With Landmark AI Laws (reuters.com) 28

An anonymous reader quotes a report from Reuters: Europe's landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life. The European Union's AI Act is more comprehensive than the United States' light-touch, voluntary-compliance approach, while China's aims to maintain social stability and state control. The vote by EU countries came two months after EU lawmakers backed the AI legislation drafted by the European Commission in 2021 after making a number of key changes. [...]

The AI Act imposes strict transparency obligations on high-risk AI systems while such requirements for general-purpose AI models will be lighter. It restricts governments' use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes. The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley. "The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR," he said, referring to EU privacy rules.

While the new legislation will apply in 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in six months after the new regulation enters into force. Obligations for general purpose AI models will apply after 12 months, and rules for AI systems embedded into regulated products after 36 months. Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the type of violation.

The Almighty Buck

JPMorgan, Mastercard Embrace Biometric Payment Options 27

With JPMorgan and Mastercard piloting biometric payment options, a future where consumers can pay with their face is rapidly approaching. "Our focus on biometrics as a secure way to verify identity, replacing the password with the person, is at the heart of our efforts in this area," said Dennis Gamiello, executive vice president of identity products and innovation at Mastercard. Based on the positive feedback received thus far, Gamiello says the biometric checkout technology will roll out to more new markets later this year. CNBC reports: Biometric payment options are becoming more common. Amazon introduced pay-by-palm technology in 2020, and while its cashier-less store experiment has faltered, it installed the tech in 500 of its Whole Foods stores last year. Mastercard, which is working with PopID, launched a pilot for face-based payments in Brazil back in 2022, and it was deemed a success -- 76% of pilot participants said they would recommend the technology to a friend. Late last year, Mastercard said it was teaming with NEC to bring its Biometric Checkout Program to the Asia-Pacific region.

A deal that PopID recently signed with JPMorgan is a sign of things to come in the U.S., said PopID CEO John Miller, who expects this to be a "breakthrough" year for pay-by-face technology. The consumer case is tied to the growing importance of loyalty programs. Most quick-service restaurants require consumers to provide their loyalty information to earn rewards -- which means pulling out a phone, opening an app, finding the link to the loyalty QR code, and then presenting the QR code to the cashier or reader. For payment, consumers are typically choosing between pulling out their wallet, selecting a credit card, and then dipping or tapping the card, or pulling out their phone, opening it with Face ID, and then presenting it to the reader. Miller says PopID simplifies this process by requiring just a tap of an on-screen button and a brief look at a camera for both loyalty check-in and payment.

"We believe our partnership with JPMorgan is a watershed moment for biometric payments as it represents the first time a leading merchant acquirer has agreed to push biometric payments to its merchant customers," Miller said. "JPMorgan brings the kind of credibility and assurance that both merchants and consumers need to adopt biometric payments." Juniper Research forecasts over 100% market growth for global biometric payments between 2024 and 2028, and $3 trillion in mobile, biometric-secured payments by 2025. Sheldon Jacobson, a professor of computer science at the University of Illinois Urbana-Champaign, said he sees biometric identification as part of a technology continuum that has evolved from payment with a credit card to smartphones. "The next natural step is to simply use facial recognition," he said.
Crime

Deep Fake Scams Growing in Global Frequency and Sophistication, Victim Warns (cnn.com) 19

In an elaborate scam in January, "a finance worker was duped into attending a video call with people he believed were the chief financial officer and other members of staff," remembers CNN. But Hong Kong police later said that all of them turned out to be deepfake re-creations, which duped the employee into transferring $25 million. According to police, the worker had initially suspected he had received a phishing email from the company's UK office, as it specified the need for a secret transaction to be carried out. However, the worker put aside his doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized.

Now the targeted company has been revealed: Arup, a major engineering consulting firm with 18,500 employees across 34 offices. A spokesperson for London-based Arup told CNN on Friday that it notified Hong Kong police in January about the fraud incident, and confirmed that fake voices and images were used. "Unfortunately, we can't go into details at this stage as the incident is still the subject of an ongoing investigation. However, we can confirm that fake voices and images were used," the spokesperson said in an emailed statement. "Our financial stability and business operations were not affected and none of our internal systems were compromised," the person added...

Authorities around the world are growing increasingly concerned about the sophistication of deepfake technology and the nefarious uses it can be put to. In an internal memo seen by CNN, Arup's East Asia regional chairman, Michael Kwok, said the "frequency and sophistication of these attacks are rapidly increasing globally, and we all have a duty to stay informed and alert about how to spot different techniques used by scammers."

The company's global CIO emailed CNN this statement: "Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes.

"What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months."

Slashdot reader st33ld13hl adds that in a world of deepfakes, insurance company USAA is now asking its customers to authenticate with voice. (More information here.)

Thanks to Slashdot reader quonset for sharing the news.
Privacy

In Argentina, Facing Surging Inflation, 500K Accept Worldcoin's Offer of $50 for Iris-Scanning (restofworld.org) 67

Wednesday Rest of World noticed an overlooked tech story in Argentina: Olga de León looked confused as she walked out of a nightclub on the edge of Buenos Aires on a recent Tuesday afternoon. She had just had her iris scanned. "No one told me what they'll do with my eye," de León, 57, told Rest of World. "But I did this out of need." De León, who lives off the $95 pension she receives from the state, had been desperate for money. Persuaded by her nephew, she agreed to have one of her irises scanned by Worldcoin, Sam Altman's blockchain project. In exchange, she received nearly $50 worth of WLD, the company's cryptocurrency.

De León is one of about half a million Argentines who have handed their biometric data over to Worldcoin. Beaten down by the country's 288% inflation rate and growing unemployment, they have flocked to Worldcoin Orb verification hubs, eager to get the sign-up crypto bonus offered by the company. A network of intermediaries — who earn a commission from every iris scan — has lured many into signing up for the practice in Argentina, where data privacy laws remain weak. But as the popularity of Worldcoin skyrockets in the country, experts have sounded the alarm about the dangers of giving away biometric data. Two provinces are now pushing for legal investigations. "Seeing that [iris scans have] been banned in European countries, shouldn't we be trying to stop it, too?" Javier Smaldone, a software consultant and digital security expert, told Rest of World.

Last month Worldcoin's web site announced that more than 10 million people in 160 countries had created a World ID and compatible wallet (performing 75 million transactions) — and that 5,195,475 people had also verified their World ID using Worldcoin's iris-scanning Orb.

But the article notes a big drop in the number of countries that still allow Worldcoin's iris-scanning, from 25 to just eight, even as Worldcoin opened nearly 60 centers across Argentina in less than a year...
Privacy

Cops Can Force Suspect To Unlock Phone With Thumbprint, US Court Rules (arstechnica.com) 146

An anonymous reader quotes a report from Ars Technica: The US Constitution's Fifth Amendment protection against self-incrimination does not prohibit police officers from forcing a suspect to unlock a phone with a thumbprint scan, a federal appeals court ruled yesterday. The ruling does not apply to all cases in which biometrics are used to unlock an electronic device but is a significant decision in an unsettled area of the law. The US Court of Appeals for the 9th Circuit had to grapple with the question of "whether the compelled use of Payne's thumb to unlock his phone was testimonial," the ruling (PDF) in United States v. Jeremy Travis Payne said. "To date, neither the Supreme Court nor any of our sister circuits have addressed whether the compelled use of a biometric to unlock an electronic device is testimonial."

A three-judge panel at the 9th Circuit ruled unanimously against Payne, affirming a US District Court's denial of Payne's motion to suppress evidence. Payne was a California parolee who was arrested by California Highway Patrol (CHP) after a 2021 traffic stop and charged with possession with intent to distribute fentanyl, fluorofentanyl, and cocaine. There was a dispute in District Court over whether a CHP officer "forcibly used Payne's thumb to unlock the phone." But for the purposes of Payne's appeal, the government "accepted the defendant's version of the facts, i.e., 'that defendant's thumbprint was compelled.'" Payne's Fifth Amendment claim "rests entirely on whether the use of his thumb implicitly related certain facts to officers such that he can avail himself of the privilege against self-incrimination," the ruling said. Judges rejected his claim, holding "that the compelled use of Payne's thumb to unlock his phone (which he had already identified for the officers) required no cognitive exertion, placing it firmly in the same category as a blood draw or fingerprint taken at booking." "When Officer Coddington used Payne's thumb to unlock his phone -- which he could have accomplished even if Payne had been unconscious -- he did not intrude on the contents of Payne's mind," the court also said.

United States

A Breakthrough Online Privacy Proposal Hits Congress (wired.com) 27

An anonymous reader quotes a report from Wired: Congress may be closer than ever to passing a comprehensive data privacy framework after key House and Senate committee leaders released a new proposal on Sunday. The bipartisan proposal, titled the American Privacy Rights Act, or APRA, would limit the types of consumer data that companies can collect, retain, and use, allowing solely what they'd need to operate their services. Users would also be allowed to opt out of targeted advertising, and have the ability to view, correct, delete, and download their data from online services. The proposal would also create a national registry of data brokers, and force those companies to allow users to opt out of having their data sold. [...] In an interview with The Spokesman Review on Sunday, [Cathy McMorris Rodgers, House Energy and Commerce Committee chair] claimed that the draft's language is stronger than any active laws, seemingly as an attempt to assuage the concerns of Democrats who have long fought attempts to preempt preexisting state-level protections. APRA does allow states to pass their own privacy laws related to civil rights and consumer protections, among other exceptions.

In the previous session of Congress, the leaders of the House Energy and Commerce Committee brokered a deal with Roger Wicker, the top Republican on the Senate Commerce Committee, on a bill that would preempt state laws with the exception of the California Consumer Privacy Act and the Biometric Information Privacy Act of Illinois. That measure, titled the American Data Privacy and Protection Act, also created a weaker private right of action than most Democrats were willing to support. Maria Cantwell, Senate Commerce Committee chair, refused to support the measure, instead circulating her own draft legislation. The ADPPA hasn't been reintroduced, but APRA was designed as a compromise. "I think we have threaded a very important needle here," Cantwell told The Spokesman Review. "We are preserving those standards that California and Illinois and Washington have."

APRA includes language from California's landmark privacy law allowing people to sue companies when they are harmed by a data breach. It also provides the Federal Trade Commission, state attorneys general, and private citizens the authority to sue companies when they violate the law. The categories of data that would be impacted by APRA include certain categories of "information that identifies or is linked or reasonably linkable to an individual or device," according to a Senate Commerce Committee summary of the legislation. Small businesses -- those with $40 million or less in annual revenue and limited data collection -- would be exempt under APRA, with enforcement focused on businesses with $250 million or more in yearly revenue. Governments and "entities working on behalf of governments" are excluded under the bill, as are the National Center for Missing and Exploited Children and, apart from certain cybersecurity provisions, "fraud-fighting" nonprofits. Frank Pallone, the top Democrat on the House Energy and Commerce Committee, called the draft "very strong" in a Sunday statement, but said he wanted to "strengthen" it with tighter child safety provisions.

Privacy

Portugal Orders Altman's Worldcoin To Halt Data Collection (reuters.com) 24

Portugal's data regulator has ordered Sam Altman's iris-scanning project Worldcoin to stop collecting biometric data for 90 days, it said on Tuesday, in the latest regulatory blow to a venture that has raised privacy concerns in multiple countries. From a report: Worldcoin encourages people to have their irises scanned by its "orb" devices, in exchange for a digital ID and free cryptocurrency. More than 4.5 million people in 120 countries have signed up, according to Worldcoin's website. Portugal's data regulator, the CNPD, said there was a high risk to citizens' data protection rights, which justified urgent intervention to prevent serious harm. More than 300,000 people in Portugal have provided Worldcoin with their biometric data, the CNPD said.
Privacy

Worldcoin Fails To Get Injunction Against Spain's Privacy Suspension (techcrunch.com) 9

Controversial eyeball scanning startup Worldcoin has failed to get an injunction against a temporary suspension ordered Wednesday by Spain's data protection authority, the AEPD. TechCrunch: The authority used emergency powers contained in the European Union's General Data Protection Regulation (GDPR) to make the local order, which can apply for up to three months. It said it was taking the precautionary measure against Worldcoin's operator, Tools for Humanity, in light of the sensitive nature of the biometric data being collected, which could pose a high risk to the rights and freedoms of individuals. It also raised specific concerns about risks to minors, citing complaints received.

Today a Madrid-based High Court declined to grant an injunction against the AEPD's order, saying that the "safeguarding of public interest" must be prioritized. As we reported Friday, the crypto blockchain biometrics digital identity firm shuttered scanning in the market shortly after the AEPD order -- which gave it 72 hours to comply. Today's court decision means Worldcoin's services remain suspended in Spain -- for up to three months.

United Kingdom

Leisure Firm in UK Told Scanning Staff Faces Is Illegal (bbc.co.uk) 17

Bruce66423 writes: The data watchdog has ordered a leisure centre group to stop using facial recognition tech to monitor its staff. The Information Commissioner's Office (ICO) says Serco Leisure has been unlawfully processing the biometric data of more than 2,000 employees at 38 UK leisure facilities. It did so to check staff attendance - a practice the ICO said was "neither fair nor proportionate."

Serco Leisure says it will comply with the enforcement notice. But it added it had taken legal advice prior to installing the cameras, and said staff had not complained about them during the five years they had been in place. The firm said it was to "make clocking-in and out easier and simpler" for workers. "We engaged with our team members in advance of its roll-out and its introduction was well-received by colleagues," the company said in a statement.

Security

Fingerprints Can Be Recreated From the Sounds Made When Swiping On a Touchscreen (tomshardware.com) 42

An anonymous reader quotes a report from Tom's Hardware: An interesting new attack on biometric security has been outlined by a group of researchers from China and the US. PrintListener: Uncovering the Vulnerability of Fingerprint Authentication via the Finger Friction Sound [PDF] proposes a side-channel attack on the sophisticated Automatic Fingerprint Identification System (AFIS). The attack leverages the sound characteristics of a user's finger swiping on a touchscreen to extract fingerprint pattern features. Following tests, the researchers assert that they can successfully attack "up to 27.9% of partial fingerprints and 9.3% of complete fingerprints within five attempts at the highest security FAR [False Acceptance Rate] setting of 0.01%." This is claimed to be the first work that leverages swiping sounds to infer fingerprint information.

Without contact prints or finger detail photos, how can an attacker hope to get any fingerprint data to enhance MasterPrint and DeepMasterPrint dictionary attack results on user fingerprints? One answer is as follows: the PrintListener paper says that "finger-swiping friction sounds can be captured by attackers online with a high possibility." The source of the finger-swiping sounds can be popular apps like Discord, Skype, WeChat, FaceTime, etc. Any chatty app where users carelessly perform swiping actions on the screen while the device mic is live. Hence the side-channel attack name -- PrintListener. [...]

To prove the theory, the researchers implemented their attack as PrintListener. In brief, PrintListener uses a series of algorithms to pre-process the raw audio signals, which are then used to generate targeted synthetics for PatternMasterPrint (a MasterPrint generated from fingerprints with a specific pattern). Importantly, PrintListener went through extensive experiments "in real-world scenarios," and, as mentioned in the intro, can facilitate successful partial fingerprint attacks in better than one in four cases, and complete fingerprint attacks in nearly one in ten cases. These results far exceed unaided MasterPrint fingerprint dictionary attacks.
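The paper's actual pipeline isn't reproduced here, but the general shape of such an acoustic side channel can be sketched: locate the brief swipe event in a recording, take its spectrum, and reduce it to a feature vector that a downstream classifier could map to fingerprint pattern classes. The NumPy-only sketch below is an illustration with invented frame and band sizes, not the authors' algorithm:

```python
import numpy as np

def swipe_spectral_features(audio: np.ndarray, frame: int = 1024,
                            hop: int = 512) -> np.ndarray:
    """Toy stand-in for PrintListener-style pre-processing: find the
    loudest short event (the swipe) and summarize its spectral energy.
    Frame, hop, and band counts are arbitrary illustrative choices."""
    # 1. Short-time energy envelope to locate the swipe.
    n_frames = 1 + (len(audio) - frame) // hop
    energy = np.array([np.sum(audio[i*hop:i*hop+frame]**2)
                       for i in range(n_frames)])
    peak = int(np.argmax(energy))
    segment = audio[peak*hop : peak*hop + frame]

    # 2. Magnitude spectrum of the swipe segment (Hann-windowed FFT).
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))

    # 3. Collapse into 8 coarse frequency bands and normalize, giving a
    #    small feature vector a classifier could consume.
    bands = np.array_split(spectrum, 8)
    feats = np.array([np.sum(b**2) for b in bands])
    return feats / (feats.sum() + 1e-12)

# Synthetic demo: one second of silence with a short noise burst
# standing in for the friction sound of a finger swipe.
rng = np.random.default_rng(0)
audio = np.zeros(44_100)
audio[20_000:21_024] = rng.standard_normal(1024) * 0.1
f = swipe_spectral_features(audio)
print(f.shape)  # prints (8,)
```

The interesting part of the real attack is step 3's replacement: mapping such features to ridge-pattern classes well enough to bias a MasterPrint-style dictionary, which is where the reported one-in-four partial-print success rate comes from.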

Privacy

Vietnam To Collect Biometrics For New ID Cards (theregister.com) 33

Starting in July, the Vietnamese government will begin collecting biometric information from its citizens when issuing new identification cards. The Register reports: Prime minister Pham Minh Chinh instructed the nation's Ministry of Public Security to collect the data in the form of iris scans, voice samples and actual DNA, in accordance with amendments to Vietnam's Law on Citizen Identification. The ID cards are issued to anyone over the age of 14 in Vietnam, and are optional for citizens between the ages of 6 and 14, according to a government news report. Amendments to the Law on Citizen Identification that allow collection of biometrics passed on November 27 of last year.

The law allows recording of blood type among the DNA-related information that will be contained in a national database to be shared across agencies "to perform their functions and tasks." The ministry will work with other parts of the government to integrate the identification system into the national database. [...] Vietnam's future identity cards will incorporate the functions of health insurance cards, social insurance books, driver's licenses, birth certificates, and marriage certificates, as defined by the amendment.

As for how the information will be collected, the amendments state: "Biometric information on DNA and voice is collected when voluntarily provided by the people or the agency conducting criminal proceedings or the agency managing the person to whom administrative measures are applied in the process of settling the case according to their functions and duties whether to solicit assessment or collect biometric information on DNA, people's voices are shared with identity management agencies for updating and adjusting to the identity database."

Privacy

New 'Gold Pickaxe' Android, iOS Malware Steals Your Face For Fraud (bleepingcomputer.com) 13

An anonymous reader quotes a report from BleepingComputer: A new iOS and Android trojan named 'GoldPickaxe' employs a social engineering scheme to trick victims into scanning their faces and ID documents, which are believed to be used to generate deepfakes for unauthorized banking access. The new malware, spotted by Group-IB, is part of a malware suite developed by the Chinese threat group known as 'GoldFactory,' which is responsible for other malware strains such as 'GoldDigger', 'GoldDiggerPlus,' and 'GoldKefu.' Group-IB says its analysts observed attacks primarily targeting the Asia-Pacific region, mainly Thailand and Vietnam. However, the techniques employed could be effective globally, and there's a danger of them getting adopted by other malware strains. [...]

For iOS (iPhone) users, the threat actors initially directed targets to a TestFlight URL to install the malicious app, allowing them to bypass the normal security review process. When Apple removed the TestFlight app, the attackers switched to luring targets into downloading a malicious Mobile Device Management (MDM) profile that allows the threat actors to take control over devices. Once the trojan has been installed onto a mobile device in the form of a fake government app, it operates semi-autonomously, manipulating functions in the background, capturing the victim's face, intercepting incoming SMS, requesting ID documents, and proxying network traffic through the infected device using 'MicroSocks.'

Group-IB says the Android version of the trojan performs more malicious activities than its iOS counterpart, owing to Apple's stricter security restrictions. Also, on Android, the trojan uses over 20 different bogus apps as cover. For example, GoldPickaxe can also run commands on Android to access SMS, navigate the filesystem, perform clicks on the screen, upload the 100 most recent photos from the victim's album, download and install additional packages, and serve fake notifications. The use of the victims' faces for bank fraud is an assumption by Group-IB, also corroborated by the Thai police, based on the fact that many financial institutes added biometric checks last year for transactions above a certain amount.

Privacy

UK Police To Be Able To Run Face Recognition Searches on 50 Million Driving Licence Holders (theguardian.com) 24

The police will be able to run facial recognition searches on a database containing images of Britain's 50 million driving licence holders under a law change being quietly introduced by the government. From a report: Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match. The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

Facial recognition searches match the biometric measurements of an identified photograph, such as that contained on driving licences, to those of an image picked up elsewhere. The intention to allow the police or the National Crime Agency (NCA) to exploit the UK's driving licence records is not explicitly referenced in the bill or in its explanatory notes, raising criticism from leading academics that the government is "sneaking it under the radar." Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish "driver information regulations" to enable the searches, but he will need only to consult police bodies, according to the bill.

AI

Rite Aid Banned From Using Facial Recognition Software 60

An anonymous reader quotes a report from TechCrunch: Rite Aid has been banned from using facial recognition software for five years, after the Federal Trade Commission (FTC) found that the U.S. drugstore giant's "reckless use of facial surveillance systems" left customers humiliated and put their "sensitive information at risk." The FTC's Order (PDF), which is subject to approval from the U.S. Bankruptcy Court after Rite Aid filed for Chapter 11 bankruptcy protection in October, also instructs Rite Aid to delete any images it collected as part of its facial recognition system rollout, as well as any products that were built from those images. The company must also implement a robust data security program to safeguard any personal data it collects.

A Reuters report from 2020 detailed how the drugstore chain had secretly introduced facial recognition systems across some 200 U.S. stores over an eight-year period starting in 2012, with "largely lower-income, non-white neighborhoods" serving as the technology testbed. With the FTC's increasing focus on the misuse of biometric surveillance, Rite Aid fell firmly in the government agency's crosshairs. Among its allegations are that Rite Aid -- in partnership with two contracted companies -- created a "watchlist database" containing images of customers that the company said had engaged in criminal activity at one of its stores. These images, which were often poor quality, were captured from CCTV or employees' mobile phone cameras.

When a customer who supposedly matched an existing image in its database entered a store, employees would receive an automatic alert instructing them to take action -- and the majority of the time this instruction was to "approach and identify," meaning employees would verify the customer's identity and ask them to leave. Often, these "matches" were false positives that led to employees incorrectly accusing customers of wrongdoing, creating "embarrassment, harassment, and other harm," according to the FTC. "Employees, acting on false positive alerts, followed consumers around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing," the complaint reads. Additionally, the FTC said that Rite Aid failed to inform customers that facial recognition technology was in use, while also instructing employees to specifically not reveal this information to customers.
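The false-positive problem the FTC describes follows directly from enrolling poor-quality images: a noisy watchlist embedding can land closer to an unrelated shopper's face than to the intended suspect's. The toy numbers and the `alert` helper below are hypothetical, meant only to show how a distance threshold tuned for clean enrollments misfires on degraded ones.

```python
def euclidean(a, b):
    """Euclidean distance between two face-embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def alert(probe, watchlist, threshold):
    """Return watchlist entries within `threshold` distance of the probe,
    i.e. the people store employees would be told to approach."""
    return [name for name, emb in watchlist.items()
            if euclidean(probe, emb) <= threshold]

# A clean enrollment photo vs. one degraded by a blurry CCTV still
# or a phone-camera snapshot (toy 3-D embeddings).
clean_watchlist  = {"suspect": [0.9, 0.1, 0.0]}
blurry_watchlist = {"suspect": [0.5, 0.5, 0.3]}

innocent_shopper = [0.4, 0.6, 0.2]  # an unrelated face

print(alert(innocent_shopper, clean_watchlist, 0.3))   # [] -- no alert
print(alert(innocent_shopper, blurry_watchlist, 0.3))  # ['suspect'] -- false positive
```

With the clean enrollment the shopper sits well outside the match threshold; the blurry enrollment drifts close enough to trigger an alert against the wrong person, which is exactly the failure mode behind the "embarrassment, harassment, and other harm" in the complaint.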
In a press release, Rite Aid said that it was "pleased to reach an agreement with the FTC," but that it disagreed with the crux of the allegations.

"The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores," Rite Aid said in its statement. "Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC's investigation regarding the Company's use of the technology began."
Privacy

Ex-Commissioner For Facial Recognition Tech Joins Facewatch Firm He Approved (theguardian.com) 12

The recently departed watchdog in charge of monitoring facial recognition technology in the UK has joined the private firm he controversially approved, paving the way for the mass roll-out of biometric surveillance cameras in high streets across the country. From a report: In a move critics have dubbed an "outrageous conflict of interest," Professor Fraser Sampson, former biometrics and surveillance camera commissioner, has joined Facewatch as a non-executive director. Sampson left his watchdog role on 31 October, with Companies House records showing he was registered as a company director at Facewatch the following day, 1 November.

Campaigners claim this might mean he was negotiating his Facewatch contract while in post, and have urged the advisory committee on business appointments to investigate whether it may have "compromised his work in public office." It is understood that the committee is currently considering the issue. Facewatch uses biometric cameras to check faces against a watch list and, despite widespread concern over the technology, has received backing from the Home Office and has already been introduced in hundreds of high-street shops and supermarkets.

Transportation

Automakers' Data Privacy Practices 'Are Unacceptable,' Says US Senator (arstechnica.com) 18

An anonymous reader quotes a report from Ars Technica: US Senator Edward Markey (D-Mass.) is one of the more technologically engaged of our elected lawmakers. And like many technologically engaged Ars Technica readers, he does not like what he sees in terms of automakers' approach to data privacy. On Friday, Sen. Markey wrote to 14 car companies with a variety of questions about data privacy policies, urging them to do better. As Ars reported in September, the Mozilla Foundation published a scathing report on the subject of data privacy and automakers. The problems were widespread -- most automakers collect too much personal data and are too eager to sell or share it with third parties, the foundation found.

Markey noted (PDF) the Mozilla Foundation report in his letters, which were sent to BMW, Ford, General Motors, Honda, Hyundai, Kia, Mazda, Mercedes-Benz, Nissan, Stellantis, Subaru, Tesla, Toyota, and Volkswagen. The senator is concerned about the large amounts of data that modern cars can collect, including the troubling potential to use biometric data (like the rate a driver blinks and breathes, as well as their pulse) to infer mood or mental health. Sen. Markey is also worried about automakers' use of Bluetooth, which he said has expanded "their surveillance to include information that has nothing to do with a vehicle's operation, such as data from smartphones that are wirelessly connected to the vehicle."
"These practices are unacceptable," Markey wrote. "Although certain data collection and sharing practices may have real benefits, consumers should not be subject to a massive data collection apparatus, with any disclosures hidden in pages-long privacy policies filled with legalese. Cars should not -- and cannot -- become yet another venue where privacy takes a backseat."

The 14 automakers have until December 21 to answer Markey's questions.
