IT

Amazon Quietly Rolls Out Support for Passkeys, With a Catch (techcrunch.com) 52

Amazon has quietly rolled out support for passkeys as it becomes the latest tech giant to join the passwordless future. But you still might have to hold onto your Amazon password for a little while longer. From a report: The option to set up a passkey is now available on the e-commerce giant's website, allowing users to log in using biometric authentication on their device, such as their fingerprint or face scan. Doing so makes it far more difficult for bad actors to remotely access users' accounts, given that the attacker also needs physical access to the user's device.

But Amazon's implementation of passkeys isn't without issues, as noted by Vincent Delitz, co-founder of German tech startup Corbado, who first documented the arrival of passkey support on Amazon. Delitz noted that there is currently no support for passkeys in Amazon's native apps, such as Amazon's shopping app or Prime Video, which TechCrunch has also confirmed, meaning you still have to use a password to sign in (for now). What's more, if you've set up a passkey but previously enabled two-factor authentication (2FA), Amazon will still prompt you to enter a one-time verification code when logging in, a step Delitz called "redundant," since passkeys are stored on your device and thus already provide the second factor that 2FA is meant to add.
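The login flow the story describes is a challenge-response: the site sends a random challenge, and the device, only after a local biometric check, signs it with a credential that never leaves the device. The sketch below models that shape using stdlib HMAC as a stand-in for WebAuthn's asymmetric signatures (real passkeys give the server only a public key; the `Authenticator` and `RelyingParty` classes and every name here are illustrative, not Amazon's implementation):

```python
import hmac, hashlib, secrets

class Authenticator:
    """Toy model of the user's device: the secret never leaves this
    object, and sign() only runs after a (simulated) biometric check."""
    def __init__(self):
        self._secret = secrets.token_bytes(32)       # stays on-device

    def register(self):
        # Real WebAuthn returns a public key; this HMAC toy hands the
        # server a key it must also keep secret -- a known simplification.
        return self._secret

    def sign(self, challenge, user_verified):
        if not user_verified:                        # fingerprint/face gate
            raise PermissionError("biometric check failed")
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class RelyingParty:
    """Toy model of the website: stores credentials, issues challenges."""
    def __init__(self):
        self._credentials = {}

    def register(self, user, key):
        self._credentials[user] = key

    def make_challenge(self):
        return secrets.token_bytes(16)               # fresh per login

    def verify(self, user, challenge, signature):
        expected = hmac.new(self._credentials[user], challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

device, site = Authenticator(), RelyingParty()
site.register("alice", device.register())
chal = site.make_challenge()
assert site.verify("alice", chal, device.sign(chal, user_verified=True))
# A captured response cannot be replayed against a fresh challenge:
assert not site.verify("alice", site.make_challenge(),
                       device.sign(chal, user_verified=True))
```

The second assertion shows why a stolen response is useless for a later login, which is the property behind the summary's point that an attacker also needs physical access to the device.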

AI

New York Bans Facial Recognition In Schools (apnews.com) 22

An anonymous reader quotes a report from the Associated Press: New York state banned the use of facial recognition technology in schools Wednesday, following a report that concluded the risks to student privacy and civil rights outweigh potential security benefits. Education Commissioner Betty Rosa's order leaves decisions on digital fingerprinting and other biometric technology up to local districts. The state has had a moratorium on facial recognition since parents filed a court challenge to its adoption by an upstate district.

[A]n analysis by the Office of Information Technology Services issued last month "acknowledges that the risks of the use of (facial recognition technology) in an educational setting may outweigh the benefits." The report, sought by the Legislature, noted "the potentially higher rate of false positives for people of color, non-binary and transgender people, women, the elderly, and children." It also cited research from the nonprofit Violence Project that found that 70% of school shooters from 1980 to 2019 were current students. The technology, the report said, "may only offer the appearance of safer schools."

The technology would not stop a student from entering a school "unless an administrator or staff member first noticed that the student was in crisis, had made some sort of threat, or indicated in some other way that they could be a threat to school security," the report said. The state report found that the use of digital fingerprinting was less risky and could be beneficial for school lunch payments and accessing electronic tablets and other devices. Schools may use that technology after seeking parental input, Rosa said.
"Schools should be safe places to learn and grow, not spaces where they are constantly scanned and monitored, with their most sensitive information at risk," said Stefanie Coyle, deputy director of the NYCLU's Education Policy Center.
Windows

Windows 11 Gains Support for Managing Passkeys (techcrunch.com) 49

At an event today focused on AI and security tools and new Surface devices, Microsoft announced that Windows 11 users will soon be able to take better advantage of passkeys, the digital credentials that can be used as an authentication method for websites and apps. From a report: Once the expanded passkeys support rolls out, Windows 11 users will be able to create a passkey using Windows Hello, Windows' biometric identity and access control feature. They'll then be able to use that passkey to access supported websites or apps using their face, fingerprint or PIN. Windows 11 passkeys can be managed on the devices on which they're stored, or saved to a mobile phone for added convenience.

"For the past several years, we've been committed to working with our industry partners and the FIDO Alliance to further the passwordless future with passkeys," Microsoft wrote in a blog post this morning. "Passkeys are the cross-platform, cross-ecosystem future of accessing websites and applications." Microsoft began rolling out support for passkey management several months ago in the Windows Insider dev channel, but this marks the capability's general availability.

The Courts

Court Blocks California's Online Child Safety Law (theverge.com) 23

A federal judge has granted a request to block the California Age-Appropriate Design Code Act (CAADCA), a law that requires special data safeguards for underage users online. The Verge reports: In a ruling (PDF) issued today, Judge Beth Freeman granted a preliminary injunction for tech industry group NetChoice, saying the law likely violates the First Amendment. It's the latest of several state-level internet regulations to be blocked while a lawsuit against them proceeds, including some that are likely bound for the Supreme Court. The CAADCA is meant to expand on existing laws -- like the federal COPPA framework -- that govern how sites can collect data from children. But Judge Freeman objected to several of its provisions, saying they would unlawfully target legal speech. "Although the stated purpose of the Act -- protecting children when they are online -- clearly is important, NetChoice has shown that it is likely to succeed on the merits of its argument that the provisions of the CAADCA intended to achieve that purpose do not pass constitutional muster," wrote Freeman.

Freeman cites arguments made by legal writer Eric Goldman, who argued that the law would force sites to erect barriers for children and adults alike. Among other things, the ruling takes issue with the requirement that sites estimate visitors' ages to detect underage users. The provision is ostensibly meant to cut down on the amount of data collected about young users, but Freeman notes that it could involve invasive technology like face scans or analyzing biometric information -- ironically requiring users to provide more personal information.

The law offers sites an alternative of making data collection for all users follow the standards for minors, but Freeman found that this would also chill legal speech since part of the law's goal is to avoid targeted advertising that would show objectionable content to children. "Data and privacy protections intended to shield children from harmful content, if applied to adults, will also shield adults from that same content," Freeman concluded.

Privacy

CBP Tells Airports Its New Facial Recognition Target is 75% of Passengers Leaving the US (404media.co) 40

Slash_Account_Dot writes: Customs and Border Protection (CBP) has told airports it plans to increase its targets for scanning passengers with facial recognition as they leave the U.S., according to an internal airport email obtained by 404 Media. The new goal will be to scan 75 percent of all passengers, the email adds. The news signals CBP's increasing focus on biometric, and in particular facial recognition, systems at airports. Although it is unclear whether the incident is related to the shift in goals, one traveler was also recently told by airline industry staff that "CBP said everyone has to do it" when they asked to opt out of facial recognition while boarding an international flight last month.
Government

IBM Returns To the Facial Recognition Market 17

During the Black Lives Matter protests in 2020, IBM announced that it would no longer offer "general purpose" facial recognition technology due to concerns about racial profiling, mass surveillance, and other human rights violations. Now, according to The Verge and Liberty Investigates, "IBM signed a $69.8 million contract with the British government to develop a national biometrics platform that will offer a facial recognition function to immigration and law enforcement officials." From the report: A contract notice for the Home Office Biometrics Matcher Platform outlines how the project initially involves developing a fingerprint matching capability, while later stages introduce facial recognition for immigration purposes -- described as "an enabler for strategic facial matching for law enforcement." The final stage of the project is described as delivery of a "facial matching for law enforcement use-case." The platform will allow photos of individuals to be matched against images stored on a database -- what is sometimes known as a "one-to-many" matching system. In September 2020, IBM described such "one-to-many" matching systems as "the type of facial recognition technology most likely to be used for mass surveillance, racial profiling, or other violations of human rights."
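The "one-to-many" matching the report describes amounts to a nearest-neighbor search of one probe image against an entire enrollment database. The toy below (hypothetical 4-dimensional "embeddings" and identity labels, purely illustrative and not any vendor's pipeline; real systems use vectors of a few hundred dimensions from a trained network) shows how it differs from 1:1 verification:

```python
import math

# Hypothetical enrolled identities and their face embeddings.
database = {
    "id_001": [0.90, 0.10, 0.30, 0.20],
    "id_002": [0.10, 0.80, 0.50, 0.10],
    "id_003": [0.88, 0.12, 0.28, 0.22],
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def one_to_many(probe, db, threshold=0.99):
    """Search the probe against EVERY enrolled identity ('1:N'), unlike
    1:1 verification, which checks only a single claimed identity."""
    return [pid for pid, emb in db.items() if cosine(probe, emb) >= threshold]

probe = [0.89, 0.11, 0.29, 0.21]   # embedding of the query photo
print(one_to_many(probe, database))   # → ['id_001', 'id_003']
```

Even at a high threshold the probe here matches two enrolled identities, which is exactly the misidentification concern IBM itself raised about one-to-many systems in 2020.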

IBM spokesman Imtiaz Mufti denied that its work on the contract was in conflict with its 2020 commitments. "IBM no longer offers general-purpose facial recognition and, consistent with our 2020 commitment, does not support the use of facial recognition for mass surveillance, racial profiling, or other human rights violations," he said. "The Home Office Biometrics Matcher Platform and associated Services contract is not used in mass surveillance. It supports police and immigration services in identifying suspects against a database of fingerprint and photo data. It is not capable of video ingest, which would typically be needed to support face-in-a-crowd biometric usage."

Human rights campaigners, however, said IBM's work on the project is incompatible with its 2020 commitments. Kojo Kyerewaa of Black Lives Matter UK said: "IBM has shown itself willing to step over the body and memory of George Floyd to chase a Home Office contract. This won't be forgotten." Matt Mahmoudi, PhD, tech researcher at Amnesty International, said: "The research across the globe is clear; there is no application of one-to-many facial recognition that is compatible with human rights law, and companies -- including IBM -- must therefore cease its sale, and honor their earlier statements to sunset these tools, even and especially in the context of law and immigration enforcement where the rights implications are compounding."
United Kingdom

UK Government Seeks Expanded Use of AI-based Facial Recognition By Police (ft.com) 15

UK's Home Office is looking to increase its use of controversial facial recognition technologies to track and find criminals within policing and other security agencies. From a report: In a document released on Wednesday, the government outlined its ambitions to potentially deploy new biometric systems nationally over the next 12 to 18 months. The move comes after privacy campaigners and independent academics criticised the technology for being inaccurate and biased, particularly against darker-skinned people.

MPs have previously called for a moratorium on its use on the general population until clear laws are established by parliament. The government is calling for submissions from companies for technologies that "can resolve identity using facial features and landmarks," including for live facial recognition which involves screening the general public for specific individuals on police watch lists.

In particular, the Home Office is highlighting its interest in novel artificial intelligence technologies that could process facial data efficiently to identify individuals, and software that could be integrated with existing technologies deployed by the department and with CCTV cameras. Facial recognition software has been used by South Wales Police and London's Metropolitan Police over the past five years across multiple trials in public spaces including shopping centres, during events such as the Notting Hill Carnival and, more recently, during the coronation.

Privacy

Worldcoin Ignored Initial Order To Stop Iris Scans in Kenya, Records Show (techcrunch.com) 11

Months before Kenya finally banned iris scans by Sam Altman's crypto startup Worldcoin, the Office of the Data Protection Commissioner (ODPC) had ordered its parent company, Tools for Humanity, to stop collecting personal data. From a report: The ODPC had in May this year instructed the crypto startup to stop iris scans and the collection of facial recognition and other personal data in Kenya, a letter sent to Worldcoin and seen by TechCrunch shows. Tools for Humanity, the company building Worldcoin, did not stop taking biometric data until early this month when Kenya's ministry of interior and administration, a more powerful entity, suspended it following its official launch. Worldcoin's official launch led to a spike in the number of people queuing up to have their eyeballs scanned in exchange for "free money," drawing the attention of authorities.

The letter shows that ODPC had instructed Worldcoin to cease collecting data for intruding on individuals' privacy by gathering biometric data without a well-established and compelling justification. Further, it said Worldcoin had failed to obtain valid consent from people before scanning their irises, saying its agents failed to inform its subjects about the data security and privacy measures it took, and how the data collected would be used or processed. "Your client is hereby instructed to cease the collection of all facial recognition data and iris scans, from your subscribers. This cessation should be implemented without delay and should include all ongoing and future data processing activities," said Rose Mosero, in a letter to Tools for Humanity that outlined the concerns.

Security

New (Deep Learning-Enhanced) Acoustic Attack Steals Data from Keystrokes With 95% Accuracy (bleepingcomputer.com) 50

Long-time Slashdot reader SonicSpike quotes this article from BleepingComputer: A team of researchers from British universities has trained a deep learning model that can steal data from keyboard keystrokes recorded using a microphone with an accuracy of 95%...

Such an attack severely affects the target's data security, as it could leak people's passwords, discussions, messages, or other sensitive information to malicious third parties. Moreover, unlike other side-channel attacks that require special conditions and are subject to data rate and distance limitations, acoustic attacks have become much simpler due to the abundance of microphone-bearing devices that can achieve high-quality audio captures. This, combined with the rapid advancements in machine learning, makes sound-based side-channel attacks feasible and a lot more dangerous than previously anticipated.

The researchers achieved 95% accuracy from the smartphone recordings, 93% from Zoom recordings, and 91.7% from Skype.

The article suggests potential defenses against the attack might include white noise, "software-based keystroke audio filters," switching to password managers — and using biometric authentication.
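The attack's shape, recording keystrokes, extracting per-key acoustic features, and training a classifier on labelled samples, can be shown with a toy stand-in. The paper actually feeds mel-spectrograms of isolated keystrokes to a deep model; everything below is synthetic illustration (random 8-dimensional "signatures" per key, nearest-centroid classification):

```python
import random, statistics

random.seed(0)

# Each key gets a synthetic acoustic "signature"; real features would be
# spectrogram patches extracted from microphone recordings.
KEYS = "abcdefgh"
signatures = {k: [random.gauss(0, 1) for _ in range(8)] for k in KEYS}

def record(key, noise=0.3):
    """One noisy 'recording' of a single keystroke."""
    return [x + random.gauss(0, noise) for x in signatures[key]]

# Attacker's labelled training set (in practice gathered from, e.g., a
# video call where typing is audible and the typed text is known).
train = [(record(k), k) for k in KEYS for _ in range(20)]

def centroid(vectors):
    return [statistics.fmean(col) for col in zip(*vectors)]

centroids = {k: centroid([v for v, lbl in train if lbl == k]) for k in KEYS}

def classify(sample):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda k: dist(sample, centroids[k]))

test_set = [(record(k), k) for k in KEYS for _ in range(20)]
acc = sum(classify(v) == k for v, k in test_set) / len(test_set)
print(f"toy accuracy: {acc:.0%}")
```

Raising the `noise` parameter degrades the toy's accuracy, which mirrors why the article lists white noise and keystroke audio filters among the defenses.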
Privacy

Worldcoin Being Probed by French Privacy Regulator for 'Questionable' Practices 6

Worldcoin (WLD), the eyeball-scanning crypto project launched by OpenAI's Sam Altman, is being investigated by French data protection regulator CNIL for "questionable" practices, the regulator told CoinDesk. From a report: "The legality of this [data] collection seems questionable, as do the conditions for preservation of biometric data," a CNIL spokesperson said in a written statement, referring to Worldcoin's practice of scanning irises to ensure that no single person can claim crypto rewards twice.

"CNIL has initiated investigations," supporting the work of Bavarian privacy regulators who have primary responsibility under EU law, the spokesperson added. Worldcoin went live on Monday and its cheerleaders say it could spread crypto wider than bitcoin (BTC), but it has drawn the ire of privacy watchdogs in the U.K., where the Information Commissioner's Office has warned that people must freely give consent to the processing of their personal data, and be able to withdraw it without detriment.
Technology

Amazon's Palm-Scanning Payment System Coming To All Whole Foods Stores (fastcompany.com) 27

Amazon has announced that its palm-scanning payment technology, called Amazon One, will roll out to all 500-plus Whole Foods locations by the end of 2023. From a report: Amazon first introduced the contactless Amazon One payment system in 2020, but its expansion by the end of 2023 will be its largest to date. Amazon One works by the user scanning their palm above a reader -- in other words, it's another form of contactless biometric authentication, like Apple's Face ID. But instead of reading your face, Amazon One reads the lines and ridges of your palm and the unique vein patterns beneath it. This reading of deeper subcutaneous features means that someone can't just photograph your palm and start loading up on costly cheeses at Whole Foods at your expense.

Your palm signature is associated with your Amazon Prime account or just a credit card, and it means you don't even need to bring your phone or wallet with you to shop and pay for goods. Currently, Amazon One is available at 200 Whole Foods in the United States as well as 200 locations at other retail outlets. Amazon's rollout will bring the total Amazon One payment locations to over 700 by year's end. Other locations where you can currently use Amazon One include Coors Field in Colorado and select Panera Bread restaurants.

Privacy

Footage From Amazon's In-Van Surveillance Cameras Is Leaking Online (vice.com) 25

An anonymous reader quotes a report from Motherboard: A phone-recorded video posted to Reddit shows a wooden desk strewn with various office supplies. On a monitor on the desk, a video begins to play: an Amazon delivery driver, being recorded by a driver-facing camera in their van, leans out of their window to talk to a customer. Though the video is cute, the setup is not: The camera's AI tracks their movements, surrounding them with a bright green box. Below them on the monitor's screen, a yellow line marks the length of the clip sent to the driver's dispatcher. Above them sits a timecode and a speed marker of "0 MPH." The driver opens their door, and moments later, a small French bulldog leaps into the van, tail wagging. The driver is delighted. The person behind the camera laughs a little. [...] The desk set-up looks consistent with that of an Amazon delivery service partner (DSP), the small-business contractors responsible for Amazon's door-to-door deliveries. The DSPs usually operate out of Amazon delivery warehouses, where they are given a desk like the one in the video, in a small area of the warehouse, out of which they select routes, dispatch drivers, and monitor their actions on the road with the help of the cameras.

The video is one of a slew of in-van surveillance videos recently posted to Reddit, a phenomenon which hasn't frequently been seen on the site before. Over the past two weeks, many users in the Amazon delivery service partner drivers subreddit (r/AmazonDSPDrivers) have shared video footage from the cameras, either directly or by recording it on their phone from a monitor within the warehouse. It is clear that many of the videos are not being posted by the subjects of the videos themselves, which highlights the fact that Amazon drivers, who already have incredibly difficult jobs, are being monitored at all times.

When Motherboard first wrote about the "Biometric Consent" form drivers had to sign that allows them to be monitored while on the job, Amazon insisted that the program was about safety only, and that workers shouldn't be worried about their privacy: "Don't believe the self-interested critics who claim these cameras are intended for anything other than safety," a spokesperson told us at the time. But this video, and a rash of others that have recently become public, shows that access to the camera feeds is being abused. [...] It's not clear why there has been a sudden spate of videos being posted publicly. One current Amazon delivery driver said that the drivers themselves did not have access to the videos -- only Amazon, Netradyne, and the relevant DSPs did.

AI

100 Bands Including RATM Boycott Venues Using Facial Recognition Technology (rollingstone.com) 46

Rolling Stone reports: Over 100 artists including Rage Against the Machine co-founders Tom Morello and Zack de la Rocha, along with Boots Riley and Speedy Ortiz, have announced that they are boycotting any concert venue that uses facial recognition technology, citing concerns that the tech infringes on privacy and increases discrimination.

The boycott, organized by the digital rights advocacy group Fight for the Future, calls for the ban of face-scanning technology at all live events. Several smaller independent concert venues across the country, including the House of Yes in Brooklyn, the Lyric Hyperion in Los Angeles, and Black Cat in D.C., also pledged to not use facial recognition tech for their shows. Other artists joining the boycott include Anti-Flag, Wheatus, and Downtown Boys, among more than 80 additional signatories. The full list of signatories is available here.

"Surveillance tech companies are pitching biometric data tools as 'innovative' and helpful for increasing efficiency and security. Not only is this false, it's morally corrupt," Leila Nashashibi, campaigner at Fight for the Future, said in a statement. "For starters, this technology is so inaccurate that it actually creates more harm and problems than it solves, through misidentification and other technical faultiness. Even scarier, though, is a world in which all facial recognition technology works 100% perfectly — in other words, a world in which privacy is nonexistent, where we're identified, watched, and surveilled everywhere we go...." New York venue Citi Field as well as Cleveland's FirstEnergy Stadium, Miami's Hard Rock Stadium, and the Pechanga Arena in San Diego are among several venues across the country that have used face-scanning.

Thanks to long-time Slashdot reader SonicSpike for sharing the story.
EU

European Companies Claim the EU's AI Act Could 'Jeopardise Technological Sovereignty' (theverge.com) 15

Some of the biggest companies in Europe have taken collective action to criticize the European Union's recently approved artificial intelligence regulations, claiming that the Artificial Intelligence Act is ineffective and could negatively impact competition. From a report: In an open letter sent to the European Parliament, Commission, and member states on Friday, over 150 executives from companies like Renault, Heineken, Airbus, and Siemens slammed the AI Act for its potential to "jeopardise Europe's competitiveness and technological sovereignty." On June 14th, the European Parliament greenlit a draft of the AI Act following two years of developing its rules, and expanding them to encompass recent AI breakthroughs like large language models (LLMs) and foundation models, such as OpenAI's GPT-4. There are still several phases remaining before the new law can take effect, with the remaining inter-institutional negotiations expected to end later this year. The signatories of the open letter claim that the AI Act in its current state may suppress the opportunity AI technology provides for Europe to "rejoin the technological avant-garde." They argue that the approved rules are too extreme, and risk undermining the bloc's technological ambitions instead of providing a suitable environment for AI innovation.
AI

EU Votes To Ban AI In Biometric Surveillance, Require Disclosure From AI Systems 34

European Union officials have voted in favor of stricter regulations on artificial intelligence, including a ban on AI use in biometric surveillance and a requirement for AI systems like OpenAI's ChatGPT to disclose when content is generated by AI. Ars Technica reports: On Wednesday, European Union officials voted to implement stricter proposed regulations concerning AI, according to Reuters. The updated draft of the "AI Act" law includes a ban on the use of AI in biometric surveillance and requires systems like OpenAI's ChatGPT to reveal when content has been generated by AI. While the draft is still non-binding, it gives a strong indication of how EU regulators are thinking about AI. The new changes to the European Commission's proposed law -- which have not yet been finalized -- intend to shield EU citizens from potential threats linked to machine learning technology.

The new draft of the AI Act includes a provision that would ban companies from scraping biometric data (such as user photos) from social media for facial recognition training purposes. News of firms like Clearview AI using this practice to create facial recognition systems drew severe criticism from privacy advocates in 2020. However, Reuters reports that this rule might be a source of contention with some EU countries who oppose a blanket ban on AI in biometric surveillance. The new EU draft also imposes disclosure and transparency measures on generative AI. Image synthesis services like Midjourney would be required to disclose AI-generated content to help people identify synthesized images. The bill would also require that generative AI companies provide summaries of copyrighted material scraped and utilized in the training of each system. While the publishing industry backs this proposal, according to The New York Times, tech developers argue against its technical feasibility.

Additionally, creators of generative AI systems would be required to implement safeguards to prevent the generation of illegal content, and companies working on "high-risk applications" must assess their potential impact on fundamental rights and the environment. The current draft of the EU law designates AI systems that could influence voters and elections as "high-risk." It also classifies systems used by social media platforms with over 45 million users under the same category, thus encompassing platforms like Meta and Twitter. [...] Experts say that after considerable debate over the new rules among EU member nations, a final version of the AI Act isn't expected until later this year.
Google

Google's Password Manager Gains Biometric Authentication on Desktop (techcrunch.com) 18

Google's aiming to make it easier to use and secure passwords -- at least, for users of the Password Manager tool built into its Chrome browser. From a report: Today, the tech giant announced that Password Manager, which generates unique passwords and autofills them across platforms, will soon gain biometric authentication on PC. (Android and iOS have had biometric authentication for some time.) When enabled, it'll require an additional layer of security, like fingerprint recognition or facial recognition, before Chrome autofills passwords.

Exactly which types of biometrics are available in Password Manager on desktop will depend on the hardware attached to the PC, of course (e.g. a fingerprint reader), as well as whether the PC's operating system supports it. Beyond "soon," Google didn't say when to expect the feature to arrive.

Security

Brute-Force Test Attack Bypasses Android Biometric Defense (techxplore.com) 35

schwit1 shares a report from TechXplore: Chinese researchers say they successfully bypassed fingerprint authentication safeguards on smartphones by staging a brute force attack. Researchers at Zhejiang University and Tencent Labs capitalized on vulnerabilities of modern smartphone fingerprint scanners to stage their break-in operation, which they named BrutePrint. Their findings are published on the arXiv preprint server.

The researchers exploited a flaw in the Match-After-Lock feature, which is supposed to bar authentication attempts once a device enters lockout mode, allowing them to continue submitting an unlimited number of fingerprint samples. Inadequate protection of biometric data stored on the Serial Peripheral Interface of fingerprint sensors enables attackers to steal fingerprint images. Samples also can be easily obtained from academic datasets or from biometric data leaks.

And a feature designed to limit the number of unsuccessful fingerprint matching attempts -- Cancel-After-Match-Fail (CAMF) -- has a flaw that allowed researchers to inject a checksum error disabling CAMF protection. In addition, BrutePrint altered illicitly obtained fingerprint images to appear as though they were scanned by the targeted device. This step improved the chances that images would be deemed valid by fingerprint scanners. To launch a successful break-in, an attacker requires physical access to a targeted phone for several hours, a printed circuit board easily obtainable for $15, and access to fingerprint images.
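The core of the attack is that disabling the attempt limit turns approximate fingerprint matching into an ordinary dictionary search. A minimal abstraction of that logic (integers stand in for fingerprint samples; `TARGET`, `matches`, and the limit value are all hypothetical, not real sensor code):

```python
TARGET = 4242                          # stands in for the enrolled print

def matches(sample):
    # Fingerprint matching is approximate, which is what makes a large
    # dictionary of leaked or academic-dataset prints viable.
    return abs(sample - TARGET) <= 1

def attack(samples, attempt_limit=None):
    """Submit candidate samples until a match or a lockout."""
    for attempt, s in enumerate(samples, start=1):
        if attempt_limit is not None and attempt > attempt_limit:
            return None                # Cancel-After-Match-Fail lockout
        if matches(s):
            return attempt             # number of tries needed
    return None

# 999 non-matching candidates, then a matching one:
samples = [s for s in range(10000) if not matches(s)][:999] + [TARGET]
print(attack(samples, attempt_limit=5))    # → None (lockout stops it)
print(attack(samples))                     # → 1000 (CAMF bypassed)
```

With the checksum-error injection disabling CAMF, the attacker's only remaining costs are the ones the article lists: hours of physical access, a $15 circuit board, and a supply of fingerprint images.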

AI

EU, US To Seek Stopgap Standards for AI, EU Tech Chief Says (reuters.com) 8

The European Union and the United States are set to step up cooperation on artificial intelligence with a view to establishing minimum standards before legislation enters force, the EU's tech chief Margrethe Vestager said on Tuesday. From a report: The European Union's AI Act could be the world's first comprehensive legislation governing the technology, with new rules on facial recognition and biometric surveillance, but EU governments and lawmakers still need to agree a common text. Vestager, a vice-president of the European Commission, told a briefing on Tuesday that process might be completed by the end of the year.

"That would still leave one if not two years then to come into effect, which means that we need something to bridge that period of time," she said. Vestager said AI would be one area of focus at the fourth ministerial-level meeting of the Trade and Technology Council (TTC) in Sweden on May 30-31, with discussions on generative AI algorithms that produce new text, visual or sound content, such as ChatGPT. "There is a shared sense of urgency. In order to make the most of this technology, guard rails are needed," she said. "Can we discuss what we can expect companies to do as a minimum before legislation kicks in?"

EU

EU Lawmakers' Committees Agree Tougher Draft AI Rules (reuters.com) 2

European lawmakers came a step closer to passing new rules regulating artificial intelligence tools such as ChatGPT, following a crunch vote on Thursday where they agreed tougher draft legislation. From a report: The European Union's highly anticipated AI Act looks set to be the world's first comprehensive legislation governing the technology, with new rules around the use of facial recognition, biometric surveillance, and other AI applications. After two years of negotiations, the bill is now expected to move to the next stage of the process, in which lawmakers finalise its details with the European Commission and individual member states.

Speaking ahead of the vote by two lawmakers' committees, Dragos Tudorache, one of the parliamentarians (MEPs) charged with drafting the laws, said: "It is a delicate deal. But it is a package that I think gives something to everyone that participated in these negotiations. Our societies expect us to do something determined about artificial intelligence, and the impact it has on their lives. It's enough to turn on the TV ... in the last two or three months, and every day you see how important this is becoming for citizens." Under the proposals, AI tools will be classified according to their perceived level of risk, from low to unacceptable. Governments and companies using these tools will have different obligations, depending on the risk level.

EU

EU Lawyers Say Plan To Scan Private Messages For Child Abuse May Be Unlawful (theguardian.com) 68

An anonymous reader quotes a report from The Guardian: An EU plan under which all WhatsApp, iMessage and Snapchat accounts could be screened for child abuse content has hit a significant obstacle after internal legal advice said it would probably be annulled by the courts for breaching users' rights. Under the proposed "chat controls" regulation, any encrypted service provider could be forced to survey billions of messages, videos and photos for "identifiers" of certain types of content where it was suspected a service was being used to disseminate harmful material. The providers issued with a so-called "detection order" by national bodies would have to alert police if they found evidence of suspected harmful content being shared or the grooming of children.

Privacy campaigners and the service providers have already warned that the proposed EU regulation and a similar online safety bill in the UK risk end-to-end encryption services such as WhatsApp disappearing from Europe. Now leaked internal EU legal advice, which was presented to diplomats from the bloc's member states on 27 April and has been seen by the Guardian, raises significant doubts about the lawfulness of the regulation unveiled by the European Commission in May last year. The legal service of the council of the EU, the decision-making body led by national ministers, has advised the proposed regulation poses a "particularly serious limitation to the rights to privacy and personal data" and that there is a "serious risk" of it falling foul of a judicial review on multiple grounds.

The EU lawyers write that the draft regulation "would require the general and indiscriminate screening of the data processed by a specific service provider, and apply without distinction to all the persons using that specific service, without those persons being, even indirectly, in a situation liable to give rise to criminal prosecution." The legal service goes on to warn that the European court of justice has previously judged the screening of communications metadata is "proportionate only for the purpose of safeguarding national security" and therefore "it is rather unlikely that similar screening of content of communications for the purpose of combating crime of child sexual abuse would be found proportionate, let alone with regard to the conduct not constituting criminal offenses." The lawyers conclude the proposed regulation is at "serious risk of exceeding the limits of what is appropriate and necessary in order to meet the legitimate objectives pursued, and therefore of failing to comply with the principle of proportionality".
The legal service is also concerned about the introduction of age verification technology and processes to popular encrypted services. "The lawyers write that this would necessarily involve the mass profiling of users, or the biometric analysis of the user's face or voice, or alternatively the use of a digital certification system they note 'would necessarily add another layer of interference with the rights and freedoms of the users,'" reports the Guardian.

"Despite the advice, it is understood that 10 EU member states -- Belgium, Bulgaria, Cyprus, Hungary, Ireland, Italy, Latvia, Lithuania, Romania and Spain -- back continuing with the regulation without amendment."
