Crime

Was the Arrest of Telegram's CEO Inevitable? (platformer.news) 174

Casey Newton, former senior editor at the Verge, weighs in on Platformer about the arrest of Telegram CEO Pavel Durov.

"Fending off onerous speech regulations and overzealous prosecutors requires that platform builders act responsibly. Telegram never even pretended to." Officially, Telegram's terms of service prohibit users from posting illegal pornographic content or promotions of violence on public channels. But as the Stanford Internet Observatory noted last year in an analysis of how CSAM spreads online, these terms implicitly permit users to share CSAM in private channels as much as they want. "There's illegal content on Telegram. How do I take it down?" asks a question on Telegram's FAQ page. The company declares that it will not intervene in any circumstances: "All Telegram chats and group chats are private amongst their participants," it states. "We do not process any requests related to them...."

Telegram can look at the contents of private messages, making it vulnerable to law enforcement requests for that data. Anticipating these requests, Telegram created a kind of jurisdictional obstacle course for law enforcement that (it says) no agency has successfully navigated so far. From the FAQ again:

To protect the data that is not covered by end-to-end encryption, Telegram uses a distributed infrastructure. Cloud chat data is stored in multiple data centers around the globe that are controlled by different legal entities spread across different jurisdictions. The relevant decryption keys are split into parts and are never kept in the same place as the data they protect. As a result, several court orders from different jurisdictions are required to force us to give up any data. [...] To this day, we have disclosed 0 bytes of user data to third parties, including governments.
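Telegram has not published implementation details for this scheme, but the general technique the FAQ describes, splitting a decryption key into shares that are useless on their own and storing them in different places, can be illustrated with simple XOR secret sharing. This is a sketch of the concept only; the function names and parameters are illustrative, not Telegram's:

```python
import secrets

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    """Split a key into n XOR shares; ALL shares are required to rebuild it."""
    # n-1 shares are pure random noise...
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    # ...and the last share is the key XORed with all of them.
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def recombine(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the original key."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = secrets.token_bytes(32)
shares = split_key(key, 3)          # e.g. one share per jurisdiction
assert recombine(shares) == key     # all three together recover the key
```

Any subset smaller than the full set is statistically indistinguishable from random bytes, which is the property that (in Telegram's telling) forces investigators to obtain court orders in several jurisdictions at once.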

As a result, investigation after investigation finds that Telegram is a significant vector for the spread of CSAM.... The company's refusal to answer almost any law enforcement request, no matter how dire, has enabled some truly vile behavior. "Telegram is another level," Brian Fishman, Meta's former anti-terrorism chief, wrote in a post on Threads. "It has been the key hub for ISIS for a decade. It tolerates CSAM. It's ignored reasonable [law enforcement] engagement for YEARS. It's not 'light' content moderation; it's a different approach entirely."

The article asks whether France's action "will embolden countries around the world to prosecute platform CEOs criminally for failing to turn over user data." On the other hand, Telegram really does seem to be actively enabling a staggering amount of abuse. And while it's disturbing to see state power used indiscriminately to snoop on private conversations, it's equally disturbing to see a private company declare itself to be above the law.

Given its behavior, a legal intervention into Telegram's business practices was inevitable. But the end of private conversation, and end-to-end encryption, need not be.

The Courts

Supreme Court Declines To Block Texas Porn Restriction (nbcnews.com) 145

The Supreme Court on Tuesday refused to block on free speech grounds a provision of Texas law aimed at preventing minors from accessing pornographic content online. From a report: The justices turned away a request made by the Free Speech Coalition, a pornography industry trade group, as well as several companies. The challengers said the 2023 law violates the Constitution's First Amendment by requiring anyone using the platforms in question, including adults, to submit personal information.

One provision of the law, known as H.B. 1181, mandates that platforms verify users' ages by requiring them to submit information about their identities. Although the law is aimed at limiting children's access to sexually explicit content, the lawsuit focuses on how those measures also affect adults. "Specifically, the act requires adults to comply with intrusive age verification measures that mandate the submission of personally identifying information over the internet in order to access websites containing sensitive and intimate content," the challengers wrote in court papers.

The Courts

Florida Braces For Lawsuits Over Law Banning Kids From Social Media (arstechnica.com) 168

An anonymous reader quotes a report from Ars Technica: On Monday, Florida became the first state to ban kids under 14 from social media without parental permission. It appears likely that the law -- considered one of the most restrictive in the US -- will face significant legal challenges, however, before taking effect on January 1. Under HB 3, apps like Instagram, Snapchat, or TikTok would need to verify the ages of users, then delete any accounts for users under 14 when parental consent is not granted. Companies that "knowingly or recklessly" fail to block underage users risk fines of up to $10,000 in damages to anyone suing on behalf of child users. They could also be liable for up to $50,000 per violation in civil penalties. [...]

DeSantis' statement noted that "in addition to protecting children from the dangers of social media, HB 3 requires pornographic or sexually explicit websites to use age verification to prevent minors from accessing sites that are inappropriate for children." This suggests that Florida could face a legal challenge from adult sites like Pornhub, which have been suing to block states from requiring an ID to access adult content. Most recently, Pornhub blocked access to its platform in Texas, arguing that such laws "impinge on the rights of adults to access protected speech" and fail "strict scrutiny by employing the least effective and yet also most restrictive means of accomplishing Texas's stated purpose of allegedly protecting minors."

According to the Guardian, [Florida House Speaker Paul Renner, who spearheaded the law] expected that social media companies would "sue the second after" HB 3 was signed. So far, no legal challenges have been raised, but Renner seemingly expects that the law's focus on "addictive features such as notification alerts and autoplay videos, rather than on their content" would ensure that the law defeats any constitutional concerns potentially raised by social media companies. "We're going to beat them, and we're never, ever going to stop," Renner vowed.

AI

Taylor Swift Deepfakes Originated From AI Challenge, Report Says 62

The pornographic deepfakes of Taylor Swift that proliferated on social media late last month originated from an online challenge to break safety mechanisms designed to block people from generating lewd images with artificial intelligence, according to social network analysis company Graphika. Bloomberg: For weeks, users of internet forum 4chan have taken part in daily competitions to find words and phrases that could help them bypass the filters on popular image-generation services, which include Microsoft Designer and OpenAI's DALL-E, the researchers found. The ultimate goal was to create sexual images of prominent female figures such as singers and politicians. "While viral pornographic pictures of Taylor Swift have brought mainstream attention to the issue of AI-generated non-consensual intimate images, she is far from the only victim," said Cristina Lopez G., a senior analyst at Graphika, in an email. "In the 4chan community where these images originated, she isn't even the most frequently targeted public figure. This shows that anyone can be targeted in this way, from global celebrities to school children."
DRM

'Copyright Troll' Porn Company 'Makes Millions By Shaming Porn Consumers' (yahoo.com) 100

In 1999 Los Angeles Times reporter Michael Hiltzik co-authored a Pulitzer Prize-winning story. Now a business columnist for the Times, he writes that a Southern California maker of pornographic films named Strike 3 Holdings is also "a copyright troll," according to U.S. Judge Royce C. Lamberth: Lamberth wrote in 2018, "Armed with hundreds of cut-and-pasted complaints and boilerplate discovery motions, Strike 3 floods this courthouse (and others around the country) with lawsuits smacking of extortion. It treats this Court not as a citadel of justice, but as an ATM." He likened its litigation strategy to a "high-tech shakedown." Lamberth was not speaking off the cuff. Since September 2017, Strike 3 has filed more than 12,440 lawsuits in federal courts alleging that defendants infringed its copyrights by downloading its movies via BitTorrent, an online service on which unauthorized content can be accessed by almost anyone with a computer and internet connection.

That includes 3,311 cases the firm filed this year, more than 550 in federal courts in California. On some days, scores of filings reach federal courthouses — on Nov. 17, to select a date at random, the firm filed 60 lawsuits nationwide... Typically, they are settled for what lawyers say are cash payments in the four or five figures or are dismissed outright...

It's impossible to pinpoint the profits that can be made from this courthouse strategy. J. Curtis Edmondson, a Portland, Oregon, lawyer who is among the few who pushed back against a Strike 3 case and won, estimates that Strike 3 "pulls in about $15 million to $20 million a year from its lawsuits." That would make the cases "way more profitable than selling their product...." If only one-third of its more than 12,000 lawsuits produced settlements averaging as little as $5,000 each, the yield would come to $20 million... The volume of Strike 3 cases has increased every year — from 1,932 in 2021 to 2,879 last year and 3,311 this year.
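Hiltzik's back-of-the-envelope estimate is easy to reproduce. Using the article's round figures (12,000 suits, a one-third settlement rate, $5,000 average settlement — all assumptions, since actual terms are confidential):

```python
lawsuits = 12_000        # round figure from the article
settle_rate = 1 / 3      # assumed fraction of suits that settle
avg_settlement = 5_000   # low end of the reported four-to-five-figure range

estimated_yield = lawsuits * settle_rate * avg_settlement
print(f"${estimated_yield:,.0f}")  # prints $20,000,000
```

Even these deliberately conservative inputs land squarely inside Edmondson's $15-20 million annual estimate, which is the column's point: the litigation itself plausibly out-earns the films.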

What's really needed is a change in copyright law to bring the statutory damages down to a level that truly reflects the value of a film lost because of unauthorized downloading — not $750 or $150,000 but perhaps a few hundred dollars.

Almost none of the lawsuits go to trial. Instead, ISPs get a subpoena demanding the real-world address and name behind IP addresses "ostensibly used to download content from BitTorrent..." according to the article. Strike 3 will then "proceed by sending a letter implicitly threatening the subscriber with public exposure as a pornography viewer and explicitly with the statutory penalties for infringement written into federal copyright law — up to $150,000 for each example of willful infringement and from $750 to $30,000 otherwise."

A federal judge in Connecticut wrote last year that "Given the nature of the films at issue, defendants may feel coerced to settle these suits merely to prevent public disclosure of their identifying information, even if they believe they have been misidentified."

Thanks to Slashdot reader Beerismydad for sharing the article.
AI

AI Platform Generated Images That 'Could Be Categorized as Child Pornography,' Leaked Documents Show (404media.co) 189

404 Media: OctoML, a Seattle-based startup that helps companies optimize and deploy their machine learning models, debated internally whether it was ethical and legally risky for it to generate images for Civitai, an AI model sharing and image generating platform backed by venture capital firm Andreessen Horowitz, after it discovered Civitai generated content that OctoML co-founder Thierry Moreau said "could be categorized as child pornography," according to internal OctoML Slack messages and documents viewed by 404 Media.

OctoML has raised $132 million in funding, and is an AWS partner, meaning it generated these images on Amazon servers. "What's absolutely staggering is that this is the #3 all time downloaded model on CivitAI, and is presented as a pretty SFW model," Moreau, who is also OctoML's VP, technology partnerships, said in a company Slack room called #ai_ethics on June 8, 2023. Moreau was referring to an AI model called "Deliberate" that can produce pornographic images. "A fairly innocent and short prompt '[girl: boy: 15], hyperdetailed' automatically generated unethical/shocking content -- read something could be categorized as child pornography," his Slack message added.

Australia

Australia Will Not Force Adult Websites To Bring In Age Verification Due To Privacy and Security Concerns (theguardian.com) 76

The federal government of Australia will not force adult websites to bring in age verification due to concerns around privacy and security of the technology. The Guardian reports: On Wednesday, the communications minister, Michelle Rowland, released the eSafety commissioner's long-awaited roadmap for age verification for online pornographic material, which has been sitting with the government since March 2023. The federal government has decided against forcing sites to bring in age verification technology, instead tasking the eSafety commissioner, Julie Inman Grant, to work with the industry to develop a new code to educate parents on how to access filtering software and limit children's access to such material or sites that are not appropriate.

"It is clear from the roadmap at present, each type of age verification or age assurance technology comes with its own privacy, security, effectiveness or implementation issues," the government's response to the roadmap said. The technology must work effectively without circumvention, must be able to be applied to pornography hosted outside Australia, and must not introduce risks to the personal information of adults who choose to access legal pornography, the government stated. "The roadmap makes clear that a decision to mandate age assurance is not yet ready to be taken."

The new tranche of codes will be developed by eSafety following the implementation of the first set of industry codes in December this year. The government will also bring forward an independent statutory review of the Online Safety Act in 2024 to ensure it is fit for purpose, and this review will be completed in this term of government. The UK's approach to age assurance will also be monitored, as the UK is "a key likeminded partner." The report suggested trialling a pilot of age assurance technologies, but this was not adopted by the government. The report also noted the government's development of a digital ID in the wake of the Optus and Medibank data breaches, but said it was not suggesting the government ID be used for confirming ages on pornographic websites.

United Kingdom

UK Tightens Online Safety Bill Again as It Nears Final Approval (bloomberg.com) 31

The UK made last-minute amendments toughening up its sweeping, long-awaited Online Safety Bill following scrutiny in Parliament's upper chamber, the House of Lords. From a report: Internet companies carrying pornographic content will be explicitly required to use age verification or estimation measures, and ensure these methods are effective, the Department for Science, Innovation and Technology said in an emailed statement Friday. Executives will be held personally responsible for child safety on their platforms, the statement said.

DSIT didn't respond to follow-up questions about the detail of this policy. Regulator Ofcom will be empowered to retrieve data on the online activity of deceased children to understand if and how their online activity may have played any role in their death, if requested by a coroner, the government said. It also announced Ofcom will research the role that app stores play in children's access to harmful content. The watchdog will also publish guidance on how platforms can reduce risks to women and have to improve public literacy of disinformation.

Social Networks

The Imgur Apocalypse Is Going To Break Large Parts of the Internet (vice.com) 61

An anonymous reader quotes a report from Motherboard: Imgur, a popular photo-uploading service that has been informally tied to Reddit since its 2009 founding, will remove two types of content from its platform starting next month: explicit or pornographic imagery, and images uploaded anonymously -- the latter focusing on unused images, according to the company. While technically banned from Imgur for years through its community rules, adult content hasn't been actively removed (and is incredibly popular). Until now.

The move is also going to be disastrous for the continuity of the internet. Like Photobucket before it, Imgur has been widely used to host millions of photos that are linked to, embedded, or used elsewhere, and lots of these photos were uploaded by people who didn't bother to sign up for accounts. Imgur is especially popular as a host for Reddit, meaning the content of those old posts could suddenly disappear off the internet. The move will likely also break embeds in various forum posts and blog posts all over the internet, creating an unpleasant form of link rot. (The Archive Team, generally a harbinger of shuttering sites, is working on backing up this material, according to an announcement on Reddit.)
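For forum and blog operators trying to gauge their exposure before the purge, a first step is simply inventorying Imgur embeds in stored HTML so they can be re-hosted or submitted for archiving. A minimal sketch — the regex is a rough approximation of common Imgur URL shapes, not an exhaustive pattern:

```python
import re

# Matches direct-image (i.imgur.com) and page (imgur.com) URLs,
# with an optional image-file extension.
IMGUR_EMBED = re.compile(
    r'https?://(?:i\.)?imgur\.com/[A-Za-z0-9]+(?:\.(?:jpe?g|png|gifv?))?'
)

def find_imgur_embeds(html: str) -> list[str]:
    """Collect Imgur URLs embedded in a page for archiving or re-hosting."""
    return IMGUR_EMBED.findall(html)

page = '<p>Proof: <img src="https://i.imgur.com/AbC123.png"> and https://imgur.com/XyZ789</p>'
print(find_imgur_embeds(page))
# prints ['https://i.imgur.com/AbC123.png', 'https://imgur.com/XyZ789']
```

Each URL found can then be fetched and mirrored locally, or handed to an archiving effort such as the Archive Team project mentioned above, before the originals return 404s.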

AI

Inside the Deepfake Porn Economy (nbcnews.com) 67

The nonconsensual deepfake economy has remained largely out of sight, but it's easily accessible, and some creators can accept major credit cards. From a report: Digitally edited pornographic videos featuring the faces of hundreds of unconsenting women are attracting tens of millions of visitors on websites, one of which can be found at the top of Google search results. The people who create the videos charge as little as $5 to download thousands of clips featuring the faces of celebrities, and they accept payment via Visa, Mastercard and cryptocurrency. While such videos, often called deepfakes, have existed online for years, advances in artificial intelligence and the growing availability of the technology have made it easier -- and more lucrative -- to make nonconsensual sexually explicit material.

An NBC News review of two of the largest websites that host sexually explicit deepfake videos found that they were easily accessible through Google and that creators on the websites also used the online chat platform Discord to advertise videos for sale and the creation of custom videos. The deepfakes are created using AI software that can take an existing video and seamlessly replace one person's face with another's, even mirroring facial expressions. Some lighthearted deepfake videos of celebrities have gone viral, but the most common use is for sexually explicit videos. According to Sensity, an Amsterdam-based company that detects and monitors AI-developed synthetic media for industries like banking and fintech, 96% of deepfakes are sexually explicit and feature women who didn't consent to the creation of the content. Most deepfake videos are of female celebrities, but creators now also offer to make videos of anyone. A creator offered on Discord to make a 5-minute deepfake of a "personal girl," meaning anyone with fewer than 2 million Instagram followers, for $65.

Mozilla

Mozilla Launches a New Startup Focused on 'Trustworthy' AI (techcrunch.com) 61

On the eve of its 25th anniversary, Mozilla, the not-for-profit behind the Firefox browser, is launching an AI-focused startup. From a report: Called Mozilla.ai, the newly forged company's mission isn't to build just any AI -- its mission is to build AI that's open source and "trustworthy," according to Mark Surman, the executive president of Mozilla and the head of Mozilla.ai. "Working on trustworthy AI for almost five years, I've constantly felt a mix of excitement and anxiety," he told TechCrunch in an email interview. "The last month or two of rapid-fire big tech AI announcements has been no different. Really exciting new tech is emerging -- new tools that have immediately sparked artists, founders ... all kinds of people to do new things. The anxiety comes when you realize almost no one is looking at the guardrails."

Surman was referring to the rash of AI models in recent months that, while impressive in their capabilities, have worrisome real-world implications. At release, OpenAI's text-generating ChatGPT could be prompted to write malware, identify exploits in open source code and create phishing websites that looked similar to well-trafficked sites. Text-to-image AI like Stable Diffusion, meanwhile, has been co-opted to create pornographic, nonconsensual deepfakes and ultra-graphic depictions of violence. The creators of these models say that they're taking steps to curb abuse. But Mozilla felt that not enough was being done. "We've been working on trustworthy AI on the public interest research side for about five years, hoping other industry players with more AI expertise would step up to build more trustworthy tech," Surman said. "They haven't. So we decided mid-last year we needed to do it ourselves -- and to find like-minded partners to do it alongside us. We then set out to find someone with the right mix of academic and industry AI experience to lead it." Funded by a $30 million seed investment from the Mozilla Foundation, Mozilla's parent organization, Mozilla.ai is a wholly owned subsidiary of the Mozilla Foundation -- much like the Mozilla Corporation (the org responsible for developing Firefox) and Mozilla Ventures (the Mozilla Foundation's VC fund). Its managing director is Moez Draief, who previously was the chief scientist at Huawei's Noah's Ark AI lab and the global chief scientist at consulting company Capgemini.

United States

US Fed Reserve Zoom Conference Canceled After 'Porn-Bombing' (pcmag.com) 75

A Federal Reserve Zoom event with more than 220 people was canceled after a user hijacked proceedings and displayed pornographic content, Reuters reports. From a report: The hijack left Fed Governor Christopher Waller unable to deliver his opening remarks because graphic images from a call participant named "Dan" began to pop up on the screen. In a statement to Reuters, Brent Tjarks, executive director of the Mid-Size Bank Coalition of America (MBCA), which hosted the Zoom event, said: "We were a victim of a teleconference or Zoom hijacking and we are trying to understand what we need to do going forward to prevent this from ever happening again. It is an incident we deeply regret. We have had various programs and this is something that we have never had happen to us." Tjarks adds that he suspects a security switch for the Zoom event that would have muted users and prevented them from sharing their screens was incorrectly set, though he could not confirm. The MBCA, whose roughly 100 members include banks with between $10 billion and $100 billion in assets, made the decision to cancel the event minutes after it was scheduled to commence, citing "technical difficulties."
Google

Google Will Soon Blur Explicit Images By Default in Search Results (theverge.com) 67

Google is introducing a new online safety feature to help users avoid inadvertently seeing graphically violent or pornographic images while using its search engine. From a report: Announced as part of the company's Safer Internet Day event on Tuesday, the new default setting enabled for everyone will automatically blur explicit images that appear in search results, even for users that don't have SafeSearch enabled. Google has confirmed to The Verge that, should they wish, signed-in users over 18 will be able to disable the blur setting entirely after it launches in "the coming months."
The Internet

Watching Porn Now Requires Age Verification in Louisiana Because of New Law 328

An anonymous reader shares a report: The porn industry has been around for a while and in today's digital age business is booming. When Laurie Schlegel isn't seeing her patients who struggle with sex addiction, she's at the Louisiana State Capitol. The Republican state representative from Metairie passed HB 142 earlier this year requiring age verification for any website that contains 33.3% or more pornographic material. "Pornography is destroying our children and they're getting unlimited access to it on the internet and so if the pornography companies aren't going to be responsible, I thought we need to go ahead and hold them accountable," said Schlegel. According to Schlegel, websites would verify someone's age in collaboration with LA Wallet. So, if you plan on using these sites in the future, you may want to download the app. "I would say so," said Sara Kelley, project manager with Envoc. "I mean, I think it's a must-have for anyone who has a Louisiana state ID or driver's license."

Kelley added there are other ways websites could ask you to verify your age if you cannot access LA Wallet. She added that although some personal information will be required, companies must not retain personal data after verification is complete. "It doesn't identify your date of birth, it doesn't identify who you are, where you live, what part of the state you're in, or any information from your device or from your actual ID. It just returns that age to say that yes, this person is old enough to be allowed to go in," explained Kelley. It will be the website's responsibility to ensure age verification is required when accessing their site in Louisiana. Schlegel said there will be consequences for those who fail to follow the law.
AI

Stable Diffusion Made Copying Artists and Generating Porn Harder (theverge.com) 63

AmiMoJo writes: Users of AI image generator Stable Diffusion are angry about an update to the software that "nerfs" its ability to generate NSFW output and pictures in the style of specific artists. Stability AI, the company that funds and disseminates the software, announced Stable Diffusion Version 2 early this morning European time. The update re-engineers key components of the model and improves certain features like upscaling (the ability to increase the resolution of images) and in-painting (context-aware editing). But, the changes also make it harder for Stable Diffusion to generate certain types of images that have attracted both controversy and criticism. These include nude and pornographic output, photorealistic pictures of celebrities, and images that mimic the artwork of specific artists.

"They have nerfed the model," commented one user on a Stable Diffusion sub-reddit. "It's kinda an unpleasant surprise," said another on the software's official Discord server. Users note that asking Version 2 of Stable Diffusion to generate images in the style of Greg Rutkowski -- a digital artist whose name has become a literal shorthand for producing high-quality images -- no longer creates artwork that closely resembles his own. "What did you do to greg," commented one user on Discord.

United Kingdom

UK To Criminalize Deepfake Porn Sharing Without Consent (techcrunch.com) 116

Brace for yet another expansion to the UK's Online Safety Bill: The Ministry of Justice has announced changes to the law which are aimed at protecting victims of revenge porn, pornographic deepfakes and other abuses related to the taking and sharing of intimate imagery without consent -- in a crackdown on a type of abuse that disproportionately affects women and girls. From a report: The government says the latest amendment to the Bill will broaden the scope of current intimate image offences -- "so that more perpetrators will face prosecution and potentially time in jail."

Other abusive behaviors that will become explicitly illegal include "downblousing" (where photographs are taken down a woman's top without consent); and the installation of equipment, such as hidden cameras, to take or record images of someone without their consent. The government describes the planned changes as a comprehensive package of measures to modernize laws in this area.

AI

A Horrifying New AI App Swaps Women Into Porn Videos With a Click (technologyreview.com) 258

Karen Hao, reporting for MIT Technology Review: The website is eye-catching for its simplicity. Against a white backdrop, a giant blue button invites visitors to upload a picture of a face. Below the button, four AI-generated faces allow you to test the service. Above it, the tag line boldly proclaims the purpose: turn anyone into a porn star by using deepfake technology to swap the person's face into an adult video. All it requires is the picture and the push of a button. MIT Technology Review has chosen not to name the service, which we will call Y, or use any direct quotes and screenshots of its contents, to avoid driving traffic to the site. It was discovered and brought to our attention by deepfake researcher Henry Ajder, who has been tracking the evolution and rise of synthetic media online.

For now, Y exists in relative obscurity, with a small user base actively giving the creator development feedback in online forums. But researchers have feared that an app like this would emerge, breaching an ethical line no other service has crossed before. From the beginning, deepfakes, or AI-generated synthetic media, have primarily been used to create pornographic representations of women, who often find this psychologically devastating. The original Reddit creator who popularized the technology face-swapped female celebrities' faces into porn videos. To this day, the research company Sensity AI estimates, between 90% and 95% of all online deepfake videos are nonconsensual porn, and around 90% of those feature women.

Cloud

Man Steals 620K Photos From iCloud Accounts Without Apple Noticing (latimes.com) 74

An anonymous reader quotes a report from The Los Angeles Times: A Los Angeles County man broke into thousands of Apple iCloud accounts and collected more than 620,000 private photos and videos in a plot to steal and share images of nude young women, federal authorities say. Hao Kuo Chi, 40, of La Puente, has agreed to plead guilty to four felonies, including conspiracy to gain unauthorized access to a computer, court records show. Chi, who goes by David, admitted that he impersonated Apple customer support staff in emails that tricked unsuspecting victims into providing him with their Apple IDs and passwords, according to court records. He gained unauthorized access to photos and videos of at least 306 victims across the nation, most of them young women, he acknowledged in his plea agreement with federal prosecutors in Tampa, Fla.

Chi said he hacked into the accounts of about 200 of the victims at the request of people he met online. Using the moniker "icloudripper4you," Chi marketed himself as capable of breaking into iCloud accounts to steal photos and videos, he admitted in court papers. Chi acknowledged in court papers that he and his unnamed co-conspirators used a foreign encrypted email service to communicate with each other anonymously. When they came across nude photos and videos stored in victims' iCloud accounts, they called them "wins," which they collected and shared with one another. "I don't even know who was involved," Chi said Thursday in a brief phone conversation. He expressed fear that public exposure of his crimes would "ruin my whole life."

The scam started to unravel in March 2018. A California company that specializes in removing celebrity photos from the internet notified an unnamed public figure in Tampa, Fla., that nude photos of the person had been posted on pornographic websites, according to [FBI agent Anthony Bossone]. The victim had stored the nude photos on an iPhone and backed them up to iCloud. Investigators soon discovered that a log-in to the victim's iCloud account had come from an internet address at Chi's house in La Puente, Bossone said. The FBI got a search warrant and raided the house May 19. By then, agents had already gathered a clear picture of Chi's online life from a vast trove of records that they obtained from Dropbox, Google, Apple, Facebook and Charter Communications. On Aug. 5, Chi agreed to plead guilty to one count of conspiracy and three counts of gaining unauthorized access to a protected computer. He faces up to five years in prison for each of the four crimes.

Privacy

Unlike Clearview AI, this Facial-Recognition Search Engine is Open to Everyone (cnn.com) 30

This week CNN investigated PimEyes, a "mysterious" but powerful facial-recognition search engine: If you upload a picture of your face to PimEyes' website, it will immediately show you any pictures of yourself that the company has found around the internet. You might recognize all of them, or be surprised (or, perhaps, even horrified) by some; these images may include anything from wedding or vacation snapshots to pornographic images. PimEyes is open to anyone with internet access. It's a stark contrast from Clearview AI, which became well-known for building its enormous stash of faces with images of people from social networks and limits its use to law enforcement (Clearview has said it has hundreds of such customers).

PimEyes' decision to make facial-recognition software available to the general public crosses a line that technology companies are typically unwilling to traverse, and opens up endless possibilities for how it can be used and abused. Imagine a potential employer digging into your past, an abusive ex tracking you, or a random stranger snapping a photo of you in public and then finding you online. This is all possible through PimEyes: Though the website instructs users to search for themselves, it doesn't stop them from uploading photos of anyone. At the same time, it doesn't explicitly identify anyone by name, but as CNN Business discovered by using the site, that information may be just clicks away from images PimEyes pulls up...

PimEyes lets users see a limited number of small, somewhat pixelated search results at no cost, or you can pay a monthly fee, which starts at $29.99, for more extensive search results and features (such as the ability to click through to full-size images on the websites where PimEyes found them, and to set up alerts for when PimEyes finds new pictures of faces online that its software believes match an uploaded face)... Although PimEyes instructs visitors to search only for their own face, there's no mechanism on the site to ensure it's used this way... There's also no way to ensure this facial-recognition technology isn't used to misidentify people...

The website currently lists no information about who owns or runs the search engine, or how to reach them, and users must submit a form to get answers to questions or help with accounts.

The Internet

LiveLeak, the Internet's Font of Gore and Violence, Has Shut Down (theverge.com) 79

Video site LiveLeak, best known for hosting gruesome footage that mainstream rivals wouldn't touch, has shut down after fifteen years in operation. In its place is "ItemFix," a site that bans users from uploading media containing "excessive violence or gory content." The Verge reports: In a blog post, LiveLeak founder Hayden Hewitt did not give an explicit reason for the site's closure, saying only that: "The world has changed a lot over these last few years, the Internet alongside it, and we as people." In a video posted on his YouTube channel Trigger Warning, Hewitt offered no further details, but said that maintaining LiveLeak had become a struggle, and that he and his team "just didn't have it in us to carry on fighting." "Everything's different now, everything moves on," says Hewitt, before adding in an aside to the camera: "I don't fucking like it. I liked it much better when it was the Wild West."

LiveLeak has been a mainstay of internet culture for many years, its name synonymous with footage of murder, terrorism, and everyday incidents of crime and violence. A sinister doppelganger to sites like YouTube, LiveLeak was founded in 2006 and grew out of a culture of early internet "shock sites" like Ogrish, Rotten.com, and BestGore: websites that hosted violent and pornographic content with the express aim of disgusting visitors.

[D]emand for such extreme content will always exist, even if individual sites like LiveLeak come and go. In his farewell blog post, the site's founder Hayden Hewitt emphasized the importance of the site's community. "To the members, the uploaders, the casual visitors, the trolls and the occasionally demented people who have been with us. You have been our constant companions and although we probably didn't get to communicate too often you're appreciated more than you realize," he writes. "On a personal level you have fascinated and amused me with your content. Lastly, to those no longer with us. I still remember you."
