Biotech

Police Use DNA Phenotyping To Limit Pool of Suspects To 15,000 (vice.com) 50

An anonymous reader quotes a report from Motherboard: The Queensland, Australia police have used DNA phenotyping for the first time in hopes of a breakthrough in a 1982 murder case. The department partnered with a U.S.-based company called Parabon NanoLabs to create a profile image of the murder suspect, a Caucasian man with long blond hair. Police say the image was generated from blood samples found at the scene of the 40-year-old murder; according to the Australian Broadcasting Corporation, this is the first time "investigative genetic genealogy" has been used in Queensland.

This image does not factor in any environmental characteristics, such as tattoos, facial hair, and scars, and cannot determine the age or body mass of the suspect. However, Queensland investigators have published the image online and are offering a $500,000 reward and indemnity from prosecution to anyone who might have information about the suspect. The image is a vague rendering of a man that does not provide any more information than the sketch that the department already has of the suspect. This further perpetuates the hyper-surveillance of any man who resembles the image. Parabon NanoLabs has already been criticized by criminal justice and privacy experts for disseminating images that implicate too broad a pool of suspects.

The Queensland police department said that the DNA sample from the case generated a genealogy tree of "15,000 'linked' individuals" and they have not been able to find a close match yet. Instead of facing the possibility that DNA phenotyping may not be an effective tool for narrowing down a suspect, the police department's strategy is to ask the public for their DNA samples. Criminologist Xanthe Mallett said in a press release that to help police find a match, people can "opt-in" to share their own DNA samples with investigators through DNA services such as Family Tree and GEDMatch.
"Many members of the public that see this generated image will be unaware that it's a digital approximation, that age, weight, hairstyle, and face shape may be very different, and that accuracy of skin/hair/eye color is approximate," said Callie Schroeder, the Global Privacy Counsel at the Electronic Privacy Information Center.
Social Networks

Tumblr Will Now Allow Nudity But Not Explicit Sex (theverge.com) 45

Tumblr has made an update it hinted at in September, changing its rules to allow nudity -- but not sexually explicit images -- on the platform. The Verge reports: The company updated its community guidelines earlier today, laying out a set of rules that stops short of its earlier permissive attitude toward sexuality but that formally allows a wider range of imagery. "We now welcome a broader range of expression, creativity, and art on Tumblr, including content depicting the human form (yes, that includes the naked human form). So, even if your creations contain nudity, mature subject matter, or sexual themes, you can now share them on Tumblr using the appropriate Community Label," the post says. "Visual depictions of sexually explicit acts remain off-limits on Tumblr."

A help center post and the community guidelines offer a little more detail. They say that "text, images, and videos that contain nudity, offensive language, sexual themes, or mature subject matter" are allowed on Tumblr, but "visual depictions of sexually explicit acts (or content with an overt focus on genitalia)" aren't. There's an exception for "historically significant art that you may find in a mainstream museum and which depicts sex acts -- such as from India's Sunga Empire," although it must be labeled with a mature content or "sexual themes" tag so that users can filter it from their dashboards.

"Nudity and other kinds of adult material are generally welcome. We're not here to judge your art, we just ask that you add a Community Label to your mature content so that people can choose to filter it out of their Dashboard if they prefer," say the community guidelines. However, users can't post links or ads to "adult-oriented affiliate networks," they can't advertise "escort or erotic services," and they can't post content that "promotes pedophilia," including "sexually suggestive" content with images of children.
On December 17th, 2018, Tumblr permanently banned adult content from its platform. The site was owned by Verizon at the time and later sold to WordPress.com owner Automattic, which largely maintained the ban "in large part because internet infrastructure services -- like payment processors and Apple's iOS App Store -- typically frown on explicit adult content," reports The Verge.
IOS

Apple Executive Responds To Annoying iOS 16 Copy and Paste Prompt: 'Absolutely Not Expected Behavior' (macrumors.com) 42

Apple has responded to user complaints regarding an annoying pop-up in iOS 16 that asks for permission whenever an app wants to access the clipboard to paste text, images, and more. From a report: The new prompt was added to iOS 16 as a privacy measure, requiring that apps ask for permission to access the clipboard, which may contain sensitive data. The prompt, however, has become an annoyance for users upgrading to iOS 16, as it appears every time they want to paste something into an app. With user frustration mounting, Apple has finally responded, saying the constant pop-up is not how the feature is intended to work.

MacRumors reader Kieran sent an email to Craig Federighi and Tim Cook, complaining about the constant prompt and advocating for Apple to treat clipboard access the same way iOS treats third-party access to location, camera, microphone, and more. Ron Huang, a senior manager at Apple, joined the email thread, saying the pop-up is not supposed to appear every time a user attempts to paste. "This is absolutely not expected behavior, and we will get to the bottom of it," Huang said. Huang added that this behavior is not something Apple has seen internally but that Kieran is "not the only one" experiencing it. Responding to the suggestion that clipboard access should be configurable within the Settings app on a per-app basis, Huang said it would make a "good improvement" and added that Apple "certainly need to fix and make apps like Mail just work even without this setting, but it's nonetheless helpful for apps which users want to share data with even if they didn't initiate it." "Stay tuned," he added.
AI

OpenAI Begins Allowing Users To Edit Faces With DALL-E 2 (techcrunch.com) 17

An anonymous reader quotes a report from TechCrunch: After initially disabling the capability, OpenAI today announced that customers with access to DALL-E 2 can upload people's faces to edit them using the AI-powered image-generating system. Previously, OpenAI only allowed users to work with and share photorealistic faces and banned the uploading of any photo that might depict a real person, including photos of prominent celebrities and public figures. OpenAI claims that improvements to its safety system made the face-editing feature possible by "minimizing the potential of harm" from deepfakes as well as attempts to create sexual, political and violent content.

In an email to customers, the company wrote: "Many of you have told us that you miss using DALL-E to dream up outfits and hairstyles on yourselves and edit the backgrounds of family photos. A reconstructive surgeon told us that he'd been using DALL-E to help his patients visualize results. And filmmakers have told us that they want to be able to edit images of scenes with people to help speed up their creative processes. [...] [We] built new detection and response techniques to stop misuse."

The change in policy isn't opening the floodgates necessarily. OpenAI's terms of service will continue to prohibit uploading pictures of people without their consent or images that users don't have the rights to -- although it's not clear how consistent the company's historically been about enforcing those policies. In any case, it'll be a true test of OpenAI's filtering technology, which some customers in the past have complained about being overzealous and somewhat inaccurate. Deepfakes come in many flavors, from fake vacation photos to presidents of war-torn countries. Accounting for every emerging form of abuse will be a never-ending battle, in some cases with very high stakes.

AT&T

Filmmakers Sue AT&T To Block Pirate Sites, Disconnect Repeat Infringers (torrentfreak.com) 74

An anonymous reader quotes a report from TorrentFreak: A group of independent movie companies has filed a copyright infringement lawsuit against AT&T. The Internet provider, which has over 80 million subscribers in the US, faces far-reaching demands. In addition to millions in damages, the filmmakers want the ISP to terminate the accounts of repeat infringers and block access to sites such as The Pirate Bay and YTS. [...] In a complaint (PDF) filed at a federal court in Texas, Voltage Pictures and its affiliates, known for films such as "After We Collided," "Dallas Buyers Club," "Room 203," and "The Bird Catcher", accuse the ISP of contributory and vicarious copyright infringement.

"For years, AT&T has knowingly allowed AT&T users to engage in online piracy, the illegal distribution and downloading of copyrighted materials, including films. AT&T provides the IP addresses used for piracy, makes the connections needed to share and download pirated films, and transmits the pirated films," they write. The ISP allegedly turned a blind eye to pirating subscribers, facilitating mass online piracy. The filmmakers say they sent tens of thousands of notices to the company, reporting alleged copyright infringements. In some cases, hundreds of notices were sent for a single IP address without any visible response from the Internet provider.

In the United States, the law requires Internet providers to adopt a policy that provides for the termination of accounts of repeat infringers, under appropriate circumstances. AT&T references this in its terms but according to the filmmakers' complaint, this policy is not sufficient. The lawsuit specifically claims that AT&T willingly keeps repeat infringers on board because that adds tens of millions of dollars to AT&T's bottom line. [...] To compensate for all piracy-related losses, the plaintiffs request actual or statutory damages, which can run into millions of dollars. In addition, they also want AT&T to terminate repeat infringers under appropriate circumstances. Finally, and of particular interest, the movie companies also want the Internet provider to block foreign pirate sites. They include YTS, The Pirate Bay, RARBG, 1337x, and others that have been called out in the US Trade Representative's annual overview of notorious markets.

Businesses

Amazon Tests TikTok-Like Feed in App (wsj.com) 11

Even Amazon.com wants to be a little like TikTok. From a report: Amazon is testing a feature in its app that would show users a TikTok-style photo and video feed of products for shoppers to share with other users. The test is currently visible to a small number of Amazon employees, according to a person familiar with it. Amazon joins other major technology companies such as Meta Platforms and Google parent Alphabet that have attempted to bump up engagement through short videos and an endless stream of content. The portal, being tested under the internal name "Inspire," appears as a diamond widget on the home page of Amazon's app, according to Israel-based artificial intelligence firm Watchful Technologies, which has tracked the feature's use.

The widget brings shoppers to a feed that shows a stream of images and videos of products, with shoppers able to like, share and ultimately purchase items. While most of the feed now appears as still pictures, Watchful researchers said the portal also features video content. An Amazon spokeswoman said the company is "constantly testing new features to help make customers' lives a little easier." Amazon often experiments with new products and services for employees before releasing them publicly. It is possible the company may alter the "Inspire" feature significantly before launching it to the public or not release it at all. Amazon is the latest tech giant to try to capitalize on the sharp rise and popularity of TikTok, owned by Chinese company ByteDance.

United Kingdom

UK Cybersecurity Chiefs Back Plan To Scan Phones for Child Abuse Images (theguardian.com) 73

Tech companies should move ahead with controversial technology that scans for child abuse imagery on users' phones, the technical heads of GCHQ and the UK's National Cyber Security Centre have said. From a report: So-called "client-side scanning" would involve service providers such as Facebook or Apple building software that monitors communications for suspicious activity without needing to share the contents of messages with a centralised server. Ian Levy, the NCSC's technical director, and Crispin Robinson, the technical director of cryptanalysis -- codebreaking -- at GCHQ, said the technology could protect children and privacy at the same time.

"We've found no reason why client-side scanning techniques cannot be implemented safely in many of the situations one will encounter," they wrote in a discussion paper published on Thursday, which the pair said was "not government policy." They argued that opposition to proposals for client-side scanning -- most famously a plan from Apple, now paused indefinitely, to scan photos before they are uploaded to the company's image-sharing service -- rested on specific flaws, which were fixable in practice. They suggested, for instance, requiring the involvement of multiple child protection NGOs, to guard against any individual government using the scanning apparatus to spy on civilians; and using encryption to ensure that the platform never sees any images that are passed to humans for moderation, instead involving only those same NGOs.
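The mechanics the paper discusses can be illustrated with a toy sketch: the device compares a local hash of each outgoing image against a denylist distributed by child-protection organizations, so only a match (never the image itself) would trigger any further action. This is a hypothetical, minimal illustration, not the NCSC/GCHQ proposal itself; real deployments use perceptual hashes such as PhotoDNA rather than an exact cryptographic digest, so re-encoded copies still match. SHA-256 is used here only to keep the sketch self-contained.

```python
import hashlib

# Hypothetical denylist of known-image digests, as an NGO might distribute.
# (Real systems ship perceptual hashes, which survive re-encoding;
# an exact digest is used here purely for a self-contained demo.)
DENYLIST = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def client_side_check(image_bytes: bytes) -> bool:
    """Return True if the image matches the denylist.

    Runs entirely on-device: the image itself never leaves the phone,
    only the yes/no result of the comparison could be acted on.
    """
    return hashlib.sha256(image_bytes).hexdigest() in DENYLIST

print(client_side_check(b"known-bad-image-bytes"))  # True: on the denylist
print(client_side_check(b"holiday-photo-bytes"))    # False: unknown image
```

The privacy argument in the paper hinges on exactly this split: the matching happens locally, and the safeguards (multiple NGOs controlling the list, encrypted moderation) are about who gets to decide what goes into `DENYLIST`.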

Microsoft

Microsoft is Launching a Facebook Rip-off Inside Teams (theverge.com) 112

An anonymous reader shares a report: Exactly 10 years ago today, Microsoft completed its $1.2 billion purchase of Yammer, an enterprise-focused social networking platform. Despite a big Yammer overhaul in 2019, Microsoft has been increasingly focused on Teams and its new Viva platform as the hubs of communication in workplaces. Microsoft is launching Viva Engage today, a new Facebook-like app inside Teams that encourages social networking at work.

Viva Engage builds on some of the strengths of Yammer, promoting digital communities, conversations, and self-expression in the workplace. While Yammer often feels like an extension of SharePoint and Office, Viva Engage looks like a Facebook replica. It includes a storylines section, which is effectively your Facebook news feed, featuring conversational posts, videos, images, and more. It looks and feels just like Facebook, and it's clearly designed to feel similar so employees will use it to share news or even personal interests.

Education

Intel Calls Its AI That Detects Student Emotions a Teaching Tool. Others Call It 'Morally Reprehensible' (protocol.com) 38

An anonymous reader shares a report: When college instructor Angela Dancey wants to decipher whether her first-year English students comprehend what she's trying to get across in class, their facial expressions and body language don't reveal much. "Even in an in-person class, students can be difficult to read. Typically, undergraduates don't communicate much through their faces, especially a lack of understanding," said Dancey, a senior lecturer at the University of Illinois Chicago. Dancey uses tried-and-true methods such as asking students to identify their "muddiest point" -- a concept or idea she said students still struggle with -- following a lecture or discussion. "I ask them to write it down, share it and we address it as a class for everyone's benefit," she said. But Intel and Classroom Technologies, which sells virtual school software called Class, think there might be a better way. The companies have partnered to integrate an AI-based technology developed by Intel with Class, which runs on top of Zoom. Intel claims its system can detect whether students are bored, distracted or confused by assessing their facial expressions and how they're interacting with educational content.

"We can give the teacher additional insights to allow them to better communicate," said Michael Chasen, co-founder and CEO of Classroom Technologies, who said teachers have had trouble engaging with students in virtual classroom environments throughout the pandemic. His company plans to test Intel's student engagement analytics technology, which captures images of students' faces with a computer camera and computer vision technology and combines it with contextual information about what a student is working on at that moment to assess a student's state of understanding. Intel hopes to transform the technology into a product it can distribute more broadly, said Sinem Aslan, a research scientist at Intel, who helped develop the technology. "We are trying to enable one-on-one tutoring at scale," said Aslan, adding that the system is intended to help teachers recognize when students need help and to inform how they might alter educational materials based on how students interact with the educational content. "High levels of boredom will lead [students to] completely zone out of educational content," said Aslan. But critics argue that it is not possible to accurately determine whether someone is feeling bored, confused, happy or sad based on their facial expressions or other external signals.

Television

Roku OS 11 Will Let You Set Your Own Photos as a Screensaver (theverge.com) 61

Roku device owners will soon have a whole host of new personalization features, including all-new Photo Streams, with Roku OS 11. From a report: Firstly, when Roku OS 11 rolls out to users in the weeks ahead, they'll be able to change their screensaver to display their own photography or images with Photo Streams. Not only will Photo Streams allow users to display photos from their desktop or mobile device on Roku, but users will also be able to share Streams with other Roku device owners as well. Once a Stream is shared, other Roku owners will be able to add to it, allowing everyone to collaborate on a shared album. Roku OS 11 will also introduce a new "what to watch on Roku" menu, a personally curated hub added to the home screen menu that will suggest popular and recently released TV shows and movies.
Bitcoin

The Associated Press Is Starting Its Own NFT Marketplace For Photojournalism (theverge.com) 21

The Associated Press, or AP, has announced that it's starting a marketplace to sell NFTs of its photojournalists' work in collaboration with a company called Xooa. The Verge reports: It's billing its foray into NFTs as a way for collectors to "purchase the news agency's award-winning contemporary and historic photojournalism" and says that the virtual tokens will be released at "broad and inclusive price points" (though it's hard to tell what types of prices resellers will want on the AP marketplace). The news outlet says its system will be built on the "environmentally friendly" Polygon blockchain and that the NFTs will "include a rich set of original metadata" to tell buyers when, where, and how the photos were taken. It says its first collection, launching January 31st, will range from NFTs featuring photos of space, climate, and war to "spotlights on the work of specific AP photographers."

Buyers will be able to pay for NFTs from the market using either credit cards or Ethereum -- AP says MetaMask will be the first wallet supported but that there are plans to add support for others. There will be virtual queues to buy NFTs as they're released by AP, with "Pulitzer Drops" containing more limited-edition NFTs happening every two weeks -- the FAQ says these particular images will "have increased scarcity to preserve their status." Buyers will be able to resell those NFTs on the site's secondary market. AP says that the proceeds from the NFTs' sale will be used to fund its journalistic endeavors. It'll also get revenue whenever they're resold on its marketplace -- the FAQ says there's a 10 percent fee associated with reselling, and Xooa spokesperson Lauren Easton told The Verge in an email that the two companies would share that fee. Easton also told us that the "photographers will share in all revenue collected," but didn't specify what their cut would be.
The NFT marketplace is set to open on January 31st, but you can get on a waitlist now to get "priority access" and a higher waitlist ranking if you refer others to sign up.
Intel

Intel Demos Lightning Fast 13.8 GBps PCIe 5.0 SSD with Alder Lake (tomshardware.com) 40

Intel has demonstrated how its Core i9-12900K Alder Lake processor can work with Samsung's recently announced PM1743 PCIe 5.0 x4 SSD. The result is as astonishing as it is predictable: the platform demonstrated approximately 13.8 GBps throughput in the IOMeter benchmark. From a report: Intel planned to show the demo at CES; however, the company is no longer attending in person. So, Ryan Shrout, Intel's chief performance strategist, decided to share the demo publicly via Twitter. The system used for the demonstration included a Core i9-12900K processor, an Asus Z690 motherboard and an EVGA GeForce RTX 3080 graphics board. Intel hooked up Samsung's PM1743 SSD using a special PCIe 5.0 interposer card, and the drive certainly did not disappoint. From a practical standpoint, 13.8 GBps may be overkill for regular desktop users, but those who need to load huge games or work with large 8K video files or ultra-high-resolution images will appreciate the added performance. However, there is a small catch with this demo. Samsung will be among the first to ship PM1743 PCIe 5.0 drives, which is why Intel chose this SSD for the demonstration. But Samsung's PM1743 series is aimed at enterprises, so it will be available in 2.5-inch/15mm (with dual-port support) and new-generation E3.S (76 × 112.75 × 7.5 mm) form factors; it is not aimed at desktops (and Intel admits that).
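As a sanity check on the headline number: PCIe 5.0 signals at 32 GT/s per lane with 128b/130b encoding, so a x4 link tops out around 15.75 GB/s of raw payload bandwidth, and the demoed 13.8 GBps is roughly 88% of that ceiling, which is plausible once protocol and controller overhead are accounted for. A quick back-of-the-envelope calculation:

```python
# How close is the demoed 13.8 GB/s to the PCIe 5.0 x4 theoretical ceiling?
GT_PER_S = 32          # PCIe 5.0 raw signaling rate per lane (gigatransfers/s)
ENCODING = 128 / 130   # 128b/130b line encoding: 128 payload bits per 130 sent
LANES = 4

per_lane_gbs = GT_PER_S * ENCODING / 8   # GB/s per lane (8 bits per byte)
ceiling = per_lane_gbs * LANES           # ~15.75 GB/s for a x4 link
observed = 13.8

print(f"theoretical: {ceiling:.2f} GB/s, observed: {observed} GB/s "
      f"({observed / ceiling:.0%} of ceiling)")
```

The remaining ~12% goes to link-layer and transaction-layer overhead (packet headers, flow control) plus whatever the drive's controller leaves on the table, so the demo is close to saturating the link.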
Windows

Ask Slashdot: What Do You Remember About Windows ME? (computerworld.com) 269

"Windows Me was unstable, unloved and unusable," remembered Computerworld last year, on the 20th anniversary of its release, calling it "a stink bomb of an operating system." Windows Me was a ghastly, slapdash piece of work, incompatible with lots of hardware and software. It frequently failed during the installation process — which should have been the first sign for people that this was an operating system they shouldn't try. Often, when you tried to shut it down, it declined to do so, like a two-year-old throwing a temper tantrum over being forced to go to sleep. It was slow and insecure. Its web browser, Internet Explorer, frequently refused to load web pages.
But they ultimately argue that it wasn't as bad as Windows Vista, which "simply refused to run, or ran so badly it was useless on countless PCs. Not just old PCs, but even newly bought PCs, right out of the box, with Vista installed." And they conclude that the worst Microsoft OS of all is still Windows 8. ("You want bad? You want stupid? You want an operating system that not only was roundly reviled by consumers and businesses alike, but also set Microsoft's business plans back years?")

Slashdot reader alaskana98 even remembers Windows ME semi-fondly as "the last Microsoft OS to use the Windows 95 codebase." While rightly being panned as a buggy and crash-prone OS — indeed it was labelled the worst version of Windows ever released by Computerworld — it did introduce a number of features that continue on to this very day. Those features include:

-A personalized start menu that would show your most recently accessed programs, today a common feature in the Windows landscape.
-Software support for DVD playback. Previously one needed a dedicated card to playback DVDs.
-Windows Movie Maker and Windows Media Player 7, allowing home users to create, edit and burn their own digital home movies. While seemingly pedestrian in today's times, these were groundbreaking features for home users in the year 2000.
-The first iteration of System Restore — imagine a modern version of Windows not having the ability to conveniently restore to a working configuration — before Windows ME, this was simply not a possibility for the average home user unless you had a rigorous backup routine.
-The removal of real-mode DOS. While very controversial at the time, this change arguably improved the speed and reliability of the boot process.

Love it or hate it (well, let's face it, if you were a computer user at that point you probably hated it) — Windows ME did make several important contributions to the modern OS landscape that are often overlooked to this day. Do you have any stories from the heady days of late 2000 when Windows ME was first released?

Slashdot reader Z00L00K remembers in a comment that "The removal of real-mode DOS is what REALLY made ME impossible to use for most of us at the time. It broke backwards compatibility so hard that the only way out was to use any of the earlier versions of Windows instead!"

Is this re-awakening images of the year 2000 for anyone? Share your own memories and thoughts in the comments.

What do you remember about Windows ME?
AI

Clearview AI On Track To Win US Patent For Facial Recognition Technology (politico.com) 17

An anonymous reader quotes a report from Politico: Clearview AI has gotten the green light on a federal patent for its facial recognition technology -- an award that the company says is the first to cover a so-called "search engine for faces" that crawls the internet to find matches. Clearview's software -- which scrapes public images from social media to help law enforcement match images in government databases or surveillance footage -- has long faced fire from privacy advocates who say it uses people's faces without their knowledge or consent. Civil rights groups also argue that facial recognition technology is generally error-prone, misidentifying women and minorities at higher rates than it does white men and sometimes leading to false arrests. (A recent audit of Clearview's tech by the Commerce Department's National Institute of Standards and Technology found its results to be highly accurate (PDF), and the company said it knows of no instances to date where the technology has led to a wrongful arrest.) Now, some of those critics fear that codifying Clearview's work with a patent will accelerate the growth of these technologies before legislators or regulators have fully addressed the potential dangers.

The U.S. Patent and Trademark Office sent Clearview a "notice of allowance" on Wednesday, meaning the patent will be approved once the company pays certain administrative fees. The patent covers Clearview's "methods of providing information about a person based on facial recognition," including its "automated web crawler" that scans social networking sites and the internet and its algorithms that analyze and match facial images obtained online. "There are other facial recognition patents out there -- that are methods of doing it -- but this is the first one around the use of large-scale internet data," Clearview CEO and co-founder Hoan Ton-That told POLITICO in an exclusive interview. The product uses a database of more than 10 billion photos, Ton-That said, and he has emphasized that "as a person of mixed race, having non-biased technology is important to me." Clearview argues that there is a First Amendment right to make use of public material. "All information in our datasets are all publicly available info that people voluntarily posted online -- it's not anything on your private camera roll," Ton-That said. "If it was all private data, that would be a completely different story."
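The patent text doesn't disclose Clearview's actual algorithms, but matching a probe face against a scraped database is commonly built on embedding similarity: a neural network maps each face to a vector, and search becomes a nearest-neighbor lookup. A toy sketch with made-up 4-dimensional vectors (real systems use embeddings with hundreds of dimensions and approximate-nearest-neighbor indexes over billions of photos):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical face embeddings for a tiny scraped "database".
database = {
    "photo_A": [0.9, 0.1, 0.3, 0.0],
    "photo_B": [0.1, 0.8, 0.2, 0.5],
}

# A probe image's embedding, close to photo_A's in this made-up space.
probe = [0.88, 0.12, 0.28, 0.05]

# Nearest neighbor by cosine similarity is the "match".
best = max(database, key=lambda name: cosine(probe, database[name]))
print(best)  # photo_A
```

The error-rate concerns critics raise map directly onto this picture: the match is whichever stored vector is closest, and the similarity threshold at which "closest" is treated as "same person" is a tunable policy choice, not a certainty.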

Ton-That said Clearview serves government users only and that "we don't intend to ever make a consumer version of Clearview AI." Yet Clearview says in its patent application that the invention could be useful for other purposes. The company argues that "it may be desirable for an individual to know more about a person that they meet, such as through business, dating, or other relationship." Common ways of learning about new people, like asking them questions or checking out their business cards, may be unreliable because the information they choose to share could be false, the application says.
"The part that they're looking to protect is exactly the part that's the most problematic," said Matt Mahmoudi, an Amnesty International researcher who is leading the group's work to ban facial recognition. "They are patenting the very part of it that's in violation of international human rights law."

Mahmoudi of Amnesty International said that language in the patent leaves the door open to a cascade of new uses in the future. "It shows a willingness to go down a slippery slope of basically being available in any context," he said.
Facebook

Meta Builds Tool To Stop the Spread of 'Revenge Porn' (nbcnews.com) 94

Facebook's parent company, Meta, has worked with the U.K.-based nonprofit Revenge Porn Helpline to build a tool that lets people prevent their intimate images from being uploaded to Facebook, Instagram and other participating platforms without their consent. From a report: The tool, which builds on a pilot program Facebook started in Australia in 2017, launched Thursday. It allows people who are worried that their intimate photos or videos have been or could be shared online, for example by disgruntled ex-partners, to submit the images to a central, global website called StopNCII.org, which stands for "Stop Non-Consensual Intimate Images."

"It's a massive step forward," said Sophie Mortimer, the helpline's manager. "The key for me is about putting this control over content back into the hands of people directly affected by this issue so they are not just left at the whims of a perpetrator threatening to share it." Karuna Nain, Meta's director of global safety policy, said the company had shifted its approach to use an independent website to make it easier for other companies to use the system and to reduce the burden on the victims of image-based abuse to report content to "each and every platform." During the submission process, StopNCII.org gets consent and asks people to confirm that they are in an image. People can select material on their devices, including manipulated images, that depict them nude or nearly nude. The photos or the videos will then be converted into unique digital fingerprints known as "hashes," which will be passed on to participating companies, starting with Facebook and Instagram.
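StopNCII.org doesn't publish its exact scheme in this report, but the "digital fingerprints" it describes are perceptual hashes: unlike a cryptographic digest, they stay stable under small changes such as re-encoding or a brightness shift, so platforms can match copies of an image without ever receiving the image itself. A minimal difference-hash ("dHash") sketch on a toy grayscale grid illustrates the property; the grid values and sizes here are made up for the demo:

```python
def dhash(pixels):
    """Difference hash of a grayscale image given as a list of rows.

    Emits one bit per adjacent horizontal pixel pair: 1 where the left
    pixel is brighter than its right-hand neighbour. Only the *relative*
    brightness pattern matters, not absolute pixel values.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A tiny 4x5 "image" and a uniformly brightened copy of it.
img = [[10, 20, 15, 40,  5],
       [30, 30, 25, 60, 50],
       [ 9, 12, 80, 70, 71],
       [ 1,  2,  3,  4,  5]]
brighter = [[p + 3 for p in row] for row in img]

# Brightening preserves the left/right ordering, so the hashes agree --
# a cryptographic digest of the raw bytes would differ completely.
print(hamming(dhash(img), dhash(brighter)))  # 0
```

This is also why only hashes need to leave the submitter's device: participating platforms compare fingerprints against uploads, and a small Hamming distance flags a likely copy for review.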

Wikipedia

Wikipedia Criticized After Years of Using the Wrong Man's Picture to Depict a Serial Killer (wikipedia.org) 113

Andreas Kolbe is a former co-editor-in-chief of The Signpost, an online newspaper for (English-language) Wikipedia that's been published online since 2005 with contributions from Wikipedia editors. Kolbe has been contributing to it since 2006.

Last week he returned to the Signpost to share a cautionary tale. Its title? "A photo on Wikipedia can ruin your life."

Also a long-time Slashdot reader, Andreas Kolbe shares this summary with us: For more than two years, Wikipedia illustrated its article on New York serial killer Nathaniel White with the police photo of an African-American man from Florida who happened to have the same name. A Wikipedia user said he had found the picture on crimefeed.com, a "true crime" site associated with the Discovery Channel, which also used the same photo in a TV broadcast on the serial killer.

During the two-and-a-half years the Wikipedia article showed the picture of the wrong man, it was viewed over 125,000 times, including nearly 12,000 times on the day the TV program ran. The man whose picture was used said he received threats to his person from people who assumed he really was the killer, and took to dressing incognito.

His picture is now all over Google when people search for the serial killer.

"Friends and family contacted Plaintiff concerning the broadcast and asking Plaintiff if he actually murdered people in the state of New York," adds a legal complaint the man eventually filed against the Wikimedia Foundation. "Plaintiff assured these friends and family that even though he acknowledged his criminal past, he never murdered anyone nor has he ever been to the state of New York...."

Last month the legal director of the Wikimedia Foundation and a Legal Fellow co-authored a blog post pointing out the lawsuit "was filed months after Wikipedia editors proactively corrected the error at issue in September 2020." The blog post celebrates a judge's dismissal of the suit as "a victory for free knowledge," and acknowledges the protections afforded by Section 230 of the Communications Decency Act. "Our ability to maintain and grow the world's largest repository of free knowledge depends on robust legal immunity.... The Wikimedia Foundation applauds this ruling and remains committed to protecting global exchange of knowledge and freedom of expression across the internet."

But the blog post also argued that "the many members of our volunteer community are very effective at identifying and removing these inaccuracies when they do occur." Andreas Kolbe disagrees. "The photo was in the article for over two years," Kolbe writes on Signpost. "For a man to have his face presented to the world as that of a serial killer on a top-20 website, for such a significant amount of time, can hardly be described as indicative of 'very effective' quality control on the part of the community." The picture was only removed after a press report pointed out that Wikipedia had the wrong picture. This means the deletion was in all likelihood reactive rather than "proactive"...

The wrong photograph appears to have been removed by an unknown member of the public, an IP address that had never edited before and has not edited since. The volunteer community seems to have been completely unaware of the problem throughout...

It would seem more appropriate -

- to acknowledge that community processes failed Mr. White to a quite egregious degree, and
- to alert the community to the fact that its quality control processes are in need of improvement....

Surely Wikipedia's guidelines, policies and community practices for sourcing images, in particular images used to imply responsibility for specific crimes, would benefit from some strengthening, to ensure they actually depict the correct individual.

Pondering the dismissal of the lawsuit, Kolbe ultimately asks if there's a deeper moral question in a world where a man was "defamed on our global top-20 website with absolute impunity, without his having any realistic hope of redress for what happened to him." While to the best of my belief the error did not originate in Wikipedia, but was imported into Wikipedia from an unreliable external site, for more than two years any vigilante Googling Nathaniel White serial killer would have seen Mr. White's color picture prominently displayed in Google's knowledge graph panel (multiple copies of it still appear there at the time of writing). And along with it they would have found a prominent link to the serial killer's Wikipedia biography, again featuring Mr. White's image — providing what looked like encyclopedic confirmation that Mr. White of Florida was indeed guilty of sickening crimes...

On the very day the picture was removed from the article here, a video about the serial killer was uploaded to YouTube — complete with Mr. White's picture, citing Wikipedia. At the time of writing, the video's title page with Mr. White's color picture is the top Google image result in searches for the serial killer. All in all, seven of Google's top-fifteen image search results for Nathaniel White serial killer today feature Mr. White's image. Only two black-and-white photos show what seems to have been the real killer.

A comment on the Wikimedia Foundation blog adds, "What I'd much rather see is an acknowledgement that the community process failed Mr White to an extreme degree and that steps will be taken to prevent recurrence of such cases."
Software

Adobe Brings New Creative Cloud Apps To M1 Macs and The Web (arstechnica.com) 11

During Adobe Max 2021 today, Adobe announced new features for Creative Cloud's various iPad apps, two more applications running natively on Apple Silicon Macs, and new web versions of some apps, among other things. Ars Technica reports: Adobe said it is adding or improving AI-driven tools across the suite, including an updated Object Selection Tool for Photoshop on Desktop. And some AI tools previously seen in Photoshop, like the Sky Replacement tool, are headed to Lightroom on Mac, iPad, and iPhone for the first time. The iPad version of Photoshop will gain support for RAW images and is getting several new tools and the ability to convert layers into Smart Objects. Illustrator for iPad is getting some improvements, too, most notably the ability to vectorize images and track version history and revert to earlier iterations. Further, After Effects and InDesign are getting Apple Silicon support on recent Macs.

It's not all about native applications, though -- Adobe announced this week that it will bring versions of Photoshop and Illustrator to the web. The web versions won't be as robust as the desktop versions, but they will let you make minor edits and provide a way to share and discuss work with colleagues or clients. The apps will allow users to review work and leave comments without launching a native version of Photoshop -- think of it a bit like a stripped-down version of InVision that exists directly inside the Creative Cloud ecosystem.
Adobe also said it's launching a system built into Photoshop that can, among other things, "help prove that the person selling an NFT is the person who made it," reports The Verge. "It's called Content Credentials, and NFT sellers will be able to link the Adobe ID with their crypto wallet, allowing compatible NFT marketplaces to show a sort of verified certificate proving the art's source is authentic."
United States

Poorly Devised Regulation Lets Firms Pollute With Abandon (economist.com) 60

Athletes don't get advance warning of drug tests. Police don't share schedules of planned raids. Yet America's Environmental Protection Agency (EPA) does not seem convinced of the value of surprise in deterring bad behaviour [the link may be paywalled]. From a report: Every year it publishes a list of dates, spaced at six-day intervals, on which it will require state and local agencies to provide data on concentrations of harmful fine particulate matter (PM2.5), such as soot or cement dust. In theory, such a policy should enable polluters to spew as much filth into the air as they like 83% of the time, and clean up their act every sixth day. However, this ill-advised approach does offer one silver lining: it lets economists measure how much businesses change their behaviour when the proverbial parents are out of town.

A new paper by Eric Zou of the University of Oregon makes use of satellite images to spy on polluters at times when they think no one is watching. NASA, America's space agency, publishes data on the concentration of aerosol particles -- ranging from natural dust to man-made toxins -- all around the world, as seen from space. For every day in 2001-13, Mr Zou compiled these readings in the vicinity of each of America's 1,200 air-monitoring sites. Although some stations provided data continuously, 30-50% of them sent reports only once every six days. For these sites, Mr Zou studied how aerosol levels varied based on whether data would be reported. Sure enough, the air was consistently cleaner in these areas on monitoring days than it was the rest of the time, by a margin of 1.6%. Reporting schedules were almost certainly the cause: in areas where stations were retired, average pollution levels on monitoring days promptly rose to match the readings on non-monitoring days.
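The core of this approach is a simple difference-in-means comparison on the predictable 1-in-6 schedule. The toy simulation below illustrates the idea with invented numbers; it is not Mr Zou's actual data or code:

```python
# Toy illustration of the paper's core comparison (hypothetical
# numbers, not the study's data): for a site that reports every
# sixth day, compare mean satellite aerosol readings on scheduled
# monitoring days against all other days.
import random

random.seed(42)

readings = []
for day in range(600):
    monitored = (day % 6 == 0)
    # Polluters "behave" only on scheduled days: baseline is
    # ~2% dirtier off-schedule, plus day-to-day noise.
    base = 50.0 if monitored else 51.0
    readings.append((monitored, base + random.gauss(0, 1)))

on = [r for m, r in readings if m]
off = [r for m, r in readings if not m]
gap = (sum(off) / len(off)) / (sum(on) / len(on)) - 1
print(f"air is {gap:.1%} dirtier on non-monitoring days")
```

The same logic applied to satellite readings around the 1,200 real monitoring sites is what surfaced the 1.6% monitoring-day gap, since the satellite observes every day regardless of the reporting schedule.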

Google

Google Releases New AI Photo Upscaling Tech (petapixel.com) 97

Michael Zhang writes via PetaPixel: In a post titled "High Fidelity Image Generation Using Diffusion Models" published on the Google AI Blog (and spotted by DPR), Google researchers in the company's Brain Team describe new breakthroughs they've made in image super-resolution. [...] The first approach is called SR3, or Super-Resolution via Repeated Refinement. Here's the technical explanation: "SR3 is a super-resolution diffusion model that takes as input a low-resolution image, and builds a corresponding high resolution image from pure noise," Google writes. "The model is trained on an image corruption process in which noise is progressively added to a high-resolution image until only pure noise remains." "It then learns to reverse this process, beginning from pure noise and progressively removing noise to reach a target distribution through the guidance of the input low-resolution image." SR3 has been found to work well on upscaling portraits and natural images. When used to do 8x upscaling on faces, it has a "confusion rate" of nearly 50% while existing methods only go up to 34%, suggesting that the results are indeed photo-realistic.

Once Google saw how effective SR3 was in upscaling photos, the company went a step further with a second approach called CDM, a class-conditional diffusion model. "CDM is a class-conditional diffusion model trained on ImageNet data to generate high-resolution natural images," Google writes. "Since ImageNet is a difficult, high-entropy dataset, we built CDM as a cascade of multiple diffusion models. This cascade approach involves chaining together multiple generative models over several spatial resolutions: one diffusion model that generates data at a low resolution, followed by a sequence of SR3 super-resolution diffusion models that gradually increase the resolution of the generated image to the highest resolution." "With SR3 and CDM, we have pushed the performance of diffusion models to state-of-the-art on super-resolution and class-conditional ImageNet generation benchmarks," Google researchers write. "We are excited to further test the limits of diffusion models for a wide variety of generative modeling problems."
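The "corruption process" both models are built on can be sketched in a few lines. This is an illustrative toy of the forward noising step only, on a handful of numbers rather than an image; the real SR3 pairs this with a learned neural denoiser that runs the process in reverse:

```python
# Minimal sketch of the forward "corruption" process a diffusion
# model like SR3 is trained to reverse (illustrative only).
import math
import random

random.seed(0)

def forward_diffuse(x, betas):
    """Progressively mix the signal with Gaussian noise, step by step."""
    for beta in betas:
        x = [math.sqrt(1 - beta) * xi + math.sqrt(beta) * random.gauss(0, 1)
             for xi in x]
    return x

pixels = [0.9, 0.1, 0.8, 0.2]                 # a tiny stand-in "image"
betas = [0.01 * (t + 1) for t in range(50)]   # noise schedule, growing each step
noised = forward_diffuse(pixels, betas)

# After enough steps the surviving signal fraction, the product of
# sqrt(1 - beta) over all steps, is essentially zero: "only pure
# noise remains."
signal_left = math.prod(math.sqrt(1 - b) for b in betas)
print(f"remaining signal fraction: {signal_left:.4f}")
```

Training the reverse direction, predicting and removing the noise added at each step, is what lets the model start from pure noise and refine it into a high-resolution image guided by the low-resolution input.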

Government

10 US Government Agencies Plan Expanded Use of Facial Recognition (msn.com) 29

The Washington Post reports that the U.S. government "plans to expand its use of facial recognition to pursue criminals and scan for threats, an internal survey has found, even as concerns grow about the technology's potential for contributing to improper surveillance and false arrests." Ten federal agencies — the departments of Agriculture, Commerce, Defense, Homeland Security, Health and Human Services, Interior, Justice, State, Treasury and Veterans Affairs — told the Government Accountability Office they intend to grow their facial recognition capabilities by 2023, the GAO said in a report posted to its website Tuesday. Most of the agencies use face-scanning technology so employees can unlock their phones and laptops or access buildings, though a growing number said they are using the software to track people and investigate crime. The Department of Agriculture, for instance, said it wants to use it to monitor live surveillance feeds at its facilities and send an alert if it spots any faces also found on a watch list...

The GAO said in June that 20 federal agencies have used either internally developed or privately run facial recognition software, even though 13 of those agencies said they did not "have awareness" of which private systems they used and had therefore "not fully assessed the potential risks ... to privacy and accuracy." In the current report, the GAO said several agencies, including the Justice Department, the Air Force and Immigration and Customs Enforcement, reported that they had used facial recognition software from Clearview AI, a firm that has faced lawsuits from privacy groups and legal demands from Google and Facebook after it copied billions of facial images from social media without their approval... Many federal agencies said they used the software by requesting that officials in state and local governments run searches on their own software and report the results. Many searches were routed through a nationwide network of "fusion centers," which local police and federal investigators use to share information on potential threats or terrorist attacks...

U.S. Customs and Border Protection officials, who have called the technology "the way of the future," said earlier this month that they had run facial recognition scans on more than 88 million travelers at airports, cruise ports and border crossings. The systems, the officials said, have detected 850 impostors since 2018 — or about 1 in every 103,000 faces scanned.
