Nintendo

Nintendo Goes After Fan-Made Custom Steam 'Icons' With DMCA Takedowns (arstechnica.com) 41

An anonymous reader quotes a report from Ars Technica: Nintendo has issued a number of Digital Millennium Copyright Act (DMCA) requests against SteamGridDB (SGDB), a site that hosts custom fan-made icons and images used to represent games on Steam's front-end interface. Since 2015, SGDB's collection has grown to include hundreds of thousands of images representing tens of thousands of titles. That includes custom imagery for many standard Steam games and emulated game ROMs, which can be added to Steam as "external games."

To be clear, SteamGridDB doesn't host the kind of ROM files that have gotten other sites in legal trouble with Nintendo, or even the emulators used to run those games. "We don't support piracy in any way," an SGDB admin (who asked to remain anonymous) told Ars. "The website is just a free repository where people can share options to customize their game launchers." But in a series of DMCA requests viewed by Ars Technica, dated October 27, Nintendo says some of the imagery on SGDB "displays Nintendo's trademarks and other intellectual property (including characters) which is likely to lead to consumer confusion." Thus, dozens of SGDB images have been replaced with a blank image featuring the text "this asset has been removed in response to a DMCA takedown request" (you can see some of the specific images that were removed in this Internet Archive snapshot from April and compare it to how the listing currently looks).

Thus far, Nintendo's DMCA requests focus on imagery for just five Switch games that are listed on SGDB: Pokemon Scarlet & Violet, Splatoon 3, Super Mario Odyssey, The Legend of Zelda: Breath of the Wild, and Xenoblade Chronicles 3. Other Switch games listed on the site (some featuring the same exact characters) are unaffected, as are images for many older Nintendo titles. [...] Even for the Switch games in question, the DMCA requests focused on images that "straight up used sprites and assets from [Nintendo's] IP," according to the SGDB admin. Nintendo's requests so far seem to have ignored "completely original creations" and "pure fan art" even when that art involves drawings of Nintendo's original characters. It's unclear if those kinds of images would fall under a different legal standard in this case. "If an IP holder asks to take down original creations then I'll figure out the best way to handle that when it happens," the admin said. "The site is basically all just fan art, we're open to publishers reaching out and discussing any issues they may have. [The] best way to find a good course of action is to discuss options."

Communications

Trump Posted Classified Satellite Imagery On Twitter As President (npr.org) 342

According to documents recently declassified by the National Geospatial-Intelligence Agency (NGA), former President Donald Trump posted a classified satellite image of a failed rocket launch in Iran on Twitter in 2019. NPR reports: Now, three years after Trump's tweet, the National Geospatial-Intelligence Agency (NGA) has formally declassified the original image. The declassification, which came as the result of a Freedom of Information Act request by NPR, followed a grueling Pentagon-wide review to determine whether the briefing slide it came from could be shared with the public. Many details on the original image remain redacted -- a clear sign that Trump was sharing some of the U.S. government's most prized intelligence on social media, says Steven Aftergood, specialist in secrecy and classification at the Federation of American Scientists. "He was getting literally a bird's eye view of some of the most sensitive US intelligence on Iran," he says. "And the first thing he seemed to want to do was to blurt it out over Twitter." "[A]erospace experts determined the photo was taken by a classified spacecraft called USA 224, believed to be a multibillion-dollar KH-11 reconnaissance satellite," adds Gizmodo. "The spacecraft is similar to the Hubble Telescope, but instead of getting a closer look at the stars, it views the Earth's surface."

AI

Meet 'Unstable Diffusion', the Group Trying To Monetize AI Porn Generators (techcrunch.com) 89

An anonymous reader quotes a report from TechCrunch: When Stable Diffusion, the text-to-image AI developed by startup Stability AI, was open sourced earlier this year, it didn't take long for the internet to wield it for porn-creating purposes. Communities across Reddit and 4chan tapped the AI system to generate realistic and anime-style images of nude characters, mostly women, as well as non-consensual fake nude imagery of celebrities. But while Reddit quickly shut down many of the subreddits dedicated to AI porn, and communities like Newgrounds, which allows some forms of adult art, banned AI-generated artwork altogether, new forums emerged to fill the gap. By far the largest is Unstable Diffusion, whose operators are building a business around AI systems tailored to generate high-quality porn. The server's Patreon -- started to keep the server running as well as fund general development -- is currently raking in over $2,500 a month from several hundred donors.

"In just two months, our team expanded to over 13 people as well as many consultants and volunteer community moderators," Arman Chaudhry, one of the members of the Unstable Diffusion admin team, told TechCrunch in a conversation via Discord. "We see the opportunity to make innovations in usability, user experience and expressive power to create tools that professional artists and businesses can benefit from." Unsurprisingly, some AI ethicists are as worried as Chaudhry is optimistic. While the use of AI to create porn isn't new [...] Unstable Diffusion's models are capable of generating higher-fidelity examples than most. The generated porn could have negative consequences particularly for marginalized groups, the ethicists say, including the artists and adult actors who make a living creating porn to fulfill customers' fantasies.

Unstable Diffusion got its start in August -- around the same time that the Stable Diffusion model was released. Initially a subreddit, it eventually migrated to Discord, where it now has roughly 50,000 members. [...] Today, the Unstable Diffusion server hosts AI-generated porn in a range of different art styles, sexual preferences and kinks. [...] Users in these channels can invoke the bot to generate art that fits the theme, which they can then submit to a "starboard" if they're especially pleased with the results. Unstable Diffusion claims to have generated over 4,375,000 images to date. On a semiregular basis, the group hosts competitions that challenge members to recreate images using the bot, the results of which are used in turn to improve Unstable Diffusion's models. As it grows, Unstable Diffusion aspires to be an "ethical" community for AI-generated porn -- i.e. one that prohibits content like child pornography, deepfakes and excessive gore. Users of the Discord server must abide by the terms of service and submit to moderation of the images that they generate; Chaudhry claims the server employs a filter to block images containing people in its "named persons" database and has a full-time moderation team.
"Chaudhry sees Unstable Diffusion evolving into an organization to support broader AI-powered content generation, sponsoring dev groups and providing tools and resources to help teams build their own systems," reports TechCrunch. "He claims that Equilibrium AI secured a spot in a startup accelerator program from an unnamed 'large cloud compute provider' that comes with a 'five-figure' grant in cloud hardware and compute, which Unstable Diffusion will use to expand its model training infrastructure."

In addition to the grant, Unstable Diffusion will launch a Kickstarter campaign and seek venture funding, Chaudhry says.

"We plan to create our own models and fine-tune and combine them for specialized use cases which we shall spin off into new brands and products," Chaudhry added.

Earth

Scientists Are Uncovering Ominous Waters Under Antarctic Ice (wired.com) 37

A super-pressurized, 290-mile-long river is running under the ice sheet. That could be bad news for sea-level rise. From a report: For all its treacherousness and general inclination to kill you, Antarctica's icy surface is fairly tranquil: vast stretches of miles-thick whiteness, with not a plant or animal to speak of. But way below the surface, where that ice meets land, things get wild. What scientists used to think was a ho-hum subglacial environment is in fact humming with hydrological activity, recent research is revealing, with major implications for global sea-level rise. Researchers just found that, at the base of Antarctica's ice, an area the size of Germany and France combined is feeding meltwater into a super-pressurized, 290-mile-long river running to the sea. "Thirty years ago, we thought the whole of the ice pretty much was frozen to the bed," says Imperial College London glaciologist Martin Siegert, coauthor of a new paper in Nature Geoscience describing the finding. "Now we're in a position that we've just never been in before, to understand the whole of the Antarctic ice sheet."

Antarctica's ice is divided into two main components: the ice sheet that sits on land, and the ice shelf that extends off the coast, floating on seawater. Where the two meet -- where the ice lifts off the bed and starts touching the ocean -- is known as the grounding line. But the underside of all that ice is obscured. To find out what's going on below, some scientists have hiked across glaciers while dragging ground-penetrating radar units on sleds -- the pings travel through thousands of feet of ice and bounce off the underlying seawater, so the researchers can build detailed maps of what used to be hidden. Others are setting off explosions, then analyzing the seismic waves that come back to the surface to indicate whether there's land or water below. Still others are lowering torpedo-shaped robots through boreholes to get unprecedented imagery of the underside of the floating ice shelf. Up in the sky, satellites can measure minute changes in surface elevation, which indicates the features below -- a swell, for instance, might betray a subglacial lake.
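For the radar surveys described above, the depth to the bed follows from the echo's two-way travel time. A minimal sketch, assuming the commonly cited radio-wave speed in cold glacial ice (about 168 meters per microsecond); the echo time is an invented example, not a figure from the article:

```python
# Minimal sketch of radar ice sounding. The wave speed is the commonly
# cited value for cold glacial ice; the echo time is a made-up example.

ICE_WAVE_SPEED = 1.68e8  # m/s, typical radio-wave velocity in ice

def ice_thickness_m(two_way_travel_time_s):
    """The pulse travels down to the bed and back, so halve the time."""
    return ICE_WAVE_SPEED * two_way_travel_time_s / 2

# A 36-microsecond echo corresponds to roughly 3 km of ice:
print(round(ice_thickness_m(36e-6)))  # -> 3024 (meters)
```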

Google

Google Is Shutting Down Its Dedicated Street View App Next Year (9to5google.com) 13

An anonymous reader quotes a report from 9to5Google: Google is preparing to shut down the dedicated Street View app on Android, keeping the feature in Google Maps. Google's Street View is an easy way to get a 360-degree look at almost any given street on the planet, perfect for getting a sense of your next travel destination or simply exploring the world from the comfort of home. While the Google Maps app has long offered an easy way to hop into Street View, there has also been a dedicated Street View app on Android and iOS.

This standalone app served two distinct groups of people -- those who wanted to deeply browse Street View and those who wanted to contribute their own 360 imagery. Considering the more popular Google Maps app has Street View support and Google offers a "Street View Studio" web app for contributors, it should be no surprise to learn that the company is now preparing to shut down the Street View app.

In the latest update, version 2.0.0.484371618, Google has prepared a handful of deprecation/shutdown notices for the Street View app. These notices are not yet visible in the app today, but our team managed to enable them. In the notice, Google confirms that the Street View app is set to shut down on March 31, 2023, encouraging users to switch to either Google Maps or Street View Studio. However, one feature that is being fully shut down with the Street View app's demise is that of "Photo Paths." First launched last year, Photo Paths were intended as a way to let nearly anyone with a smartphone contribute simple 2D photos of a road or path that had not yet been documented by Street View. Unlike every other feature of the Street View app, there is no replacement for Photo Paths on the web app or Google Maps app.

Social Networks

Tumblr Will Now Allow Nudity But Not Explicit Sex (theverge.com) 45

Tumblr has made an update it hinted at in September, changing its rules to allow nudity -- but not sexually explicit images -- on the platform. The Verge reports: The company updated its community guidelines earlier today, laying out a set of rules that stops short of its earlier permissive attitude toward sexuality but that formally allows a wider range of imagery. "We now welcome a broader range of expression, creativity, and art on Tumblr, including content depicting the human form (yes, that includes the naked human form). So, even if your creations contain nudity, mature subject matter, or sexual themes, you can now share them on Tumblr using the appropriate Community Label," the post says. "Visual depictions of sexually explicit acts remain off-limits on Tumblr."

A help center post and the community guidelines offer a little more detail. They say that "text, images, and videos that contain nudity, offensive language, sexual themes, or mature subject matter" are allowed on Tumblr, but "visual depictions of sexually explicit acts (or content with an overt focus on genitalia)" aren't. There's an exception for "historically significant art that you may find in a mainstream museum and which depicts sex acts -- such as from India's Sunga Empire," although it must be labeled with a mature content or "sexual themes" tag so that users can filter it from their dashboards.

"Nudity and other kinds of adult material are generally welcome. We're not here to judge your art, we just ask that you add a Community Label to your mature content so that people can choose to filter it out of their Dashboard if they prefer," say the community guidelines. However, users can't post links or ads to "adult-oriented affiliate networks," they can't advertise "escort or erotic services," and they can't post content that "promotes pedophilia," including "sexually suggestive" content with images of children.
On December 17th, 2018, Tumblr permanently banned adult content from its platform. The site was owned by Verizon at the time and later sold to WordPress.com owner Automattic, which largely maintained the ban "in large part because internet infrastructure services -- like payment processors and Apple's iOS App Store -- typically frown on explicit adult content," reports The Verge.

Medicine

'Science Has a Nasty Photoshopping Problem' (nytimes.com) 190

Dr. Bik is a microbiologist who has worked at Stanford University and for the Dutch National Institute for Health, and who is "blessed" with "what I'm told is a better-than-average ability to spot repeating patterns," according to their new Op-Ed in the New York Times.

In 2014 they'd spotted the same photo "being used in two different papers to represent results from three entirely different experiments...." Although this was eight years ago, I distinctly recall how angry it made me. This was cheating, pure and simple. By editing an image to produce a desired result, a scientist can manufacture proof for a favored hypothesis, or create a signal out of noise. Scientists must rely on and build on one another's work. Cheating is a transgression against everything that science should be. If scientific papers contain errors or — much worse — fraudulent data and fabricated imagery, other researchers are likely to waste time and grant money chasing theories based on made-up results.....

But were those duplicated images just an isolated case? With little clue about how big this would get, I began searching for suspicious figures in biomedical journals.... By day I went to my job in a lab at Stanford University, but I was soon spending every evening and most weekends looking for suspicious images. In 2016, I published an analysis of 20,621 peer-reviewed papers, discovering problematic images in no fewer than one in 25. Half of these appeared to have been manipulated deliberately — rotated, flipped, stretched or otherwise photoshopped. With a sense of unease about how much bad science might be in journals, I quit my full-time job in 2019 so that I could devote myself to finding and reporting more cases of scientific fraud.

Using my pattern-matching eyes and lots of caffeine, I have analyzed more than 100,000 papers since 2014 and found apparent image duplication in 4,800 and similar evidence of error, cheating or other ethical problems in an additional 1,700. I've reported 2,500 of these to their journals' editors and — after learning the hard way that journals often do not respond to these cases — posted many of those papers along with 3,500 more to PubPeer, a website where scientific literature is discussed in public....

Unfortunately, many scientific journals and academic institutions are slow to respond to evidence of image manipulation — if they take action at all. So far, my work has resulted in 956 corrections and 923 retractions, but a majority of the papers I have reported to the journals remain unaddressed.

Manipulated images "raise questions about an entire line of research, which means potentially millions of dollars of wasted grant money and years of false hope for patients." Part of the problem is that despite "peer review" at scientific journals, "peer review is unpaid and undervalued, and the system is based on a trusting, non-adversarial relationship. Peer review is not set up to detect fraud."

But there are other problems. Most of my fellow detectives remain anonymous, operating under pseudonyms such as Smut Clyde or Cheshire. Criticizing other scientists' work is often not well received, and concerns about negative career consequences can prevent scientists from speaking out. Image problems I have reported under my full name have resulted in hateful messages, angry videos on social media sites and two lawsuit threats....

Things could be about to get even worse. Artificial intelligence might help detect duplicated data in research, but it can also be used to generate fake data. It is easy nowadays to produce fabricated photos or videos of events that never happened, and A.I.-generated images might have already started to poison the scientific literature. As A.I. technology develops, it will become significantly harder to distinguish fake from real.

Science needs to get serious about research fraud.

Among their proposed solutions? "Journals should pay the data detectives who find fatal errors or misconduct in published papers, similar to how tech companies pay bounties to computer security experts who find bugs in software."
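For a sense of how the duplicate-figure screening described above can be automated, here is a toy "average hash" comparison. This is not Dr. Bik's method (she describes working by eye); perceptual hashing is just one common software approach to flagging near-identical figures, and the 4x4 "images" below are invented:

```python
# Toy "average hash" for flagging re-used figures: not Dr. Bik's method,
# just one common automated screening technique. Real tools hash scaled
# grayscale images; here the "images" are tiny invented pixel grids.

def average_hash(pixels):
    """pixels: a 2D list of grayscale values. Bit = 1 where pixel > mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; small distance suggests a duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

# Two toy 4x4 "images": fig_b is fig_a with a mild brightness shift, so
# their hashes match exactly; fig_c is a genuinely different figure.
fig_a = [[10, 200, 30, 220], [15, 210, 25, 215],
         [12, 205, 28, 218], [11, 202, 27, 221]]
fig_b = [[p + 5 for p in row] for row in fig_a]   # re-used, brightened copy
fig_c = [[220, 10, 215, 30], [210, 15, 225, 25],
         [205, 12, 218, 28], [202, 11, 221, 27]]  # different figure

print(hamming(average_hash(fig_a), average_hash(fig_b)))  # -> 0  (duplicate)
print(hamming(average_hash(fig_a), average_hash(fig_c)))  # -> 16 (distinct)
```

Note that a plain average hash would miss the rotated, flipped, and stretched duplicates the op-ed mentions; production screening tools add invariances for exactly those manipulations.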

Technology

Shutterstock Will Start Selling AI-Generated Stock Imagery With Help from OpenAI (theverge.com) 22

Will AI image generators kill the stock image industry? It's a question asked by many following the rise of text-to-image AI models in recent years. The answer from the industry's incumbents, though, is "no" -- not if we can start selling AI-generated content first. From a report: Today, stock image giant Shutterstock has announced an extended partnership with OpenAI, which will see the AI lab's text-to-image model DALL-E 2 directly integrated into Shutterstock "in the coming months." In addition, Shutterstock is launching a "Contributor Fund" that will reimburse creators when the company sells work to train text-to-image AI models. This follows widespread criticism from artists whose output has been scraped from the web without their consent to create these systems. Notably, Shutterstock is also banning the sale of AI-generated art on its site that is not made using its DALL-E integration.

Earth

Nord Stream Rupture May Mark Biggest Single Methane Release Ever Recorded, UN Says 604

The ruptures on the Nord Stream natural gas pipeline system under the Baltic Sea have led to what is likely the biggest single release of climate-damaging methane ever recorded, the United Nations Environment Programme said on Friday. Reuters reports: A huge plume of highly concentrated methane, a greenhouse gas far more potent but shorter-lived than carbon dioxide, was detected in an analysis this week of satellite imagery by researchers associated with UNEP's International Methane Emissions Observatory, or IMEO, the organization said. "This is really bad, most likely the largest emission event ever detected," Manfredi Caltagirone, head of the IMEO for UNEP, told Reuters. "This is not helpful in a moment when we absolutely need to reduce emissions."

Researchers at GHGSat, which uses satellites to monitor methane emissions, estimated the leak rate from one of four rupture points was 22,920 kilograms per hour. That is equivalent to burning about 630,000 pounds of coal every hour, GHGSat said in a statement. "This rate is very high, especially considering it's four days following the initial breach," the company said. The total amount of methane leaking from the Gazprom-led (GAZP.MM) pipeline system may be higher than from a major leak that occurred in December from offshore oil and gas fields in Mexican waters of the Gulf of Mexico, which spilled around 100 metric tons of methane per hour, Caltagirone said.

The Gulf of Mexico leak, also viewable from space, ultimately released around 40,000 metric tons of methane over 17 days, according to a study conducted by the Polytechnic University of Valencia and published in the journal Environmental Science & Technology Letters. That is the equivalent of burning 1.1 billion pounds of coal, according to the U.S. Environmental Protection Agency's Greenhouse Gas Equivalencies Calculator.
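The coal equivalences quoted in the two paragraphs above can be roughly reproduced. A back-of-the-envelope check, assuming the EPA Greenhouse Gas Equivalencies Calculator's factors of a methane global-warming potential of 25 and about 9.04e-4 metric tons of CO2 per pound of coal burned:

```python
# Back-of-the-envelope check of the coal equivalences quoted above, using
# (assumed) EPA Greenhouse Gas Equivalencies factors: methane global
# warming potential of 25, and ~9.04e-4 metric tons of CO2 per pound of
# coal burned.

GWP_METHANE = 25             # CO2-equivalence multiplier for methane
T_CO2_PER_LB_COAL = 9.04e-4  # metric tons of CO2 per pound of coal

def coal_lbs_equivalent(methane_tonnes):
    """Pounds of coal whose CO2 emissions match the given methane release."""
    co2e_tonnes = methane_tonnes * GWP_METHANE
    return co2e_tonnes / T_CO2_PER_LB_COAL

# 22,920 kg/hour of methane -> roughly 630,000 pounds of coal per hour:
print(f"{coal_lbs_equivalent(22.920):,.0f} lb/hour")

# 40,000 metric tons over 17 days -> roughly 1.1 billion pounds of coal:
print(f"{coal_lbs_equivalent(40_000):,.0f} lb total")
```

Both results land within rounding of the figures GHGSat and the EPA calculator report, which suggests those are the factors behind the quoted numbers.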

Technology

Magic Leap's Smaller, Lighter Second-Gen AR Glasses Are Now Available (engadget.com) 14

Magic Leap's second take on augmented reality eyewear is available. "The glasses are still aimed at developers and pros, but they include a number of design upgrades that make them considerably more practical -- and point to where AR might be headed," reports Engadget. From the report: The design is 50 percent smaller and 20 percent lighter than the original. It should be more comfortable to wear over long periods, then. Magic Leap also promises better visibility for AR in bright light (think a well-lit office) thanks to "dynamic dimming" that makes virtual content appear more solid. Lens optics supposedly deliver higher quality imagery with easier-to-read text, and the company touts a wider field of view (70 degrees diagonal) than comparable wearables.

You can expect decent power that includes a quad-core AMD Zen 2-based processor in the "compute pack," a 12.6MP camera (plus a host of cameras for depth, eye tracking and field-of-view) and 60FPS hand tracking for gestures. You'll only get 3.5 hours of non-stop use, but the 256GB of storage (the most in any dedicated AR device, Magic Leap claims) provides room for more sophisticated apps.
The base model of the glasses costs $3,299, with the Enterprise model amounting to about $5,000.

Businesses

Adobe Defends Its $20 Billion Deal for Figma (axios.com) 19

Adobe executives think there's a lot that critics of its $20 billion purchase of Figma are missing. From a report: In a meeting with Axios, Adobe general counsel Dana Rao defended the deal's price tag and highlighted why Adobe believes it needs Figma to help shape the design-software giant's broader future. Adobe XD just wasn't cutting it. It was a product designed for a single user sitting at a PC in a world that wants cloud-based tools for real-time multi-user collaboration. After seven years of investment, Adobe XD was bringing in just $15 million in annual recurring revenue on a standalone basis -- a minuscule fraction of Figma's $400 million annual recurring revenue. (That, in turn, is a minuscule fraction of Adobe's overall annual revenue of $17 billion.)

Adobe has essentially put XD on ice, assigning just 20 employees to the product in what it sees as "maintenance mode." Figma has more than 800 people. Adobe needs a rethink for the cloud era. Its current efforts have been about bringing its existing tools to the web. The Figma deal offers help with the longer-term challenge of "reimagining the whole thing," in Rao's words. Adobe Express is an early homegrown attempt, but Rao said Figma will help the company fully reinvent itself for the next era of design. Rao said over time Figma customers will benefit from Adobe's other resources, including its troves of fonts and stock imagery.

AI

Getty Images Bans AI-Generated Content Over Fears of Legal Challenges (theverge.com) 45

Getty Images has banned the upload and sale of illustrations generated using AI art tools like DALL-E, Midjourney, and Stable Diffusion. From a report: It's the latest and largest user-generated content platform to introduce such a ban, following similar decisions by sites including Newgrounds, PurplePort, and FurAffinity. Getty Images CEO Craig Peters told The Verge that the ban was prompted by concerns about the legality of AI-generated content and a desire to protect the site's customers. "There are real concerns with respect to the copyright of outputs from these models and unaddressed rights issues with respect to the imagery, the image metadata and those individuals contained within the imagery," said Peters. Given these concerns, he said, selling AI artwork or illustrations could potentially put Getty Images users at legal risk. "We are being proactive to the benefit of our customers," he added. One of Getty Images' biggest competitors, Shutterstock, also seems to be limiting some searches for AI content but hasn't yet introduced specific policies banning the material.

AI

Horrifying Woman Keeps Appearing In AI-Generated Images (vice.com) 98

An anonymous reader quotes a report from Motherboard: AI image generators like DALL-E and Midjourney have become an especially buzzy topic lately, and it's easy to see why. Using machine learning models trained on billions of images, the systems tap into the allure of the black box, creating works that feel both alien and strangely familiar. Naturally, this makes fertile ground for all sorts of AI urban legends, since nobody can really explain how the complex neural networks are ultimately deciding on the images they create. The latest example comes from an AI artist named Supercomposite, who posted disturbing and grotesque generated images of a woman who seems to appear in response to certain queries.

The woman, whom the artist calls "Loab," was first discovered as a result of a technique called "negative prompt weights," in which a user tries to get the AI system to generate the opposite of whatever they type into the prompt. To put it simply, different terms in the prompt can be "weighted" to determine how likely they will be to appear in the results. But by assigning the prompt a negative weight, you essentially tell the AI system, "Generate what you think is the opposite of this prompt." In this case, using a negative-weight prompt on the word "Brando" generated the image of a logo featuring a city skyline and the words "DIGITA PNTICS." When Supercomposite used the negative weights technique on the words in the logo, Loab appeared. "Since Loab was discovered using negative prompt weights, her gestalt is made from a collection of traits that are equally far away from something," Supercomposite wrote in a thread on Twitter. "But her combined traits are still a cohesive concept for the AI, and almost all descendent images contain a recognizable Loab."
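The weighting idea can be illustrated with a toy conditioning vector. This sketch is not any particular model's API: real systems weight text embeddings produced by an encoder such as CLIP, and the three-dimensional "embeddings" here are invented:

```python
# Toy illustration of negative prompt weighting; not any real model's API.
# Real systems weight text embeddings from an encoder such as CLIP; the
# three-dimensional "embeddings" below are invented for illustration.

def combine_prompts(weighted_embeddings):
    """Sum weight * embedding. A negative weight pushes the conditioning
    vector away from that prompt's region of embedding space."""
    dim = len(weighted_embeddings[0][1])
    out = [0.0] * dim
    for weight, emb in weighted_embeddings:
        for i in range(dim):
            out[i] += weight * emb[i]
    return out

brando = [0.9, 0.1, 0.3]  # hypothetical embedding for "Brando"
scenic = [0.2, 0.8, 0.5]  # hypothetical embedding for some other prompt

# Condition toward `scenic` and away from `brando` (a "Brando::-1" prompt):
conditioning = combine_prompts([(1.0, scenic), (-1.0, brando)])
print([round(v, 2) for v in conditioning])  # -> [-0.7, 0.7, 0.2]
```

The resulting vector points away from everything associated with the negated term, which is why such prompts land in odd, sparsely trained corners of the model's latent space.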

The images quickly went viral on social media, leading to all kinds of speculation on what could be causing the unsettling phenomenon. Most disturbingly, Supercomposite claims that generated images derived from the original image of Loab almost universally veer into the realm of horror, graphic violence, and gore. But no matter how many variations were made, the images all seem to feature the same terrifying woman. "Through some kind of emergent statistical accident, something about this woman is adjacent to extremely gory and macabre imagery in the distribution of the AI's world knowledge," Supercomposite wrote. It's unclear which AI tools were used to generate the images, and Supercomposite declined to elaborate when reached via Twitter DM. "I can't confirm or deny which model it is for various reasons unfortunately! But I can confirm Loab exists in multiple image-generation AI models," Supercomposite told Motherboard.

Google

Google Maps Launches Street View in India After 11-Year Wait (nasdaq.com) 9

Alphabet's Google Maps on Wednesday launched its panoramic Street View service in 10 Indian cities in partnership with Tech Mahindra and Genesys, 11 years after a first attempt ran into regulatory troubles. From a report: The feature, which offers 360-degree views of streets around the world using photos taken by cruising vehicles, has faced privacy complaints and regulatory scrutiny in many countries. The Indian launch comes after Google was denied permission at least twice in the last decade by the government over security concerns. Company executives said on Wednesday it was able to meet the regulatory requirements thanks to a new geospatial policy from India last year, which allows foreign map operators to provide panoramic imagery by licensing the data from local partners. Data collection was entirely done by Tech Mahindra and Genesys, Google said, adding the service would be available in over 50 Indian cities by the end of this year.

United Kingdom

UK Cybersecurity Chiefs Back Plan To Scan Phones for Child Abuse Images (theguardian.com) 73

Tech companies should move ahead with controversial technology that scans for child abuse imagery on users' phones, the technical heads of GCHQ and the UK's National Cyber Security Centre have said. From a report: So-called "client-side scanning" would involve service providers such as Facebook or Apple building software that monitors communications for suspicious activity without needing to share the contents of messages with a centralised server. Ian Levy, the NCSC's technical director, and Crispin Robinson, the technical director of cryptanalysis -- codebreaking -- at GCHQ, said the technology could protect children and privacy at the same time.

"We've found no reason why client-side scanning techniques cannot be implemented safely in many of the situations one will encounter," they wrote in a discussion paper published on Thursday, which the pair said was "not government policy." They argued that opposition to proposals for client-side scanning -- most famously a plan from Apple, now paused indefinitely, to scan photos before they are uploaded to the company's image-sharing service -- rested on specific flaws, which were fixable in practice. They suggested, for instance, requiring the involvement of multiple child protection NGOs, to guard against any individual government using the scanning apparatus to spy on civilians; and using encryption to ensure that the platform never sees any images that are passed to humans for moderation, instead involving only those same NGOs.
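The matching step at the heart of client-side scanning can be sketched in a few lines. This is a deliberate oversimplification: real proposals use perceptual hashes (e.g., PhotoDNA or Apple's NeuralHash) so that re-encoded or resized copies still match, whereas the exact SHA-256 used here for brevity would miss any altered image. All bytes and digests below are invented:

```python
# Highly simplified sketch of the hash-matching idea behind client-side
# scanning. Real proposals use perceptual hashes (PhotoDNA, NeuralHash)
# so altered copies still match; exact SHA-256, used here for brevity,
# would miss any re-encoded image. All image bytes below are made up.

import hashlib

# The device holds only opaque digests of known illegal images, supplied
# by child-protection organizations -- never the images themselves.
known_digests = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def scan_before_upload(image_bytes):
    """Return True if the image may be uploaded (no match found)."""
    return hashlib.sha256(image_bytes).hexdigest() not in known_digests

print(scan_before_upload(b"holiday-photo-bytes"))    # True  (allowed)
print(scan_before_upload(b"known-bad-image-bytes"))  # False (flagged)
```

Because only digests leave the device on a match, proponents argue the server never sees message contents; critics counter that whoever controls the digest list controls what gets flagged, which is the risk the NGO-involvement proposal above tries to address.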

AI

DALL-E Mini Is the Internet's Favorite AI Meme Machine (wired.com) 52

The viral image-generation app is good, absurd fun. It's also giving the world an education in how artificial intelligence may warp reality. From a report: On June 6, Hugging Face, a company that hosts open source artificial intelligence projects, saw traffic to an AI image-generation tool called DALL-E Mini skyrocket. The outwardly simple app, which generates nine images in response to any typed text prompt, was launched nearly a year ago by an independent developer. But after some recent improvements and a few viral tweets, its ability to crudely sketch all manner of surreal, hilarious, and even nightmarish visions suddenly became meme magic. Behold its renditions of "Thanos looking for his mom at Walmart," "drunk shirtless guys wandering around Mordor," "CCTV camera footage of Darth Vader breakdancing," and "a hamster Godzilla in a sombrero attacking Tokyo." As more people created and shared DALL-E Mini images on Twitter and Reddit, and more new users arrived, Hugging Face saw its servers overwhelmed with traffic. "Our engineers didn't sleep for the first night," says Clement Delangue, CEO of Hugging Face, on a video call from his home in Miami. "It's really hard to serve these models at scale; they had to fix everything." In recent weeks, DALL-E Mini has been serving up around 50,000 images a day.

DALL-E Mini's viral moment doesn't just herald a new way to make memes. It also provides an early look at what can happen when AI tools that make imagery to order become widely available, and a reminder of the uncertainties about their possible impact. Algorithms that generate custom photography and artwork might transform art and help businesses with marketing, but they could also have the power to manipulate and mislead. A notice on the DALL-E Mini web page warns that it may "reinforce or exacerbate societal biases" or "generate images that contain stereotypes against minority groups." DALL-E Mini was inspired by a more powerful AI image-making tool called DALL-E (a portmanteau of Salvador Dali and WALL-E), revealed by AI research company OpenAI in January 2021. DALL-E itself is not openly available, due to concerns that it would be misused.

Moon

Rogue Rocket's Moon Crash Site Spotted By NASA Probe (space.com) 16

The grave of a rocket body that slammed into the moon more than three months ago has been found. Space.com reports: Early this year, astronomers determined that a mysterious rocket body was on course to crash into the lunar surface on March 4. Their calculations suggested that the impact would occur inside Hertzsprung Crater, a 354-mile-wide (570 kilometers) feature on the far side of the moon. Their math was on the money, it turns out. Researchers with NASA's Lunar Reconnaissance Orbiter (LRO) mission announced last night (June 23) that the spacecraft had spotted a new crater in Hertzsprung -- almost certainly the resting place of the rogue rocket.

Actually, LRO imagery shows that the impact created two craters, an eastern one about 59 feet (18 meters) wide superimposed over a western one roughly 52 feet (16 m) across. "The double crater was unexpected and may indicate that the rocket body had large masses at each end," Mark Robinson of Arizona State University, the principal investigator of the Lunar Reconnaissance Orbiter Camera (LROC), wrote in an update last night. "Typically a spent rocket has mass concentrated at the motor end; the rest of the rocket stage mainly consists of an empty fuel tank," he added. "Since the origin of the rocket body remains uncertain, the double nature of the crater may help to indicate its identity."

As Robinson noted, the moon-crashing rocket remains mysterious. Early speculation held that it was likely the upper stage of the SpaceX Falcon 9 rocket that launched the Deep Space Climate Observatory (DSCOVR) mission for NASA and the U.S. National Oceanic and Atmospheric Administration in February 2015. But further observations and calculations changed that thinking, leading many scientists to conclude that the rocket body was probably part of the Long March 3 booster that launched China's Chang'e 5T1 mission around the moon in October 2014. China has denied that claim.

Google

Google Tool Shows What's on the Surface of the Earth in Real Time (theverge.com) 13

A new dataset from Google shows the features on the surface of the Earth in near real time, the company announced Thursday. The tool, called Dynamic World, uses deep learning and satellite imagery to develop a high-resolution land cover map that shows which bits of land have features like trees, crops, or water. From a report: Land cover maps usually take a long time to produce, and there are big gaps between the time images are taken and when the data is published. They also often don't have a detailed breakdown of what's on the ground in a particular area -- a city would be classified as "built-up" (a designation for human-altered landscapes) even if it contains big sections of parkland, for example. Dynamic World classifies the land cover type for every 1,100 square feet (roughly a 10-by-10-meter pixel), Google said. It shows how likely it is that each section is covered by one of nine cover types: water, flooded vegetation, built-up areas, trees, crops, bare ground, grass, shrub / scrub, and snow / ice. Google detailed its system, developed with the World Resources Institute, in a paper published in Nature's Scientific Data.
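Since Dynamic World reports a probability for each of the nine cover types per pixel, a consumer of the data can reduce those probabilities to a single label by taking the most likely class. A minimal sketch, with an invented example pixel (the class names come from the article; the probability values are made up):

```python
# Hypothetical sketch: collapse per-pixel class probabilities into one label.
# Class names are the nine cover types listed in the article.

DYNAMIC_WORLD_CLASSES = [
    "water", "trees", "grass", "flooded_vegetation", "crops",
    "shrub_and_scrub", "built_up", "bare_ground", "snow_and_ice",
]

def most_likely_class(probabilities):
    """Return the cover type with the highest probability for one pixel."""
    if len(probabilities) != len(DYNAMIC_WORLD_CLASSES):
        raise ValueError("expected one probability per class")
    best_index = max(range(len(probabilities)), key=lambda i: probabilities[i])
    return DYNAMIC_WORLD_CLASSES[best_index]

# Example pixel: mostly trees, some grass.
pixel = [0.01, 0.72, 0.15, 0.0, 0.05, 0.04, 0.01, 0.01, 0.01]
print(most_likely_class(pixel))  # trees
```

Keeping the full probability vector, rather than only the winning label, is what lets downstream users distinguish a confidently classified forest from an ambiguous mix of grass and shrub.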

Google

Google Brings Street View History To Phones, Introduces 'Street View Studio' (arstechnica.com) 4

Today is the 15th birthday of Google Maps Street View, Google's project to take ground-level, 360-degree photographs of the entire world. To celebrate, the company is rolling out a few new features. From a report: First up, Google is bringing historical Street View data to iOS and Android phones. The feature has long existed on desktop browsers, where you can click into Street View mode and then time travel through Google's image archives. When you tap on a place to see Street View imagery, a "see more dates" button will appear next to the current age of the photo, letting you browse all the photos for that area going back to 2007. Google says the feature will release "starting today on Android and iOS globally," though, like all Google product launches, it will take some time to fully roll out.

If you'd like to help Google with its plan to photograph the entire world, the company is launching "Street View Studio." Google calls this "a new platform with all the tools you need to publish 360 image sequences quickly and in bulk." The Street View app is still around for people who want to build a 360 photosphere from a regular smartphone camera, but Google imagines Street View Studio as a tool for people with consumer 360 cameras. Google has a store-style page that lists compatible 360 cameras; the options range from sub-$200 fisheye cameras to the $3,600, ball-shaped Insta360 Pro, which looks like something out of Star Wars.

Twitter

Twitter Will Hide Tweets That Share False Info During a Crisis (theverge.com) 160

On Thursday, Twitter announced a new policy for dealing with misinformation during a period of crisis, establishing new standards for gating or blocking the promotion of certain tweets if they are seen as spreading misinformation. The Verge reports: "Content moderation is more than just leaving up or taking down content," explained Yoel Roth, Twitter's head of safety and integrity, in a blog post detailing the new policy, "and we've expanded the range of actions we may take to ensure they're proportionate to the severity of the potential harm." The new policy puts particular scrutiny on false reporting of events, false allegations involving weapons or use of force, or broader misinformation regarding atrocities or international response.

Hoax tweets and other misinformation regularly go viral during emergencies, as users rush to share unverified information. The sheer speed of events makes it difficult to implement normal verification or fact-checking systems, creating a significant challenge for moderators. Under the new policy, tweets classified as misinformation will not necessarily be deleted or banned; instead, Twitter will add a warning label requiring users to click a button before the tweet can be displayed (similar to the existing labels for explicit imagery). The tweets will also be blocked from algorithmic promotion. The stronger standards are meant to be limited to specific events. Twitter will initially apply the policy to content concerning the ongoing Russian invasion of Ukraine, but the company expects to apply the rules to all emerging crises going forward. For the purposes of the policy, crisis is defined as "situations in which there is a widespread threat to life, physical safety, health, or basic subsistence."
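The policy described above amounts to a gating decision: during a declared crisis, a tweet flagged as misinformation is neither deleted nor amplified, but hidden behind a click-through warning and excluded from algorithmic promotion. A sketch of that logic, with invented names and structure (this is illustrative, not Twitter's actual implementation):

```python
# Hypothetical sketch of the crisis-misinformation gating described in the
# article. Field and function names are invented for illustration.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    visible: bool                # tweet remains on the platform
    warning_label: bool          # reader must click through a warning to view
    algorithmic_promotion: bool  # eligible for recommendation/amplification

def moderate(is_crisis: bool, flagged_as_misinfo: bool) -> ModerationDecision:
    if is_crisis and flagged_as_misinfo:
        # Gate rather than remove: keep the tweet up, but label it and
        # stop amplifying it.
        return ModerationDecision(visible=True, warning_label=True,
                                  algorithmic_promotion=False)
    return ModerationDecision(visible=True, warning_label=False,
                              algorithmic_promotion=True)

decision = moderate(is_crisis=True, flagged_as_misinfo=True)
print(decision.warning_label, decision.algorithmic_promotion)  # True False
```

The design choice here mirrors Roth's framing: moderation is a spectrum of actions proportionate to harm, not a binary leave-up/take-down switch.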
