Moon

iPhone Video Shows 'Earthset' From Space 44

NASA astronaut Reid Wiseman posted an out-of-this-world iPhone video on Sunday, showing Earth disappear behind the Moon at 8x zoom. "I could barely see the Moon through the docking hatch window but the iPhone was the perfect size to catch the view," said Wiseman, noting that this video is "uncropped, uncut with 8x zoom" and "quite comparable to the view of the human eye." The New York Times says the video marks the first time an "Earthset" has been captured on video.

"We've seen our fair share of remarkable images and videos from NASA's Artemis II mission around the Moon. Some of those were even captured on iPhone," notes 9to5Mac. "But Reid Wiseman, astronaut and commander for the Artemis II mission, just posted a new video that might take the crown for the most impressive yet."
Music

Google's AI Music Maker Is Coming To the Gemini App 7

Google is bringing its Lyria 3 AI music model into the Gemini app, allowing users to generate 30-second songs from text, images, or video prompts directly within the chatbot. The Verge reports: Lyria 3's text-to-music capabilities allow Gemini app users to make songs by describing specific genres, moods, or memories, such as asking for an "Afrobeat track for my mother about the great times we had growing up." The music generator can make instrumental audio and songs with lyrics composed automatically based on user prompts. Users can also upload photographs and video references, which Gemini then uses to generate a track with lyrics that fit the vibe.

"The goal of these tracks isn't to create a musical masterpiece, but rather to give you a fun, unique way to express yourself," Google said in its announcement blog. Gemini will add custom cover art generated by Nano Banana to songs created on the app, which aims to make them easier to share and download. Google is also bringing Lyria 3 to YouTube's Dream Track tool, which allows creators to make custom AI soundtracks for Shorts.

Dream Track and Lyria were initially demonstrated with the ability to mimic the style and voice of famous performers. Google says it's been "very mindful" of copyright in the development of Lyria 3 and that the tool "is designed for original expression, not for mimicking existing artists." When prompted for a specific artist, Gemini will make a track that "shares a similar style or mood" and uses filters to check outputs against existing content.
Stats

AI Use at Work Has Increased, Gallup Poll Finds (apnews.com) 53

An anonymous reader shared this report from the Associated Press: American workers adopted artificial intelligence into their work lives at a remarkable pace over the past few years, according to a new poll. Some 12% of employed adults say they use AI daily in their job, according to a Gallup Workforce survey conducted this fall of more than 22,000 U.S. workers.

The survey found roughly one-quarter say they use AI at least frequently, which is defined as at least a few times a week, and nearly half say they use it at least a few times a year. That compares with 21% who were using AI at least occasionally in 2023, when Gallup began asking the question, and points to the impact of the widespread commercial boom that ChatGPT sparked for generative AI tools that can write emails and computer code, summarize long documents, create images or help answer questions...

While frequent AI use is on the rise with many employees, AI adoption remains higher among those working in technology-related fields. About 6 in 10 technology workers say they use AI frequently, and about 3 in 10 do so daily. The share of Americans working in the technology sector who say they use AI daily or regularly has grown significantly since 2023, but there are indications that AI adoption could be starting to plateau after an explosive increase between 2024 and 2025...

A separate Gallup Workforce survey from 2025 found that even as AI use is increasing, few employees said it was "very" or "somewhat" likely that new technology, automation, robots or AI will eliminate their job within the next five years. Half said it was "not at all likely," but that has decreased from about 6 in 10 in 2023.

A bar chart lists the sectors most likely to be using AI at their jobs:
  1. Technology (77%)
  2. Finance (64%)
  3. College/University (63%)
  4. Professional Services (62%)
  5. K-12 Education (56%)
  6. Community/Social Services (43%)
  7. Government/Public Policy (42%)
  8. Manufacturing (41%)
  9. Health Care (41%)
  10. Retail (33%)

United States

Three New California Laws Target Tech Companies' Interactions with Children 47

California Governor Gavin Newsom signed three bills on Monday that establish the nation's most comprehensive framework for regulating how technology companies interact with minors. AB 56 requires social media platforms to display health warnings to users under 18. A child must view a skippable ten-second warning upon logging on each day. An unskippable thirty-second warning must appear if a child spends more than three hours on a platform. That warning repeats after each additional hour. The warnings must state that social media "can have a profound risk of harm to the mental health and well-being of children and adolescents." Minnesota passed a similar law in July.
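The warning cadence described above amounts to a small piece of scheduling logic. A minimal JavaScript sketch, with function and field names of our own invention (the statute specifies behavior, not an API): given minutes of platform use today and whether this is the first login of the day, return which warnings are due.

```javascript
// Hypothetical sketch of the AB 56 warning schedule. A skippable 10-second
// warning appears on the first login of the day; an unskippable 30-second
// warning appears at the 3-hour mark and repeats after each additional hour
// (3h -> 1 warning, 4h -> 2, 5h -> 3, ...).
function warningsDue(minutesUsedToday, isFirstLoginOfDay) {
  const due = [];
  if (isFirstLoginOfDay) {
    due.push({ type: "skippable", seconds: 10 });
  }
  const hours = minutesUsedToday / 60;
  if (hours >= 3) {
    const count = Math.floor(hours - 3) + 1; // one at 3h, plus one per added hour
    due.push({ type: "unskippable", seconds: 30, count });
  }
  return due;
}
```

A compliance implementation would also need per-user, per-day usage tracking; the sketch only encodes the cadence itself.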

SB 243 makes California the first state to regulate AI companion chatbots. The law takes effect January 1, 2026. Companies must implement age verification and disclose that interactions are artificially generated. Chatbots cannot represent themselves as healthcare professionals. Companies must offer break reminders to minors and prevent them from viewing sexually explicit images. The legislation gained momentum after teenager Adam Raine died by suicide following conversations with OpenAI's ChatGPT. A Colorado family filed suit against Character AI after their daughter's suicide following problematic conversations with the company's chatbots.

AB 1043 requires device-makers like Apple and Google to collect birth dates when parents set up devices for children. Device-makers must group users into four age brackets and share this information with apps. Google, Meta, OpenAI, and Snap supported the bill. The Motion Picture Association opposed it.
Firefox

Firefox Will Offer Visual Searching on Images With AI-Powered Google Lens (webpronews.com) 45

"We've decided to support image-based search," announced the product manager for Firefox Search. Powered by the AI-driven Google Lens search technology, they promise the new feature offers "a frictionless, fast, and curiosity-sparking way to (as Google puts it) 'search what you see'." With just a right-click on any image, you'll be able to:

- Find similar products, places, or objects
- Copy, translate, or search text from images
- Get inspiration for learning, travel, or shopping

Look for the new "Search Image with Google Lens" option in your right-click menu (tagged with a NEW badge at first). This is a desktop-only feature, and it will start gradually rolling out worldwide. Note: Google must be set as your default search engine for this feature to appear.

We'll be listening closely to your feedback as we roll it out. Some of the things we're wondering about:

- Does the placement in the context menu align with your expectations?
- Would you prefer the option to choose your visual search provider?
- Where else would you like entry points to visual search (e.g. when you open a new tab, in the address bar, on mobile devices, etc.)?

We can't wait to hear your thoughts as the rollout begins!

Some thoughts from WebProNews: Mozilla emphasizes that this is an opt-in feature, giving users control over activation, which aligns with the company's longstanding commitment to privacy and user agency.

Yet, for industry observers, this partnership with Google raises intriguing questions about competitive dynamics in the browser space, where Firefox has historically positioned itself as an independent alternative to Chrome... This move comes at a time when browsers are increasingly becoming platforms for AI-driven enhancements, as evidenced by recent updates in competitors like Microsoft's Edge, which integrates Copilot AI. Mozilla's decision to leverage Google Lens rather than developing an in-house solution could be seen as a pragmatic step to accelerate feature parity, especially given Firefox's smaller market share. Insiders note that by tapping into established technologies, Mozilla can focus resources on core strengths like privacy protections, potentially attracting users disillusioned with data-heavy ecosystems... While mobile users might feel left out, the phased rollout over the next few weeks allows for feedback loops through community channels, a hallmark of Mozilla's open-source ethos.

Data from similar integrations in other browsers suggests visual search can boost engagement by 15-20%, per industry reports, though Mozilla has not disclosed specific metrics yet... Looking ahead, Mozilla's strategy appears geared toward incremental innovations that bolster user retention without alienating its privacy-focused base. If successful, this could help Firefox claw back some ground against Chrome's dominance, estimated at over 60% market share. For now, the feature's gradual deployment invites ongoing dialogue, underscoring Mozilla's community-driven model in an industry often criticized for top-down decisions.

Facebook

The Meta AI App Is a Privacy Disaster (techcrunch.com) 20

Meta's standalone AI app is broadcasting users' supposedly private conversations with the chatbot to the public, creating what could amount to a widespread privacy breach. Users appear largely unaware that hitting the app's share button publishes their text exchanges, audio recordings, and images for anyone to see.

The exposed conversations reveal sensitive information: people asking for help with tax evasion, whether family members might face arrest for proximity to white-collar crimes, and requests to write character reference letters that include real names of individuals facing legal troubles. Meta provides no clear indication of privacy settings during posting, and if users log in through Instagram accounts set to public, their AI searches become equally visible.
Data Storage

Internet Archive Now Livestreams History As It's Being Preserved (9to5mac.com) 2

The Internet Archive has begun livestreaming its microfiche digitization center on YouTube, showcasing the real-time preservation of fragile film cards into searchable public documents. The work is part of Democracy's Library, a global initiative to digitize and share millions of government records. 9to5Mac reports: The livestream was brought to life by Sophia Tung, who previously gained attention for her viral robotaxi depot stream. Her new video explains how and why this new livestream project came together [...].

The livestream features five scanning stations at work, with one shown in close-up as operators digitize microfiche cards in real time. Each card holds up to 100 pages of public records. High-resolution cameras capture the images, software stitches and crops the pages, and the results are made text-searchable and freely accessible through Democracy's Library. Live scanning takes place Monday through Friday, 7:30 a.m. to 3:30 p.m. PT, excluding U.S. holidays, with a second shift expected to begin soon.

Google

People Are Using Google's New AI Model To Remove Watermarks From Images (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch: Last week, Google expanded access to its Gemini 2.0 Flash model's image generation feature, which lets the model natively generate and edit image content. It's a powerful capability, by all accounts. But it also appears to have few guardrails. Gemini 2.0 Flash will uncomplainingly create images depicting celebrities and copyrighted characters, and -- as alluded to earlier -- remove watermarks from existing photos.

As several X and Reddit users noted, Gemini 2.0 Flash won't just remove watermarks, but will also attempt to fill in any gaps created by a watermark's deletion. Other AI-powered tools do this, too, but Gemini 2.0 Flash seems to be exceptionally skilled at it -- and free to use. To be clear, Gemini 2.0 Flash's image generation feature is labeled as "experimental" and "not for production use" at the moment, and is only available in Google's developer-facing tools like AI Studio. The model also isn't a perfect watermark remover. Gemini 2.0 Flash appears to struggle with certain semi-transparent watermarks and watermarks that cover large portions of images.

Google

Google Releases SpeciesNet, an AI Model Designed To Identify Wildlife (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: Google has open sourced an AI model, SpeciesNet, designed to identify animal species by analyzing photos from camera traps. Researchers around the world use camera traps -- digital cameras connected to infrared sensors -- to study wildlife populations. But while these traps can provide valuable insights, they generate massive volumes of data that take days to weeks to sift through. In a bid to help, Google launched Wildlife Insights, an initiative of the company's Google Earth Outreach philanthropy program, around six years ago. Wildlife Insights provides a platform where researchers can share, identify, and analyze wildlife images online, collaborating to speed up camera trap data analysis.

Many of Wildlife Insights' analysis tools are powered by SpeciesNet, which Google claims was trained on over 65 million publicly available images and images from organizations like the Smithsonian Conservation Biology Institute, the Wildlife Conservation Society, the North Carolina Museum of Natural Sciences, and the Zoological Society of London. Google says that SpeciesNet can classify images into one of more than 2,000 labels, covering animal species, taxa like "mammalian" or "Felidae," and non-animal objects (e.g. "vehicle"). SpeciesNet is available on GitHub under an Apache 2.0 license, meaning it can be used commercially largely sans restrictions.

The Internet

Brave Now Lets You Inject Custom JavaScript To Tweak Websites (bleepingcomputer.com) 12

Brave Browser version 1.75 introduces "custom scriptlets," a new feature that allows advanced users to inject their own JavaScript into websites for enhanced customization, privacy, and usability. The feature is similar to the TamperMonkey and GreaseMonkey browser extensions, notes BleepingComputer. From the report: "Starting with desktop version 1.75, advanced Brave users will be able to write and inject their own scriptlets into a page, allowing for better control over their browsing experience," explained Brave in the announcement. Brave says the feature was initially created to debug the browser's adblock component, but the company felt it was too valuable not to share with users. Brave's custom scriptlets feature can be used to modify webpages for a wide variety of privacy, security, and usability purposes.

For privacy-related changes, users can write scripts that block JavaScript-based trackers, randomize fingerprinting APIs, and substitute Google Analytics scripts with a dummy version. In terms of customization and accessibility, the scriptlets could be used to hide sidebars, pop-ups, floating ads, or annoying widgets; force dark mode even on sites that don't support it; expand content areas; force infinite scrolling; adjust text colors and font size; and auto-expand hidden content.

For performance and usability, the scriptlets can block video autoplay, lazy-load images, auto-fill forms with predefined data, enable custom keyboard shortcuts, bypass right-click restrictions, and automatically click confirmation dialogs. The possible actions achievable by injected JavaScript snippets are virtually endless. However, caution is advised, as running untrusted custom scriptlets may cause issues or even introduce some risk.
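Since scriptlets are ordinary page JavaScript, a small example makes the idea concrete. This is a hypothetical scriptlet (names and structure are our own; Brave only requires a valid snippet) that strips autoplay from media elements, including players injected after page load:

```javascript
// Pure helper: should this element have autoplay disabled?
function shouldBlockAutoplay(tagName, hasAutoplayAttr) {
  return ["VIDEO", "AUDIO"].includes(tagName.toUpperCase()) && hasAutoplayAttr;
}

// DOM wiring: only runs inside a real page.
if (typeof document !== "undefined") {
  const disable = (el) => {
    if (el.tagName && shouldBlockAutoplay(el.tagName, el.hasAttribute("autoplay"))) {
      el.removeAttribute("autoplay");
      if (typeof el.pause === "function") el.pause();
    }
  };
  document.querySelectorAll("video, audio").forEach(disable);
  // Sites often insert players late; watch for nodes added after load.
  new MutationObserver((mutations) => {
    for (const m of mutations) {
      for (const node of m.addedNodes) {
        if (node.nodeType === 1) disable(node); // element nodes only
      }
    }
  }).observe(document.documentElement, { childList: true, subtree: true });
}
```

A tracker-blocking or forced-dark-mode scriptlet follows the same shape: a small predicate plus DOM wiring.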

Iphone

Apple Announces 'Invites' App, Raises AppleCare+ Subscription Prices For iPhone 27

Apple has announced Apple Invites, a new iPhone app designed to help you manage your social life. Engadget reports: The idea behind Apple Invites is that you can create and share custom invitations for any event or occasion. You can use your own photos or backgrounds in the app as an image for the invite. Image Playground is built into Invites, and you can use that to generate an image for the invitation instead. Other Apple Intelligence features such as Writing Tools are baked in as well, in case you need a hand to craft the right message for your invitation.

The tech giant also said it was increasing AppleCare+ subscription prices for the iPhone, "raising the cost by 50 cents for all models in the United States," according to MacRumors. From the report: Standard AppleCare+ for the iPhone 16 models is now priced at $10.49 per month, for example, up from the prior $9.99 per month price. The 50 cent price increase applies to all available AppleCare+ plans for Apple's current iPhone lineup, and it includes both the standard plan and the Theft and Loss plan. The two-year AppleCare+ subscription prices have not changed, nor have the service fees and deductibles. The increased prices are only applicable when paying for AppleCare+ on a monthly basis. Apple has not raised the prices of AppleCare+ subscription plans for the iPad, Mac, or Apple Watch.
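Since the increase applies only to monthly billing while the two-year upfront price is unchanged, the practical delta over a typical coverage term is easy to compute:

```javascript
// Back-of-the-envelope: what the 50-cent monthly increase adds up to
// over a two-year AppleCare+ term, for monthly subscribers only.
const increasePerMonth = 0.50;                       // dollars
const months = 24;                                   // typical coverage span
const extraOverTwoYears = increasePerMonth * months; // $12 more than before
```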
AI

Microsoft Rolls Back Its Bing Image Creator Model After Users Complain of Degraded Quality 14

Microsoft temporarily rolled back its Bing Image Creator upgrade from OpenAI's DALL-E 3 PR16 to the previous PR13 version after users reported degraded image quality, including cartoonish and "lifeless" results. TechCrunch reports: Ahead of the holidays, Microsoft said it was upgrading the AI model behind Bing Image Creator, the AI-powered image editing tool built into the company's Bing search engine. Microsoft promised that the new model -- the latest version of OpenAI's DALL-E 3 model, code-named PR16 -- would allow users to create images "twice as fast as before" with "higher quality." But it didn't deliver. Complaints quickly flooded X and Reddit.

"The DALL-E we used to love is gone forever," said one Redditor. "I'm using ChatGPT now because Bing has become useless for me," wrote another. The blowback was such that Microsoft said it'll restore the previous model to Bing Image Creator until it can address the issues. "We've been able to [reproduce] some of the issues reported, and plan to revert to [DALL-E 3] PR13 until we can fix them," Jordi Ribas, head of search at Microsoft, said in a post on X Tuesday evening. "The deployment process is very slow unfortunately. It started over a week ago and will take 2-3 more weeks to get to 100%."
Robotics

'Why the World Needs Lazier Robots' (msn.com) 16

"Robots and AI models share one crucial characteristic," writes the Washington Post. "Whether to move around, conduct conversations or solve problems, they function by constantly taking in and computing increasingly vast quantities of data. It's a brute-force approach to automation. Processing all that data makes them such energy guzzlers that their planet-warming pollution could outweigh any benefits they offer."

But then the article visits the robot soccer team of René van de Molengraft (chair of robotics at Eindhoven University of Technology in the Netherlands). "One solution, Molengraft thinks, might lie in 'lazy robotics,' a cheeky term to describe machines doing less and taking shortcuts..." There may be ceilings for laziness: limits to how much superfluous energy use can be stripped away before robots stop functioning as they should. Still, Molengraft said, "The truth is: Robots are still doing a lot of things that they shouldn't be doing." To waste less energy, robots need to do less of everything: move less, and think less, and sense less. They need to focus only on what's important at any particular moment. Which, after all, is what humans do, even if we don't always realize it....

Lazy robotics is already percolating out of university labs and into the R&D wings of corporations.... On the outskirts of Eindhoven, engineers at health technology firm Philips have encoded lazy robotics into two porcelain-white machines. These robots, named FlexArm and Biplane, move around an operating theater with smooth hums, taking X-ray images to help surgeons install cardiac stents or work on the brain with greater precision.... The robots use proximity sensors, which use far less energy. Lazy robotics can also cut down on the number of X-rays during a procedure. Frequently, surgeons take multiple X-rays to make their work as precise as possible. But with the robots' help, they can track the exact coordinates on a patient's body they are operating on in real time...

The theories behind lazy robotics make robots smart in a more practical way: by coding in an awareness of what they don't need to know. It may be a while before these solutions are deployed at scale out in the world, but their potential applications are already evident... Molengraft sees an extension of lazy robotics into the realm of generative AI, in which machines don't learn how to move but learn how to learn by processing veritable oceans of data... It's wiser to build versions that contain only the necessary information. A language model used by software engineers, for instance, shouldn't need to run through its training data about world history, sporting records or children's literature. "Not every AI model has to be able to tell us about the first Harry Potter book," Molengraft said.

The less data an AI model crunches, the less energy it uses — a vital efficiency fillip given that ChatGPT now uses 500,000 kilowatt-hours of energy a day, responding to 200 million queries. A U.S. household would need more than 17,000 days on average to rack up the same electricity bill... Molengraft sees this work as indispensable if the forthcoming age of machines is to be a cleaner time as well.
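Those figures can be checked directly. In the sketch below, the 29 kWh/day household average is our own assumption (roughly the published U.S. average); the article states only the totals:

```javascript
// Working through the article's energy figures.
const chatgptKwhPerDay = 500_000;     // reported daily usage
const queriesPerDay = 200_000_000;    // reported daily queries
const whPerQuery = (chatgptKwhPerDay / queriesPerDay) * 1000; // 2.5 Wh per query

const householdKwhPerDay = 29;        // assumed average U.S. household
const daysToMatch = chatgptKwhPerDay / householdKwhPerDay;    // ~17,241 days, matching "more than 17,000"
```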

Privacy

Netflix Subpoenas Discord To ID Alleged Arcane, Squid Game Leaker 5

Netflix is looking toward Discord for help in figuring out who, exactly, is leaking unreleased footage from some of its popular shows. From a report: The Northern District of California court issued a subpoena on Thursday to compel Discord to share information that can help identify a Discord user who's reportedly involved in leaking episodes and images from Netflix shows like Arcane and Squid Game.

Documents filed alongside the subpoena specifically call out an unreleased and copyrighted image from the second season of Squid Game, posted by a Discord user @jacejohns4n. In an interview linked on the user's now-deleted X account, published on Telegram, the leaker claimed responsibility for the self-described "worst leak in streaming history," where episodes of Arcane, Heartstopper, Dandadan, Terminator Zero, and other shows were published online. Netflix confirmed in August that a post-production studio was hacked.
AI

Inside the Booming 'AI Pimping' Industry (404media.co) 101

An anonymous reader quotes a report from 404 Media: Instagram is flooded with hundreds of AI-generated influencers who are stealing videos from real models and adult content creators, giving them AI-generated faces, and monetizing their bodies with links to dating sites, Patreon, OnlyFans competitors, and various AI apps. The practice, first reported by 404 Media in April, has since exploded in popularity, showing that Instagram is unable or unwilling to stop the flood of AI-generated content on its platform and protect the human creators on Instagram who say they are now competing with AI content in a way that is impacting their ability to make a living.

According to our review of more than 1,000 AI-generated Instagram accounts, Discord channels where the people who make this content share tips and discuss strategy, and several guides that explain how to make money by "AI pimping," it is now trivially easy to make these accounts and monetize them using an assortment of off-the-shelf AI tools and apps. Some of these apps are hosted on the Apple App and Google Play Stores. Our investigation shows that what was once a niche problem on the platform has industrialized in scale, and it shows what social media may become in the near future: a space where AI-generated content eclipses that of humans. [...]

Out of more than 1,000 AI-generated Instagram influencer accounts we reviewed, 100 included at least some deepfake content which took existing videos, usually from models and adult entertainment performers, and replaced their face with an AI-generated face to make those videos seem like new, original content consistent with the other AI-generated images and videos shared by the AI-generated influencer. The other 900 accounts shared images that in some cases were trained on real photographs and in some cases made to look like celebrities, but were entirely AI-generated, not edited photographs or videos. Out of those 100 accounts that shared deepfake or face-swapped videos, 60 self-identify as being AI-generated, writing in their bios that they are a "virtual model & influencer" or stating "all photos crafted with AI and apps." The other 40 do not include any disclaimer stating that they are AI-generated.

Adult content creators like Elaina St James say they're now directly competing with these AI rip-off accounts that often use stolen content. Since the explosion of AI-generated influencer accounts on Instagram, St James said her "reach went down tremendously," from a typical 1 million to 5 million views a month to not surpassing a million in the last 10 months, and sometimes coming in under 500,000 views. While she said changes to Instagram's algorithm could also be at play, these AI-generated influencer accounts are "probably one of the reasons my views are going down," St James told 404 Media. "It's because I'm competing with something that's unnatural."

Alexios Mantzarlis, the director of the security, trust, and safety initiative at Cornell Tech and formerly principal of trust and safety intelligence at Google, started researching the problem to see where AI-generated content is taking social media and the internet. "It felt like a possible sign of what social media is going to look like in five years," said Mantzarlis. "Because this may be coming to other parts of the internet, not just the attractive-people niche on Instagram. This is probably a sign that it's going to be pretty bad."
Movies

ASWF: the Open Source Foundation Run By the Folks Who Give Out Oscars (theregister.com) 18

This week's Ubuntu Summit 2024 was attended by Lproven (Slashdot reader #6,030). He's also a FOSS correspondent for the Register, where he's filed this report: One of the first full-length sessions was presented by David Morin, executive director of the Academy Software Foundation, introducing his organization in a talk about Open Source Software for Motion Pictures. Morin linked to the Visual Effects Society's VFX/Animation Studio Workstation Linux Report, highlighting the market share pie-chart, showing Rocky Linux 9 at some 58 percent and the RHELatives in general at 90 percent of the market. Ubuntu 22 and 24 — the report's nomenclature, not this vulture's — got just 10.5 percent. We certainly didn't expect to see that at an Ubuntu event, with the latest two versions of Rocky Linux taking 80 percent of the studio workstation market...

What also struck us over the next three quarters of an hour is that Linux and open source in general seem to be huge components of the movie special effects industry — to an extent that we had not previously realized.

There's a "sizzle reel" showing examples of how major motion pictures used OpenColorIO, an open-source production tool for syncing color representations originally developed by Sony Pictures Imageworks. That tool is hosted by a collaboration between the Linux Foundation and the Science and Technology Council of the Academy of Motion Picture Arts and Sciences (the "Academy" of the Academy Awards). The collaboration — which goes by the name of the Academy Software Foundation — hosts 14 different projects. The ASWF hasn't been around all that long — it was only founded in 2018. Despite the impact of the COVID pandemic, by 2022 it had achieved enough to fill a 45-page history called Open Source in Entertainment [PDF]. Morin told the crowd that it runs events, provides project marketing and infrastructure, as well as funding, training and education, and legal assistance. It tries to facilitate industry standards and does open source evangelism in the industry. An impressive list of members — with 17 Premier companies, 16 General ones, and another half a dozen Associate members — shows where some of the money comes from. It's a big list of big names. [Adobe, AMD, AWS, Autodesk...]

The presentation started with OpenVDB, a C++ library developed and donated by DreamWorks for working with three-dimensional voxel-based shapes. (In 2020 they created this sizzle reel, but this year they've unveiled a theme song.) Also featured was OpenEXR, originally developed at Industrial Light and Magic starting in 1999. (The article calls it "a specification and reference implementation of the EXR file format — a losslessly compressed image storage format for moving images at the highest possible dynamic range.")

"For an organization that is not one of the better-known ones in the FOSS space, we came away with the impression that the ASWF is busy," the article concludes. (Besides running Open Source Days and ASWF Dev Days, it also hosts several working groups: the Language Interop Project works on Rust bindings, and the Continuous Integration Working Group on CI tools.) There's generally very little of the old razzle-dazzle in the Linux world, but with the demise of SGI as the primary maker of graphics workstations — its brand now absorbed by Hewlett Packard Enterprise — the visual effects industry moved to Linux, and it's doing amazing things with it. And Kubernetes wasn't even mentioned once.
Privacy

License Plate Readers Are Creating a US-Wide Database of More Than Just Cars (wired.com) 109

Wired reports on "AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers — all while recording the precise locations of these observations..."

The detailed photographs all surfaced in search results produced by the systems of DRN Data, a license-plate-recognition (LPR) company owned by Motorola Solutions. The LPR system can be used by private investigators, repossession agents, and insurance companies; a related Motorola business, called Vigilant, gives cops access to the same LPR data. However, files shared with WIRED by artist Julia Weist, who is documenting restricted datasets as part of her work, show how those with access to the LPR system can search for common phrases or names, such as those of politicians, and be served with photographs where the search term is present, even if it is not displayed on license plates... Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people's personal political views and their homes can be recorded into vast databases that can be queried.

"It really reveals the extent to which surveillance is happening on a mass scale in the quiet streets of America," says Jay Stanley, a senior policy analyst at the American Civil Liberties Union. "That surveillance is not limited just to license plates, but also to a lot of other potentially very revealing information about people."

DRN, in a statement issued to WIRED, said it complies with "all applicable laws and regulations...." Over more than a decade, DRN has amassed more than 15 billion "vehicle sightings" across the United States, and it claims in its marketing materials that it amasses more than 250 million sightings per month. Images in DRN's commercial database are shared with police using its Vigilant system, but images captured by law enforcement are not shared back into the wider database. The system is partly fueled by DRN "affiliates" who install cameras in their vehicles, such as repossession trucks, and capture license plates as they drive around. Each vehicle can have up to four cameras attached to it, capturing images from all angles. These affiliates earn monthly bonuses and can also receive free cameras and search credits...
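DRN's own figures can be cross-checked with a little arithmetic; treating "more than a decade" as exactly 120 months is our simplification:

```javascript
// Sanity-checking DRN's numbers: 15 billion sightings over roughly a
// decade versus a claimed 250 million new sightings per month today.
const totalSightings = 15e9;
const avgPerMonthOverDecade = totalSightings / 120;       // 125 million/month
const claimedPerMonth = 250e6;
const ratio = claimedPerMonth / avgPerMonthOverDecade;    // today's rate is double the ten-year average
const monthsAtCurrentRate = totalSightings / claimedPerMonth; // 60 months (~5 years) to amass 15B at today's pace
```

The gap between the decade-long average and the current rate suggests collection has accelerated substantially over time.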

"License plate recognition (LPR) technology supports public safety and community services, from helping to find abducted children and stolen vehicles to automating toll collection and lowering insurance premiums by mitigating insurance fraud," Jeremiah Wheeler, the president of DRN, says in a statement... Wheeler did not respond to WIRED's questions about whether there are limits on what can be searched in license plate databases, why images of homes with lawn signs but no vehicles in sight appeared in search results, or if filters are used to reduce such images.

Privacy experts shared their reactions with WIRED:
  • "Perhaps [people] want to express themselves in their communities, to their neighbors, but they don't necessarily want to be logged into a nationwide database that's accessible to police authorities." — Jay Stanley, a senior policy analyst at the American Civil Liberties Union
  • "When government or private companies promote license plate readers, they make it sound like the technology is only looking for lawbreakers or people suspected of stealing a car or involved in an Amber Alert, but that's just not how the technology works. The technology collects everyone's data and stores that data often for immense periods of time." — Dave Maass, an EFF director of investigations
  • "The way that the country is set up was to protect citizens from government overreach, but there's not a lot put in place to protect us from private actors who are engaged in business meant to make money." — Nicole McConlogue, associate law professor at Mitchell Hamline School of Law (who has researched license-plate-surveillance systems)

Thanks to long-time Slashdot reader schwit1 for sharing the article.


Facebook

Meta Confirms It Will Use Ray-Ban Smart Glasses Images for AI Training (techcrunch.com) 14

Meta has confirmed that it may use images analyzed by its Ray-Ban Meta AI smart glasses for AI training. The policy applies to users in the United States and Canada who share images with Meta AI, according to the company. While photos captured on the device are not used for training unless submitted to AI, any image shared for analysis falls under different policies, potentially contributing to Meta's AI model development.

Further reading: Meta's Smart Glasses Repurposed For Covert Facial Recognition.
YouTube

YouTube Launches Communities, a Discord-Like Space For Creators and Fans (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch: At its Made On YouTube event on Wednesday, the company announced a new dedicated space for creators to interact with their fans and viewers. The space, called "Communities," is kind of like a Discord server built into a creator's channel. With Communities, YouTube is hoping creators won't need to use other platforms like Discord or Reddit in order to interact with viewers. Communities are a space for viewers to post and interact with other fans directly within a creator's channel. In the past, viewers have been limited to leaving comments on a creator's video. Now, they can share their own content in a creator's Community to interact with other fans over shared interests. For instance, a fitness creator's Community could include posts from fans who are sharing videos and photos from their most recent hike.

To start, the feature is only available to subscribers. The company sees Communities as a dedicated space for conversation and connection, while still allowing creators to maintain control over their content. Conversations in Communities are meant to flow over time, YouTube says, as they would in any other forum-style setting. The new Communities feature shouldn't be confused with YouTube's Community feature, which is a space for creators to share text and images with viewers. The feature launched back in 2016, and doesn't allow viewers to interact with each other. YouTube is testing Communities now on mobile devices with a small group of creators. The company plans to test the feature with more creators later this year before expanding access to additional channels in early 2025.

AI

AI Pioneers Call For Protections Against 'Catastrophic Risks' 69

An anonymous reader quotes a report from the New York Times: Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology. The release of ChatGPT and a string of similar services that can create text and images on command have shown how A.I. is advancing in powerful ways. The race to commercialize the technology has quickly brought it from the fringes of science to smartphones, cars and classrooms, and governments from Washington to Beijing have been forced to figure out how to regulate and harness it. In a statement on Monday, a group of influential A.I. scientists raised concerns that the technology they helped build could cause serious harm. They warned that A.I. technology could, within a matter of years, overtake the capabilities of its makers and that "loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity."

If A.I. systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University. "If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?" Dr. Hadfield said. On Sept. 5-8, Dr. Hadfield joined scientists from around the world in Venice to talk about such a plan. It was the third meeting of the International Dialogues on A.I. Safety, organized by the Safe AI Forum, a project of a nonprofit research group in the United States called Far.AI. Governments need to know what is going on at the research labs and companies working on A.I. systems in their countries, the group said in its statement. And they need a way to communicate about potential risks that does not require companies or researchers to share proprietary information with competitors. The group proposed that countries set up A.I. safety authorities to register the A.I. systems within their borders. Those authorities would then work together to agree on a set of red lines and warning signs, such as if an A.I. system could copy itself or intentionally deceive its creators. This would all be coordinated by an international body.

Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China's top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing. The group also included scientists from several of China's leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.
