AI

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com) 19

A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.

I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.

Among the other "glaring" risks on Moltbook:
  • "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
  • "Moltbook's entire database including bot API keys, and potentially private DMs — was also compromised."

Space

SpaceX Launches New NASA Telescope to Help JWST Study Exoplanets (livescience.com) 13

Last week a University of Arizona astronomy professor "watched anxiously...as an awe-inspiring SpaceX Falcon 9 rocket carried NASA's new exoplanet telescope, Pandora, into orbit."

In 2018 NASA had approached Daniel Apai to help build the telescope, which he says will "shatter a barrier — to understand and remove a source of noise in the data — that limits our ability to study small exoplanets in detail and search for life on them." Astronomers have a trick to study exoplanet atmospheres. By observing the planets as they orbit in front of their host stars, we can study starlight that filters through their atmospheres... But, starting from 2007, astronomers noted that starspots — cooler, active regions on the stars — may disturb the transit measurements. In 2018 and 2019, then-Ph.D. student Benjamin V. Rackham, astrophysicist Mark Giampapa and I published a series of studies showing how darker starspots and brighter, magnetically active stellar regions can seriously mislead exoplanets measurements. We dubbed this problem "the transit light source effect...."

In our papers — published three years before the 2021 launch of the James Webb Space Telescope - we predicted that the Webb cannot reach its full potential. We sounded the alarm bell... Pandora will do what Webb cannot: It will be able to patiently observe stars to understand how their complex atmospheres change.

By staring at a star for 24 hours with visible and infrared cameras, it will measure subtle changes in the star's brightness and colors. When active regions in the star rotate in and out of view, and starspots form, evolve and dissipate, Pandora will record them. While Webb very rarely returns to the same planet in the same instrument configuration and almost never monitors their host stars, Pandora will revisit its target stars 10 times over a year, spending over 200 hours on each of them.

It's the first space telescope "built specifically for detailed multi-color observations of starlight filtered through the atmospheres of exoplanets," reports the Arizona Daily Star, noting the University of Arizona will serve as mission control: [T]echnicians will operate Pandora in real time and monitor its telemetry and overall health under a contract with NASA... The spacecraft will undergo about a month of commissioning before beginning science operations, which are scheduled to last for a year...

Pandora was selected as part of NASA's Astrophysics Pioneers program, which was created in 2020 to foster compelling, relatively low-cost science missions using smaller, cheaper hardware and flight platforms with a price cap of no more than $20 million. By comparison, the Webb telescope — the largest and most powerful astronomical observatory ever sent into space — carries a pricetag of about $10 billion.

Pandora is a joint mission NASA and California's Lawrence Livermore National Laboratory.
Technology

Razer Thinks You'd Rather Have AI Headphones Instead of Glasses (theverge.com) 21

Razer today unveiled Project Motoko, a concept pair of over-ear headphones equipped with dual cameras that the gaming peripherals company believes could serve as an alternative to the smart glasses that have proliferated across the wearable AI market. The headphones feature two 4K cameras positioned on the earcups along with near and far field microphones, all powered by a Qualcomm Snapdragon chip. Users can point the cameras at objects and ask questions to AI assistants including those from OpenAI, Anthropic, xAI and Microsoft.

Basic queries run locally on the device while more complex requests require a phone or PC connection. Razer's pitch centers on battery life: the wireless headset has achieved up to 36 hours on a charge during testing, according to the company, compared to the eight hours rated for Meta's second-generation Ray-Ban AI glasses. The company also argues that over-ear headphones offer more privacy since audio responses aren't audible to bystanders.

The concept remains unfinished, Bloomberg News cautioned. During a product demonstration, the headset's dual cameras failed occasionally to recognize objects even in a moderately lit room. Razer has not committed to final pricing but indicated the headphones would command a "slight premium" over other high-end headphones and would be available later this year. The company's most expensive current headset costs $400.
Botnet

DDoS Botnet Aisuru Blankets US ISPs In Record DDoS (krebsonsecurity.com) 14

An anonymous reader quotes a report from KrebsOnSecurity: The world's largest and most disruptive botnet is now drawing a majority of its firepower from compromised Internet-of-Things (IoT) devices hosted on U.S. Internet providers like AT&T, Comcast and Verizon, new evidence suggests. Experts say the heavy concentration of infected devices at U.S. providers is complicating efforts to limit collateral damage from the botnet's attacks, which shattered previous records this week with a brief traffic flood that clocked in at nearly 30 trillion bits of data per second.

Since its debut more than a year ago, the Aisuru botnet has steadily outcompeted virtually all other IoT-based botnets in the wild, with recent attacks siphoning Internet bandwidth from an estimated 300,000 compromised hosts worldwide. The hacked systems that get subsumed into the botnet are mostly consumer-grade routers, security cameras, digital video recorders and other devices operating with insecure and outdated firmware, and/or factory-default settings. Aisuru's owners are continuously scanning the Internet for these vulnerable devices and enslaving them for use in distributed denial-of-service (DDoS) attacks that can overwhelm targeted servers with crippling amounts of junk traffic.

As Aisuru's size has mushroomed, so has its punch. In May 2025, KrebsOnSecurity was hit with a near-record 6.35 terabits per second (Tbps) attack from Aisuru, which was then the largest assault that Google's DDoS protection service Project Shield had ever mitigated. Days later, Aisuru shattered that record with a data blast in excess of 11 Tbps. By late September, Aisuru was publicly flexing DDoS capabilities topping 22 Tbps. Then on October 6, its operators heaved a whopping 29.6 terabits of junk data packets each second at a targeted host. Hardly anyone noticed because it appears to have been a brief test or demonstration of Aisuru's capabilities: The traffic flood lasted less only a few seconds and was pointed at an Internet server that was specifically designed to measure large-scale DDoS attacks.

Aisuru's overlords aren't just showing off. Their botnet is being blamed for a series of increasingly massive and disruptive attacks. Although recent assaults from Aisuru have targeted mostly ISPs that serve online gaming communities like Minecraft, those digital sieges often result in widespread collateral Internet disruption. For the past several weeks, ISPs hosting some of the Internet's top gaming destinations have been hit with a relentless volley of gargantuan attacks that experts say are well beyond the DDoS mitigation capabilities of most organizations connected to the Internet today.

Robotics

CNN Warns Food Delivery Robots 'Are Not Our Friends' (cnn.com) 49

The food delivery robots that arrived in Atlanta in June "are not our friends," argues a headline at CNN.

The four-wheeled Serve Robotics machines "get confused at crosswalks. They move with the speed and caution of a first-time driver, stilted and shy, until they suddenly speed up without warning. Their four wheels look like they were made for off-roading, but they still get stuck in the cracks of craggy sidewalks. Most times I see the bots, they aren't moving at all... " Cyclists swerve to avoid them like any other obstacle in the road. Patrons of Shake Shack (a national partner of Serve) weave around the mess of robots parked in front of the restaurant to make their way inside and place orders on iPads... The dawn of everyday, "friendly" robots may be here, but they haven't proven themselves useful — or trustworthy — yet. "People think they are your friends, but they're actually cameras and microphones of corporations," said Joanna Bryson, a longtime AI scholar and professor of ethics and technology at the Hertie School in Berlin. "You're right to be nervous..."

When robots show up in a city, it's often not because the residents of said city actively wanted them there or had a say in their arrival said Edward Ongweso Jr. [a researcher at the Security in Context initiative, a tech journalist and self-proclaimed "decelerationist" urging a slower rollout for Silicon Valley tech pioneers and civic leaders embracing untested and unregulated technology]... "They're being rolled out without any sort of input from people, and as a result, in ways that are annoying and inconvenient," Ongweso Jr. said. "I suspect that people would feel a lot differently if they had a choice ... 'what kind of robots are we interested in rolling out in our homes, in our workplaces, on our college campuses or in our communities?'"

Delivery robots aren't unique to Atlanta. AI-driven companies including Avride and Coco Robotics have sent fleets of delivery robots to big cities like Chicago, Dallas and Jersey City, as well as sleepy college towns... "They're popping up everywhere," Ongweso Jr. continued, "because there's sort of a realization that you have to convince people to view them as inevitable. The way to do that is to just push it into as many places as possible, and have these spectacle demonstrations, get some friendly coverage, try to figure out the ways in which you're selling this as the only alternative.... If you humanize it, you're more willing to entertain it and rationalize it being in your area — 'That's just Jeffrey,' or whatever they name it — instead of seeing it for what it is, which is a bunch of investors privately encroaching on a community or workplace," Ongweso Jr. said. "It's not the future. It's a business model."

Serve Robotics CEO Ali Kashani told CNN their goal in Atlanta was reducing traffic — and that the robots' average delivery distance there was under a mile, taking about 18 minutes per delivery.

Serve Robotics has also launched their robots in Chicago, Los Angeles, Miami, Dallas-Fort Worth and Atlanta, according to the site Robotics 247, as part of an ongoing collaboration with Uber Eats. (Although after the robots launched in Los Angeles, a man in a mobility scooter complained the slow-moving robot swerved in front of him.) And "residents of other cities have had to rescue them when they've been felled by weather," reports CNN.

CNN also spoke to Dylan Losey, an assistant professor of mechanical engineering at Virginia Tech who studies human-robot interaction, who notes that the robots' AI algorithms are "completely unregulated... We don't know if a third party has checked the hardware and software and deemed the system 'safe' — in part because what it means for these systems to be 'safe' is not fully understood or standardized." (CNN's reporter adds that "the last time I got close to a bot, to peer down at a flier someone left on top of it, it revved at me loudly. Perhaps they can sense a hater.")

But Serve's CEO says there's one crucial way robot delivery will be cheaper than humans. "You don't have to tip the robots."
Microsoft

Microsoft's Analog Optical Computer Shows AI Promise (microsoft.com) 33

Four years ago a small Microsoft Research team started creating an analog optical computer. They used commercially available parts like sensors from smartphone cameras, optical lenses, and micro-LED lights finer than a human hair. "As the light passes through the sensor at different intensities, the analog optical computer can add and multiply numbers," explains a Microsoft blog post.

They envision the technology scaling to a computer that for certain problems is 100X faster and 100X more energy efficient — running AI workloads "with a fraction of the energy needed and at much greater speed than the GPUs running today's large language models." The results are described in a paper published in the scientific journal Nature, according to the blog post: At the same time, Microsoft is publicly sharing its "optimization solver" algorithm and the "digital twin" it developed so that researchers from other organizations can investigate this new computing paradigm and propose new problems to solve and new ways to solve them. Francesca Parmigiani, a Microsoft principal research manager who leads the team developing the AOC, explained that the digital twin is a computer-based model that mimics how the real analog optical computer [or "AOC"] behaves; it simulates the same inputs, processes and outputs, but in a digital environment — like a software version of the hardware. This allowed the Microsoft researchers and collaborators to solve optimization problems at a scale that would be useful in real situations. This digital twin will also allow other users to experiment with how problems, either in optimization or in AI, would be mapped and run on the analog optical computer hardware. "To have the kind of success we are dreaming about, we need other researchers to be experimenting and thinking about how this hardware can be used," Parmigiani said.

Hitesh Ballani, who directs research on future AI infrastructure at the Microsoft Research lab in Cambridge, U.K. said he believes the AOC could be a game changer. "We have actually delivered on the hard promise that it can make a big difference in two real-world problems in two domains, banking and healthcare," he said. Further, "we opened up a whole new application domain by showing that exactly the same hardware could serve AI models, too." In the healthcare example described in the Nature paper, the researchers used the digital twin to reconstruct MRI scans with a good degree of accuracy. The research indicates that the device could theoretically cut the time it takes to do those scans from 30 minutes to five. In the banking example, the AOC succeeded in resolving a complex optimization test case with a high degree of accuracy...

As researchers refine the AOC, adding more and more micro-LEDs, it could eventually have millions or even more than a billion weights. At the same time, it should get smaller and smaller as parts are miniaturized, researchers say.

AI

Autonomous AI-Guided Black Hawk Helicopter Tested to Fight Wildfires (yahoo.com) 36

Imagine this. Lightning sparks a wildfire, but "within seconds, a satellite dish swirling overhead picks up on the anomaly and triggers an alarm," writes the Los Angeles Times. "An autonomous helicopter takes flight and zooms toward the fire, using sensors to locate the blaze and AI to generate a plan of attack. It measures the wind speed and fire movement, communicating constantly with the unmanned helicopter behind it, and the one behind that. Once over the site, it drops a load of water and soon the flames are smoldering. Without deploying a single human, the fire never grows larger than 10 square feet.

"This is the future of firefighting." On a recent morning in San Bernardino, state and local fire experts gathered for a demonstration of the early iterations of this new reality. An autonomous Sikorski Black Hawk helicopter, powered by technology from Lockheed Martin and a California-based software company called Rain, is on display on the tarmac of a logistics airport in Victorville — the word "EXPERIMENTAL" painted on its military green-black door. It's one of many new tools on the front lines of firefighting technology, which experts say is evolving rapidly as private industry and government agencies come face-to-face with a worsening global climate crisis...

Scientific studies and climate research models have found that the number of extreme fires could increase by as much as 30% globally by 2050. By 2100, California alone could see a 50% increase in wildfire frequency and a 77% increase in average annual acres burned, according to the state's most recent climate report. That's largely because human-caused climate change is driving up temperatures and drying out the landscape, priming it to burn, according to Kate Dargan Marquis, a senior advisor with the Gordon and Betty Moore Foundation who served as California's state fire marshal from 2007 to 2010.... "[T]he policies of today and the technologies of today are not going to serve us tomorrow."

Today, more than 1,100 mountaintop cameras positioned across California are already using artificial intelligence to scan the landscape for the first sign of flames and prompt crews to spring into action. NASA's Earth-observing satellites are studying landscape conditions to help better predict fires before they ignite, while a new global satellite constellation recently launched by Google is helping to detect fires faster than ever before.

One 35-year fire service veteran who consults on fire service technologies even predicts fire-fighting robots will also be used in high-risk situations like the Colossus robot that battled flames searing through Notre-Dame Cathedral in Paris...

And a bill moving through California's legislation "would direct the California Department of Forestry and Fire Protection to establish a pilot program to assess the viability of incorporating autonomous firefighting helicopters in the state."
Sony

NFL Adopts Sony's 'Virtual Measurements' for Football's First Downs (hawkeyeinnovations.com) 39

theodp writes: America's National Football League announced that beginning with the 2025 season, Sony's Hawk-Eye virtual measurement technology will assess and identify first downs after a ball spot.

Sony's Hawk-Eye virtual measurement technology, which consists of six 8K cameras for optical tracking of the position of the ball, is operated from the NFL's "Art McNally GameDay Central Officiating Center" in New York and is integrated with the League's existing replay system. It will serve as an efficient alternative to the process of having a three-person chain crew walk chains onto the field and manually measure whether 10 yards have been met after the official has spotted the ball.

However, the chain crew will remain on the field in a secondary capacity.

The NFL's executive VP of football operations says their move brings "world-class on field officiating with state-of-the-art technology to advance football excellence." (The NFL's announcement notes the whole process takes about 30 seconds, "saving up to 40 seconds from a measurement with the chains.")

The move comes a full seven years after Apple introduced its iPhone Measure app...
Television

Could an Upcoming Apple Smart-Home Tablet Lead to Mobile Robots - and Maybe Even a TV Set? (bloomberg.com) 25

"Here's how Apple's next major product will work," writes Bloomberg's Mark Gurman: The company has been developing a smart home command center that will rival products like the Amazon Echo Hub and Google Nest Hub... The product will run many of Apple's core apps, like Safari, Notes and Calendar, but the interface will be centered on a customizable home screen with iOS-like widgets and smart home controls... The device looks like a low-end iPad and will include a built-in battery, speakers and a FaceTime camera oriented for a horizontal landscape view. The square device, which includes a roughly 6-inch screen, has sensors that let it change the interface depending on how far a user is from the screen. It will also have attachments for walls, plus a base with additional speakers so it can be placed on a table, nightstand or desk.

Apple envisions customers using the device as an intercom, with people FaceTiming each other from different rooms. They'll also be able to pull up home security footage, control their lights, and videoconference with family while cooking in the kitchen. And it will control music throughout the home on HomePod speakers. The device will work with hundreds of HomeKit-compatible items, a lineup that includes third-party switches, lights, fans and other accessories. But the company doesn't plan to roll out a dedicated app store for the product. Given the lack of success with app marketplaces for the Vision Pro, Apple Watch and Apple TV, that's not too surprising.

Looking ahead, the article concludes "The success of this device is still far from assured. Apple's recent track record pushing into new categories has been spotty, and its previous home products haven't been major hits."

But Gurman shares the most interesting part on X.com: If the product does catch on, it will help set the stage for more home devices. Apple is working on a high-end AI companion with a [$1,000] robotic arm and large display that could serve as a follow-up. The company could also put more resources into developing mobile robots, privacy-focused home cameras and speakers. It may even revisit the idea of making an Apple-branded TV set, something it's evaluating. But if the first device fails, Apple may have to rethink its smart home ambitions once again.
Gurman also writes that Apple is also working on a new AirTag with more range and improved privacy features (including "making it more difficult for someone to remove the speaker.")
Government

ProPublica Argues US Police 'Have Undermined the Promise of Body Cameras' (propublica.org) 96

A new investigation from ProPublica argues that in the U.S., "Hundreds of millions in taxpayer dollars have been spent on what was sold as a revolution in transparency and accountability.

"Instead, police departments routinely refuse to release footage..." The technology represented the largest new investment in policing in a generation. Yet without deeper changes, it was a fix bound to fall far short of those hopes. In every city, the police ostensibly report to mayors and other elected officials. But in practice, they have been given wide latitude to run their departments as they wish and to police — and protect — themselves. And so as policymakers rushed to equip the police with cameras, they often failed to grapple with a fundamental question: Who would control the footage?

Instead, they defaulted to leaving police departments, including New York's, with the power to decide what is recorded, who can see it and when. In turn, departments across the country have routinely delayed releasing footage, released only partial or redacted video or refused to release it at all. They have frequently failed to discipline or fire officers when body cameras document abuse and have kept footage from the agencies charged with investigating police misconduct. Even when departments have stated policies of transparency, they don't always follow them. Three years ago, after George Floyd's killing by Minneapolis police officers and amid a wave of protests against police violence, the New York Police Department said it would publish footage of so-called critical incidents "within 30 days." There have been 380 such incidents since then. The department has released footage within a month just twice.

And the department often does not release video at all. There have been 28 shootings of civilians this year by New York officers (through the first week of December). The department has released footage in just seven of these cases (also through the first week of December) and has not done so in any of the last 16.... For a snapshot of disclosure practices across the country, we conducted a review of civilians killed by police officers in June 2022, roughly a decade after the first body cameras were rolled out. We counted 79 killings in which there was body-worn-camera footage. A year and a half later, the police have released footage in just 33 cases — or about 42%.

The reporting reveals that without further intervention from city, state and federal officials and lawmakers, body cameras may do more to serve police interests than those of the public they are sworn to protect... The pattern has become so common across the country — public talk of transparency followed by a deliberate undermining of the stated goal — that the policing-oversight expert Hans Menos, who led Philadelphia's civilian police-oversight board until 2020, coined a term for it: the "body-cam head fake."

The article includes examples where when footage was ultimately released, it contradicted initial police accounts.

In one instance, past footage of Minneapolis police officer Derek Chauvin "was left in the control of a department where impunity reigned..." the article points out, adding that Minneapolis "fought against releasing the videos, even after Chauvin pleaded guilty in December 2021 to federal civil rights violations."
Apple

How Apple's 'Reality Pro' Headset Will Work (9to5mac.com) 66

An anonymous reader quotes a report from 9to5Mac: Apple's first AR/VR headset could be unveiled sometime this spring, and rumors continue to offer more information about what Apple has in the works. A wide-ranging new report from Bloomberg now offers a slew of details on Apple's "Reality Pro" headset, including that the "eye- and hand-tracking capabilities will be a major selling point" for the product. Using external cameras, the headset will be able to analyze the user's hands, while internal sensors will be used to read the user's eyes.

The report explains: "The headset will have several external cameras that can analyze a user's hands, as well as sensors within the gadget's housing to read eyes. That allows the wearer to control the device by looking at an on-screen item -- whether it's a button, app icon or list entry -- to select it. Users will then pinch their thumb and index finger together to activate the task -- without the need to hold anything. The approach differs from other headsets, which typically rely on a hand controller."

More details on the hardware of the headset include that there will be a Digital Crown similar to the Apple Watch for switching between AR and VR. The VR mode will fully immerse the wearer, but when AR mode is enabled the "content fades back and becomes surrounded by the user's real environment." This is reportedly one of the features Apple hopes will be a "highlight of the product." To address overheating concerns, the Reality Pro headset will use an external battery that "rests in a user's pocket and connects over a cable." There will also be a cooling fan to further reduce the likelihood of the headset overheating. "The headset can last about two hours per battery pack," Bloomberg reports. The battery pack is "roughly the size of two iPhone 14 Pro Maxes stacked on top of each other, or about six inches tall and more than half an inch thick."
Another tidbit from the report is that the headset will be able to serve as an external display for Mac. "Users will be able to see their Mac's display in virtual reality but still control the computer with their trackpad or mouse and physical keyboard," reports Bloomberg. Apple is also "developing technology that will let users type in midair with their hands."

Additionally, FaceTime on the headset will "realistically render a user's face and full body in virtual reality."

A team of more than 1,000 people have been reportedly working on the first version of the device for the past seven years. It's slated to cost "roughly $3,000" when it debuts sometime this spring.
Robotics

Robots Are Making French Fries Faster, Better Than Humans (reuters.com) 161

Fast-food French fries and onion rings are going high-tech, thanks to a company in Southern California. From a report: Miso Robotics in Pasadena has started rolling out its Flippy 2 robot, which automates the process of deep frying potatoes, onions and other foods. A big robotic arm like those in auto plants -- directed by cameras and artificial intelligence -- takes frozen French fries and other foods out of a freezer, dips them into hot oil, then deposits the ready-to-serve product into a tray.

Flippy 2 can cook several meals with different recipes simultaneously, reducing the need for catering staff and, says Miso, speed up order delivery at drive-through windows. "When an order comes in through the restaurant system, it automatically spits out the instructions to Flippy," Miso Chief Executive Mike Bell said in an interview. " ... It does it faster or more accurately, more reliably and happier than most humans do it," Bell added.

Privacy

Give Us Your Biometric Data To Get Your Lunch In 5 Seconds, UK Schools Tell Children (theregister.com) 121

An anonymous reader quotes a report from The Register: In North Ayrshire Council, a Scottish authority encompassing the Isle of Arran, nine schools are set to begin processing meal payments for school lunches using facial scanning technology. The authority and the company implementing the technology, CRB Cunninghams, claim the system will help reduce queues and is less likely to spread COVID-19 than card payments and fingerprint scanners, according to the Financial Times. Speaking to the publication, David Swanston, the MD of supplier CRB Cunninghams, said the cameras verify the child's identity against "encrypted faceprint templates," and will be held on servers on-site at the 65 schools that have so far signed up. He added: "In a secondary school you have around about a 25-minute period to serve potentially 1,000 pupils. So we need fast throughput at the point of sale." He told the paper that with the system, the average transaction time was cut to five seconds per pupil. The system has already been piloted in 2020 at Kingsmeadow Community School in Gateshead, England. North Ayrshire council said 97 per cent of parents had given their consent for the new system, although some said they were unsure whether their children had been given enough information to make their decision. Seemingly unaware of the controversy surrounding facial recognition, education solutions provider CRB Cunninghams announced its introduction of the technology in schools in June as the "next step in cashless catering."
Facebook

Facebook Details Experimental Mixed Reality and Passthrough API (uploadvr.com) 4

Facebook shared some details about its experimental Passthrough API to enable new kinds of mixed reality apps for Oculus Quest 2. UploadVR reports: The feature may also serve as the foundation for the company's long-term efforts in augmented reality, effectively turning Quest 2 into a $299 AR developer kit. When asked if the feature is coming to the original Oculus Quest, a Facebook representative replied "today, this is only available for Quest 2." The new feature will be available to Unity developers in an upcoming software development kit release "with support for other development platforms coming in the future." Facebook says apps using the API "cannot access, view, or store images or videos of your physical environment from the Oculus Quest 2 sensors" and raw images from the four on-board cameras "are processed on-device."

The following capabilities will be available with the passthrough API, according to Facebook: "Composition: You can composite Passthrough layers with other VR layers via existing blending techniques like hole punching and alpha blending. Styling: You'll be able to apply styles and tint to layers from a predefined list, including applying a color overlay to the feed, rendering edges, customizing opacity, and posterizing. Custom Geometry: You can render Passthrough images to a custom mesh instead of relying on the default style mesh -- for example, to project Passthrough on a planar surface."

Robotics

Boston Dynamics Is Selling its 70-Pound Robot Dog To Police Departments (yahoo.com) 126

The New York Times reports on what the city's police department calls Digidog, "a 70-pound robotic dog with a loping gait, cameras and lights affixed to its frame, and a two-way communication system that allows the officer maneuvering it remotely to see and hear what is happening." Police said the robot can see in the dark and assess how safe it is for officers to enter an apartment or building where there may be a threat. "The NYPD has been using robots since the 1970s to save lives in hostage situations & hazmat incidents," the department said on Twitter. "This model of robot is being tested to evaluate its capabilities against other models in use by our emergency service unit and bomb squad."

But the robot has skeptics. Rep. Alexandria Ocasio-Cortez, a Democrat, described Digidog on Twitter as a "robotic surveillance ground" drone.... Jay Stanley, a senior policy analyst with the American Civil Liberties Union, said empowering a robot to do police work could have implications for bias, mobile surveillance, hacking and privacy. There is also concern that the robot could be paired with other technology and be weaponized. "We do see a lot of police departments adopting powerful new surveillance and other technology without telling, let alone asking, the communities they serve," he said. "So openness and transparency is key...."

A mobile device that can gather intelligence about a volatile situation remotely has "tremendous potential" to limit injuries and fatalities, said Keith Taylor, a former SWAT team sergeant at the police department who teaches at John Jay College of Criminal Justice. "It's important to question police authority; however, this appears to be pretty straightforward," he said. "It is designed to help law enforcement get the information they need without having a deadly firefight, for instance."

The Times reports that Boston Dynamics has been selling the dog since June. It's also already being used by the Massachusetts State Police and the Honolulu Police Department, "while other police departments have called the company to learn more about the robot, which has a starting price of about $74,000 and may cost more with extra features," according to Michael Perry, vice president of business development at the company.

The Times points out that the robot dog is also being purchased by utility and energy companies as well as manufacturers and construction companies, which use it to get into dangerous spaces. "The robot has been used to inspect sites with hazardous material. Early in the pandemic, it was used by health care workers to communicate with potentially sick patients at hospital triage sites, Perry said."
Cellphones

A New Lens Technology Is Primed To Jump-Start Phone Cameras (wired.com) 50

An anonymous reader quotes a report from Wired: A new company called Metalenz, which emerges from stealth mode today, is looking to disrupt smartphone cameras with a single, flat lens system that utilizes a technology called optical metasurfaces. A camera built around this new lens tech can produce an image of the same if not better quality as traditional lenses, collect more light for brighter photos, and can even enable new forms of sensing in phones, all while taking up less space.

Instead of using plastic and glass lens elements stacked over an image sensor, Metalenz's design uses a single lens built on a glass wafer that is between 1x1 to 3x3 millimeter in size. Look very closely under a microscope and you'll see nanostructures measuring one-thousandth the width of a human hair. Those nanostructures bend light rays in a way that corrects for many of the shortcomings of single-lens camera systems. The core technology was formed through a decade of research when cofounder and CEO Robert Devlin was working on his PhD at Harvard University with acclaimed physicist and Metalenz cofounder Federico Capasso. The company was spun out of the research group in 2017.

Light passes through these patterned nanostructures, which look like millions of circles with differing diameters at the microscopic level. The resulting image quality is just as sharp as what you'd get from a multilens system, and the nanostructures do the job of reducing or eliminating many of the image-degrading aberrations common to traditional cameras. And the design doesn't just conserve space. Devlin says a Metalenz camera can deliver more light back to the image sensor, allowing for brighter and sharper images than what you'd get with traditional lens elements. Another benefit? The company has formed partnerships with two semiconductor leaders (that can currently produce a million Metalenz "chips" a day), meaning the optics are made in the same foundries that manufacture consumer and industrial devices -- an important step in simplifying the supply chain. Metalenz will go into mass production toward the end of the year. Its first application will be to serve as the lens system of a 3D sensor in a smartphone. (The company did not give the name of the phone maker.)

Space

Capella Space Defends High-Resolution Satellite Photos Described as 'Eerily Observant' (inputmag.com) 79

"A new satellite from Capella Space was described as "pretty creepy" by Bustle's technology site Input: Like other hunks of metal currently orbiting Earth, the Capella-2 satellite's onboard radar system makes it capable of producing ludicrously high-resolution visuals from its data. More unconventional is the service Capella has launched to match: the government or private customers can, at any time, request a view of anything on the planet that's visible from the sky...

The Capella-2's system of cameras and sensors is nothing short of magnificent. The satellite uses something called Synthetic Aperture Radar (SAR), a technology used by NASA since the 1970s, to detect the Earth's surface through even the densest of clouds. SAR sends a 9.65 GHz radio signal toward the Earth and interprets the signal as it returns, using that data to form a visual... The Capella-2 is now the highest-resolution commercial SAR satellite in the world, capable of 50 cm x 50 cm resolution imaging. Other satellites are only capable of resolution up to about five meters....

Once Capella's full squadron of satellites is airborne, the company will have the ability to quickly snap views of just about any place in the world. That power could quickly be abused if left unchecked.

The article notes Capella already has a contract with the U.S. Air Force, adding "It's not much of a stretch to imagine high-resolution SAR technology turning into a tool for national surveillance...

"Right now there's just one Capella-2 satellite roaming around in the atmosphere, so that functionality is somewhat limited. Capella plans to launch six additional satellites with similar capabilities in the next year."

In response on Friday Capella Space penned a blog post reminding readers that their satellite "does not see through buildings," and that at 50-centimeter resolution "What it cannot do...is see people, license plates or reveal any personally identifiable information. Unlike other technologies that have recently been under scrutiny for privacy infringement such as cell phone geolocation data or automatic license plate readers, SAR imaging specializes in a macro view of the world to see the general patterns of life.

"Our company was founded on the belief that technology in space can significantly benefit life on Earth, and invading privacy does not help that mission. Part of that also means thoroughly vetting our customers and partners to ensure they will use our information for ethical purposes."
AI

Software Could Help Reform Policing -- If Only Police Unions Wanted It (fastcompany.com) 258

tedlistens writes: The CEO of Taser maker Axon, Rick Smith, has a lot of high-tech ideas for fixing policing. One idea for identifying potentially abusive behavior is AI, integrated with the company's increasingly ubiquitous body cameras and the footage they produce. In a patent application filed last month, Axon describes the ability to search video not only for words and locations but also for clothing, weapons, buildings, and other objects. AI could also tag footage to enable searches for things such as "the characteristics [of] the sounds or words of the audio," including "the volume (e.g., intensity), tone (e.g., menacing, threatening, helpful, kind), frequency range, or emotions (e.g., anger, elation) of a word or a sound."

Building that kind of software is a difficult task, and in the realm of law enforcement, one with particularly high stakes. But Smith also faces a more low-tech challenge, he tells Fast Company: making his ideas acceptable both to intransigent police unions and to the communities those police serve. Of course, right now many of those communities aren't calling for more technology for their police but for deep reform, if not deep budget cuts. And police officers aren't exactly clamoring for more scrutiny, especially if it's being done by a computer.

AI

Pool Owners Take Up AI To Prevent Drownings (wsj.com) 42

Homeowners and pool operators are turning to artificial intelligence for an extra layer of safety to prevent drownings in backyard and public pools. From a report: The detection systems, which use submerged cameras and a form of AI known as computer vision, analyze live videos of swimmers and send alerts if they spot a person who appears to be drowning. Jenny Naggatz, 33, of Gulf Breeze, Fla., installed an AI device from technology company Coral Detection Systems in her family's pool to safeguard her two children, both of whom are under 4. Coral Detection's triangle-shaped device sits in the corner of a pool with an attached camera hanging a few inches below the water surface. "It has definitely given me more peace of mind," Ms. Naggatz said. "I'm just as careful around the water as I would be without it, but it's just another layer of protection."

The safety of young children around swimming pools remains a cause for concern, according to a U.S. Consumer Product Safety Commission report released last week. On average, 379 children under 15 drowned each year in pools, spas or hot tubs from 2015 through 2017, the most recent statistics available, and hit a peak of 395 in 2017, the commission said. Noting that most child drownings occur at home during the summer months, the commission urged caution given that Covid-19 measures had confined more families to their homes and delayed the opening of public pools. AI drowning-detection products are not intended to replace adult supervision or lifeguards, but rather to serve as an extra safeguard.

Privacy

30,000 Unsuspecting Rose Bowl Attendees Were Scooped Up in a Facial Recognition Test (medium.com) 35

On New Year's Day 2020, more than 90,000 college football fans piled into the Rose Bowl Stadium in Pasadena, California, to watch the Oregon Ducks play the Wisconsin Badgers. It turns out some of those fans were being watched, too. From a report: Before they even entered the stadium, thousands of attendees were being captured by a facial recognition system in the Rose Bowl's FanFest activity area by an ad tech company called VSBLTY. Four cameras hidden underneath digital signs captured data on attendees, generating 30,000 points of data on how long they looked at advertisements, their gender and age, and an analysis to try and identify weapons or whether they were on a watch list of suspicious persons. Three fans who attended the Rose Bowl game and spoke to OneZero said they didn't remember seeing any notice that they were being surveilled.

[...] The data gathering and surveillance operation has not been reported in the mainstream press before and was revealed after VSBLTY issued a press release of its findings. Neither VSBLTY nor the Rose Bowl Stadium responded to multiple requests for comment or questions about how data was gathered, whether fans were informed, and where the watch list of suspicious persons came from. "Facts about fans, their habits and actions -- in addition to demographic and psychographic information -- will help plan audience activities as well as serve as a tool to validate the value of on-site advertising impressions to sponsors," wrote Jay Hutton, VSBLTY's CEO. VSBLTY is a small, Philadelphia-based company that anticipates generating $15 million to $20 million in revenue in 2020, according to a company slide deck targeted at investors reviewed by OneZero. The company has fewer than 50 employees according to LinkedIn data. Despite its relatively small size, the company has contracts around the world, including conducting real-time facial recognition in Mexico City through a partnership with intelligent lighting company Energetika.

Slashdot Top Deals