Privacy

Is a Backlash Building Against Smart Glasses That Record? (futurism.com) 68

Remember those Harvard dropouts who built smart glasses for covert facial recognition — and then raised $1 million to develop AI-powered glasses that continuously listen to conversations and display AI-generated insights?

"People Are REALLY Mad," writes Futurism, noting that some social media users "have responded with horror and outrage." One of its selling points is that the specs don't come with a visual indicator that lights up to let people know when they're being recorded, which is a feature that Meta's smart glasses do currently have. "People don't want this," wrote Whitney Merill, a privacy lawyer. "Wanting this is not normal. It's weird...."

[S]ome mocked the deleterious effects this could have on our already smartphone-addicted, brainrotted cerebrums. "I look forward to professional conversations with people who just read robot fever dream hallucinations at me in response to my technical and policy questions," one user mused.

The co-founder of the company told TechCrunch their glasses would be the "first real step towards vibe thinking."

But there are already millions of other smart glasses out in the world, and they're now drawing a backlash, reports the Washington Post, citing the millions of people viewing "a stream of other critical videos" about Meta's smart glasses.

The article argues that Generation Z, "who grew up in an internet era defined by poor personal privacy, are at the forefront of a new backlash against smart glasses' intrusion into everyday life..." Opal Nelson, a 22-year-old in New York, said the more she learns about smart glasses, the angrier she becomes. Meta Ray-Bans have a light that turns on when the gadget is recording video, but she said it doesn't seem to protect people from being recorded without consent... "And now there's more and more tutorials showing people how to cover up the [warning light] and still allow you to record," Nelson said. In one such tutorial with more than 900,000 views, a man claims to explain how to cover the warning light on Meta Ray-Bans without triggering the sensor that prevents the device from secretly recording.
One 26-year-old attracted 10 million views to their video on TikTok about the spread of Meta's photography-capable smart glasses. "People specifically in my generation are pretty concerned about the future of technology," they told the Post, "and what that means for all of us and our privacy."

The article cites figures from a devices analyst at IDC who estimates U.S. sales for Meta Ray-Bans will hit 4 million units by the end of 2025, compared to 1.2 million in 2024.
AI

AI Is Reshaping Hacking. No One Agrees How Fast (axios.com) 18

"Several cybersecurity companies debuted advancements in AI agents at the Black Hat conference last week," reports Axios, "signaling that cyber defenders could soon have the tools to catch up to adversarial hackers." - Microsoft shared details about a prototype for a new agent that can automatically detect malware — although it's able to detect only 24% of malicious files as of now.

- Trend Micro released new AI-driven "digital twin" capabilities that let companies simulate real-world cyber threats in a safe environment walled off from their actual systems.

- Several companies and research teams also publicly released open-source tools that can automatically identify and patch vulnerabilities as part of the government-backed AI Cyber Challenge.

Yes, but: Threat actors are now using those AI-enabled tools to speed up reconnaissance and dream up brand-new attack vectors for targeting each company, John Watters, CEO of iCounter and a former Mandiant executive, told Axios.

The article notes "two competing narratives about how AI is transforming the threat landscape." One says defenders still have the upper hand. Cybercriminals lack the money and computing resources to build out AI-powered tools, and large language models have clear limitations in their ability to carry out offensive strikes. This leaves defenders with time to tap AI's potential for themselves. [In a DEF CON presentation a member of Anthropic's red team said its Claude AI model will "soon" be able to perform at the level of a senior security researcher, the article notes later]

Then there's the darker view. Cybercriminals are already leaning on open-source LLMs to build tools that can scan internet-connected devices to see if they have vulnerabilities, discover zero-day bugs, and write malware. They're only going to get better, and quickly...

Right now, models aren't the best at making human-like judgments, such as recognizing when legitimate tools are being abused for malicious purposes. And running a series of AI agents will require cybercriminals and nation-states to have enough resources to pay the cloud bills they rack up, Michael Sikorski, CTO of Palo Alto Networks' Unit 42 threat research team, told Axios. But LLMs are improving rapidly. Sikorski predicts that malicious hackers will use a victim organization's own AI agents to launch an attack after breaking into their infrastructure.

Transportation

Norway Reached 96.9% Market Share For EVs In June (mobilityportal.eu) 250

Electric vehicles claimed a dominant 96.9% market share in Norway in June 2025, with the Tesla Model Y alone accounting for over 27% of all new car registrations. Mobility Portal Europe reports: According to the Norwegian Public Roads Administration (OFV), 17,799 new electric cars were registered in Norway in June out of a total of 18,376 new registrations, giving electric vehicles (EVs) a market share of 96.9%. Compared to June 2024 -- when EVs made up 80% of all new registrations -- EV registrations increased by 3,790 units. In addition, in May 2025, Norway recorded 4,415 new EV registrations.

Last month, only 577 new registrations were for vehicles without fully electric drive systems. Among these were 152 plug-in hybrids (an 83.7% drop compared to June 2024) and 223 other types of hybrids (an 89.1% decline). Over the year, hybrids lost market share, falling from 17% to 2%. Pure combustion engines also further reduced their market presence: 142 new diesel vehicles represented 0.8% of the market share, down from 2% a year earlier, and 57 new petrol vehicles made up 0.3% of the market, compared to 1% in June 2024.
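The reported percentages are internally consistent, as a quick check against the OFV totals above shows:

```python
# Sanity-check the OFV percentages for June 2025.
total = 18_376            # all new registrations
evs = 17_799              # battery-electric registrations
diesel, petrol = 142, 57  # pure combustion registrations

ev_share = round(100 * evs / total, 1)         # EV market share in percent
non_ev = total - evs                           # everything without a fully electric drivetrain
diesel_share = round(100 * diesel / total, 1)
petrol_share = round(100 * petrol / total, 1)

print(ev_share, non_ev, diesel_share, petrol_share)  # 96.9 577 0.8 0.3
```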
"Several campaigns with 0% or very low interest rates on new car purchases significantly boosted sales. The first interest rate cut by Norges Bank helped ensure that many people bought their dream car," said Oyvind Solberg Thorsen, Director of OFV.

"It remained to be seen whether Tesla could maintain its strong position, and for how long."
Space

'Space Is Hard. There Is No Excuse For Pretending It's Easy' (spacenews.com) 163

"For-profit companies are pushing the narrative that they can do space inexpensively," writes Slashdot reader RUs1729 in response to an opinion piece from SpaceNews. "Their track record reveals otherwise: cutting corners won't do it for the foreseeable future." Here's an excerpt from the article, written by Robert N. Eberhart: The headlines in the space industry over the past month have delivered a sobering reminder: space is not forgiving, and certainly not friendly to overpromising entrepreneurs. From iSpace's second failed lunar landing attempt (making them 0 for 2) to SpaceX's ongoing Starship test flight setbacks -- amid a backdrop of exploding prototypes and shifting goalposts -- the evidence is mounting that the commercialization of space is not progressing in the triumphant arc that press releases might suggest. This isn't just a series of flukes. It points to a structural, strategic and cultural problem in how we talk about innovation, cost and success in space today.

Let's be blunt: 50 years ago, we did this. We sent humans to the moon, not once but repeatedly, and brought them back. With less computational power than your phone, using analog systems and slide rules, we achieved feats of incredible precision, reliability and coordination. Today's failures, even when dressed up as "learning opportunities," raise the obvious question: Why are we struggling to do now what we once achieved decades ago with far more complexity and far less technology?

Until very recently, the failure rate of private lunar exploration efforts underscored this reality. Over the past two decades, not a single private mission had fully succeeded -- until last March, when Firefly Aerospace's Blue Ghost lander touched down on the moon. It marked the first fully successful soft landing by a private company. That mission deserves real credit. But that credit comes with important context: It took two decades of false starts, crashes and incomplete landings -- from SpaceIL's Beresheet to iSpace's Hakuto-R and Astrobotic's Peregrine -- before even one private firm delivered on the promise of lunar access. The prevailing industry answer -- "we need to innovate for lower cost" -- rings hollow. What's happening now isn't innovation; it's aspiration masquerading as disruption...
"This is not a call for a retreat to Cold War models or Apollo-era budgets," writes Eberhart, in closing. "It's a call for seriousness. If we're truly entering a new space age, then it needs to be built on sound engineering, transparent economics and meaningful technical leadership -- not PR strategy. Let's stop pretending that burning money in orbit is a business model."

"The dream of a sustainable, entrepreneurial space ecosystem is still alive. But it won't happen unless we stop celebrating hype and start demanding results. Until then, the real innovation we need is not in spacecraft -- it's in accountability."

Robert N. Eberhart, PhD, is an associate professor of management and the faculty director of the Ahlers Center for International Business at the Knauss School of Business at the University of San Diego. He is the author of several academic publications and books. He is also part of Oxford University's Smart Space Initiative and contributed to Berkeley's Space Sciences Laboratory. Before his academic career, Prof. Eberhart founded and ran a successful company in Japan.
Microsoft

Microsoft Releases Classic MS-DOS Editor For Linux (arstechnica.com) 74

Microsoft has released a modern, open-source version of its classic MS-DOS Editor -- built with Rust and compatible with Windows, macOS, and Linux. It's now simply called "Edit." Ars Technica reports: Aside from ease of use, Microsoft's main reason for creating the new version of Edit stems from a peculiar gap in modern Windows. "What motivated us to build Edit was the need for a default CLI text editor in 64-bit versions of Windows," writes [Christopher Nguyen, a product manager on Microsoft's Windows Terminal team] while referring to the command-line interface, or CLI. "32-bit versions of Windows ship with the MS-DOS editor, but 64-bit versions do not have a CLI editor installed inbox." [...]

Linux users can download Edit from the project's GitHub releases page or install it through an unofficial snap package. Oh, and if you're a fan of the vintage editor and crave a 16-bit text-mode version for a retro machine that actually runs MS-DOS, you can download a copy on the Internet Archive. [...]

At 250KB, the new Edit maintains the lightweight philosophy of its predecessor while adding features the original couldn't dream of: Unicode support, regular expressions, and the ability to handle gigabyte-sized files. The original editor was limited to files smaller than 300KB depending on available conventional memory -- a constraint that seems quaint in an era of terabyte storage. The web publication OMG! Ubuntu found that the modern Edit "works great on Ubuntu" and noted its speed when handling gigabyte-sized documents.

Youtube

Google's Frighteningly Good Veo 3 AI Videos To Be Integrated With YouTube Shorts (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: YouTube CEO Neal Mohan has announced that the Google Veo 3 AI video generator will be integrated with YouTube Shorts later this summer. According to Mohan, YouTube Shorts has seen a rise in popularity even compared to YouTube as a whole. The streaming platform is now the most watched source of video in the world, but Shorts specifically have seen a massive 186 percent increase in viewership over the past year. Mohan says Shorts now average 200 billion daily views.

YouTube has already equipped creators with a few AI tools, including Dream Screen, which can produce AI video backgrounds with a text prompt. Veo 3 support will be a significant upgrade, though. At the Cannes festival, Mohan revealed that the streaming site will begin offering integration with Google's leading video model later this summer. "I believe these tools will open new creative lanes for everyone to explore," said Mohan. [...]

While you can add Veo 3 videos (or any video) to a YouTube Short right now, they don't fit with the format's portrait orientation focus. Veo 3 outputs 720p landscape videos, meaning you'd have black bars in a Short. Presumably, Google will create a custom version of the model for YouTube to spit out vertical video clips. Mohan didn't mention a pricing model, but Veo 3 probably won't be cheap for Shorts creators. Currently, you must pay for Google's $250 AI Ultra plan to access Veo 3, and that still limits you to 125 8-second videos per month.
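The black-bars claim follows from simple geometry. The portrait frame size below is an illustrative assumption, not a YouTube specification:

```python
# Fitting Veo 3's 16:9 landscape output into a 9:16 portrait frame.
src_w, src_h = 1280, 720       # 720p landscape clip
frame_w, frame_h = 1080, 1920  # a typical portrait Short frame (assumed)

scale = frame_w / src_w           # scale factor to fit the frame's width
scaled_h = src_h * scale          # clip height after scaling
bar_h = (frame_h - scaled_h) / 2  # letterbox bar above and below

print(scaled_h, bar_h)  # 607.5 656.25
```

The scaled clip fills under a third of the portrait frame's height, which is why a vertical-native version of the model would matter for Shorts.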

Supercomputing

IBM Says It's Cracked Quantum Error Correction (ieee.org) 26

Edd Gent reporting for IEEE Spectrum: IBM has unveiled a new quantum computing architecture it says will slash the number of qubits required for error correction. The advance will underpin its goal of building a large-scale, fault-tolerant quantum computer, called Starling, that will be available to customers by 2029. Because of the inherent unreliability of the qubits (the quantum equivalent of bits) that quantum computers are built from, error correction will be crucial for building reliable, large-scale devices. Error-correction approaches spread each unit of information across many physical qubits to create "logical qubits." This provides redundancy against errors in individual physical qubits.

One of the most popular approaches is known as a surface code, which requires roughly 1,000 physical qubits to make up one logical qubit. This was the approach IBM focused on initially, but the company eventually realized that creating the hardware to support it was an "engineering pipe dream," Jay Gambetta, the vice president of IBM Quantum, said in a press briefing. Around 2019, the company began to investigate alternatives. In a paper published in Nature last year, IBM researchers outlined a new error-correction scheme called quantum low-density parity check (qLDPC) codes that would require roughly one-tenth of the number of qubits that surface codes need. Now, the company has unveiled a new quantum-computing architecture that can realize this new approach.
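The scale of the saving follows directly from the article's figures. The 200-logical-qubit machine below is a hypothetical size chosen for illustration, not an IBM specification:

```python
# Physical-qubit budgets implied by the overheads in the article:
# ~1,000 physical qubits per logical qubit for a surface code, and
# roughly one-tenth of that for IBM's qLDPC codes.
SURFACE_PER_LOGICAL = 1_000
QLDPC_PER_LOGICAL = 100  # ~1/10th of the surface-code overhead

def physical_qubits(logical: int, per_logical: int) -> int:
    """Rough physical-qubit count for a given number of logical qubits."""
    return logical * per_logical

logical = 200  # hypothetical machine size
surface_cost = physical_qubits(logical, SURFACE_PER_LOGICAL)  # 200,000
qldpc_cost = physical_qubits(logical, QLDPC_PER_LOGICAL)      # 20,000
```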
"We've cracked the code to quantum error correction and it's our plan to build the first large-scale, fault-tolerant quantum computer," said Gambetta, who is also an IBM Fellow. "We feel confident it is now a question of engineering to build these machines, rather than science."
Social Networks

Bluesky's Decline Stems From Never Hearing From the Other Side (washingtonpost.com) 183

Bluesky's user engagement has fallen roughly 50% since peaking in mid-November, according to a recent Pew Research Center analysis, as progressive groups' efforts to migrate users from Elon Musk's X platform show signs of failure. The research found that while many news influencers maintain Bluesky accounts, two-thirds post irregularly compared to more than 80% who still post daily to X. A Washington Post columnist tries to make sense of it: The people who have migrated to Bluesky tend to be those who feel the most visceral disgust for Musk and Trump, plus a smattering of those who are merely curious and another smattering who are tired of the AI slop and unregenerate racism that increasingly pollutes their X feeds. Because the Musk and Trump haters are the largest and most passionate group, the result is something of an echo chamber where it's hard to get positive engagement unless you're saying things progressives want to hear -- and where the negative engagement on things they don't want to hear can be intense. That's true even for content that isn't obviously political: Ethan Mollick, a professor at the University of Pennsylvania's Wharton School who studies AI, recently announced that he'll be limiting his Bluesky posting because AI discussions on the platform are too "fraught."

All this is pretty off-putting for folks who aren't already rather progressive, and that creates a threefold problem for the ones who dream of getting the old band back together. Most obviously, it makes it hard for the platform to build a large enough userbase for the company to become financially self-sustaining, or for liberals to amass the influence they wielded on old Twitter. There, they accumulated power by shaping the contours of a conversation that included a lot of non-progressives. On Bluesky, they're mostly talking among themselves.

Television

'King of the Hill' (and Dale Gribble) Return To TV After 15 Years (cinemablend.com) 40

Mike Judge always seemed to have secret geek sympathies. He co-created the HBO series Silicon Valley, as well as the movie Office Space (reviewed in 1999 by Slashdot contributor Jon Katz).

Now comes the word that besides rebooting Buffy the Vampire Slayer — and an animated scifi/action/horror film called Predator: Killer of Killers — Hulu is also relaunching Judge's animated series King of the Hill on August 4th. And Cinemablend notes they took great pains to ensure the inclusion of internet-loving neighbor Dale Gribble despite the death of voice actor Johnny Hardwick: Co-creators Mike Judge and Greg Daniels joined the cast of returning voice actors for a revealing Q&A at ATX Fest while also revealing longtime cast member Toby Huss took over the role of Dale Gribble... Hardwick passed away in August 2023 at 64, with fans and co-stars paying tribute soon after. It was revealed at the time that he'd recorded some audio for the new season, but it was clear that another actor would be needed to fill those intimidating and conspiracy-obsessed shoes. Among other characters, Huss provided the voice of Cotton Hill and Kahn Sr. in the O.G. run, and feels to me like a natural fit to take over as Dale. And he sounds humbled to have been given the task, telling the ATX Fest crowd:

"Johnny was one-of-a-kind and a wonderful fellow. I'm not trying to copy Johnny...I guess I'm trying to be Johnny. He laid down a really wonderful goofball character...he had a lot of weird heart to him and that's a credit to Johnny. So all I'm trying to do is hold on to his Dale-ness. We love our guy Johnny and it's so sad that he's not here...."

I can already hear Dale himself questioning why he sounds different, and whether or not the government has replaced him with a lizard creature or some other sentient organism... In the immediate aftermath of Johnny Hardwick's death, the word was that the actor had filmed a couple of episodes' worth of material for the Hulu revival, but Mike Judge went on the record at ATX Fest to reveal that initial assessment undershot things entirely. From the voice of Hank Hill himself: "Johnny Hardwick is in six episodes. He's still going to be in the show."

Hulu uploaded the new opening credits to YouTube eight days ago — and it's already been viewed 2.1 million times, attracting 55,000 upvotes and 7,952 comments...

Long-time Slashdot reader theodp shared the official blurb describing the new show: After years working a propane job in Saudi Arabia to earn their retirement nest egg, Hank and Peggy Hill return to a changed Arlen, Texas to reconnect with old friends Dale, Boomhauer and Bill. Meanwhile, Bobby is living his dream as a chef in Dallas and enjoying his 20s with his former classmates Connie, Joseph and Chane.
AI

Is the AI Job Apocalypse Already Here for Some Recent Grads? (msn.com) 117

"This month, millions of young people will graduate from college," reports the New York Times, "and look for work in industries that have little use for their skills, view them as expensive and expendable, and are rapidly phasing out their jobs in favor of artificial intelligence." That is the troubling conclusion of my conversations over the past several months with economists, corporate executives and young job seekers, many of whom pointed to an emerging crisis for entry-level workers that appears to be fueled, at least in part, by rapid advances in AI capabilities.

You can see hints of this in the economic data. Unemployment for recent college graduates has jumped to an unusually high 5.8% in recent months, and the Federal Reserve Bank of New York recently warned that the employment situation for these workers had "deteriorated noticeably." Oxford Economics, a research firm that studies labor markets, found that unemployment for recent graduates was heavily concentrated in technical fields like finance and computer science, where AI has made faster gains. "There are signs that entry-level positions are being displaced by artificial intelligence at higher rates," the firm wrote in a recent report.

But I'm convinced that what's showing up in the economic data is only the tip of the iceberg. In interview after interview, I'm hearing that firms are making rapid progress toward automating entry-level work and that AI companies are racing to build "virtual workers" that can replace junior employees at a fraction of the cost. Corporate attitudes toward automation are changing, too — some firms have encouraged managers to become "AI-first," testing whether a given task can be done by AI before hiring a human to do it. One tech executive recently told me his company had stopped hiring anything below an L5 software engineer — a midlevel title typically given to programmers with three to seven years of experience — because lower-level tasks could now be done by AI coding tools. Another told me that his startup now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company...

"This is something I'm hearing about left and right," said Molly Kinder, a fellow at the Brookings Institution, a public policy think tank, who studies the impact of AI on workers. "Employers are saying, 'These tools are so good that I no longer need marketing analysts, finance analysts and research assistants.'" Using AI to automate white-collar jobs has been a dream among executives for years. (I heard them fantasizing about it in Davos back in 2019.) But until recently, the technology simply wasn't good enough...

AI

Google Tries Funding Short Films Showing 'Less Nightmarish' Visions of AI (yahoo.com) 74

"For decades, Hollywood directors including Stanley Kubrick, James Cameron and Alex Garland have cast AI as a villain that can turn into a killing machine," writes the Los Angeles Times. "Even Steven Spielberg's relatively hopeful A.I.: Artificial Intelligence had a pessimistic edge to its vision of the future."

But now "Google — a leading developer in AI technology — wants to move the cultural conversations away from the technology as seen in The Terminator, 2001: A Space Odyssey and Ex Machina.". So they're funding short films "that portray the technology in a less nightmarish light," produced by Range Media Partners (which represents many writers and actors) So far, two short films have been greenlit through the project: One, titled "Sweetwater," tells the story of a man who visits his childhood home and discovers a hologram of his dead celebrity mother. Michael Keaton will direct and appear in the film, which was written by his son, Sean Douglas. It is the first project they are working on together. The other, "Lucid," examines a couple who want to escape their suffocating reality and risk everything on a device that allows them to share the same dream....

Google has much riding on convincing consumers that AI can be a force for good, or at least not evil. The hot space is increasingly crowded with startups and established players such as OpenAI, Anthropic, Apple and Facebook parent company Meta. The Google-funded shorts, which are 15 to 20 minutes long, aren't commercials for AI, per se. Rather, Google is looking to fund films that explore the intersection of humanity and technology, said Mira Lane, vice president of technology and society at Google. Google is not pushing their products in the movies, and the films are not made with AI, she added... The company said it wants to fund many more movies, but it does not have a target number. Some of the shorts could eventually become full-length features, Google said....

Negative public perceptions about AI could put tech companies at a disadvantage when such cases go before juries of laypeople. That's one reason why firms are motivated to make over AI's reputation. "There's an incredible amount of skepticism in the public world about what AI is and what AI will do in the future," said Sean Pak, an intellectual property lawyer at Quinn Emanuel, on a conference panel. "We, as an industry, have to do a better job of communicating the public benefits and explaining in simple, clear language what it is that we're doing and what it is that we're not doing."

Java

Java Turns 30 (theregister.com) 100

Richard Speed writes via The Register: It was 30 years ago when the first public release of the Java programming language introduced the world to Write Once, Run Anywhere -- and showed devs something cuddlier than C and C++. Originally called "Oak," Java was designed in the early 1990s by James Gosling at Sun Microsystems. Initially aimed at digital devices, its focus soon shifted to another platform that was pretty new at the time -- the World Wide Web.

The language, which has some similarities to C and C++, usually compiles to a bytecode that can, in theory, run on any Java Virtual Machine (JVM). The intention was to allow programmers to Write Once Run Anywhere (WORA) although subtle differences in JVM implementations meant that dream didn't always play out in reality. This reporter once worked with a witty colleague who described the system as Write Once Test Everywhere, as yet another unexpected wrinkle in a JVM caused their application to behave unpredictably. However, the language soon became wildly popular, rapidly becoming the backbone of many enterprises. [...]
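The WORA idea the article describes is simple to state in code. This is a minimal sketch of our own (the class name `HelloJvm` is invented for illustration): `javac` compiles the source once into platform-neutral bytecode, and the identical `.class` file runs on any conforming JVM.

```java
// Compiled once with `javac HelloJvm.java`, the resulting HelloJvm.class
// runs unchanged on any conforming JVM, whatever the host OS.
public class HelloJvm {
    static String greeting() {
        // os.name differs per host; the bytecode calling it does not.
        return "Hello from the JVM on " + System.getProperty("os.name");
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```

The "Write Once Test Everywhere" quip arises precisely because calls like `System.getProperty` (and subtler things: file paths, fonts, thread timing) surface host differences that the bytecode itself abstracts away.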

However, the platform's ubiquity has meant that alternatives exist to Oracle Java, and the language's popularity is undiminished by so-called "predatory licensing tactics." Over 30 years, Java has moved from an upstart new language to something enterprises have come to depend on. Yes, it may not have the shiny baubles demanded by the AI applications of today, but it continues to be the foundation for much of today's modern software development. A thriving ecosystem and a vast community of enthusiasts mean that Java remains more than relevant as it heads into its fourth decade.

AI

Why We're Unlikely to Get Artificial General Intelligence Any Time Soon (msn.com) 261

OpenAI CEO Sam Altman believes Artificial General Intelligence could arrive within the next few years. But the speculations of some technologists "are getting ahead of reality," writes the New York Times, adding that many scientists "say no one will reach AGI without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it." "The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.

Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion.... And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.

Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."

While Google's AlphaGo could beat humans in a game with "a small, limited set of rules," the article points out that the real world "is bounded only by the laws of physics. Modelling the entirety of the real world is well beyond today's machines, so how can anyone be sure that AGI — let alone superintelligence — is just around the corner?" And they offer this alternative perspective from Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice, Italy.

"AI needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives."
Businesses

Why Two Amazon Drones Crashed at a Test Facility in December (msn.com) 39

While Amazon won FAA approval to fly beyond an operator's visual line of sight, "the program remains a work in progress," reports Bloomberg: A pair of Amazon.com Inc. package delivery drones were flying through a light rain in mid-December when, within minutes of one another, they both committed robot suicide... [S]ome 217 feet (66 meters) in the air [at a drone testing facility], the aircraft cut power to its six propellers, fell to the ground and was destroyed. Four minutes later and 183 feet over the taxiway, a second Prime Air drone did the same thing.

Not long after the incidents, Amazon paused its experimental drone flights to tweak the aircraft software but said the crashes weren't the "primary reason" for halting the program. Now, five months after the twin crashes, a more detailed explanation of what happened is starting to emerge. Faulty readings from lidar sensors made the drones think they had landed, prompting the software to shut down the propellers, according to National Transportation Safety Board documents reviewed by Bloomberg. The sensors failed after a software update made them more susceptible to being confused by rain, the NTSB said.

Amazon also removed a backup sensor that had been present on earlier iterations, according to the article — though an Amazon spokesperson said the company had found ways to replicate the removed sensors.

But Bloomberg notes Amazon's drone efforts have faced "technical challenges and crashes, including one in 2021 that set a field ablaze at the company's testing facility in Pendleton, Oregon." Deliveries are currently limited to College Station, Texas, and greater Phoenix, with plans to expand to Kansas City, Missouri, the Dallas area and San Antonio, as well as the UK and Italy. Starting with a craft that looked like a hobbyist drone — and was vulnerable to even modest gusts of wind — Amazon went through dozens of designs to toughen the vehicle and ultimately make it capable of carting about 5 pounds, giving it the capability to transport items typically ordered from its warehouses. Engineers settled on a six-propeller design that takes off vertically before cruising like a plane. The first model to make regular customer deliveries, the MK27, was succeeded last year by the MK30, which flies at about 67 miles an hour and can deliver packages up to 7.5 miles from its launch point. The craft takes off, flies and lands autonomously.
Open Source

OSU's Open Source Lab Eyes Infrastructure Upgrades and Sustainability After Recent Funding Success (osuosl.org) 11

It's a nonprofit that provides hosting for the Linux Foundation, the Apache Software Foundation, Drupal, Firefox, and 160 other projects — delivering nearly 430 terabytes of information every month. (It's currently hosting Debian, Fedora, and Gentoo Linux.) But hosting only provides about 20% of its income, with the rest coming from individual and corporate donors (including Google and IBM). "Over the past several years, we have been operating at a deficit due to a decline in corporate donations," the Open Source Lab's director announced in late April.

It's part of the CS/electrical engineering department at Oregon State University, and while the department "has generously filled this gap, recent changes in university funding make our current funding model no longer sustainable. Unless we secure $250,000 in committed funds, the OSL will shut down later this year."

But "Thankfully, the call for support worked, paving the way for the OSU Open Source Lab to look ahead, into what the future holds for them," reports the blog It's FOSS.

"Following our OSL Future post, the community response has been incredible!" posted director Lance Albertson. "Thanks to your amazing support, our team is funded for the next year. This is a huge relief and lets us focus on building a truly self-sustaining OSL." To get there, we're tackling two big interconnected goals:

1. Finding a new, cost-effective physical home for our core infrastructure, ideally with more modern hardware.
2. Securing multi-year funding commitments to cover all our operations, including potential new infrastructure costs and hardware refreshes.


Our current data center is over 20 years old and needs to be replaced soon. With Oregon State University evaluating the future of this facility, it's very likely we'll need to relocate in the near future. While migrating to the State of Oregon's data center is one option, it comes with significant new costs. This makes finding free or very low-cost hosting (ideally between Eugene and Portland for ~13-20 racks) a huge opportunity for our long-term sustainability. More power-efficient hardware would also help us shrink our footprint.

Speaking of hardware, refreshing some of our older gear during a move would be a game-changer. We don't need brand new, but even a few-generations-old refurbished systems would boost performance and efficiency. (Huge thanks to the Yocto Project and Intel for a recent hardware donation that showed just how impactful this is!) The dream? A data center partner donating space and cycled-out hardware. Our overall infrastructure strategy is flexible. We're enhancing our OpenStack/Ceph platforms and exploring public cloud credits and other donated compute capacity. But whatever the resource, it needs to fit our goals and come with multi-year commitments for stability. And, a physical space still offers unique value, especially the invaluable hands-on data center experience for our students....

[O]ur big focus this next year is locking in ongoing support — think annualized pledges, different kinds of regular income, and other recurring help. This is vital, especially with potential new data center costs and hardware needs. Getting this right means we can stop worrying about short-term funding and plan for the future: investing in our tech and people, growing our awesome student programs, and serving the FOSS community. We're looking for partners, big and small, who get why foundational open source infrastructure matters and want to help us build this sustainable future together.

The It's FOSS blog adds that "With these prerequisites in place, the OSUOSL intends to expand their student program, strengthen their managed services portfolio for open source projects, introduce modern tooling like Kubernetes and Terraform, and encourage more community volunteers to actively contribute."

Thanks to long-time Slashdot reader I'm just joshin for suggesting the story.
Medicine

Theranos Fraudster's Partner Launches His Own Blood-Testing Startup (thedailybeast.com) 34

"The romantic partner of Theranos fraudster Elizabeth Holmes has launched a start-up that sounds eerily similar to the venture that landed his girlfriend behind bars," writes The Daily Beast.

He incorporated "Haemanthus" in Delaware a year and a half ago (though the company operates out of his neighborhood in Austin), according to the New York Times. Haemanthus appears to have around 10 employees.

From The Daily Beast: California hotel heir Billy Evans' new company is a blood-testing firm that markets itself as "the future of diagnostics," offering "a radically new approach to health testing," according to The New York Times. In other words, exactly what Theranos said it would do. Holmes is even advising the start-up from the Texas prison where she is serving out an 11-year prison sentence for fraud, sources told NPR... Evans has managed to raise nearly $20 million in funds from both friends and established investors in Austin and San Francisco, according to the investor materials.
The Times reports that Evans' company "plans to begin with testing pets for diseases before progressing to humans, according to two investors pitched on the company."

And TechCrunch reminds readers that Elizabeth Holmes said in a recent interview "that she remains 'completely committed to my dream of making affordable healthcare solutions available to everyone.'"
Programming

You Should Still Learn To Code, Says GitHub CEO (businessinsider.com) 45

You should still learn to code, says GitHub's CEO. And you should start as soon as possible. From a report: "I strongly believe that every kid, every child, should learn coding," Thomas Dohmke said in a recent podcast interview with EO. "We should actually teach them coding in school, in the same way that we teach them physics and geography and literacy and math and what-not." Coding, he added, is one such fundamental skill -- and the only reason it's not part of the curriculum is because it took "us too long to actually realize that."

Dohmke, who's been a programmer since the 90s, said he's never seen "anything more exciting" than the current moment in engineering -- the advent of AI, he believes, has made the field that much easier to break into, and is poised to make software more ubiquitous than ever. "It's so much easier to get into software development. You can just write a prompt into Copilot or ChatGPT or similar tools, and it will likely write you a basic webpage, or a small application, a game in Python," Dohmke said. "And so, AI makes software development so much more accessible for anyone who wants to learn coding."

AI, Dohmke said, helps to "realize the dream" of bringing an idea to life, meaning that fewer projects will end up dead in the water, and smaller teams of developers will be enabled to tackle larger-scale projects. Dohmke said he believes it makes the overall process of creation more efficient. "You see some of the early signs of that, where very small startups -- sometimes five developers and some of them actually only one developer -- believe they can become million, if not billion dollar businesses by leveraging all the AI agents that are available to them," he added.

Operating Systems

FreeDOS Celebrates More Than 30 Years of Command Prompts With New Release (arstechnica.com) 19

When Microsoft announced it would stop developing MS-DOS after 1995, college student Jim Hall "packaged my own extended DOS utilities, as did others," according to the web site for the resulting "FreeDOS" project.

Jim Hall is also Slashdot reader #2,985, and more than 30 years later he's "keeping the dream of the command prompt alive," writes Ars Technica. In a new article they note that last week the FreeDOS team released version 1.4, the first new stable update since 2022: The release has "a focus on stability" and includes an updated installer, new versions of common tools like fdisk and format, and the edlin text editor. The release also includes updated HTML Help files... As with older versions, the FreeDOS installer is available in multiple formats based on the kind of system you're installing it on. For any "modern" PC (where "modern" covers anything that's shipped since the turn of the millennium), ISO and USB installers are available for creating bootable CDs, DVDs, or USB drives. FreeDOS is also available for vintage systems as a completely separate "Floppy-Only Edition" that fits on 720KB, 1.44MB, or 1.2MB 5.25- and 3.5-inch floppy disks.
Jim Hall composed a detailed introduction to FreeDOS 1.4 here.

He also answered questions from Slashdot's readers back in 2000 and again in 2019.
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com) 36

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.
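The amplified-oversight idea described above — two model instances checking each other's output — can be sketched in a few lines. This is a toy illustration, not DeepMind's method; the `proposer` and `critic` callables are hypothetical stand-ins for model APIs:

```python
# Toy sketch of "amplified oversight": one model proposes an answer, a
# second model critiques it, and the answer is released only once the
# critic approves. The proposer/critic callables are hypothetical
# stand-ins, not a real model API.

def amplified_oversight(prompt, proposer, critic, max_rounds=3):
    answer = proposer(prompt)
    for _ in range(max_rounds):
        verdict = critic(answer)       # e.g. "OK" or a critique string
        if verdict == "OK":
            return answer              # both models agree: release it
        # Feed the critique back so the proposer can revise its answer.
        answer = proposer(f"{prompt}\n[revise]: {verdict}")
    return None                        # never approved: withhold output
```

The key design property is the fail-closed default: if the two copies never reach agreement within the round budget, no output is released at all — the "off switch" the paper argues for.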

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

Power

Aptera Takes First 300-Mile Highway Trip in Solar-Powered EV (aptera.us) 94

"I've been dreaming of this moment for 20 years," says Aptera co-CEO Steve Fambro. Aptera's solar-powered electric car just drove 300 miles on a single charge.

"We're one step closer to a future where every journey is powered by the sun," Aptera says in their announcement.

"This go around, Aptera took to the highway for the first time ever..." writes the EV blog Electrek. "At one point, Aptera's video noted that its solar EV was pulling over 545 watts of solar input, even though it was overcast."

"Less time searching for chargers," Aptera says in their announcement, adding that their "production-intent" car proved "that a solar EV isn't just a concept for the future, but a real-world solution ready for the present" — while turning Route 66 into "a test bed for a vehicle built to thrive independently..." "The panoramic windshield gives you this incredible view of the landscape," Steve said [in a video accompanying the announcement], describing the drive. "It's like a big picture window into the future."

The final stretch took the team back into California, where they reflected on the journey, the data, and the excited reactions from drivers who caught a glimpse of the vehicle on the road. "Almost everyone we passed had their phones out filming us," Steve laughed. "It's clear that Aptera's design stops traffic — without needing to stop for a charge."

"I was struck by how normal this trip seemed, except for all the gawking from fellow travelers," writes long-time Slashdot reader AirHog. "Best of luck to Aptera to reach their funding and production goals this year for this remarkable vehicle."

They drove on highways to Lake Havasu, and then to California's Imperial Valley — starting in Flagstaff, Arizona on symbolic Route 66. It was 100 years ago that Route 66 was proposed to link Chicago and Los Angeles, which Fambro credits to a visionary who believed in "something bigger than the road itself — believing in what it could unlock for the world." "And they did it. Route 66 became one of the most iconic highways in America, proving that what once seemed improbable could become inevitable.

"I think about that a lot with Aptera. We're building something people say can't be done. History shows us the boldest ideas, the ones that challenge the status quo are the ones that truly change the world.

They take their futuristic, teardrop-shaped "Jetsons" car to a drive-through wildlife refuge named Bearizona. They stop at a general store for some beef jerky. "We're just having a fun time seeing all the sights."

"I've been dreaming of this moment for 20 years," says Aptera co-CEO Steve Fambro. "Driving in the most efficient vehicle on the road. Watching the sights go by. I got emotional just taking it all in." "This company. This idea. It's real. It's visceral. And I'm just so proud of each and every person who helped make this dream a reality.

"We have the chance to make a real change in how the world moves. The road hasn't been easy. It's been painful, difficult. And it's brought me to my breaking point sometimes. But being in this moment right now? I can say it's all been worth it...

"I feel we're at the forefront of something truly revolutionary. We're not fighting an uphill battle any more. We're standing at the edge of something incredible. Ready to break through.

"To all of you who supported us, my commitment is this. We're not stopping. We're moving forward with more energy and more passion than ever. The road ahead is an open highway. And the future is ours to shape."

To celebrate, Aptera is holding a giveaway for a camping kit, a $100 gift card to their online store, and a free Aptera pre-order to a winner chosen at random from those who subscribe/watch/comment on their new video...
