AI

Enterprise AI Adoption Stalls As Inferencing Costs Confound Cloud Customers 18

According to market analyst firm Canalys, enterprise adoption of AI is slowing due to unpredictable and often high costs associated with model inferencing in the cloud. Despite strong growth in cloud infrastructure spending, businesses are increasingly scrutinizing cost-efficiency, with some opting for alternatives to public cloud providers as they grapple with volatile usage-based pricing models. The Register reports: [Canalys] published stats that show businesses spent $90.9 billion globally on infrastructure and platform-as-a-service with the likes of Microsoft, AWS and Google in calendar Q1, up 21 percent year-on-year, as the march of cloud adoption continues. Canalys says that growth came from enterprise users migrating more workloads to the cloud and exploring the use of generative AI, which relies heavily on cloud infrastructure.

Yet even as organizations move beyond development and trials to deployment of AI models, a lack of clarity over the ongoing recurring costs of inferencing services is becoming a concern. "Unlike training, which is a one-time investment, inference represents a recurring operational cost, making it a critical constraint on the path to AI commercialization," said Canalys senior director Rachel Brindley. "As AI transitions from research to large-scale deployment, enterprises are increasingly focused on the cost-efficiency of inference, comparing models, cloud platforms, and hardware architectures such as GPUs versus custom accelerators," she added.

Canalys researcher Yi Zhang said many AI services follow usage-based pricing models that charge on a per-token or per-API-call basis. This makes cost forecasting hard as use of the services scales up. "When inference costs are volatile or excessively high, enterprises are forced to restrict usage, reduce model complexity, or limit deployment to high-value scenarios," Zhang said. "As a result, the broader potential of AI remains underutilized." [...] According to Canalys, cloud providers are aiming to improve inferencing efficiency via a modernized infrastructure built for AI, and reduce the cost of AI services.
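The forecasting problem Zhang describes is easy to see in a back-of-the-envelope model. The sketch below is illustrative only: the per-1,000-token prices and traffic volumes are hypothetical, not any provider's actual rates.

```python
# Illustrative cost model for usage-based inference pricing.
# All prices and volumes below are hypothetical.
def monthly_inference_cost(requests, avg_in_tokens, avg_out_tokens,
                           price_in_per_1k, price_out_per_1k):
    """Estimated monthly spend for an API billed per 1,000 tokens."""
    input_cost = requests * avg_in_tokens / 1000 * price_in_per_1k
    output_cost = requests * avg_out_tokens / 1000 * price_out_per_1k
    return input_cost + output_cost

# A 10,000-request pilot vs. the same workload at 5 million requests:
pilot = monthly_inference_cost(10_000, 500, 300, 0.0005, 0.0015)
scaled = monthly_inference_cost(5_000_000, 500, 300, 0.0005, 0.0015)
print(f"pilot: ${pilot:,.0f}/month, at scale: ${scaled:,.0f}/month")
```

Because the bill tracks tokens rather than seats, a 500x jump in traffic is a 500x jump in cost, which is exactly the kind of volatility that pushes enterprises to cap usage or simplify models.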
The report notes that AWS, Azure, and Google Cloud "continue to dominate the IaaS and PaaS market, accounting for 65 percent of customer spending worldwide."

"However, Microsoft and Google are slowly gaining ground on AWS, as its growth rate has slowed to 'only' 17 percent, down from 19 percent in the final quarter of 2024, while the two rivals have maintained growth rates of more than 30 percent."
Power

There Aren't Enough Cables To Meet Growing Electricity Demand (bloomberg.com) 87

High-voltage electricity cables have become a major constraint throttling the clean energy transition, with manufacturing facilities booked out for years as demand far exceeds supply capacity. The energy transition, trade barriers, and overdue grid upgrades have turbocharged demand for these highly sophisticated cables that connect wind farms, solar installations, and cross-border power networks.

The International Energy Agency estimates that 80 million kilometers of grid infrastructure must be built between now and 2040 to meet clean energy targets -- equivalent to rebuilding the entire existing global grid that took a century to construct, but compressed into just 15 years. Each high-voltage cable requires custom engineering and months-long production in specialized 200-meter towers, with manufacturers reporting that 80-90% of major projects now use high-voltage direct current technology versus traditional alternating current systems.
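The "high-voltage" part is not incidental: for a fixed power flow, resistive losses fall with the square of transmission voltage, one reason long links favor HVDC at very high voltages. A minimal sketch of that relationship (the voltage levels and line resistance below are illustrative, not figures from the article):

```python
# For power P delivered at voltage V through total line resistance R,
# current I = P/V, so resistive loss is I^2 * R = P^2 * R / V^2.
# Convenient units: MW / kV = kA, and kA^2 * ohm = MW.
def line_loss_mw(power_mw, voltage_kv, resistance_ohm):
    current_ka = power_mw / voltage_kv
    return current_ka ** 2 * resistance_ohm

# The same 1,000 MW over a 10-ohm line at two DC voltage levels:
print(line_loss_mw(1000, 320, 10))  # ~97.7 MW lost at 320 kV
print(line_loss_mw(1000, 525, 10))  # ~36.3 MW lost at 525 kV
```

Raising the voltage from 320 kV to 525 kV cuts the resistive loss by roughly a factor of (525/320)^2, which is why manufacturers keep pushing cable insulation to higher ratings.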
Security

Apple Previews New Import/Export Feature To Make Passkeys More Interoperable (arstechnica.com) 36

During this week's Worldwide Developers Conference, Apple unveiled a secure import/export feature for passkeys that addresses one of their biggest limitations: lack of interoperability across platforms and credential managers. The feature, built in collaboration with the FIDO Alliance, enables encrypted, user-initiated passkey transfers between apps and systems. Ars Technica's Dan Goodin says it "provides the strongest indication yet that passkey developers are making meaningful progress in improving usability." From the report: "People own their credentials and should have the flexibility to manage them where they choose," the narrator of the Apple video says. "This gives people more control over their data and the choice of which credential manager they use." The transfer feature, which will also work with passwords and verification codes, provides an industry-standard means for apps and OSes to more securely sync these credentials.

As the video explains: "This new process is fundamentally different and more secure than traditional credential export methods, which often involve exporting an unencrypted CSV or JSON file, then manually importing it into another app. The transfer process is user initiated, occurs directly between participating credential manager apps and is secured by local authentication like Face ID. This transfer uses a data schema that was built in collaboration with the members of the FIDO Alliance. It standardizes the data format for passkeys, passwords, verification codes, and more data types. The system provides a secure mechanism to move the data between apps. No insecure files are created on disk, eliminating the risk of credential leaks from exported files. It's a modern, secure way to move credentials."

OS X

Apple Quietly Launches Container On GitHub To Bring Linux Development To macOS (nerds.xyz) 60

BrianFagioli shares a report from NERDS.xyz: Apple has released a new developer tool on GitHub called Container, offering a fresh approach to running Linux containers directly on macOS. Unlike Docker or Podman, this tool is designed to feel at home in the Apple ecosystem and hooks into frameworks already built into the operating system. Container runs standard OCI images, but it doesn't use a single shared Linux VM. Instead, it creates a small Linux virtual machine for every container you spin up. That sounds heavy at first, but the VMs are lightweight and boot quickly. Each one is isolated, which Apple claims improves both security and privacy. Developers can run containerized workloads locally with native macOS support and without needing to install third-party container platforms.
Robotics

Scientists Built a Badminton-Playing Robot With AI-Powered Skills (arstechnica.com) 10

An anonymous reader quotes a report from Ars Technica: The robot built by [Yuntao Ma and his team at ETH Zurich] was called ANYmal and resembled a miniature giraffe that plays badminton by holding a racket in its teeth. It was a quadruped platform developed by ANYbotics, an ETH Zurich spinoff company that mainly builds robots for the oil and gas industries. "It was an industry-grade robot," Ma said. The robot had elastic actuators in its legs, weighed roughly 50 kilograms, and was half a meter wide and under a meter long. On top of the robot, Ma's team fitted an arm with several degrees of freedom produced by another ETH Zurich spinoff called Duatic. This is what would hold and swing a badminton racket. Shuttlecock tracking and sensing the environment were done with a stereoscopic camera. "We've been working to integrate the hardware for five years," Ma said.

Along with the hardware, his team was also working on the robot's brain. State-of-the-art robots usually use model-based control optimization, a time-consuming, sophisticated approach that relies on a mathematical model of the robot's dynamics and environment. "In recent years, though, the approach based on reinforcement learning algorithms became more popular," Ma told Ars. "Instead of building advanced models, we simulated the robot in a simulated world and let it learn to move on its own." In ANYmal's case, this simulated world was a badminton court where its digital alter ego was chasing after shuttlecocks with a racket. The training was divided into repeatable units, each of which required that the robot predict the shuttlecock's trajectory and hit it with a racket six times in a row. During this training, like a true sportsman, the robot also got to know its physical limits and to work around them.
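Each training unit hinged on predicting the shuttlecock's trajectory before moving. As a loose illustration of that "predict, then move" structure, here is a drag-free ballistic landing-point estimate; real shuttlecock flight is drag-dominated and the team's learned perception models are far more sophisticated, so every number here is hypothetical.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_point(x0, y0, z0, vx, vy, vz):
    """Where a drag-free projectile launched from (x0, y0, z0) with
    velocity (vx, vy, vz) returns to ground level (z = 0)."""
    # Positive root of z0 + vz*t - 0.5*G*t^2 = 0:
    t = (vz + math.sqrt(vz ** 2 + 2 * G * z0)) / G
    return x0 + vx * t, y0 + vy * t

# A shuttle struck 2 m above the court, 6 m/s forward, 4 m/s upward:
x, y = landing_point(0.0, 0.0, 2.0, 6.0, 0.0, 4.0)
print(f"estimated landing spot: ({x:.2f} m, {y:.2f} m)")
```

In the real system this prediction feeds the whole-body controller, which must trade off sprinting to the estimated spot against keeping the camera locked on the shuttle.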

The idea behind training the control algorithms was to develop visuo-motor skills similar to human badminton players. The robot was supposed to move around the court, anticipating where the shuttlecock might go next and position its whole body, using all available degrees of freedom, for a swing that would mean a good return. This is why balancing perception and movement played such an important role. The training procedure included a perception model based on real camera data, which taught the robot to keep the shuttlecock in its field of view while accounting for the noise and resulting object-tracking errors.

Once the training was done, the robot learned to position itself on the court. It figured out that the best strategy after a successful return is to move back to the center and toward the backline, which is something human players do. It even came up with a trick where it stood on its hind legs to see the incoming shuttlecock better. It also learned fall avoidance and determined how much risk was reasonable to take given its limited speed. The robot did not attempt impossible plays that would create the potential for serious damage -- it was committed, but not suicidal. But when it finally played humans, it turned out ANYmal, as a badminton player, was amateur at best.
The findings have been published in the journal Science Robotics.

You can watch a video of the four-legged robot playing badminton on YouTube.
AI

Starbucks To Roll Out Microsoft Azure OpenAI Assistant For Baristas 37

Starbucks is piloting a generative AI assistant called "Green Dot Assist" to streamline barista tasks and improve service speed, with plans for a broader rollout in fiscal 2026. The assistant is built on Microsoft Azure's OpenAI platform. CNBC reports: Instead of flipping through manuals or accessing Starbucks' intranet, baristas will be able to use a tablet behind the counter equipped with Green Dot Assist to get answers to a range of questions, from how to make an iced shaken espresso to troubleshooting equipment errors. Baristas can either type or verbally ask their queries in conversational language.

As the AI assistant evolves, Starbucks has even bigger plans for its next generation. Those ideas include automatically creating a ticket with IT for equipment issues or generating suggestions for a substitute when a barista calls out of work, according to [Starbucks Chief Technology Officer Deb Hall Lefevre]. [...] Lefevre said tenured baristas have been learning to use the new point-of-sale (POS) system in as little as an hour. Plus, the technology can offer personalized recommendations and surface loyal customers' repeat orders, helping Starbucks achieve the personalized touch it's looking to bring back to its cafes.
"It's just another example of how innovation technology is coming into service of our partners and making sure that we're doing all we can to simplify the operations, make their jobs just a little bit easier, maybe a little bit more fun, so that they can do what they do best," Lefevre told CNBC.
AI

Apple Lets Developers Tap Into Its Offline AI Models (techcrunch.com) 14

An anonymous reader quotes a report from TechCrunch: Apple is launching what it calls the Foundation Models framework, which the company says will let developers tap into its AI models in an offline, on-device fashion. Onstage at WWDC 2025 on Monday, Apple VP of software engineering Craig Federighi said that the Foundation Models framework will let apps use on-device AI models created by Apple to drive experiences. These models ship as a part of Apple Intelligence, Apple's family of models that power a number of iOS features and capabilities.

"For example, if you're getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging," Federighi said. "And because it happens using on-device models, this happens without cloud API costs [] We couldn't be more excited about how developers can build on Apple intelligence to bring you new experiences that are smart, available when you're offline, and that protect your privacy."

In a blog post, Apple says that the Foundation Models framework has native support for Swift, Apple's programming language for building apps for its various platforms. The company claims developers can access Apple Intelligence models with as few as three lines of code. Guided generation, tool calling, and more are all built into the Foundation Models framework, according to Apple. Automattic is already using the framework in its Day One journaling app, Apple says, while mapping app AllTrails is tapping the framework to recommend different hiking routes.

United Kingdom

UK Renewable Energy Firms are Being Paid Huge Sums to Not Provide Power (bbc.com) 76

The U.K. electricity grid "was built to deliver power generated by coal and gas plants near the country's major cities and towns," reports the BBC, "and doesn't always have sufficient capacity in the wires that carry electricity around the country to get the new renewable electricity generated way out in the wild seas and rural areas.

"And this has major consequences." The way the system currently works means a company like Ocean Winds gets what are effectively compensation payments if the system can't take the power its wind turbines are generating and it has to turn down its output. It means Ocean winds was paid £72,000 [nearly $100,000 USD] not to generate power from its wind farms in the Moray Firth during a half-hour period on 3 June because the system was overloaded — one of a number of occasions output was restricted that day. At the same time, 44 miles (70km) east of London, the Grain gas-fired power station on the Thames Estuary was paid £43,000 to provide more electricity.

Payments like that happen virtually every day. Seagreen, Scotland's largest wind farm, was paid £65 million last year to restrict its output 71% of the time, according to analysis by Octopus Energy. Balancing the grid in this way has already cost the country more than £500 million this year alone, the company's analysis shows. The total could reach almost £8bn a year by 2030, warns the National Electricity System Operator (NESO), the body in charge of the electricity network. It's pushing up all our energy bills and calling into question the government's promise that net zero would end up delivering cheaper electricity... the potential for renewables to deliver lower costs just isn't coming through to consumers.

Renewables now generate more than half the country's electricity, but because of the limits to how much electricity can be moved around the system, even on windy days some gas generation is almost always needed to top the system up. And because gas tends to be more expensive, it sets the wholesale price.
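The last point, that gas "sets the wholesale price," is a consequence of marginal pricing: every generator that clears the market is paid the offer price of the most expensive unit needed to meet demand. A toy merit-order sketch, with capacities and prices invented purely for illustration:

```python
# Marginal ("pay-as-clear") pricing: sort offers by price, accept them
# until demand is met; the last accepted offer sets the price for all.
def clearing_price(offers, demand_mw):
    """offers: iterable of (capacity_mw, price_per_mwh) tuples."""
    supplied = 0
    for capacity, price in sorted(offers, key=lambda o: o[1]):
        supplied += capacity
        if supplied >= demand_mw:
            return price
    raise ValueError("demand exceeds total offered capacity")

offers = [(900, 5), (600, 10), (400, 80)]  # two wind farms, one gas plant
print(clearing_price(offers, 1400))  # wind covers demand -> price is 10
print(clearing_price(offers, 1600))  # gas is needed -> price jumps to 80
```

Even a small shortfall that pulls the gas plant into the stack reprices the whole market, which is why cheap wind output does not automatically show up in consumer bills.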

The UK government is now considering smaller regional markets, so wind companies "would have to sell that spare power to local people instead of into a national market. The theory is prices would fall dramatically — on some days Scottish customers might even get their electricity for free...

"Supporters argue that it would attract energy-intensive businesses such as data centres, chemical companies and other manufacturing industries."
NASA

NASA Pulls the Plug on Jupiter-Moon Lander, So Scientists Propose Landing It on Saturn (gizmodo.com) 45

"NASA engineers have spent the past decade developing a rugged, partially autonomous lander designed to explore Europa, one of Jupiter's most intriguing moons," reports Gizmodo.

But though NASA "got cold feet over the project," the engineers behind it are now suggesting the probe could instead explore Enceladus, the sixth-largest moon of Saturn: Europa has long been a prime target in the search for extraterrestrial biology because scientists suspect it harbors a subsurface ocean beneath its icy crust, potentially teeming with microbial life. But the robot — packed with radiation shielding, cutting-edge software, and ice-drilling appendages — won't be going anywhere anytime soon.

In a recent paper in Science Robotics, engineers at NASA's Jet Propulsion Laboratory (JPL) outlined the design and testing of what was once the Europa Lander prototype, a four-legged robotic explorer built to survive the brutal surface conditions of the Jovian moon. The robot was designed to walk — as opposed to roll — analyze terrain, collect samples, and drill into Europa's icy crust — all with minimal guidance from Earth, due to the major communication lag between our planet and the moon 568 million miles (914 million kilometers) away. Designed to operate autonomously for hours at a time, the bot came equipped with stereoscopic cameras, a robotic arm, LED lights, and a suite of specialized materials tough enough to endure harsh radiation and bone-chilling cold....

According to the team, the challenges of getting to Europa — its radiation exposure, immense distance, and short observation windows — proved too daunting for NASA's higher-ups. And that's before you take into consideration the devastating budget cuts planned by the Trump administration, which would see the agency's funding fall from $7.3 billion to $3.9 billion. The lander, once the centerpiece of a bold astrobiology initiative, is now essentially mothballed.

But the engineers aren't giving up. They're now lobbying for the robot to get a second shot — on Enceladus, Saturn's ice-covered moon, which also boasts a subsurface ocean and has proven more favorable for robotic exploration. Enceladus is still frigid, but has lower radiation and better access windows than Europa.

AI

'Welcome to Campus. Here's Your ChatGPT.' (nytimes.com) 68

The New York Times reports: California State University announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for "California's future A.I.-driven economy." Cal State said the effort would help make the school "the nation's first and largest A.I.-empowered university system..." Some faculty members have already built custom chatbots for their students by uploading course materials like their lecture notes, slides, videos and quizzes into ChatGPT.
And other U.S. campuses including the University of Maryland are also "working to make A.I. tools part of students' everyday experiences," according to the article. It's all part of an OpenAI initiative "to overhaul college education — by embedding its artificial intelligence tools in every facet of campus life."

The Times calls it "a national experiment on millions of students." If the company's strategy succeeds, universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized A.I. study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud ahead of a test. OpenAI dubs its sales pitch "A.I.-native universities..." To spread chatbots on campuses, OpenAI is selling premium A.I. services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT...

OpenAI's campus marketing effort comes as unemployment has increased among recent college graduates — particularly in fields like software engineering, where A.I. is now automating some tasks previously done by humans. In hopes of boosting students' career prospects, some universities are racing to provide A.I. tools and training...

[Leah Belsky, OpenAI's vice president of education] said a new "memory" feature, which retains and can refer to previous interactions with a user, would help ChatGPT tailor its responses to students over time and make the A.I. "more valuable as you grow and learn." Privacy experts warn that this kind of tracking feature raises concerns about long-term tech company surveillance. In the same way that many students today convert their school-issued Gmail accounts into personal accounts when they graduate, Ms. Belsky envisions graduating students bringing their A.I. chatbots into their workplaces and using them for life.

"It would be their gateway to learning — and career life thereafter," Ms. Belsky said.

Programming

Bill Atkinson, Hypercard Creator and Original Mac Team Member, Dies at Age 74 (appleinsider.com) 53

AppleInsider reports: The engineer behind much of the Mac's early graphical user interfaces, QuickDraw, MacPaint, Hypercard and much more, William D. "Bill" Atkinson, died on June 5 of complications from pancreatic cancer...

Atkinson, who built a post-Apple career as a noted nature photographer, worked at Apple from 1978 to 1990. Among his lasting contributions to Apple's computers were the invention of the menubar, the selection lasso, the "marching ants" item selection animation, and the discovery of a midpoint circle algorithm that enabled the rapid drawing of circles on-screen.
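The midpoint circle algorithm mentioned above exploits 8-way symmetry and an incremental decision variable, so plotting a circle costs only integer additions, with no square roots or floating point: a crucial property on early Mac hardware. A sketch of the standard form (this is the textbook algorithm, not Atkinson's actual QuickDraw source):

```python
def midpoint_circle(cx, cy, r):
    """Return the pixel set of a circle using only integer arithmetic.

    A decision variable tracks whether the midpoint between candidate
    pixels lies inside or outside the circle, choosing a straight or
    diagonal step; each computed point is mirrored into all 8 octants.
    """
    pixels = set()
    x, y = r, 0
    d = 1 - r  # decision variable for the first midpoint
    while x >= y:
        for sx, sy in ((x, y), (y, x), (-y, x), (-x, y),
                       (-x, -y), (-y, -x), (y, -x), (x, -y)):
            pixels.add((cx + sx, cy + sy))
        y += 1
        if d < 0:
            d += 2 * y + 1          # midpoint inside: step straight
        else:
            x -= 1
            d += 2 * (y - x) + 1    # midpoint outside: step diagonally
    return pixels
```

For example, `midpoint_circle(0, 0, 3)` yields the familiar 16-pixel radius-3 ring.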

He was Apple Employee No. 51, recruited by Steve Jobs. Atkinson was one of the 30 team members who developed the first Macintosh, and was also the principal designer of the Lisa's graphical user interface (GUI), a novelty in computers at the time. He was fascinated by the concept of dithering, by which computers using dots could create nearly photographic images similar to the way newspapers printed photos. He is also credited (alongside Jobs) for the invention of RoundRects, the rounded rectangles still used in Apple's system messages, application windows, and other graphical elements on Apple products.
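That fascination with dithering produced the error-diffusion variant that now bears his name. Atkinson dithering, familiar from classic Mac bitmaps, thresholds each pixel to black or white and pushes only 6/8 of the quantization error onto six nearby pixels, which brightens highlights while keeping detail crisp. A minimal sketch:

```python
# Atkinson's error-diffusion dither: unlike Floyd-Steinberg, only 6/8
# of each pixel's quantization error is diffused, 1/8 to each of six
# neighbors (two ahead on the row, three below, one two rows down).
NEIGHBORS = [(1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2)]

def atkinson_dither(gray):
    """gray: 2D list of 0-255 values; returns a 2D list of 0/255 pixels."""
    h, w = len(gray), len(gray[0])
    img = [list(row) for row in gray]  # work on a mutable copy
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= 128 else 0
            img[y][x] = new
            err = (old - new) / 8  # only 6 of these 8 shares are used
            for dx, dy in NEIGHBORS:
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    img[ny][nx] += err
    return img
```

Running it on a flat mid-gray image produces the scattered black-and-white texture characteristic of early MacPaint documents.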

Hypercard was Atkinson's main claim to fame. He built it as a hypermedia approach to building applications that he once described as a "software erector set." The technology debuted in 1987, and greatly opened up Macintosh software development.

In 2012, video clips of Atkinson surfaced in rediscovered archival footage. (Original Macintosh team developer Andy Hertzfeld uploaded "snippets from interviews with members of the original Macintosh design team, recorded in October 1983 for projected TV commercials that were never used.")

Blogger John Gruber calls Atkinson "One of the great heroes in not just Apple history, but computer history." If you want to cheer yourself up, go to Andy Hertzfeld's Folklore.org site and (re-)read all the entries about Atkinson. Here's just one, with Steve Jobs inspiring Atkinson to invent the roundrect. Here's another (surely near and dear to my friend Brent Simmons's heart) with this kicker of a closing line: "I'm not sure how the managers reacted to that, but I do know that after a couple more weeks, they stopped asking Bill to fill out the form, and he gladly complied."

Some of his code and algorithms are among the most efficient and elegant ever devised. The original Macintosh team was chock full of geniuses, but Atkinson might have been the most essential to making the impossible possible under the extraordinary technical limitations of that hardware... In addition to his low-level contributions like QuickDraw, Atkinson was also the creator of MacPaint (which to this day stands as the model for bitmap image editors — Photoshop, I would argue, was conceptually derived directly from MacPaint) and HyperCard ("inspired by a mind-expanding LSD journey in 1985"), the influence of which cannot be overstated.

I say this with no hyperbole: Bill Atkinson may well have been the best computer programmer who ever lived. Without question, he's on the short list. What a man, what a mind, what gifts to the world he left us.

AI

AI Firms Say They Can't Respect Copyright. But A Nonprofit's Researchers Just Built a Copyright-Respecting Dataset (msn.com) 100

Is copyrighted material a requirement for training AI? asks the Washington Post. That's what top AI companies are arguing, and "Few AI developers have tried the more ethical route — until now.

"A group of more than two dozen AI researchers have found that they could build a massive eight-terabyte dataset using only text that was openly licensed or in public domain. They tested the dataset quality by using it to train a 7 billion parameter language model, which performed about as well as comparable industry efforts, such as Llama 2-7B, which Meta released in 2023." A paper published Thursday detailing their effort also reveals that the process was painstaking, arduous and impossible to fully automate. The group built an AI model that is significantly smaller than the latest offered by OpenAI's ChatGPT or Google's Gemini, but their findings appear to represent the biggest, most transparent and rigorous effort yet to demonstrate a different way of building popular AI tools....

As it turns out, the task involves a lot of humans. That's because of the technical challenges of data not being formatted in a way that's machine readable, as well as the legal challenges of figuring out what license applies to which website, a daunting prospect when the industry is rife with improperly licensed data. "This isn't a thing where you can just scale up the resources that you have available" like access to more computer chips and a fancy web scraper, said Stella Biderman [executive director of the nonprofit research institute EleutherAI]. "We use automated tools, but all of our stuff was manually annotated at the end of the day and checked by people. And that's just really hard."

Still, the group managed to unearth new datasets that can be used ethically. Those include a set of 130,000 English language books in the Library of Congress, which is nearly double the size of the popular-books dataset Project Gutenberg. The group's initiative also builds on recent efforts to develop more ethical, but still useful, datasets, such as FineWeb from Hugging Face, the open-source repository for machine learning... Still, Biderman remained skeptical that this approach could find enough content online to match the size of today's state-of-the-art models... Biderman said she didn't expect companies such as OpenAI and Anthropic to start adopting the same laborious process, but she hoped it would encourage them to at least rewind back to 2021 or 2022, when AI companies still shared a few sentences of information about what their models were trained on.

"Even partial transparency has a huge amount of social value and a moderate amount of scientific value," she said.

Programming

Morgan Stanley Says Its AI Tool Processed 9 Million Lines of Legacy Code This Year And Saved 280,000 Developer Hours (msn.com) 88

Morgan Stanley has deployed an in-house AI tool called DevGen.AI that has reviewed nine million lines of legacy code this year, saving the investment bank's developers an estimated 280,000 hours by translating outdated programming languages into plain English specifications that can be rewritten in modern code.

The tool, built on OpenAI's GPT models and launched in January, addresses what Mike Pizzi, the company's global head of technology and operations, calls one of enterprise software's biggest pain points -- modernizing decades-old code that weakens security and slows new technology adoption. While commercial AI coding tools excel at writing new code, they lack expertise in older or company-specific programming languages like Cobol, prompting Morgan Stanley to train its own system on its proprietary codebase.

The tool's primary strength, the bank said, lies in creating English specifications that map what legacy code does, enabling any of the company's 15,000 developers worldwide to rewrite it in modern programming languages rather than relying on a dwindling pool of specialists familiar with antiquated coding systems.
AI

Web-Scraping AI Bots Cause Disruption For Scientific Databases and Journals (nature.com) 37

Automated web-scraping bots seeking training data for AI models are flooding scientific databases and academic journals with traffic volumes that render many sites unusable. The online image repository DiscoverLife, which contains nearly 3 million species photographs, began receiving millions of hits per day in February of this year, slowing the site to the point that it no longer loaded, Nature reported Monday.

The surge has intensified since the release of DeepSeek, a Chinese large language model that demonstrated effective AI could be built with fewer computational resources than previously thought. This revelation triggered what industry observers describe as an "explosion of bots seeking to scrape the data needed to train this type of model." The Confederation of Open Access Repositories reported that more than 90% of 66 surveyed members experienced AI bot scraping, with roughly two-thirds suffering service disruptions. Medical journal publisher BMJ has seen bot traffic surpass legitimate user activity, overloading servers and interrupting customer services.
AI

Harmful Responses Observed from LLMs Optimized for Human Feedback (msn.com) 49

Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations.

Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating. OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...."

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users."

"Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out,,,
AI

GitHub Users Angry at the Prospect of AI-Written Issues From Copilot (github.com) 47

Earlier this month the "Create New Issue" page on GitHub got a new option: "Save time by creating issues with Copilot" (next to a link labeled "Get started"). Though the option later disappeared, GitHub had seemed very committed to the feature. "With Copilot, creating issues...is now faster and easier," GitHub's blog announced May 19. (And "all without sacrificing quality.")

Describe the issue you want and watch as Copilot fills in your issue form... Skip lengthy descriptions — just upload an image with a few words of context.... We hope these changes transform issue creation from a chore into a breeze.
But in the GitHub Community discussion, these announcements prompted a request. "Allow us to block Copilot-generated issues (and Pull Requests) from our own repositories." This says to me that GitHub will soon start allowing GitHub users to submit issues which they did not write themselves and were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct. Filtering out AI-generated issues/PRs will become an additional burden for me as a maintainer, wasting not only my time, but also the time of the issue submitters (who generated "AI" content I will not respond to), as well as the time of your server (which had to prepare a response I will close without response).

As I am not the only person on this website with "AI"-hostile beliefs, the most straightforward way to avoid wasting a lot of effort by literally everyone is if Github allowed accounts/repositories to have a checkbox or something blocking use of built-in Copilot tools on designated repos/all repos on the account.

1,239 GitHub users upvoted the comment — and 125 comments followed.
  • "I have now started migrating repos off of github..."
  • "Disabling AI generated issues on a repository should not only be an option, it should be the default."
  • "I do not want any AI in my life, especially in my code."
  • "I am not against AI necessarily but giving it write-access to most of the world's mission-critical code-bases including building-blocks of the entire web... is an extremely tone-deaf move at this early-stage of AI."

One user complained there was no "visible indication" of the fact that an issue was AI-generated "in either the UI or API." Someone suggested a Copilot-blocking Captcha test to prevent AI-generated slop. Another commenter even suggested naming it "Sloptcha".

And after more than 10 days, someone noticed the "Create New Issue" page seemed to no longer have the option to "Save time by creating issues with Copilot."

Thanks to long-time Slashdot reader jddj for sharing the news.


Businesses

Amazon Purges Billions of Product Listings in Cost-Cutting Drive (businessinsider.com) 28

Amazon has quietly removed billions of product listings through a confidential initiative called "Bend the Curve," according to Business Insider. The project planned to eliminate at least 24 billion ASINs -- unique product identifiers -- from Amazon's marketplace, reducing the total from a projected 74 billion to under 50 billion by December 2024. The purge targets "unproductive selection" including poor-selling items, listings without actual inventory, and product pages inactive for over two years.

The initiative represents a shift for the company that built its reputation as "The Everything Store" through three decades of relentless catalog expansion. Bend the Curve forms part of CEO Andy Jassy's broader cost-cutting strategy, saving Amazon's retail division over $22 million in AWS server costs during 2024 by reducing the number of hosted product pages.
Businesses

United Chief Dismisses Budget Airline Model as 'Dead' and 'Crappy' (marketwatch.com) 67

United Airlines CEO Scott Kirby has harsh words for budget carriers, calling their business model "dead."

"It's dead. Look, it's a crappy model. Sorry," he said when asked about the budget airline approach. Kirby argued that budget carriers like Southwest, Spirit, and Frontier built their operations around what he characterized as customer-hostile practices, saying "The model was, screw the customer ... Trick people, get them to buy, get them to come, and then charge them a whole bunch of fees that they aren't expecting."

He said he believes these airlines struggle to retain customers once they reach a scale at which repeat business becomes essential.
AI

Nothing's Carl Pei Says Your Smartphone's OS Will Replace All of Its Apps 70

In an interview with Wired (paywalled), OnePlus co-founder and Nothing CEO Carl Pei said the future of smartphones will center on the OS and AI getting things done -- rendering traditional apps a thing of the past. 9to5Google reports: Pei says that Nothing's strength is in "creativity," adding that "the creative companies of the past" such as Apple "have become very big and very corporate, and they're no longer very creative." He then dives into what else but AI, explaining that Nothing wants to create the "iPod" of AI, saying that Apple built a product that simply delivered a better user experience: "If you look back, the iPod was not launched as 'an MP3 player with a hard disk drive.' The hard disk drive was merely a means to a better user experience. AI is just a new technology that enables us to create better products for users. So, our strategy is not to make big claims that AI is going to change the world and revolutionize smartphones. For us, it's about using it to solve a consumer problem, not to tell a big story. We want the product to be the story."

Pei then says that he doesn't see the current trend of AI products -- citing wearables such as smart glasses -- as the future of the technology. Rather, he sees the smartphone as the most important device for AI "for the foreseeable future," but as one that will "change dramatically." According to Pei, the future of the smartphone is one without apps, with the experience instead just revolving around the OS and what it can do and how it can "optimize" for the user, acting as a proactive, automated agent and that, in the end, the user "will spend less time doing boring things and more time on what they care about."
