AI

Anthropic Unveils 'Claude Mythos', Powerful AI With Major Cyber Implications

"Anthropic has unveiled Claude Mythos, a new AI model capable of discovering critical vulnerabilities at scale," writes Slashdot reader wiredmikey. "It's already powering Project Glasswing, a joint effort with major tech firms to secure critical software. But the same capabilities could also accelerate offensive cyber operations." SecurityWeek reports: Mythos is not an incremental improvement but a step change in performance over Anthropic's current range of frontier models: Haiku (smallest), Sonnet (middle ground), and Opus (most powerful). Mythos sits in a fourth tier named Copybara, and Anthropic describes it as superior to any other existing AI frontier model. It incorporates the current trend in the use of AI: the modern use of agentic AI. "The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills... the model has the highest scores of any model yet developed on a variety of software coding tasks," notes Anthropic in a blog titled Project Glasswing -- Securing critical software for the AI era.

In the last few weeks, Mythos Preview has identified thousands of zero-day vulnerabilities, many classified as critical. Several are 10 or 20 years old -- the oldest found so far is a 27-year-old bug in OpenBSD. Elsewhere, a 16-year-old vulnerability in video software had survived five million runs of other automated testing tools without ever being discovered. And the model autonomously found and chained together several vulnerabilities in the Linux kernel, allowing an attacker to escalate from ordinary user access to complete control of the machine. [...] Anthropic is concerned that Mythos' capabilities could unleash cyberattacks too fast and too sophisticated for defenders to block. It hopes that Mythos can be used to improve cybersecurity generally before malicious actors can get access to it.

To this end, the firm has announced the next stage of this preparation as Project Glasswing, powered by Mythos Preview. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. "Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play." Claude Mythos Preview is described as a general-purpose, unreleased frontier model from Anthropic that has nevertheless completed its training phase. The firm does not plan to make Mythos Preview generally available. The implication is that 'Preview' is a term used solely to describe the current state of Mythos and the market's readiness to receive it, and will be dropped when the firm gets closer to general release.
News

AP Offers Buyouts As Part of Pivot Away From Newspaper Journalism (apnews.com)

The Associated Press is offering buyouts to U.S. journalists "as part of an acceleration away from the focus on newspaper journalism that sustained the company since the mid-1800s," the not-for-profit outlet reported today. AP says it is making the move from a position of strength, responding to shrinking newspaper revenue and growing demand from digital, broadcast, and tech clients.

"The AP is not in trouble," said Julie Pace, executive editor and senior vice president of the AP. "We're making these changes from a position of strength but we're doing so now to recognize our changing customer base." From the report: The news organization is becoming more focused on visual journalism and developing new revenue sources, particularly through companies investing in artificial intelligence, to cope with the economic collapse of many legacy news outlets. Once the lion's share of AP's revenue, big newspaper companies now account for 10% of its income. "We're not a newspaper company and we haven't been for quite some time," [said Pace].

Despite changes -- the company has doubled the number of video journalists it employs in the United States since 2022 -- remnants of a staffing structure built largely to provide stories to newspapers and broadcasters in individual states have remained. That has its roots well back in American history; the AP was started in the mid-19th century by New York newspapers looking to share the costs of reporting outside their immediate territory.

The number of AP journalists who will lose jobs is murky, in part intentionally. The AP does not say how many journalists it employs, though it has a large international presence as well as its U.S. staff. Pace said the AP's goal is to reduce its global staff by less than 5%. The Marketing and Media Alliance estimated the AP had 3,700 staffers, but it was not clear when that estimate was made. Since buyouts are currently being offered only to U.S. journalists, it stands to reason that the cut among that workforce will be more than 5%. Whether there are layoffs depends on how many people take the offer, Pace said.

Open Source

Is It Time For Open Source to Start Charging For Access? (theregister.com)

"It's time to charge for access," argues a new opinion piece at The Register. Begging billion-dollar companies to fund open source projects just isn't enough, writes long-time tech reporter Steven J. Vaughan-Nichols: Screw fair. Screw asking for dimes. You can't live off one-off charity donations... Depending on what people put in a tip jar is no way to fund anything of value... [A]ccording to a 2024 Tidelift maintainer report, 60 percent of open source maintainers are unpaid, and 60 percent have quit or considered quitting, largely due to burnout and lack of compensation. Oh, and of those getting paid, only 26 percent earn more than $1,000 a year for their work. They'd be better paid asking "Would you like fries with that?" at your local McDonald's...

Some organizations do support maintainers, for example, there's HeroDevs and its $20 million Open Source Sustainability Fund. Its mission is to pay maintainers of critical, often end-of-life open source components so they can keep shipping patches without burning out. Sentry's Open Source Pledge/Fund has given hundreds of thousands of dollars per year directly to maintainers of the packages Sentry depends on. Sentry is one of the few vendors that systematically maps its dependency tree and then actually cuts checks to the people maintaining that stack, as opposed to just talking about "giving back."

Sentry is on to something. We have the Linux Foundation to manage commercial open source projects, the Apache Foundation to oversee its various open source programs, the Open Source Initiative (OSI) to coordinate open source licenses, and many more for various specific projects. It's time we had an organization with the mission of ensuring that the top programmers and maintainers of valuable open source projects get a cut of the tech billionaire pie.

We must realign how businesses work with open source so that payment is no longer an optional charitable gift but a cost of doing business. To do that, we need an organization to create a viable, supportable path from big business to individual programmer. It's time for someone to step up and make this happen. Businesses, open source software, and maintainers will all be better off for it.

One possible future... Bruce Perens wrote the original Open Source definition in 1997, and now proposes a not-for-profit corporation developing "the Post Open Collection" of software, distributing its licensing fees to developers while providing services like user support, documentation, hardware-based authentication for developers, and even help with government compliance and lobbying.
Open Source

Nvidia Bets On OpenClaw, But Adds a Security Layer Via NemoClaw (zdnet.com)

During today's Nvidia GTC keynote, the company introduced NemoClaw, a security-focused stack designed to make the autonomous AI agent platform OpenClaw safer. ZDNet explains how it works: NemoClaw installs Nvidia's OpenShell, a new open-source runtime that keeps agents safer to use by enforcing an organization's policy-based guardrails. OpenShell keeps models sandboxed, adds data privacy protections and additional security for agents, and makes them more scalable. "This provides the missing infrastructure layer beneath claws to give them the access they need to be productive, while enforcing policy-based security, network, and privacy guardrails," Nvidia said in the announcement. The company built OpenShell with security companies like CrowdStrike, Cisco, and Microsoft Security to ensure it is compatible with other cybersecurity tools.

Nvidia said NemoClaw can be installed in a single command, runs on any platform, and can use any coding agent, including Nvidia's own Nemotron open model family, on a local system. Through a privacy router, it allows agents to access frontier models in the cloud, which unites local and cloud models to help teach agents how to complete tasks within privacy guardrails, Nvidia explained. Nvidia seems to be hoping that the additional security can make OpenClaw agents more popular and accessible, with less risk than they currently carry. The bigger picture here is how NemoClaw could give companies the added peace of mind to let AI agents complete actions for their employees, where they wouldn't have previously.
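Nvidia hasn't published OpenShell's API, but the policy-guardrail idea it describes can be sketched generically. Everything below -- the names, rules, and structure -- is an illustrative assumption, not Nvidia's implementation:

```python
# Toy model of policy-based guardrails for agent actions, in the spirit of
# what OpenShell is described as enforcing. All names and rules are invented.

ALLOWED_HOSTS = {"internal.example.com"}       # hypothetical network policy
BLOCKED_ACTIONS = {"delete_user_data"}         # hypothetical security policy

def check_agent_action(action, target_host=None):
    """Return True only if the requested agent action passes every guardrail."""
    if action in BLOCKED_ACTIONS:
        return False
    if target_host is not None and target_host not in ALLOWED_HOSTS:
        return False
    return True

# The runtime would consult such checks before letting an agent act.
print(check_agent_action("read_docs", "internal.example.com"))  # True
print(check_agent_action("read_docs", "evil.example.net"))      # False
print(check_agent_action("delete_user_data"))                   # False
```

The point is only that the enforcement layer sits beneath the agent, so policy holds regardless of what the model decides to attempt.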
Nvidia did not specify when NemoClaw would be available.
Government

EFF, Ubuntu and Other Distros Discuss How to Respond to Age-Verification Laws (9to5linux.com)

System76 isn't the only one criticizing new age-verification laws. The blog 9to5Linux published an "informal" look at other discussions in various Linux communities. Earlier this week, Ubuntu developer Aaron Rainbolt proposed on the Ubuntu mailing list an optional D-Bus interface (org.freedesktop.AgeVerification1) that can be implemented by arbitrary applications as a distro sees fit, but Canonical responded that the company does not yet have a solution to announce for age declaration in Ubuntu. "Canonical is aware of the legislation and is reviewing it internally with legal counsel, but there are currently no concrete plans on how, or even whether, Ubuntu will change in response," said Jon Seager, VP Engineering at Canonical. "The recent mailing list post is an informal conversation among Ubuntu community members, not an announcement. While the discussion contains potentially useful ideas, none have been adopted or committed to by Canonical."
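For context, a D-Bus interface like the one Rainbolt proposed is just a named contract that any application can query over the session bus. The stub below is purely hypothetical -- the method name and its fail-closed semantics are my assumptions, since the proposal has not been adopted and no implementation exists:

```python
# Hypothetical sketch of a provider for the proposed (not adopted)
# org.freedesktop.AgeVerification1 interface. A real service would register
# on the D-Bus session bus (e.g. via dbus-next or GDBus); this stub models
# only a plausible call surface. IsAdult() and its behavior are assumptions.

class AgeVerification1Stub:
    INTERFACE = "org.freedesktop.AgeVerification1"

    def __init__(self, declared_adult=None):
        # None: the user has made no age declaration at all.
        self._declared_adult = declared_adult

    def IsAdult(self):
        # Fail closed: no declaration is treated the same as "not an adult",
        # so applications gate restricted content conservatively by default.
        return self._declared_adult is True

# An application would make one such call before showing restricted content.
print(AgeVerification1Stub(declared_adult=True).IsAdult())  # True
print(AgeVerification1Stub().IsAdult())                     # False
```

Leaving the implementation to each distro, as the proposal suggests, is what would let Ubuntu, Fedora, or Mint satisfy differing jurisdictions behind one stable interface name.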

Similar talks are underway in the Fedora and Linux Mint communities about this issue in case the California Digital Age Assurance Act law and similar laws from other states and countries are to be enforced. At the same time, other OS developers, like MidnightBSD, have decided to exclude California from desktop use entirely.

Slashdot contacted Hayley Tsukayama, Director of State Affairs at EFF, who says their organization "has long warned against age-gating the internet. Such mandates strike at the foundation of the free and open internet."

And there's another problem. "Many of these mandates imagine technology that does not currently exist." Such poorly thought-out mandates, in truth, cannot achieve the purported goal of age verification. Often, they are easy to circumvent and many also expose consumers to real data breach risk.

These burdens fall particularly heavily on developers who aren't at large, well-resourced companies, such as those developing open-source software. Not recognizing the diversity of software development when thinking about liability in these proposals effectively limits software choices — and at a time when computational power is being rapidly concentrated in the hands of the few. That harms users' and developers' right to free expression, their digital liberties, privacy, and ability to create and use open platforms...

Rather than creating age gates, a well-crafted privacy law that empowers all of us — young people and adults alike — to control how our data is collected and used would be a crucial step in the right direction.

IOS

Apple Blocks US Users From Downloading ByteDance's Chinese Apps (wired.com)

An anonymous reader quotes a report from Wired: While TikTok operates in the United States under new ownership, Apple has deployed technical restrictions to block iOS users in the United States from downloading other apps made by the video platform's Chinese parent organization ByteDance. ByteDance owns a vast array of different apps spanning social media, entertainment, artificial intelligence, and other sectors. The leading one is Douyin, the Chinese version of TikTok, which has over 1 billion monthly active users. While most of those users reside in China, iPhone owners around the world have traditionally been able to download these apps from anywhere without using a VPN, as long as they have a valid App Store account registered in China.

That's not true anymore. Starting in late January, iPhone users in the U.S. with Chinese App Store accounts began reporting that they were encountering new obstacles when they tried to download apps developed by ByteDance. WIRED has confirmed that even with a valid Chinese App Store account, downloading or updating a ByteDance-owned Chinese app is blocked on Apple devices located in the United States. Instead, a pop-up window appears that says, "This app is unavailable in the country or region you're in." The restriction appears to apply only to ByteDance-owned apps and not those developed by other Chinese companies.

The timing and technical specifics suggest the restriction is related to the deal TikTok agreed to in January to divest Chinese ownership of its U.S. operations. The agreement was the result of the so-called TikTok ban law passed by Congress in 2024, which also barred companies like Apple and Google from distributing other apps majority-owned by ByteDance. The Protecting Americans from Foreign Adversary Controlled Applications Act states that no company can "distribute, maintain, or update" any app majority-controlled by ByteDance "within the land or maritime borders of the United States."

The law was primarily aimed at TikTok, which has more than 100 million users in the U.S. and had been the subject of years of debate in Washington over whether its Chinese ownership posed a national security risk. But ByteDance also has dozens of other apps that at some point were also removed from Apple's and Google's app stores in the U.S. Now it seems the scope of impact has reached even more apps that are not technically designed for U.S. audiences, such as Douyin, the AI chatbot Doubao, and the fiction reading platform Fanqie Novel.

Open Source

Norway's Consumer Council Calls for Right to Repair and Antitrust Enforcement - and Mocks 'Enshittification' (forbrukerradet.no)

The Norwegian Consumer Council, a government-funded organization advocating for consumers' rights, released a report on the trend of "enshittification" in digital consumer goods and services, suggesting ways for consumers to resist. But it has also dramatized the problem with a funny four-minute video about a man whose job calls for him to make things shitty for people.

"It's not just your imagination. Digital services are getting worse," the video concludes — before adding that "Luckily, it doesn't have to be this way." The Consumer Council's announcement recommends:
  • Stronger rights for consumers to control, adapt, repair, and alter their products and services,
  • Interoperability, data portability, and decentralisation as the norm, so the threshold for moving to different services becomes as low as possible,
  • Deterrent and vigorous enforcement of competition law, so that Big Tech companies are not allowed to indiscriminately acquire start-ups, competitors or otherwise steer the market to their advantage,
  • Better financing of initiatives to build, maintain or improve alternative digital services and infrastructure based on open source code and open protocols,
  • Reduced public sector dependence on big tech, to regain control and to contribute to a functioning market for service providers that respect fundamental rights,
  • Deterrent and consistent enforcement of other laws, including consumer and data protection law.

The Norwegian Consumer Council is also joining 58 organisations and experts in a letter asking the Norwegian government to rebalance power by resourcing enforcement and by prioritizing the procurement of services based on open source code. And "Our sister organisations are sending similar letters to their own governments in 12 countries."

They're also sending a second letter to the European Commission with 29 civil society organisations (including the EFF and Amnesty International) warning about the risks of deregulation and calling for reducing dependency on big tech.

Thanks to Slashdot reader DeanonymizedCoward for sharing the news.


Businesses

Fintech CEO and Forbes 30 Under 30 Alum Charged for Alleged Fraud (techcrunch.com)

An anonymous reader shares a report: By now, the Forbes 30 Under 30 list has become more than a little notorious for the number of entrants who go on to be charged with fraud.[...] Gokce Guven, a 26-year-old Turkish national and the founder and CEO of fintech startup Kalder, was charged last week with securities fraud, wire fraud, visa fraud, and aggravated identity theft. The New York-based fintech startup -- which uses the "Turn Your Rewards into [a] Revenue Engine" tagline -- says it can help companies create and monetize individual rewards programs. The company was founded in 2022, and offers participating firms the opportunity to earn ongoing revenue streams via partner affiliate sales, Axios previously reported.

Guven was featured in last year's Forbes 30 Under 30 list. The magazine notes in the writeup that Guven's clients included major chocolatier Godiva and the International Air Transport Association, the trade organization that represents a majority of the world's airlines. Kalder also claims to have enjoyed the backing of a number of prominent VC firms. The U.S. Department of Justice alleges that, during Kalder's seed round in April of 2024, Guven managed to raise $7 million from more than a dozen investors after presenting a pitch deck that was rife with false information.

According to the government, Kalder's pitch deck claimed that there were 26 brands "using Kalder" and another 53 brands in "live freemium." However, officials say that, in reality, Kalder had, in many cases, only been offering heavily discounted pilot programs to many of those companies. Other brands "had no agreement with Kalder whatsoever -- not even for free services," officials said in a press release announcing the indictment. The pitch deck also "falsely reported that Kalder's recurring revenue had steadily grown month over month since February 2023 and that by March 2024, Kalder had reached $1.2 million in annual recurring revenue." The government also accuses Guven of having kept two separate sets of financial books.

Mozilla

Mozilla is Building an AI 'Rebel Alliance' To Take on Industry Heavyweights OpenAI, Anthropic (cnbc.com)

Mozilla, the nonprofit organization behind the Firefox browser that has spent two decades battling tech giants over control of the internet, is now turning its attention to AI and deploying roughly $1.4 billion in reserves to fund what president Mark Surman calls a "rebel alliance" of startups focused on AI safety, transparency and governance.

The organization released a report Tuesday outlining its strategy to counter the growing dominance of OpenAI and Anthropic, which have raised more than $60 billion and $30 billion respectively from investors and now command valuations of $500 billion and $350 billion. Mozilla Ventures, a fund launched in 2022 with an initial $35 million commitment, has invested in more than 55 companies to date and is exploring raising additional capital.

Surman, who runs the organization from a farm outside Toronto, acknowledged the financial mismatch but said Mozilla is playing the long game. By 2028, he wants Mozilla to be funding a "mainstream" open-source AI ecosystem for developers. The effort faces headwinds from the Trump administration, which has criticized AI safety efforts as "woke AI" and signed an executive order establishing a task force to challenge state AI regulations.
Space

Is Russia Developing an Anti-Satellite Weapon to Target Starlink? (apnews.com)

An anonymous reader shared this report from the Associated Press: Two NATO-nation intelligence services suspect Russia is developing a new anti-satellite weapon to target Elon Musk's Starlink constellation with destructive orbiting clouds of shrapnel, with the aim of reining in Western space superiority that has helped Ukraine on the battlefield. Intelligence findings seen by The Associated Press say the so-called "zone-effect" weapon would seek to flood Starlink orbits with hundreds of thousands of high-density pellets, potentially disabling multiple satellites at once but also risking catastrophic collateral damage to other orbiting systems.

Analysts who haven't seen the findings say they doubt such a weapon could work without causing uncontrollable chaos in space for companies and countries, including Russia and its ally China, that rely on thousands of orbiting satellites for communications, defense and other vital needs. Such repercussions, including risks to its own space systems, could steer Moscow away from deploying or using such a weapon, analysts said. "I don't buy it. Like, I really don't," said Victoria Samson, a space-security specialist at the Secure World Foundation who leads the Colorado-based nongovernmental organization's annual study of anti-satellite systems. "I would be very surprised, frankly, if they were to do something like that." [Later they suggested the research might just be experimental.]

But the commander of the Canadian military's Space Division, Brig. Gen. Christopher Horner, said such Russian work cannot be ruled out in light of previous U.S. allegations that Russia also has been pursuing an indiscriminate nuclear, space-based weapon. "I can't say I've been briefed on that type of system. But it's not implausible," he said... The French military's Space Command said in a statement to the AP that it could not comment on the findings but said, "We can inform you that Russia has, in recent years, been multiplying irresponsible, dangerous, and even hostile actions in space."

The article also points out that this month Russia "said it has fielded a new ground-based missile system, the S-500, which is capable of hitting low-orbit targets..."
The Internet

The Battle Over Africa's Great Untapped Resource: IP Addresses (msn.com)

In his mid-20s, Lu Heng "got an idea that has made him a lot richer," writes the Wall Street Journal.

He scooped up 10 million unused IP addresses, mostly from Africa, and leases them to companies, mostly outside Africa, "that need them badly." [A]round half of internet traffic continues to use IPv4, because changing to IPv6 can be expensive and complex and many older devices still need IPv4. Companies including Amazon, Microsoft and Google still want IPv4 addresses because their cloud-hosting businesses need them as bridges between the IPv4 and IPv6 worlds... Africa, which has been slower to develop internet infrastructure than the rest of the world, is the only region that still has some of the older addresses to dole out... He searches for IPv4 addresses that aren't being used — by ISPs or anyone else that holds them — and uses his Hong Kong-based company, Larus, to lease them out to others.
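The "bridge" role is concrete: IPv6-only software can refer to IPv4 peers through IPv4-mapped IPv6 addresses, but the IPv4 side of that bridge still needs real, routable IPv4 space -- hence the continued demand despite a fixed supply. Python's standard library illustrates both points:

```python
import ipaddress

# IPv4-mapped IPv6 addresses (RFC 4291) let IPv6 software represent IPv4
# peers, which is one reason dual-stack cloud hosts still need IPv4 space.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)  # 192.0.2.1

# The entire IPv4 space is only 2**32 addresses, which is why large unused
# allocations are valuable in the first place.
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)  # 4294967296
```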

In 2013, Lu registered a new company in the Seychelles, an African archipelago in the Indian Ocean, to apply for IP addresses from Africa's internet registry, called the African Network Information Centre, or Afrinic. Between 2013 and 2016, Afrinic granted that company, Cloud Innovation, 6.2 million IPv4 addresses. That's more addresses than are assigned to Nigeria, Africa's most populous nation. A single IPv4 address can be worth about $50 on its transfer to a company like Larus, which leases it onward for around 5% to 10% of that value annually. Larus and its affiliate companies, Lu said, control just over 10 million IPv4 addresses. The architects of the internet don't appear to have contemplated the possibility that anyone would seek to monetize IP addresses...
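Using only the figures quoted above -- roughly $50 per address, leased at 5% to 10% of that value per year, across the roughly 10 million addresses Lu says Larus and its affiliates control -- the implied economics work out as follows:

```python
# Back-of-the-envelope arithmetic using the article's own figures.
ADDRESSES = 10_000_000              # addresses Larus and affiliates control
VALUE_PER_ADDRESS = 50              # approximate USD transfer value
LEASE_LOW, LEASE_HIGH = 0.05, 0.10  # annual lease rate as share of value

asset_value = ADDRESSES * VALUE_PER_ADDRESS
annual_low = int(asset_value * LEASE_LOW)
annual_high = int(asset_value * LEASE_HIGH)

print(f"Implied asset value:  ${asset_value:,}")  # $500,000,000
print(f"Implied lease income: ${annual_low:,}-${annual_high:,} per year")
# $25,000,000-$50,000,000 per year
```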

Lu's activities triggered a showdown with Africa's internet registry. In 2020, after what it said was an internal review, Afrinic sent letters to Lu and others seeking to reclaim the IP addresses they held. In Lu's case, Afrinic said he shouldn't be using the addresses outside Africa. Lu responded that he wasn't violating rules in place when he got the addresses... After some back-and-forth, Lu sued Afrinic in Mauritius to keep his allocated addresses, eventually filing dozens of lawsuits... One of the lawsuits that Lu filed in Mauritius prompted a court there to freeze Afrinic's bank accounts in July 2021, effectively paralyzing the organization and eventually sending it into receivership. The receivership choked off distributions of new IPv4 addresses, leaving the continent's service providers struggling to expand capacity...

In September, Afrinic elected a new board. Since then, some internet-service providers have been granted IPv4 addresses.

AI

Advocacy Groups Urge Parents To Avoid AI Toys This Holiday Season

An anonymous reader quotes a report from the Associated Press: They're cute, even cuddly, and promise learning and companionship -- but artificial intelligence toys are not safe for kids, according to children's and consumer advocacy groups urging parents not to buy them during the holiday season. These toys, marketed to kids as young as 2 years old, are generally powered by AI models that have already been shown to harm children and teenagers, such as OpenAI's ChatGPT, according to an advisory published Thursday by the children's advocacy group Fairplay and signed by more than 150 organizations and individual experts such as child psychiatrists and educators.

"The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm," Fairplay said. AI toys, made by companies including Curio Interactive and Keyi Technologies, are often marketed as educational, but Fairplay says they can displace important creative and learning activities. They promise friendship but disrupt children's relationships and resilience, the group said. "What's different about young children is that their brains are being wired for the first time and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters," said Rachel Franz, director of Fairplay's Young Children Thrive Offline Program. Because of this, she added, the trust young children are placing in these toys can exacerbate the types of harms older children are already experiencing with AI chatbots.

A separate report Thursday by Common Sense Media and psychiatrists at Stanford University's medical school warned teenagers against using popular AI chatbots as therapists. Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for years. They just weren't as advanced as they are today. A decade ago, during an emerging fad of internet-connected toys and AI speech recognition, the group helped lead a backlash against Mattel's talking Hello Barbie doll that it said was recording and analyzing children's conversations. This time, though AI toys are mostly sold online and more popular in Asia than elsewhere, Franz said some have started to appear on store shelves in the U.S. and more could be on the way. "Everything has been released with no regulation and no research, so it gives us extra pause when all of a sudden we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially putting out these products," Franz said.
Last week, consumer advocates at U.S. PIRG called out the trend of buying AI toys in its annual "Trouble in Toyland" report. This year, the organization tested four toys that use AI chatbots. "We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls," the report said.
Businesses

Who is OpenAI's Auditor? (ft.com)

OpenAI won't say who audits its books. The company, which projects it will hit an ARR of $20 billion this year and is valued at $500 billion, has committed to spending about $1.4 trillion on data centers over the next decade. It accounts for roughly two-thirds of unfulfilled contracts at Oracle and two-fifths at CoreWeave. Microsoft alone holds around $375 billion in unfulfilled contracts with OpenAI.

Reuters reported the company may target a $1 trillion valuation for a potential IPO in coming years. Most companies at this scale use one of the Big Four accounting firms: Deloitte, EY, KPMG or PwC. OpenAI declined to comment to Financial Times. A person close to the organization told the publication the company has "an industry standard audit with one of the Big Four firms." The company's latest Form 990 filing lists Fontanello, Duffield, & Otake -- a small San Francisco accountancy firm -- as the paid preparer. The form does say an independent accountant audited the statements.

Michael Burry, last night: "Can anyone name [OpenAI's] auditor?"
Wikipedia

Wikipedia Urges AI Companies To Use Its Paid API, and Stop Scraping (techcrunch.com)

Wikipedia on Monday laid out a simple plan to ensure its website continues to be supported in the AI era, despite its declining traffic. From a report: In a blog post, the Wikimedia Foundation, the organization that runs the popular online encyclopedia, called on AI developers to use its content "responsibly" by ensuring its contributions are properly attributed and that content is accessed through its paid product, the Wikimedia Enterprise platform.

The opt-in, paid product allows companies to use Wikipedia's content at scale without "severely taxing Wikipedia's servers," the Wikimedia Foundation blog post explains. In addition, the product's paid nature allows AI companies to support the organization's nonprofit mission. While the post doesn't go so far as to threaten penalties or any sort of legal action for use of its material through scraping, Wikipedia recently noted that AI bots had been scraping its website while trying to appear human.

Python

Python Foundation Donations Surge After Rejecting Grant - But Sponsorships Still Needed (blogspot.com)

After the Python Software Foundation rejected a $1.5 million grant because it restricted DEI activity, "a flood of new donations followed," according to a new report. By Friday they'd raised over $157,000, including 295 new Supporting Members paying an annual $99 membership fee, says PSF executive director Deb Nicholson.

"It doesn't quite bridge the gap of $1.5 million, but it's incredibly impactful for us, both financially and in terms of feeling this strong groundswell of support from the community." Could that same security project still happen if new funding materializes? The PSF hasn't entirely given up. "The PSF is always looking for new opportunities to fund work benefiting the Python community," Nicholson told me in an email last week, adding pointedly that "we have received some helpful suggestions in response to our announcement that we will be pursuing." And even as things stand, the PSF sees itself as "always developing or implementing the latest technologies for protecting PyPI project maintainers and users from current threats," and it plans to continue with that commitment.
The Python Software Foundation was "astounded and deeply appreciative at the outpouring of solidarity in both words and actions," their executive director wrote in a new blog post this week, saying the show of support "reminds us of the community's strength."

But that post also acknowledges the reality that the Python Software Foundation's yearly revenue and assets (including contributions from major donors) "have declined, and costs have increased,..." Historically, PyCon US has been a source of revenue for the PSF, enabling us to fund programs like our currently paused Grants Program... Unfortunately, PyCon US has run at a loss for three years — and not from a lack of effort from our staff and volunteers! Everyone has been working very hard to find areas where we can trim costs, but even with those efforts, inflation continues to surge, and changing U.S. and economic conditions have reduced our attendance... Because we have so few expense categories (the vast majority of our spending goes to running PyCon US, the Grants Program, and our small 13-member staff), we have limited "levers to pull" when it comes to budgeting and long-term sustainability...
While Python usage continues to surge, "corporate investment back into the language and the community has declined overall. The PSF has longstanding sponsors and partners that we are ever grateful for, but signing on new corporate sponsors has slowed." (They're asking employees at Python-using companies to encourage sponsorships.) We have been seeking out alternate revenue channels to diversify our income, with some success and some challenges. PyPI Organizations offers paid features to companies (PyPI features are always free to community groups) and has begun bringing in monthly income. We've also been seeking out grant opportunities where we find good fits with our mission.... We currently have more than six months of runway (as opposed to our preferred 12 months+ of runway), so the PSF is not at immediate risk of having to make more dramatic changes, but we are on track to face difficult decisions if the situation doesn't shift in the next year.

Based on all of this, the PSF has been making changes and working on multiple fronts to combat losses and work to ensure financial sustainability, in order to continue protecting and serving the community in the long term. Some of these changes and efforts include:

— Pursuing new sponsors, specifically in the AI industry and the security sector
— Increasing sponsorship package pricing to match inflation
— Making adjustments to reduce PyCon US expenses
— Pursuing funding opportunities in the US and Europe
— Working with other organizations to raise awareness
— Strategic planning, to ensure we are maximizing our impact for the community while cultivating mission-aligned revenue channels

The PSF's end-of-year fundraiser effort is usually run by staff based on their capacity, but this year we have assembled a fundraising team that includes Board members to put some more "oomph" behind the campaign. We'll be doing our regular fundraising activities; we'll also be creating a unique webpage, piloting temporary and VERY visible pop-ups to python.org and PyPI.org, and telling more stories from our Grants Program recipients...

Keep your eyes on the PSF Blog, the PSF category on Discuss, and our social media accounts for updates and information as we kick off the fundraiser this month. Your boosts of our posts and your personal shares of "why I support the PSF" stories will make all the difference in our end-of-year fundraiser. If this post has you all fired up to personally support the future of Python and the PSF right now, we always welcome new PSF Supporting Members and donations.

AI

Common Crawl Criticized for 'Quietly Funneling Paywalled Articles to AI Developers' (msn.com) 42

For more than a decade, the nonprofit Common Crawl "has been scraping billions of webpages to build a massive archive of the internet," notes the Atlantic, making it freely available for research. "In recent years, however, this archive has been put to a controversial purpose: AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon have used it to train large language models.

"In the process, my reporting has found, Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this — as well as masking the actual contents of its archives..." Common Crawl's website states that it scrapes the internet for "freely available content" without "going behind any 'paywalls.'" Yet the organization has taken articles from major news websites that people normally have to pay for — allowing AI companies to train their LLMs on high-quality journalism for free. Meanwhile, Common Crawl's executive director, Rich Skrenta, has publicly made the case that AI models should be able to access anything on the internet. "The robots are people too," he told me, and should therefore be allowed to "read the books" for free. Multiple news publishers have requested that Common Crawl remove their articles to prevent exactly this use. Common Crawl says it complies with these requests. But my research shows that it does not.

I've discovered that pages downloaded by Common Crawl have appeared in the training data of thousands of AI models. As Stefan Baack, a researcher formerly at Mozilla, has written, "Generative AI in its current form would probably not be possible without Common Crawl." In 2020, OpenAI used Common Crawl's archives to train GPT-3. OpenAI claimed that the program could generate "news articles which human evaluators have difficulty distinguishing from articles written by humans," and in 2022, an iteration on that model, GPT-3.5, became the basis for ChatGPT, kicking off the ongoing generative-AI boom. Many different AI companies are now using publishers' articles to train models that summarize and paraphrase the news, and are deploying those models in ways that steal readers from writers and publishers.

Common Crawl maintains that it is doing nothing wrong. I spoke with Skrenta twice while reporting this story. During the second conversation, I asked him about the foundation archiving news articles even after publishers have asked it to stop. Skrenta told me that these publishers are making a mistake by excluding themselves from "Search 2.0" — referring to the generative-AI products now widely being used to find information online — and said that, anyway, it is the publishers that made their work available in the first place. "You shouldn't have put your content on the internet if you didn't want it to be on the internet," he said. Common Crawl doesn't log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you're a subscriber and hides the content if you're not. Common Crawl's scraper never executes that code, so it gets the full articles.
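The client-side paywall mechanism described above can be sketched with a toy example. This is a hypothetical illustration (the page, script, and `isSubscriber` check are invented, not taken from any real publisher): the full article text ships in the initial HTML, and a script that only a browser would execute hides it afterward. A scraper that parses the HTML without running JavaScript therefore sees everything:

```python
from html.parser import HTMLParser

# Toy page: the full article text is present in the initial HTML; a
# <script> (which a browser would run after load) blanks it out for
# non-subscribers. "isSubscriber" is a made-up placeholder.
PAGE = """
<html><body>
  <div id="article">Full article text, normally behind the paywall.</div>
  <script>
    if (!isSubscriber()) document.getElementById('article').innerHTML = '';
  </script>
</body></html>
"""

class ArticleExtractor(HTMLParser):
    """Collects text outside <script> tags, like a non-JS scraper would."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        # Skip script bodies; keep visible text.
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

parser = ArticleExtractor()
parser.feed(PAGE)
print(" ".join(parser.chunks))
```

Because the paywall script is never executed, the article text survives extraction intact, which matches the behavior the article attributes to Common Crawl's scraper.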

Thus, by my estimate, the foundation's archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper's, and The Atlantic.... A search for nytimes.com in any crawl from 2013 through 2022 shows a "no captures" result, when in fact there are articles from NYTimes.com in most of these crawls.

"In the past year, Common Crawl's CCBot has become the scraper most widely blocked by the top 1,000 websites," the article points out...
United States

Why Manufacturing's Last Boom Will Be Hard To Repeat (msn.com) 92

American manufacturing's postwar boom from the 1940s through the 1970s resulted from conditions that cannot be recreated, a Wall Street Journal story argues. Global competitors had been destroyed by war. Energy was cheap. Unions could demand concessions without fearing job losses to foreign rivals.

Strikes were frequent in steel, auto, trucking, rubber and coal mining. That relentless pressure from an organized working class raised real wages and created fringe benefits including health insurance and retirement pay. Government support for unions kept executive salaries at just a few times median income. Stock buybacks were illegal or frowned upon. President Eisenhower declared at the 1956 dedication of the AFL-CIO national headquarters that "Labor is the United States."

The system began unraveling by the mid-1960s. The Vietnam War drained federal coffers. Inflation accelerated as government deficits exploded. Nixon abandoned the gold standard in 1971, unleashing currency volatility. The 1973 OPEC oil embargo quadrupled energy prices. Foreign competition returned from Japan, Korea and West Germany. American companies carried mounting legacy costs like pensions that discouraged investment in upgrades and research.

Milton Friedman declared in a 1970 New York Times essay that the social responsibility of business is to increase its profits. Clinton signed NAFTA in 1993 and championed the World Trade Organization in 1995. Bethlehem Steel employed around 150,000 people in the mid-1950s. The company filed for bankruptcy in 2001. Its former hometown plant in Bethlehem, Pa., is now a casino.
Networking

Are Network Security Devices Endangering Orgs With 1990s-Era Flaws? (csoonline.com) 57

Critics question why basic flaws like buffer overflows, command injections, and SQL injections "remain prevalent in mission-critical codebases maintained by companies whose core business is cybersecurity," writes CSO Online. Benjamin Harris, CEO of cybersecurity/penetration testing firm watchTowr tells them that "these are vulnerability classes from the 1990s, and security controls to prevent or identify them have existed for a long time. There is really no excuse." Enterprises have long relied on firewalls, routers, VPN servers, and email gateways to protect their networks from attacks. Increasingly, however, these network edge devices are becoming security liabilities themselves... Google's Threat Intelligence Group tracked 75 exploited zero-day vulnerabilities in 2024. Nearly one in three targeted network and security appliances, a strikingly high rate given the range of IT systems attackers could choose to exploit. That trend has continued this year, with similar numbers in the first 10 months of 2025, targeting vendors such as Citrix NetScaler, Ivanti, Fortinet, Palo Alto Networks, Cisco, SonicWall, and Juniper. Network edge devices are attractive targets because they are remotely accessible, fall outside endpoint protection monitoring, contain privileged credentials for lateral movement, and are not integrated into centralized logging solutions...
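To illustrate one of the 1990s-era flaw classes named above, here is a hypothetical sketch of a command injection and its fix. The `ping` diagnostic endpoint is invented for illustration, not taken from any actual appliance; the vulnerable/safe contrast is the standard one:

```python
import subprocess

# VULNERABLE pattern (illustrative only): building a shell command
# string from user input. Input like "8.8.8.8; cat /etc/passwd"
# injects a second command that the shell happily runs.
def ping_unsafe(host: str) -> str:
    result = subprocess.run(f"ping -c 1 {host}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

# Safer pattern: validate the input, then pass arguments as a list so
# no shell ever interprets the host string.
def ping_safe(host: str) -> str:
    if not host or not all(c.isalnum() or c in ".-" for c in host):
        raise ValueError(f"invalid host: {host!r}")
    result = subprocess.run(["ping", "-c", "1", host],
                            capture_output=True, text=True)
    return result.stdout
```

The safe version rejects `"8.8.8.8; cat /etc/passwd"` outright; static analyzers and code review checklists have flagged the `shell=True`-with-interpolation pattern for decades, which is the critics' point.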

[R]esearchers have reported vulnerabilities in these systems for over a decade with little attacker interest beyond isolated incidents. That shifted over the past few years with a rapid surge in attacks, making compromised network edge devices one of the top initial access vectors into enterprise networks for state-affiliated cyberespionage groups and ransomware gangs. The COVID-19 pandemic contributed to this shift, as organizations rapidly expanded remote access capabilities by deploying more VPN gateways, firewalls, and secure web and email gateways to accommodate work-from-home mandates. The declining success rate of phishing is another factor... "It is now easier to find a 1990s-tier vulnerability in a border device where Endpoint Detection and Response typically isn't deployed, exploit that, and then pivot from there" [says watchTowr CEO Harris]...

Harris of watchTowr doesn't want to minimize the engineering effort it takes to build a secure system. But he feels many of the vulnerabilities discovered in the past two years should have been caught with automatic code analysis tools or code reviews, given how basic they have been. Some VPN flaws were "trivial to the point of embarrassing for the vendor," he says, while even the complex ones should have been caught by any organization seriously investing in product security... Another problem? These appliances have a lot of legacy code, some of it 10 years old or more.

Attackers may need to chain together multiple hard-to-find vulnerabilities across multiple components, the article acknowledges. And "It's also possible that attack campaigns against network-edge devices are becoming more visible to security teams because they are looking into what's happening on these appliances more than they did in the past..."

The article ends with reactions from several vendors of network edge security devices.

Thanks to Slashdot reader snydeq for sharing the article.
IT

Some Startups Are Demanding 12-Hour Days, Six Days a Week from Workers (msn.com) 151

The Washington Post reports on 996, "a term popularized in China that refers to a rigid work schedule in which people work from 9 a.m. to 9 p.m., six days a week..." As the artificial intelligence race heats up, many start-ups in Silicon Valley and New York are promoting hardcore culture as a way of life, pushing the limits of work hours, demanding that workers move fast to be first in the market. Some are even promoting 996 as a virtue in the hiring process and keeping "grind scores" of companies... Whoever builds first in AI will capture the market, and the window of opportunity is two to three years, "so you better run faster than everyone else," said Inaki Berenguer, managing partner of venture-capital firm LifeX Ventures.

At San Francisco-based AI start-up Sonatic, the grind culture also allows for meal, gym and pickleball time, said Kinjal Nandy, its CEO. Nandy recently posted a job opening on X that requires in-person work seven days a week. He said working 10-hour days sounds like a lot but the company also offers its first hires perks such as free housing in a hacker house, food delivery credits and a free subscription to the dating service Raya... Mercor, a San Francisco-based start-up that uses AI to match people to jobs, recently posted an opening for a customer success engineer, saying that candidates should have a willingness to work six days a week, and it's not negotiable. "We know this isn't for everyone, so we want to put it up top," the listing reads.

Being in-person rather than remote is a requirement at some start-ups. AI start-up StarSling had two engineering job descriptions that required six days a week of in-person work. In a job description for an engineer, Rilla, an AI company in New York, said candidates should not work at the company if they're not excited about working about 70 hours a week in person. One venture capitalist even started tracking "grind scores." Jared Sleeper, a partner at New York-based venture capital firm Avenir, recently ranked public software companies' "grind score" in a post on X, which went viral. Using data from Glassdoor, it ranks the percentage of employees who have a positive outlook for the company compared with their views on work-life balance.

"At Google's AI division, cofounder Sergey Brin views 60 hours per week as the 'sweet spot' for productivity," notes the Independent: Working more than 55 hours a week, compared with a standard 35-40-hour week, is linked to a 35 percent higher risk of stroke and a 17 percent higher risk of death from heart disease, according to the World Health Organization. Productivity also suffers. A British study shows that working beyond 60 hours a week can reduce overall output, slow cognitive performance, and impair tasks ranging from call handling to problem-solving.

Shorter workweeks, in contrast, appear to boost productivity. Microsoft Japan saw a roughly 40% increase in output after adopting a four-day work week. In a UK trial, 61 companies that tested a four-day schedule reported revenue gains, with 92 percent choosing to keep the policy, according to Bloomberg.

Microsoft

Extortion and Ransomware Drive Over Half of Cyberattacks — Sometimes Using AI, Microsoft Finds (microsoft.com) 23

Microsoft said in a blog post this week that "over half of cyberattacks with known motives were driven by extortion or ransomware... while attacks focused solely on espionage made up just 4%."

And Microsoft's annual digital threats report found operations expanding even more through AI, with cybercriminals "accelerating malware development and creating more realistic synthetic content, enhancing the efficiency of activities such as phishing and ransomware attacks." [L]egacy security measures are no longer enough; we need modern defenses leveraging AI and strong collaboration across industries and governments to keep pace with the threat...

Over the past year, both attackers and defenders harnessed the power of generative AI. Threat actors are using AI to boost their attacks by automating phishing, scaling social engineering, creating synthetic media, finding vulnerabilities faster, and creating malware that can adapt itself... For defenders, AI is also proving to be a valuable tool. Microsoft, for example, uses AI to spot threats, close detection gaps, catch phishing attempts, and protect vulnerable users. As both the risks and opportunities of AI rapidly evolve, organizations must prioritize securing their AI tools and training their teams...

Amid the growing sophistication of cyber threats, one statistic stands out: more than 97% of identity attacks are password attacks. In the first half of 2025 alone, identity-based attacks surged by 32%. That means the vast majority of malicious sign-in attempts an organization might receive come from large-scale password-guessing campaigns. Attackers get usernames and passwords ("credentials") for these bulk attacks largely from credential leaks. However, credential leaks aren't the only place where attackers can obtain credentials. This year, we saw a surge in the use of infostealer malware by cybercriminals...

Luckily, the solution to identity compromise is simple. The implementation of phishing-resistant multifactor authentication (MFA) can stop over 99% of this type of attack even if the attacker has the correct username and password combination.
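The bulk password-guessing pattern described above has a recognizable signature in sign-in logs: a password spray tries one or two common passwords against many accounts, so a single source touching many distinct usernames with only a few failures each stands out. The following is a hypothetical detection sketch (the thresholds, event shape, and function name are invented, not Microsoft's):

```python
from collections import Counter, defaultdict

def spray_suspects(failed_logins, min_users=20, max_tries_per_user=3):
    """Flag source IPs whose failed sign-ins look like a password spray.

    failed_logins: iterable of (source_ip, username) tuples, one per
    failed sign-in. An IP is flagged when it has failed against at
    least `min_users` distinct accounts while trying each account no
    more than `max_tries_per_user` times -- the opposite shape of a
    brute-force attack, which hammers one account with many guesses.
    """
    by_ip = defaultdict(Counter)
    for ip, user in failed_logins:
        by_ip[ip][user] += 1
    return [ip for ip, users in by_ip.items()
            if len(users) >= min_users
            and max(users.values()) <= max_tries_per_user]

# Usage: 25 accounts, one failure each, from one IP -> spray-like.
events = [("203.0.113.9", f"user{i}") for i in range(25)]
print(spray_suspects(events))  # -> ['203.0.113.9']
```

Detection only narrows the window, though; as the post notes, phishing-resistant MFA is what actually stops these attacks even when the guessed password is correct.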

"Security is not only a technical challenge but a governance imperative..." Microsoft adds in their blog post. "Governments must build frameworks that signal credible and proportionate consequences for malicious activity that violates international rules." (The report also found that America is the #1 most-targeted country — and that many U.S. companies have outdated cyber defenses.)

But while "most of the immediate attacks organizations face today come from opportunistic criminals looking to make a profit," Microsoft writes that nation-state threats "remain a serious and persistent threat." More details from the Associated Press: Russia, China, Iran and North Korea have sharply increased their use of artificial intelligence to deceive people online and mount cyberattacks against the United States, according to new research from Microsoft. This July, the company identified more than 200 instances of foreign adversaries using AI to create fake content online, more than double the number from July 2024 and more than ten times the number seen in 2023.
Examples of foreign espionage cited by the article:
  • China is continuing its broad push across industries to conduct espionage and steal sensitive data...
  • Iran is going after a wider range of targets than ever before, from the Middle East to North America, as part of broadening espionage operations...
  • "[O]utside of Ukraine, the top ten countries most affected by Russian cyber activity all belong to the North Atlantic Treaty Organization (NATO) — a 25% increase compared to last year."
  • North Korea remains focused on revenue generation and espionage...

There was one especially worrying finding. The report found that critical public services are often targeted, partly because their tight budgets limit their incident response capabilities, "often resulting in outdated software.... Ransomware actors in particular focus on these critical sectors because of the targets' limited options. For example, a hospital must quickly resolve its encrypted systems, or patients could die, potentially leaving no other recourse but to pay."

