The Internet

Cloudflare Explains Its Worst Outage Since 2019

Cloudflare suffered its worst network outage in six years on Tuesday, beginning at 11:20 UTC. The disruption prevented the content delivery network from routing traffic for roughly three hours. The failure, writes Cloudflare in a blog post, originated from a database permissions change deployed at 11:05 UTC. The modification altered how a database query returned information about bot detection features. The query began returning duplicate entries. A configuration file used to identify automated traffic doubled in size and spread across the network's machines. Cloudflare's traffic routing software reads this file to distinguish bots from legitimate users. The software had a built-in limit of 200 bot detection features. The enlarged file contained more than 200 entries. The software crashed when it encountered the unexpected file size.
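The failure mode described above, a consumer with a hard entry limit fed a configuration file doubled by duplicate query rows, can be sketched roughly like this (the names and structure are illustrative, not Cloudflare's actual code):

```python
MAX_FEATURES = 200  # the routing software's built-in limit, per the post

def build_feature_file(rows):
    # No deduplication: if the query starts returning each row twice,
    # the generated file doubles in size.
    return [name for name, _ in rows]

def load_features(feature_file):
    # The consumer crashes rather than truncating when the file
    # exceeds its hard limit.
    if len(feature_file) > MAX_FEATURES:
        raise RuntimeError(
            f"{len(feature_file)} entries exceeds limit of {MAX_FEATURES}"
        )
    return set(feature_file)

rows = [(f"feature_{i}", i) for i in range(150)]
load_features(build_feature_file(rows))  # 150 entries: well under the limit

duplicated = rows + rows                 # permissions change: duplicate rows
try:
    load_features(build_feature_file(duplicated))
except RuntimeError as exc:
    print(exc)                           # 300 entries exceeds limit of 200
```

Note that the crash happens in the consumer, not the generator: the file is produced and propagated successfully, and only fails when every machine tries to load it.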

Users attempting to access websites behind Cloudflare's network received error messages. The outage affected multiple services. Turnstile security checks failed to load. The Workers KV storage service returned elevated error rates. Users could not log into Cloudflare's dashboard. Access authentication failed for most customers.

Engineers initially suspected a coordinated attack. The configuration file was automatically regenerated every five minutes. Database servers produced either correct or corrupted files during a gradual system update. Services repeatedly recovered and failed as different versions of the file circulated. Teams stopped generating new files at 14:24 UTC and manually restored a working version. Most traffic resumed by 14:30 UTC. All systems returned to normal at 17:06 UTC.
Microsoft

'Talking To Windows' Copilot AI Makes a Computer Feel Incompetent' (theverge.com)

Microsoft's Copilot AI assistant in Windows 11 fails to replicate the capabilities shown in the company's TV advertisements. The Verge tested Copilot Vision over a week using the same prompts featured in ads airing during NFL games. When asked to identify a HyperX QuadCast 2S microphone visible in a YouTube video -- a task successfully completed in Microsoft's ad -- Copilot gave multiple incorrect answers. The assistant identified the microphone as a first-generation HyperX QuadCast, then as a Shure SM7b on two other occasions. Copilot couldn't identify the Saturn V rocket from a PowerPoint presentation despite the words "Saturn V" appearing on screen. When asked about a cave image from Microsoft's ad, Copilot gave inconsistent responses.

About a third of the time it provided directions to find the photo in File Explorer. On two occasions it explained how to launch Google Chrome. Four times it offered advice about booking flights to Belize. The cave is Rio Secreto in Playa del Carmen, Mexico. Microsoft spokesperson Blake Manfre said "Copilot Actions on Windows, which can take actions on local files, is not yet available." He described it as "an opt-in experimental feature that will be coming soon to Windows Insiders in Copilot Labs, starting with a narrow set of use cases while we optimize model performance and learn." Copilot cannot toggle basic Windows settings like dark mode. When asked to analyze a benchmark table in Google Sheets, it "constantly misread clear-as-day scores both in the spreadsheet and in the on-page review."
Google

Google Launches Gemini 3, Its 'Most Intelligent' AI Model Yet (blog.google)

Google released Gemini 3 on Tuesday, launching its latest AI model with a breakthrough score of 1501 Elo on the LMArena Leaderboard alongside state-of-the-art performance across multiple benchmarks including 91.9% on GPQA Diamond for PhD-level reasoning and 37.5% on Humanity's Last Exam without tool usage. The model is available starting today in the Gemini app, AI Mode in Search for Google AI Pro, Google AI Studio, Vertex AI and the newly launched Google Antigravity agentic development platform. Third-party platforms including Cursor, GitHub, JetBrains, Manus, and Replit are also gaining access.

Separately, Google said AI Overviews now have 2 billion users every month. The Gemini app has topped 650 million monthly users.
The Internet

Cloudflare Outage Knocks Many Popular Websites Offline

An outage at Cloudflare that began moments ago has knocked many popular websites offline, including ChatGPT and X, according to user reports. Cloudflare says on its website: "Cloudflare is aware of, and investigating an issue which potentially impacts multiple customers. Further detail will be provided as more information becomes available."

Update: In a statement after the outage was resolved, Cloudflare's CTO said: Earlier today we failed our customers and the broader Internet when a problem in Cloudflare's network impacted large amounts of traffic that rely on us. The sites, businesses, and organizations that rely on Cloudflare depend on us being available, and I apologize for the impact that we caused.

Transparency about what happened matters, and we plan to share a breakdown with more details in a few hours. In short, a latent bug in a service underpinning our bot mitigation capability started to crash after a routine configuration change we made. That cascaded into a broad degradation to our network and other services. This was not an attack.

That issue, the impact it caused, and the time to resolution are unacceptable. Work is already underway to make sure it does not happen again, but I know it caused real pain today. The trust our customers place in us is what we value the most, and we are going to do what it takes to earn that back.
Programming

Security Researchers Spot 150,000 Function-less npm Packages in Automated 'Token Farming' Scheme (theregister.com)

An anonymous reader shared this report from The Register: Yet another supply chain attack has hit the npm registry in what Amazon describes as "one of the largest package flooding incidents in open source registry history" — but with a twist. Instead of injecting credential-stealing code or ransomware into the packages, this one is a token farming campaign.

Amazon Inspector security researchers, using a new detection rule and AI assistance, originally spotted the suspicious npm packages in late October, and, by November 7, the team had flagged thousands. By November 12, they had uncovered more than 150,000 malicious packages across "multiple" developer accounts. These were all linked to a coordinated tea.xyz token farming campaign, we're told. This is a decentralized protocol designed to reward open-source developers for their contributions using the TEA token, a utility asset used within the tea ecosystem for incentives, staking, and governance.

Unlike the spate of package poisoning incidents over recent months, this one didn't inject traditional malware into the open source code. Instead, the miscreants created a self-replicating attack, infecting the packages with code to automatically generate and publish, thus earning cryptocurrency rewards on the backs of legitimate open source developers. The code also included tea.yaml files that linked these packages to attacker-controlled blockchain wallet addresses.
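A tea.yaml manifest of the kind described is a small file committed to the package repository that claims the project's rewards for the listed wallets; it looks roughly like this (the field layout follows the tea protocol's published format, and the address is a placeholder, not one from the actual campaign):

```yaml
# Illustrative tea.yaml: routes the package's tea protocol rewards
# to the listed "code owner" wallet(s).
version: 1.0.0
codeOwners:
  - '0x0000000000000000000000000000000000000000'  # attacker-controlled wallet
quorum: 1
```

Because rewards are keyed to whichever wallets appear in this file, publishing tens of thousands of trivial packages that all point at the same addresses concentrates the payout in the attacker's hands.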

At the moment, Tea tokens have no value, points out CSO Online. "But it is suspected that the threat actors are positioning themselves to receive real cryptocurrency tokens when the Tea Protocol launches its Mainnet, where Tea tokens will have actual monetary value and can be traded..." In an interview on Friday, an executive at software supply chain management provider Sonatype, which wrote about the campaign in April 2024, told CSO that the package count has now grown to 153,000. "It's unfortunate that the worm isn't under control yet," said Sonatype CTO Brian Fox. And while this payload merely steals tokens, other threat actors are paying attention, he predicted. "I'm sure somebody out there in the world is looking at this massively replicating worm and wondering if they can ride that, not just to get the Tea tokens but to put some actual malware in there, because if it's replicating that fast, why wouldn't you?"

When Sonatype wrote about the campaign just over a year ago, it found a mere 15,000 packages that appeared to come from a single person. With the swollen numbers reported this week, Amazon researchers wrote that it's "one of the largest package flooding incidents in open source registry history, and represents a defining moment in supply chain security...." For now, says Sonatype's Fox, the scheme wastes the time of npm administrators, who are trying to expel over 100,000 packages. But Fox and Amazon point out the scheme could inspire others to take advantage of other reward-based systems for financial gain, or to deliver malware.

After deploying a new detection rule "paired with AI", Amazon's security researchers write, "within days, the system began flagging packages linked to the tea.xyz protocol... By November 7, the researchers flagged thousands of packages and began investigating what appeared to be a coordinated campaign. The next day, after validating the evaluation results and analyzing the patterns, they reached out to OpenSSF to share their findings and coordinate a response."
Their blog post thanks the Open Source Security Foundation (OpenSSF) for rapid collaboration, while calling the incident "a defining moment in supply chain security..."
Power

A 'Peak Oil' Prediction Surprise From the International Energy Agency (cnbc.com)

"The International Energy Agency's latest outlook signals that oil demand could keep growing through to the middle of the century," reports CNBC, "reflecting a sharp tonal shift from the world's energy watchdog and raising further questions about the future of fossil fuels." In its flagship World Energy Outlook, the Paris-based agency on Wednesday laid out a scenario in which demand for oil climbs to 113 million barrels per day by 2050, up 13% from 2024 levels. The IEA had previously estimated a peak in global fossil fuel demand before the end of this decade and said that, in order to reach net-zero emissions by 2050, there should be no new investments in coal, oil and gas projects... The IEA's end-of-decade peak oil forecast kick-started a long-running war of words with OPEC, an influential group of oil exporting countries, which accused the IEA of fearmongering and risking the destabilization of the global economy.

The IEA's latest forecast of increasing oil demand was outlined in its "Current Policies Scenario" — one of a number of scenarios outlined by the IEA. This one assumes no new policies or regulations beyond those already in place. The CPS was dropped five years ago amid energy market turmoil during the coronavirus pandemic, and its reintroduction follows pressure from the Trump administration... Gregory Brew, an analyst at Eurasia Group's Energy, Climate and Resources team, said the IEA's retreat on peak oil demand signified "a major shift" from the group's position over the last five years. "The justifications offered for the shift include policy changes in the U.S., where slow EV penetration indicates robust oil [consumption], but is also tied to expected increases in petrochemical and aviation fuel in East and Southeast Asia," Brew told CNBC by email. "It's unlikely the agency is adjusting based on political pressure — though there has been some of that, with the Trump administration criticizing the group's supposed bias in favor of renewable energy — and the shift reflects a broader skepticism that oil demand is set to peak any time soon," he added...

Alongside its CPS, the IEA also laid out projections under its so-called "Stated Policies Scenario" (STEPS), which reflects the prevailing direction of travel for the global energy system. In this assumption, the IEA said it expects oil demand to peak at 102 million barrels per day around 2030, before gradually declining. Global electric car sales are much stronger under this scenario compared to the CPS. The IEA said its multiple scenarios explore a range of consequences from various policy choices and should not be considered forecasts.

Thanks to Slashdot reader magzteel for sharing the news.
Science

All Lupus Cases May Be Linked To a Common Virus, Study Finds (nbcnews.com)

One of the most common viruses in the world could be the cause of lupus, an autoimmune disease with wide-ranging symptoms, according to a new study. From a report: Until now, lupus was somewhat mysterious: No single root cause of the disease had been found, and while there is no cure, there are medications that can treat it.

The research, published in the journal Science Translational Medicine, suggests that Epstein-Barr virus -- which 95% of people acquire at some point in life -- could cause lupus by driving the body to attack its own healthy cells.

It adds to mounting evidence that Epstein-Barr is associated with multiple long-term health issues, including other autoimmune conditions. As this evidence stacks up, scientists have accelerated calls for a vaccine that targets the virus.

"If we now better understand how this fastidious virus is responsible for autoimmune diseases, I think it's time to figure out how to prevent it," said Dr. Anca Askanase, clinical director of the Lupus Center at Columbia University, who wasn't involved in the new research.

Education

UC San Diego Reports 'Steep Decline' in Student Academic Preparation

The University of California, San Diego has documented a steep decline in the academic preparation of its entering freshmen over the past five years, according to a report [PDF] released this month by the campus's Senate-Administration Working Group on Admissions. Between 2020 and 2025, the number of students whose math skills fall below middle-school level increased nearly thirtyfold, from roughly 30 to 921 students. These students now represent one in eight members of the entering cohort.

The Mathematics Department redesigned its remedial program this year to focus entirely on elementary and middle school content after discovering students struggled with basic fractions and could not perform arithmetic operations taught in grades one through eight. The deterioration extends beyond mathematics. Nearly one in five domestic freshmen required remedial writing instruction in 2024, returning to pre-pandemic levels after a brief decline.

Faculty across disciplines report students increasingly struggle to engage with longer and complex texts. The decline coincided with multiple disrupting factors. The COVID-19 pandemic forced remote learning starting in spring 2020. The UC system eliminated SAT and ACT requirements in 2021. High school grade inflation accelerated during this period, leaving transcripts unreliable as indicators of actual preparation. UC San Diego simultaneously doubled its enrollment from under-resourced high schools designated LCFF+, admitting more such students than any other UC campus between 2022 and 2024.

The working group concluded that admitting large numbers of underprepared students risks harming those students while straining limited instructional resources. The report recommends developing predictive models to identify at-risk applicants and calls for the UC system to reconsider standardized testing requirements.
Iphone

Apple Delays Release of Next iPhone Air Amid Weak Sales (theinformation.com)

An anonymous reader shares a report: Apple is delaying the release of next year's version of the iPhone Air, its thinnest smartphone, after the first model sold below expectations, according to three people involved in the project.

Although the length of the delay remains uncertain, the product won't be released in fall 2026 as previously planned, they said. Apple has already sharply scaled back production of the first version, according to multiple people with direct knowledge of the matter.

Python

Python Foundation Donations Surge After Rejecting Grant - But Sponsorships Still Needed (blogspot.com)

After the Python Software Foundation rejected a $1.5 million grant because it restricted DEI activity, "a flood of new donations followed," according to a new report. By Friday they'd raised over $157,000, including contributions from 295 new Supporting Members paying the $99 annual membership fee, says PSF executive director Deb Nicholson.

"It doesn't quite bridge the gap of $1.5 million, but it's incredibly impactful for us, both financially and in terms of feeling this strong groundswell of support from the community." Could that same security project still happen if new funding materializes? The PSF hasn't entirely given up. "The PSF is always looking for new opportunities to fund work benefiting the Python community," Nicholson told me in an email last week, adding pointedly that "we have received some helpful suggestions in response to our announcement that we will be pursuing." And even as things stand, the PSF sees itself as "always developing or implementing the latest technologies for protecting PyPI project maintainers and users from current threats," and it plans to continue with that commitment.
The Python Software Foundation was "astounded and deeply appreciative at the outpouring of solidarity in both words and actions," their executive director wrote in a new blog post this week, saying the show of support "reminds us of the community's strength."

But that post also acknowledges the reality that the Python Software Foundation's yearly revenue and assets (including contributions from major donors) "have declined, and costs have increased..." Historically, PyCon US has been a source of revenue for the PSF, enabling us to fund programs like our currently paused Grants Program... Unfortunately, PyCon US has run at a loss for three years — and not from a lack of effort from our staff and volunteers! Everyone has been working very hard to find areas where we can trim costs, but even with those efforts, inflation continues to surge, and changing U.S. and economic conditions have reduced our attendance... Because we have so few expense categories (the vast majority of our spending goes to running PyCon US, the Grants Program, and our small 13-member staff), we have limited "levers to pull" when it comes to budgeting and long-term sustainability...
While Python usage continues to surge, "corporate investment back into the language and the community has declined overall. The PSF has longstanding sponsors and partners that we are ever grateful for, but signing on new corporate sponsors has slowed." (They're asking employees at Python-using companies to encourage sponsorships.) We have been seeking out alternate revenue channels to diversify our income, with some success and some challenges. PyPI Organizations offers paid features to companies (PyPI features are always free to community groups) and has begun bringing in monthly income. We've also been seeking out grant opportunities where we find good fits with our mission.... We currently have more than six months of runway (as opposed to our preferred 12 months+ of runway), so the PSF is not at immediate risk of having to make more dramatic changes, but we are on track to face difficult decisions if the situation doesn't shift in the next year.

Based on all of this, the PSF has been making changes and working on multiple fronts to combat losses and work to ensure financial sustainability, in order to continue protecting and serving the community in the long term. Some of these changes and efforts include:

— Pursuing new sponsors, specifically in the AI industry and the security sector
— Increasing sponsorship package pricing to match inflation
— Making adjustments to reduce PyCon US expenses
— Pursuing funding opportunities in the US and Europe
— Working with other organizations to raise awareness
— Strategic planning, to ensure we are maximizing our impact for the community while cultivating mission-aligned revenue channels

The PSF's end-of-year fundraiser effort is usually run by staff based on their capacity, but this year we have assembled a fundraising team that includes Board members to put some more "oomph" behind the campaign. We'll be doing our regular fundraising activities; we'll also be creating a unique webpage, piloting temporary and VERY visible pop-ups to python.org and PyPI.org, and telling more stories from our Grants Program recipients...

Keep your eyes on the PSF Blog, the PSF category on Discuss, and our social media accounts for updates and information as we kick off the fundraiser this month. Your boosts of our posts and your personal shares of "why I support the PSF" stories will make all the difference in our end-of-year fundraiser. If this post has you all fired up to personally support the future of Python and the PSF right now, we always welcome new PSF Supporting Members and donations.

AI

Common Crawl Criticized for 'Quietly Funneling Paywalled Articles to AI Developers' (msn.com)

For more than a decade, the nonprofit Common Crawl "has been scraping billions of webpages to build a massive archive of the internet," notes the Atlantic, making it freely available for research. "In recent years, however, this archive has been put to a controversial purpose: AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon have used it to train large language models."

"In the process, my reporting has found, Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this — as well as masking the actual contents of its archives..." Common Crawl's website states that it scrapes the internet for "freely available content" without "going behind any 'paywalls.'" Yet the organization has taken articles from major news websites that people normally have to pay for — allowing AI companies to train their LLMs on high-quality journalism for free. Meanwhile, Common Crawl's executive director, Rich Skrenta, has publicly made the case that AI models should be able to access anything on the internet. "The robots are people too," he told me, and should therefore be allowed to "read the books" for free. Multiple news publishers have requested that Common Crawl remove their articles to prevent exactly this use. Common Crawl says it complies with these requests. But my research shows that it does not.

I've discovered that pages downloaded by Common Crawl have appeared in the training data of thousands of AI models. As Stefan Baack, a researcher formerly at Mozilla, has written, "Generative AI in its current form would probably not be possible without Common Crawl." In 2020, OpenAI used Common Crawl's archives to train GPT-3. OpenAI claimed that the program could generate "news articles which human evaluators have difficulty distinguishing from articles written by humans," and in 2022, an iteration on that model, GPT-3.5, became the basis for ChatGPT, kicking off the ongoing generative-AI boom. Many different AI companies are now using publishers' articles to train models that summarize and paraphrase the news, and are deploying those models in ways that steal readers from writers and publishers.

Common Crawl maintains that it is doing nothing wrong. I spoke with Skrenta twice while reporting this story. During the second conversation, I asked him about the foundation archiving news articles even after publishers have asked it to stop. Skrenta told me that these publishers are making a mistake by excluding themselves from "Search 2.0" — referring to the generative-AI products now widely being used to find information online — and said that, anyway, it is the publishers that made their work available in the first place. "You shouldn't have put your content on the internet if you didn't want it to be on the internet," he said. Common Crawl doesn't log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you're a subscriber and hides the content if you're not. Common Crawl's scraper never executes that code, so it gets the full articles.
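That mechanism is easy to demonstrate: with a client-side paywall, the full article ships in the HTML and a script hides it only after a browser executes it, so any client that fetches the markup without running scripts gets everything. The page below is a mock, not any real publisher's markup:

```python
import re

# Mock page with a client-side ("soft") paywall: the article text is
# present in the HTML; a script would hide it, but only in a browser.
PAGE = """
<html><body>
  <article>The complete text of the paywalled article is right here.</article>
  <script>enforcePaywall();  /* hides the article when executed */</script>
</body></html>
"""

def scrape_without_js(html):
    # A crawler that never executes scripts simply reads the raw markup,
    # so the paywall code never runs.
    match = re.search(r"<article>(.*?)</article>", html, re.S)
    return match.group(1).strip() if match else None

print(scrape_without_js(PAGE))  # the full article text
```

The asymmetry is the whole story: subscribers' browsers enforce the paywall on themselves, while a non-JS scraper never gives it the chance.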

Thus, by my estimate, the foundation's archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper's, and The Atlantic.... A search for nytimes.com in any crawl from 2013 through 2022 shows a "no captures" result, when in fact there are articles from NYTimes.com in most of these crawls.

"In the past year, Common Crawl's CCBot has become the scraper most widely blocked by the top 1,000 websites," the article points out...
Windows

Bank of America Faces Lawsuit Over Alleged Unpaid Time for Windows Bootup, Logins, and Security Token Requests (hcamag.com)

A former Business Analyst reportedly filed a class action lawsuit claiming that for years, hundreds of remote employees at Bank of America first had to boot up complex computer systems before their paid work began, reports Human Resources Director magazine: Tava Martin, who worked both remotely and at the company's Jacksonville facility, says the financial institution required her and fellow hourly workers to log into multiple security systems, download spreadsheets, and connect to virtual private networks — all before the clock started ticking on their workday. The process wasn't quick. According to the filing in the United States District Court for the Western District of North Carolina, employees needed 15 to 30 minutes each morning just to get their systems running. When technical problems occurred, it took even longer...

Workers turned on their computers, waited for Windows to load, grabbed their cell phones to request a security token for the company's VPN, waited for that token to arrive, logged into the network, opened required web applications with separate passwords, and downloaded the Excel files they needed for the day. Only then could they start taking calls from business customers about regulatory reporting requirements...

The unpaid work didn't stop at startup. During unpaid lunch breaks, many systems would automatically disconnect or otherwise lose connection, forcing employees to repeat portions of the login process — approximately three to five minutes of uncompensated time on most days, sometimes longer when a complete reboot was required. After shifts ended, workers had to log out of all programs and shut down their computers securely, adding another two to three minutes.
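Taking midpoints of the complaint's figures, the alleged uncompensated time adds up quickly; the 250-workday year below is my assumption, not a number from the filing:

```python
# Midpoints of the ranges alleged in the complaint:
boot_minutes = (15 + 30) / 2      # morning startup: 15-30 minutes
lunch_minutes = (3 + 5) / 2       # post-lunch re-login: 3-5 minutes
shutdown_minutes = (2 + 3) / 2    # end-of-shift logout: 2-3 minutes

daily = boot_minutes + lunch_minutes + shutdown_minutes
yearly_hours = daily * 250 / 60   # assuming a 250-workday year

print(f"{daily:.1f} unpaid minutes/day, roughly {yearly_hours:.0f} hours/year")
# 29.0 unpaid minutes/day, roughly 121 hours/year
```

Under those assumptions, each worker would be owed on the order of three unpaid workweeks per year, which is why the claim is framed as a class action rather than an individual dispute.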

Thanks to Slashdot reader Joe_Dragon for sharing the article.
AI

'Stratospheric' AI Spending By Four Wealthy Companies Reaches $360B Just For Data Centers (msn.com)

"Maybe you've heard that artificial intelligence is a bubble poised to burst," writes a Washington Post technology columnist. "Maybe you have heard that it isn't. (No one really knows either way, but that won't stop the bros from jabbering about it constantly.)"

"But I can confidently tell you that the money being thrown around for AI is so huge that numbers have lost all meaning." The companies pouring money in are so rich and so power-hungry (in multiple meanings of that term) that our puny human brains cannot really comprehend it. So let's try to give some meaning and context to the stratospheric numbers in AI. Is it a bubble? Eh, who knows. But it is completely bonkers. In just the past year, the four richest companies developing AI — Microsoft, Google, Amazon and Meta — have spent roughly $360 billion combined for big-ticket projects, which included building AI data centers and stuffing them with computer chips and equipment, according to my analysis of financial disclosures.... How do companies pay for the enormous sums they are lavishing on AI? Mostly, these companies make so much money that they can afford to go bananas...

Eight of the world's top 10 most valuable companies are AI-centric or AI-ish American corporate giants — Nvidia, Apple, Microsoft, Google, Amazon, Broadcom, Meta and Tesla. That's according to tallies from S&P Global Market Intelligence based on the total price of the companies' stock held by investors. My analysis of the S&P data shows that the collective worth of those eight giants, $23 trillion, is more than the value of the next 96 most valuable U.S. companies put together, which includes many still very rich names such as JPMorgan, Walmart, Visa and ExxonMobil. No. 1 on that list, the AI computer chip seller Nvidia, last week became the first company in history to reach a stock market value of $5 trillion. That alone was more than the value of entire stock markets in most countries, Bloomberg News reported, other than the five biggest (in the U.S., China, Japan, Hong Kong and India)...

All the announced or under-construction data centers for powering AI would consume roughly as much electricity as 44 million households in the United States if they run full tilt, according to a recent analysis by the Barclays investment bank as reported by the Financial Times. For context, that's nearly one-third of the total number of residential housing units in the entire country, according to U.S. Census Bureau housing estimates for 2024.

Android

Gemini Starts Rolling Out On Android Auto

Gemini is (finally) rolling out on Android Auto, replacing Google Assistant while keeping "Hey Google," adding Gemini Live ("let's talk live"), message auto-translation, and new privacy toggles. "One feature lost between Assistant and Gemini, though, is the ability to use nicknames for contacts," notes 9to5Google. From the report: Over the past 24 hours, Google has quietly started the rollout of Gemini for Android Auto, seemingly starting with beta users. The change is server-side, with multiple users reporting that Gemini has gone live in the car. One user mentions that they noticed this on Android Auto 15.6, and we're seeing the same on our Pixel 10 Pro XL connected to different car displays, and also on a Galaxy Z Fold 7 running Android Auto 15.7.

It's unclear if this particular version is what delivers support, but that seems unlikely, seeing as this very version started rolling out last week. Android Auto 15.6 and 15.7 are currently only available in beta, so it's also unclear at this time if the rollout is tied to the Android Auto beta or simply showing up on that version as a coincidence.
The Almighty Buck

Direct File Won't Happen in 2026, IRS Tells States (nextgov.com)

NextGov: The IRS has notified states that offered the free, government tax filing service known as Direct File in 2025 that the program won't be available next filing season. In an email sent from the IRS to 25 states, the tax agency thanked them for collaborating and noted that "no launch date has been set for the future."

"IRS Direct File will not be available in Filing Season 2026," says the Monday email, obtained by Nextgov/FCW and confirmed by multiple sources. It follows reports that the program was ending, and a remark over the summer from Trump's former tax chief, Billy Long, that the service was "gone."

The program, which debuted in 2024, was a big shift from the decades-long IRS policy of not competing with the tax prep industry in offering its own free, online tax filing service for Americans. Many Republicans had opposed Direct File, and tax prep companies also lobbied against it.

Hardware

Manufacturer Bricks Smart Vacuum After Engineer Blocks It From Collecting Data (tomshardware.com)

A curious engineer discovered that his iLife A11 smart vacuum was remotely "killed" after he blocked it from sending data to the manufacturer's servers. By reverse-engineering it with custom hardware and Python scripts, he managed to revive the device to run fully offline. Tom's Hardware reports: An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device. That's when he noticed it was constantly sending logs and telemetry data to the manufacturer -- something he hadn't consented to. The user, Harishankar, decided to block the telemetry servers' IP addresses on his network, while keeping the firmware and OTA servers open. While his smart gadget worked for a while, it just refused to turn on soon after. After a lengthy investigation, he discovered that a remote kill command had been issued to his device.

He sent it to the service center multiple times, where the technicians would power it on and find nothing wrong with the vacuum. When they returned it to him, it would work for a few days and then fail to boot again. After several rounds of back-and-forth, the service center probably got tired and just stopped accepting it, saying it was out of warranty. Because of this, he decided to disassemble the thing to determine what killed it and to see if he could get it working again. [...] So, why did the A11 work at the service center but refuse to run in his home? The technicians would reset the firmware on the smart vacuum, thus removing the kill code, and then connect it to an open network, making it run normally. But once it reconnected to the network that blocked its telemetry servers, it was bricked remotely because it could no longer communicate with the manufacturer's servers. Since he blocked the appliance's data collection capabilities, its maker decided to just kill it altogether.

"Someone -- or something -- had remotely issued a kill command," says Harishankar. "Whether it was intentional punishment or automated enforcement of 'compliance,' the result was the same: a consumer device had turned on its owner." In the end, the owner was able to run his vacuum fully locally without manufacturer control after all the tweaks he made. This helped him retake control of his data and make use of his $300 software-bricked smart device on his own terms. As for the rest of us who don't have the technical knowledge and time to follow his accomplishments, his advice is to "Never use your primary WiFi network for IoT devices" and to "Treat them as strangers in your home."

Windows

Windows 11 Store Gets Ninite-Style Multi-App Installer Feature (bleepingcomputer.com) 37

An anonymous reader shares a report: The Microsoft Store on the web now lets you create a multi-app install package on Windows 11 that installs multiple applications from a single installer. This means you can now install multiple apps simultaneously without having to download each one manually. The experience is similar to that of the third-party app Ninite, a package manager that lets you install multiple apps at once.
Piracy

Google Removed 749 Million Anna's Archive URLs From Its Search Results (torrentfreak.com) 38

Google has delisted over 749 million URLs from Anna's Archive, a shadow library and meta-search engine for pirated books, representing 5% of all copyright takedown requests ever filed with the company. TorrentFreak reports: Google's transparency report reveals that rightsholders asked Google to remove 784 million URLs, divided over the three main Anna's Archive domains. A small number were rejected, mainly because Google didn't index the reported links, resulting in 749 million confirmed removals. The comparison to sites such as The Pirate Bay isn't fair, as Anna's Archive has many more pages in its archive and uses multiple country-specific subdomains. This means that there's simply more content to take down. That said, in terms of takedown activity, the site's three domain names clearly dwarf all pirate competition.

Since Google published its first transparency report in May 2012, rightsholders have flagged 15.1 billion allegedly infringing URLs. That's a staggering number, and the fact that 5% of that total targeted Anna's Archive is remarkable. Penguin Random House and John Wiley & Sons are the most active publishers targeting the site, but they are certainly not alone. According to Google data, more than 1,000 authors or publishers have sent DMCA notices targeting Anna's Archive domains. Yet, there appears to be no end in sight. Rightsholders are reporting roughly 10 million new URLs per week for the popular piracy library, so there is no shortage of content to report.

Privacy

Data Breach At Major Swedish Software Supplier Impacts 1.5 Million (bleepingcomputer.com) 6

A massive cyberattack on Swedish IT supplier Miljodata exposed personal data from up to 1.5 million citizens, prompting a national privacy investigation and scrutiny into security failures across multiple municipalities. BleepingComputer reports: Miljödata is an IT systems supplier for roughly 80% of Sweden's municipalities. The company disclosed the incident on August 25, saying that the attackers stole data and demanded 1.5 Bitcoin to not leak it. The attack caused operational disruptions that affected citizens in multiple regions in the country, including Halland, Gotland, Skellefteå, Kalmar, Karlstad, and Mönsterås.

Because of the large impact, the state monitored the situation from the time of disclosure, with CERT-SE and the police starting to investigate immediately. According to IMY, the attacker exposed data corresponding to 1.5 million people in the country on the dark web, creating the basis for an investigation into potential General Data Protection Regulation (GDPR) violations. [...] Although no ransomware group had claimed the attack when Miljodata disclosed the incident, BleepingComputer found that the threat group Datacarry posted the stolen data on its dark web portal on September 13.
The leaked database, which contains information such as names, email addresses, physical addresses, phone numbers, government IDs, and dates of birth, has been added to Have I Been Pwned.
The Internet

ISPs More Likely To Throttle Netizens Who Connect Through Carrier-Grade NAT: Cloudflare (theregister.com) 55

An anonymous reader shares a report: Before the potential of the internet was appreciated around the world, nations that understood its importance managed to scoop outsized allocations of IPv4 addresses, actions that today mean many users in the rest of the world are more likely to find their connections throttled or blocked.

So says Cloudflare, which last week published research that recalls how once the world started to run out of IPv4 addresses, engineers devised network address translation (NAT) so that multiple devices can share a single IPv4 address. NAT can handle tens of thousands of devices, but carriers typically operate many more. Internetworking wonks therefore developed Carrier-Grade NAT (CGNAT), which can handle over 100 devices per IPv4 address and scale to serve millions of users.
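The port-translation idea behind NAT and CGNAT can be illustrated with a toy translator: many private (address, port) flows share one public IPv4 address, distinguished only by the public port the translator assigns. This is a simplified sketch, not Cloudflare's model; the addresses are documentation-range placeholders, and real CGNAT deployments add port-block allocation, timeouts, and pools of public addresses.

```python
# Toy NAT translator: maps each private (ip, port) flow to a unique
# public port on a single shared public IPv4 address.

import itertools

class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self._ports = itertools.count(1024)  # next free public port
        self.table = {}  # (private_ip, private_port) -> public_port

    def translate(self, private_ip, private_port):
        """Return the (public_ip, public_port) pair for this flow,
        allocating a new public port on first sight."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = next(self._ports)
        return self.public_ip, self.table[key]

nat = Nat("203.0.113.7")
# Two different subscribers leave the carrier from the same public IP:
print(nat.translate("10.0.0.5", 40000))  # ('203.0.113.7', 1024)
print(nat.translate("10.0.0.9", 40000))  # ('203.0.113.7', 1025)
```

Since a single IPv4 address offers only ~64k ports, one translator per address caps out quickly; CGNAT scales past that by pooling public addresses, which is how it packs far more subscribers behind each one.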

That's useful for carriers everywhere, but especially valuable in countries that missed out on big allocations of IPv4 addresses: their small pool of number resources means carriers there must lean heavily on CGNAT to handle more users and devices. Cloudflare's research suggests carriers in Africa and Asia use CGNAT more than those on other continents.

Cloudflare worried that could be bad for individual netizens. "CGNATs also create significant operational fallout stemming from the fact that hundreds or even thousands of clients can appear to originate from a single IP address," wrote Cloudflare researchers Vasilis Giotsas and Marwan Fayed. "This means an IP-based security system may inadvertently block or throttle large groups of users as a result of a single user behind the CGNAT engaging in malicious activity. Blocking the shared IP therefore penalizes many innocent users along with the abuser."
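The over-blocking effect the researchers describe follows directly from keying reputation on IP addresses. A minimal sketch, with hypothetical users and documentation-range IPs, shows how one abuser behind a shared CGNAT address trips a block that lands on every client sharing that IP:

```python
# Sketch: IP-keyed abuse blocking penalizes everyone behind a shared
# CGNAT address. Users and addresses below are hypothetical.

from collections import defaultdict

abuse_reports = ["198.51.100.1", "198.51.100.1", "198.51.100.1"]

clients_behind_ip = {
    "198.51.100.1": ["alice", "bob", "carol"],  # CGNAT: many clients, one IP
    "198.51.100.2": ["dave"],                   # dedicated IP
}

THRESHOLD = 3  # reports before an IP is blocked

counts = defaultdict(int)
for ip in abuse_reports:
    counts[ip] += 1

blocked_users = [user
                 for ip, n in counts.items() if n >= THRESHOLD
                 for user in clients_behind_ip[ip]]
print(blocked_users)  # all three clients on the shared IP, abuser or not
```

Only one of the three blocked users misbehaved, but the security system cannot tell them apart from the outside -- exactly the fallout Cloudflare warns about.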
