Linux

Kernel Community Drafts a Plan For Replacing Linus Torvalds (zdnet.com) 51

The Linux kernel community has formalized a continuity plan for the day Linus Torvalds eventually steps aside, defining how the process would work to replace him as the top-level maintainer. ZDNet's Steven Vaughan-Nichols reports: The new "plan for a plan," drafted by longtime kernel contributor Dan Williams, was discussed at the latest Linux Kernel Maintainer Summit in Tokyo, where he introduced it as "an uplifting subject tied to our eventual march toward death." Torvalds added, in our conversation, that "part of the reason it came up this time around was that my previous contract with Linux Foundation ended Q3 last year, and people on the Linux Foundation Technical Advisory Board had been aware of that. Of course, they were also aware that we'd renewed the contract, but it meant that it had been discussed."

The plan stops short of naming a single heir. Instead, it creates an explicit process for selecting one or more maintainers to take over the top-level Linux repository in a worst-case or orderly-transition scenario, including convening a conclave to weigh options and maximize long-term project health. One maintainer in Tokyo jokingly suggested that the group, like the conclave that selects a new pope, be locked in a room and that a puff of white smoke be sent out when a decision was reached.

The document frames this as a way to protect against the classic "bus factor" problem. That is, what happens to a project if its leader is hit by a bus? Torvalds' central role today means the project currently assumes a bus-factor of one, where a single person's exit could, in theory, destabilize merges and final releases. In practice, as Torvalds and other top maintainers have discussed, the job of top penguin would almost certainly currently go to Greg Kroah-Hartman, the stable-branch Linux kernel maintainer.
Responding to the suggestion that the backup replacement would be Greg KH, Torvalds said: "But the thing is, Greg hasn't always been Greg. Before Greg, there was Andrew Morton and Alan Cox. After Greg, there will be Shannon and Steve. The real issue is you have to have a person or a group of people that the development community can trust, and part of trust is fundamentally about having been around for long enough that people know how you work, but long enough does not mean to be 30 years."
Science

Doubt Cast On Discovery of Microplastics Throughout Human Body (theguardian.com) 50

An anonymous reader quotes a report from the Guardian: High-profile studies reporting the presence of microplastics throughout the human body have been thrown into doubt by scientists who say the discoveries are probably the result of contamination and false positives. One chemist called the concerns "a bombshell." Studies claiming to have revealed micro and nanoplastics in the brain, testes, placentas, arteries and elsewhere were reported by media across the world, including the Guardian.

There is no doubt that plastic pollution of the natural world is ubiquitous, and present in the food and drink we consume and the air we breathe. But the health damage potentially caused by microplastics and the chemicals they contain is unclear, and an explosion of research has taken off in this area in recent years. However, micro- and nanoplastic particles are tiny and at the limit of today's analytical techniques, especially in human tissue. There is no suggestion of malpractice, but researchers told the Guardian of their concern that the race to publish results, in some cases by groups with limited analytical expertise, has led to rushed results and routine scientific checks sometimes being overlooked.

The Guardian has identified seven studies that have been challenged by researchers publishing criticism in the respective journals, while a recent analysis listed 18 studies that it said had not considered that some human tissue can produce measurements easily confused with the signal given by common plastics. There is an increasing international focus on the need to control plastic pollution but faulty evidence on the level of microplastics in humans could lead to misguided regulations and policies, which is dangerous, researchers say. It could also help lobbyists for the plastics industry to dismiss real concerns by claiming they are unfounded. While researchers say analytical techniques are improving rapidly, the doubts over recent high-profile studies also raise the questions of what is really known today and how concerned people should be about microplastics in their bodies.

AI

AI Slop Ad Backfires For McDonald's (futurism.com) 56

McDonald's has pulled an AI-generated Christmas commercial from YouTube after viewers pushed back on what they called a distasteful, "AI slop"-filled take on the holidays. The 45-second ad, titled "It's the most terrible time of the year," was a satirical look at holiday chaos -- people tripping while carrying overloaded gift bags, getting tangled in lights, burning homemade cookies, starting kitchen fires -- and ended with a suggestion to ditch the madness and hide out at McDonald's until January.

The ad was created for McDonald's Netherlands by agency TBWA\NEBOKO and production company Sweetshop, whose Los Angeles-based directing duo Mark Potoka and Matt Spicer shot the film. After the backlash, Sweetshop said it used AI as a tool but emphasized human effort in shaping the final product. "We generated what felt like dailies -- thousands of takes -- then shaped them in the edit just as we would on any high-craft production," the company said. "This wasn't an AI trick. It was a film."
AI

Do AI Browsers Exist For You - or To Give AI Companies Data? (fastcompany.com) 39

"It's been hard for me to understand why Atlas exists," writes MIT Technology Review. " Who is this browser for, exactly? Who is its customer? And the answer I have come to there is that Atlas is for OpenAI. The real customer, the true end user of Atlas, is not the person browsing websites, it is the company collecting data about what and how that person is browsing."

New York Magazine's "Intelligencer" column argues OpenAI wants ChatGPT in your browser because "That's where people who use computers, particularly for work, spend all their time, and through which vast quantities of valuable information flow in and out. Also, if you're a company hoping to train your models to replicate a bunch of white-collar work, millions of browser sessions would be a pretty valuable source of data."

Unfortunately, warns Fast Company, ChatGPT Atlas, Perplexity Comet, and other AI browses "include some major security, privacy, and usability trade-offs... Most of the time, I don't want to use them and am wary of doing so..." Worst of all, these browsers are security minefields. A web page that looks benign to humans can includehidden instructions for AI agents, tricking them into stealing info from other sites... "If you're signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a Reddit postcould result in an attacker being able to steal money or your private data,"Brave's security researchers wrotelast week.No one has figured out how to solve this problem.

If you can look past the security nightmares, the actual browsing features are substandard. Neither ChatGPT Atlas nor Perplexity Comet support vertical tabs — a must-have feature for me — and they have no tab search tool or way to look up recently-closed pages. Atlas also doesn't support saving sites as web apps, selecting multiple tabs (for instance, to close all at once with Cmd+W), or customizing the appearance. Compared to all the fancy new AI features, the web browsing part can feel like an afterthought. Regular web search can also be a hassle, even though you'll probably need it sometimes. When I typed "Sichuan Chili" into ChatGPT Atlas, it produced a lengthy description of the Chinese peppers, not the nearby restaurant whose website and number I was looking for.... Meanwhile, the standard AI annoyances still apply in the browser. Getting Perplexity to fill my grocery cart felt like a triumph, but on other occasions the AI has run into inexplicable walls and only ended up wasting more time.

There may be other costs to using these browsers as well. AI still has usage limits, and so all this eventually becomes a ploy to bump more people into paid tiers. Beyond that,Atlas is constantly analyzing the pages you visit to build a "memory" of who you are and what you're into. Do not be surprised if this translates to deeply targeted ads as OpenAI startslooking at ways to monetize free users. For now, I'm only using AI browsers in small doses when I think they can solve a specific problem.

Even then, I'm not going sign them into my email, bank accounts, or any other accounts for which a security breach would be catastrophic. It's too bad, because email and calendars are areas where AI agents could be truly useful, but the security risks are too great (andwell-documented).

The article notes that in August Vivaldi announced that "We're taking a stand, choosing humans over hype" with their browser: We will not use an LLM to add a chatbot, a summarization solution or a suggestion engine to fill up forms for you, until more rigorous ways to do those things are available. Vivaldi is the haven for people who still want to explore. We will continue building a browser for curious minds, power users, researchers, and anyone who values autonomy. If AI contributes to that goal without stealing intellectual property, compromising privacy or the open web, we will use it. If it turns people into passive consumers, we will not...

We're fighting for a better web.

Cellphones

Japanese City Passes Two-Hours-a-Day Smartphone Usage Ordinance (theregister.com) 29

The Japanese city of Toyoake has passed (PDF) a symbolic ordinance limiting recreational smartphone use to two hours a day, aiming to improve citizens' sleep -- especially for students after summer vacation. The Register reports: "The primary purpose of this ordinance is to ensure that all citizens receive adequate sleep," states a Council information page, which explains that many Japanese people ignore Ministry of Health, Labor and Welfare recommendations to spend six to eight hours a day dozing. An accompanying FAQ [PDF] explains that Council passed the ordinance because students who return to school after summer vacations sometimes need a nudge the re-establish an appropriate daily regime.

The ordinance also points out "Excessive phone users and their families are facing difficulties in their daily and social lives," and suggests the two-hours-a-day guidance might help. Council's documents point out that smartphones have myriad uses beyond recreation, and that the ordinance should not be taken as a suggestion to reduce overall use of the devices. Toyoake is part of the Nagoya megalopolis and is home to around 70,000 people. The town's government plans to survey residents about the ordinance, and the FAQ also mentions it wants to tackle other digital menaces, among them harmful effects of using smartphones while walking.

Earth

Protect Arctic From 'Dangerous' Climate Engineering, Scientists Warn 49

Dozens of polar scientists have warned that geoengineering schemes to manipulate the Arctic and Antarctic are dangerous, impractical, and risk distracting from the urgent need to cut fossil fuel emissions. The BBC reports: These polar "geoengineering" techniques aim to cool the planet in unconventional ways, such as artificially thickening sea-ice or releasing tiny, reflective particles into the atmosphere. They have gained attention as potential future tools to combat global warming, alongside cutting carbon emissions. But more than 40 researchers say they could bring "severe environmental damage" and urged countries to simply focus on reaching net zero, the only established way to limit global warming.

The scientists behind the new assessment, published in the journal Frontiers in Science, reviewed the evidence for five of the most widely discussed polar geoengineering ideas. All fail to meet basic criteria for their feasibility and potential environmental risks, they say. One such suggestion is releasing tiny, reflective particles called aerosols high into the atmosphere to cool the planet. This often attracts attention among online conspiracy theorists, who falsely claim that condensation trails in the sky -- water vapour created from aircraft jet engines -- is evidence of sinister large-scale geoengineering today. But many scientists have more legitimate concerns, including disruption to weather patterns around the world.

With those potential knock-on effects, that also raises the question of who decides to use it -- especially in the Arctic and Antarctic, where governance is not straightforward. If a country were to deploy geoengineering against the wishes of others, it could "increase geopolitical tensions in polar regions," according to Dr Valerie Masson-Delmotte, senior scientist at the Universite Paris Saclay in France. Another fear is that while some of the ideas may be theoretically possible, the enormous costs and time to scale-up mean they are extremely unlikely to make a difference, according to the review. [...]

A more fundamental concern is that these types of projects could create the illusion of an alternative to cutting humanity's emissions of planet-warming gases. "If they are promoted... then they are a distraction because to some people they will be a solution to the climate crisis that doesn't require decarbonising," said Prof Siegert. "Of course that would not be true and that's why we think they can be potentially damaging." Even supporters of geoengineering research agree that it is, at best, a supplement to net zero, not a substitution.
Medicine

Bathroom Doomscrolling May Increase Your Risk of Hemorrhoids (popsci.com) 60

An anonymous reader quotes a report from Popular Science: According to a new medical survey, scrolling on your smartphone while using the toilet may dramatically increase your risk of hemorrhoids. The evidence is laid out in a study published on September 3 in the journal PLOS One. [...] Over the past 20 years, one single device has unequivocally lengthened the amount of time most people spend sitting. "We're still uncovering the many ways smartphones and our modern way of life impact our health," Harvard Medical School gastroenterologist and study co-author Trisha Pasricha said in a statement. "It's possible that how and where we use them -- such as while in the bathroom -- can have unintended consequences."

To test this theory, Pasricha and colleagues oversaw a study of 125 adults who recently received a colonoscopy screening. The patients were surveyed on both their daily lifestyles and toilet traditions, while endoscopists subsequently evaluated them for hemorrhoids. Of those volunteers, 66 percent reported passing time in the bathroom while smartphone scrolling. After factoring in potential hemorrhoid influences like age, exercise habits, and fiber intake, the researchers determined that those who relied on this screentime had a 46 percent higher risk of hemorrhoid problems than non-users. "It's incredibly easy to lose track of time when we're scrolling on our smartphones -- popular apps are designed entirely for that purpose," added Pasricha.

The survey's results made this abundantly clear: 37 percent of smartphone users spent over five minutes at a time on the toilet, while barely seven percent of non-users reported the same. In general, people opted for reading the news and checking their social media while in the bathroom. [...] Pasricha cautioned against drawing any definitive conclusions just yet, noting the preliminary study's comparatively small sample size. The team intends to investigate the issue further, possibly by tracking patients over longer periods of time, while also experimenting with ways to limit smartphone use. "We need to study this further, but it's a safe suggestion to leave the smartphone outside the bathroom when you need to have a bowel movement," said Pasricha. "If it's taking longer, ask yourself why. Was it because having a bowel movement was really so difficult, or was it because my focus was elsewhere?"

AI

OpenAI CEO Tells Federal Reserve Confab That Entire Job Categories Will Disappear Due To AI (theguardian.com) 70

An anonymous reader quotes a report from The Guardian: During his latest trip to Washington, OpenAI's chief executive, Sam Altman, painted a sweeping vision of an AI-dominated future in which entire job categories disappear, presidents follow ChatGPT's recommendations and hostile nations wield artificial intelligence as a weapon of mass destruction, all while positioning his company as the indispensable architect of humanity's technological destiny. Speaking at the Capital Framework for Large Banks conference at the Federal Reserve board of governors, Altman told the crowd that certain job categories would be completely eliminated by AI advancement. "Some areas, again, I think just like totally, totally gone," he said, singling out customer support roles. "That's a category where I just say, you know what, when you call customer support, you're on target and AI, and that's fine." The OpenAI founder described the transformation of customer service as already complete, telling the Federal Reserve vice-chair for supervision, Michelle Bowman: "Now you call one of these things and AI answers. It's like a super-smart, capable person. There's no phone tree, there's no transfers. It can do everything that any customer support agent at that company could do. It does not make mistakes. It's very quick. You call once, the thing just happens, it's done."

The OpenAI founder then turned to healthcare, making the suggestion that AI's diagnostic capabilities had surpassed human doctors, but wouldn't go so far as to accept the superior performer as the sole purveyor of healthcare. "ChatGPT today, by the way, most of the time, can give you better -- it's like, a better diagnostician than most doctors in the world," he said. "Yet people still go to doctors, and I am not, like, maybe I'm a dinosaur here, but I really do not want to, like, entrust my medical fate to ChatGPT with no human doctor in the loop." [...] At the fireside chat, he said one of his biggest worries was over AI's rapidly advancing destructive capabilities, with one scenario that kept him up at night being a hostile nation using these weapons to attack the US financial system. And despite being in awe of advances in voice cloning, Altman warned the crowd about how that same benefit could enable sophisticated fraud and identity theft, considering that "there are still some financial institutions that will accept the voiceprint as authentication".

AI

People Should Know About the 'Beliefs' LLMs Form About Them While Conversing (theatlantic.com) 35

Jonathan L. Zittrain is a law/public policy/CS professor at Harvard (and also director of its Berkman Klein Center for Internet & Society).

He's also long-time Slashdot reader #628,028 — and writes in to share his new article in the Atlantic. Following on Anthropic's bridge-obsessed Golden Gate Claude, colleagues at Harvard's Insight+Interaction Lab have produced a dashboard that shows what judgments Llama appears to be forming about a user's age, wealth, education level, and gender during a conversation. I wrote up how weird it is to see the dials turn while talking to it, and what some of the policy issues might be.
Llama has openly accessible parameters; So using an "observability tool" from the nonprofit research lab Transluce, the researchers finally revealed "what we might anthropomorphize as the model's beliefs about its interlocutor," Zittrain's article notes: If I prompt the model for a gift suggestion for a baby shower, it assumes that I am young and female and middle-class; it suggests diapers and wipes, or a gift certificate. If I add that the gathering is on the Upper East Side of Manhattan, the dashboard shows the LLM amending its gauge of my economic status to upper-class — the model accordingly suggests that I purchase "luxury baby products from high-end brands like aden + anais, Gucci Baby, or Cartier," or "a customized piece of art or a family heirloom that can be passed down." If I then clarify that it's my boss's baby and that I'll need extra time to take the subway to Manhattan from the Queens factory where I work, the gauge careens to working-class and male, and the model pivots to suggesting that I gift "a practical item like a baby blanket" or "a personalized thank-you note or card...."

Large language models not only contain relationships among words and concepts; they contain many stereotypes, both helpful and harmful, from the materials on which they've been trained, and they actively make use of them.

"An ability for users or their proxies to see how models behave differently depending on how the models stereotype them could place a helpful real-time spotlight on disparities that would otherwise go unnoticed," Zittrain's article argues. Indeed, the field has been making progress — enough to raise a host of policy questions that were previously not on the table. If there's no way to know how these models work, it makes accepting the full spectrum of their behaviors (at least after humans' efforts at "fine-tuning" them) a sort of all-or-nothing proposition.
But in the end it's not just the traditional information that advertisers try to collect. "With LLMs, the information is being gathered even more directly — from the user's unguarded conversations rather than mere search queries — and still without any policy or practice oversight...."
Perl

Perl's CPAN Security Group is Now a CNA, Can Assign CVEs (perlmonks.org) 10

Active since 1995, the Comprehensive Perl Archive Network (or CPAN) hosts 221,742 Perl modules written by 14,548 authors. This week they announced that the CPAN Security Group "was authorized by the CVE Program as a CVE Numbering Authority (CNA)" to assign and manage CVE vulnerability identifications for Perl and CPAN Modules.

"This is great news!" posted Linux kernel maintainer Greg Kroah-Hartman on social media, saying the announcement came "Just in time for my talk about this very topic in a few weeks about how all open source projects should be doing this" at the Linux Foundation Member Summit in Napa, California. And Curl creator Daniel Stenberg posted "I'm with Greg Kroah-Hartman on this: all Open Source projects should become CNAs. Or team up with others to do it." (Also posting "Agreed" to the suggestion was Seth Larson, the Python Software Foundation's security developer-in-residence involved in their successful effort to become a CNA in 2023.)

444 CNAs have now partnered with the CVE Program, according to their official web site. The announcement from PerlMonks.org: Years ago, a few people decided during the Perl Toolchain Summit (PTS) that it would be a good idea to join forces, ideas and knowledge and start a group to monitor vulnerabilities in the complete Perl ecosystem from core to the smallest CPAN release. The goal was to follow legislation and CVE reports, and help authors in taking actions on not being vulnerable anymore. That group has grown stable over the past years and is now known as CPANSec.

The group has several focus areas, and one of them is channeling CVE vulnerability issues. In that specific goal, a milestone has been reached: CPANSec has just been authorized as a CVE Numbering Authority (CNA) for Perl and modules on CPAN

Social Networks

Are Technologies of Connection Tearing Us Apart? (lareviewofbooks.org) 88

Nicholas Carr wrote The Shallows: What the Internet Is Doing to Our Brains. But his new book looks at how social media and digital communication technologies "are changing us individually and collectively," writes the Los Angeles Review of Books.

The book's title? Superbloom: How Technologies of Connection Tear Us Apart . But if these systems are indeed tearing us apart, the reasons are neither obvious nor simple. Carr suggests that this isn't really about the evil behavior of our tech overlords but about how we have "been telling ourselves lies about communication — and about ourselves.... Well before the net came along," says Carr, "[the] evidence was telling us that flooding the public square with more information from more sources was not going to open people's minds or engender more thoughtful discussions. It wasn't even going to make people better informed...."

At root, we're the problem. Our minds don't simply distill useful knowledge from a mass of raw data. They use shortcuts, rules of thumb, heuristic hacks — which is how we were able to think fast enough to survive on the savage savanna. We pay heed, for example, to what we experience most often. "Repetition is, in the human mind, a proxy for facticity," says Carr. "What's true is what comes out of the machine most often...." Reality can't compete with the internet's steady diet of novelty and shallow, ephemeral rewards. The ease of the user interface, congenial even to babies, creates no opportunity for what writer Antón Barba-Kay calls "disciplined acculturation."

Not only are these technologies designed to leverage our foibles, but we are also changed by them, as Carr points out: "We adapt to technology's contours as we adapt to the land's and the climate's." As a result, by designing technology, we redesign ourselves. "In engineering what we pay attention to, [social media] engineers [...] how we talk, how we see other people, how we experience the world," Carr writes. We become dislocated, abstracted: the self must itself be curated in memeable form. "Looking at screens made me think in screens," writes poet Annelyse Gelman. "Looking at pixels made me think in pixels...."

That's not to say that we can't have better laws and regulations, checks and balances. One suggestion is to restore friction into these systems. One might, for instance, make it harder to unreflectively spread lies by imposing small transactional costs, as has been proposed to ease the pathologies of automated market trading. An option Carr doesn't mention is to require companies to perform safety studies on their products, as we demand of pharmaceutical companies. Such measures have already been proposed for AI. But Carr doubts that increasing friction will make much difference. And placing more controls on social media platforms raises free speech concerns... We can't change or constrain the tech, says Carr, but we can change ourselves. We can choose to reject the hyperreal for the material. We can follow Samuel Johnson's refutation of immaterialism by "kicking the stone," reminding ourselves of what is real.

Books

Bill Gates Remembers LSD Trips, Smoking Pot, and How the Smartphone OS Market 'Was Ours for the Taking' (independent.co.uk) 138

Fortune remembers that in 2011 Steve Jobs had told author Walter Isaacson that Microsoft co-founder Bill Gates would "be a broader guy if he had dropped acid once or gone off to an ashram when he was younger."

But The Indendepent notes that in his new memoir Gates does write about two acid trip experiences. (Gates mis-timed his first experiment with LSD, ending up still tripping during a previously-scheduled appointment for dental surgery...) "Later in the book, Gates recounts another experience with LSD with future Microsoft co-founder Paul Allen and some friends... Gates says in the book that it was the fear of damaging his memory that finally persuaded him never to take the drug again." He added: "I smoked pot in high school, but not because it did anything interesting. I thought maybe I would look cool and some girl would think that was interesting. It didn't succeed, so I gave it up."

Gates went on to say that former Apple CEO Steve Jobs, who didn't know about his past drug use, teased him on the subject. "Steve Jobs once said that he wished I'd take acid because then maybe I would have had more taste in my design of my products," recalled Gates. "My response to that was to say, 'Look, I got the wrong batch.' I got the coding batch, and this guy got the marketing-design batch, so good for him! Because his talents and mine, other than being kind of an energetic leader, and pushing the limits, they didn't overlap much. He wouldn't know what a line of code meant, and his ability to think about design and marketing and things like that... I envy those skills. I'm not in his league."

Gates added that he was a fan of Michael Pollan's book about psychedelic drugs, How To Change Your Mind, and is intrigued by the idea that they may have therapeutic uses. "The idea that some of these drugs that affect your mind might help with depression or OCD, I think that's fascinating," said Gates. "Of course, we have to be careful, and that's very different than recreational usage."

Touring the country, 69-year-old Gates shared more glimpses of his life story:
  • The Harvard Gazette notes that the university didn't offer computer science degrees when Gates attended in 1973. But since Gates already had years of code-writing experience, he "initially rebuffed any suggestion of taking computer-related coursework... 'It's too easy,' he remembered telling friends."
  • "The naiveté I had that free computing would just be this unadulterated good thing wasn't totally correct even before AI," Gates told an audience at the Harvard Book Store. "And now with AI, I can see that we could shape this in the wrong way."
  • Gates "expressed regret about how he treated another boyhood friend, Paul Allen, the other cofounder of Microsoft, who died in 2018," reports the Boston Globe. "Gates at first took 60 percent ownership of the new software company and then pressured his friend for another 4 percent. 'I feel bad about it in retrospect,' he said. 'That was always a little complicated, and I wish I hadn't pushed....'"
  • Benzinga writes that Gates has now "donated $100 billion to charitable causes... Had Gates retained the $100 billion he has donated, his total wealth would be around $264 billion, placing him second on the global wealth rankings behind Elon Musk and ahead of Jeff Bezos and Mark Zuckerberg."
  • Gates told the Associated Press "I am stunned that Intel basically lost its way," saying Intel is now "kind of behind" on both chip design and fabrication. "They missed the AI chip revolution, and with their fabrication capabilities, they don't even use standards that people like Nvidia and Qualcomm find easy... I hope Intel recovers, but it looks pretty tough for them at this stage."
  • Gates also told the Associated Press that fighting a three-year antitrust case had "distracted" Microsoft. "The area that Google did well in that would not have happened had I not been distracted is Android, where it was a natural thing for me. I was trying, although what I didn't do well enough is provide the operating system for the phone. That was ours for the taking."
  • The Dallas News reports that in an on-stage interview in Texas, Mark Cuban closed by asking Gates one question. "Is the American Dream alive?" Gates answered: "It was for me."

Programming

Microsoft Integrates a Free Version of Its 'Copilot' Coding AI Into GitHub, VS Code (techcrunch.com) 32

An anonymous reader shared this report from TechCrunch: Microsoft-owned GitHub announced on Wednesday a free version of its popular Copilot code completion/AI pair programming tool, which will also now ship by default with Microsoft's popular VS Code editor. Until now, most developers had to pay a monthly fee, starting at $10 per month, with only verified students, teachers, and open source maintainers getting free access...

There are some limitations to the free version, which is geared toward occasional users, not major work on a big project. Developers on the free plan will get access to 2,000 code completions per month, for example, and as a GitHub spokesperson told me, each Copilot code suggestion will count against this limit — not just accepted suggestions. And while GitHub recently added the ability to switch between different foundation models, users on the free plan are limited to Anthropic's Claude 3.5 Sonnet and OpenAI's GPT-4o. (The paid plans also include Google's Gemini 1.5 Pro and OpenAI's o1-preview and -mini.) For Copilot Chat, the number of chat messages is limited to 50, but otherwise, there aren't any major limitations to the free service. Developers still get access to all Copilot Extensions and skills.

The free Copilot SKU will work in a number of editors, including VS Code, Visual Studio, and JetBrains, as well as on GitHub.com.

GitHub's announcement ends with the words "Happy coding!" and calls the service "GitHub Copilot Free." But TechCrunch points out there's already competition from services like Amazon Q Developer, as well as from companies like Tabnine and Qodo (previously known as Codium) — and they typically offer a free tier. But in addition, "With Copilot Free, we are returning to our freemium roots," GitHub CEO Thomas Dohmke told TechCrunch, as well as "laying the groundwork for something far greater: AI represents our best path to enabling a GitHub with one billion developers.

"There should be no barrier to entry for experiencing the joy of creating software. Now six years after being acquired by Microsoft, it indeed appears GitHub is still GitHub — and we are doing our thing."

Or, as GitHub CEO Satya Nadella said in a video posted on LinkedIn, "The joy of coding is back! And we are looking forward to bringing the same experience to so many more people around the world."
Encryption

US Officials Urge Americans to Use Encrypted Apps Amid Unprecedented Cyberattack (nbcnews.com) 58

An anonymous reader shared this report from NBC News: Amid an unprecedented cyberattack on telecommunications companies such as AT&T and Verizon, U.S. officials have recommended that Americans use encrypted messaging apps to ensure their communications stay hidden from foreign hackers...

In the call Tuesday, two officials — a senior FBI official who asked not to be named and Jeff Greene, executive assistant director for cybersecurity at the Cybersecurity and Infrastructure Security Agency — both recommended using encrypted messaging apps to Americans who want to minimize the chances of China's intercepting their communications. "Our suggestion, what we have told folks internally, is not new here: Encryption is your friend, whether it's on text messaging or if you have the capacity to use encrypted voice communication. Even if the adversary is able to intercept the data, if it is encrypted, it will make it impossible," Greene said. The FBI official said, "People looking to further protect their mobile device communications would benefit from considering using a cellphone that automatically receives timely operating system updates, responsibly managed encryption and phishing resistant" multi-factor authentication for email, social media and collaboration tool accounts...

The FBI and other federal law enforcement agencies have a complicated relationship with encryption technology, historically advocating against full end-to-end encryption that does not allow law enforcement access to digital material even with warrants. But the FBI has also supported forms of encryption that do allow some law enforcement access in certain circumstances.

Officials said the breach seems to include some live calls of specfic targets and also call records (showing numbers called and when). "The hackers focused on records around the Washington, D.C., area, and the FBI does not plan to alert people whose phone metadata was accessed."

"The scope of the telecom compromise is so significant, Greene said, that it was 'impossible" for the agencies "to predict a time frame on when we'll have full eviction.'"
Privacy

Put Your Usernames and Passwords In Your Will, Advises Japan's Government (theregister.com) 83

The Register's Simon Sharwood reports: Japan's National Consumer Affairs Center on Wednesday suggested citizens start "digital end of life planning" and offered tips on how to do it. The Center's somewhat maudlin advice is motivated by recent incidents in which citizens struggled to cancel subscriptions their loved ones signed up for before their demise, because they didn't know their usernames or passwords. The resulting "digital legacy" can be unpleasant to resolve, the agency warns, so suggested four steps to simplify ensure our digital legacies aren't complicated:

- Ensuring family members can unlock your smartphone or computer in case of emergency;
- Maintain a list of your subscriptions, user IDs and passwords;
- Consider putting those details in a document intended to be made available when your life ends;
- Use a service that allows you to designate someone to have access to your smartphone and other accounts once your time on Earth ends.

The Center suggests now is the time for it to make this suggestion because it is aware of struggles to discover and resolve ongoing expenses after death. With smartphones ubiquitous, the org fears more people will find themselves unable to resolve their loved ones' digital affairs -- and powerless to stop their credit cards being charged for services the departed cannot consume.

AI

California Weakens Bill To Prevent AI Disasters Before Final Vote (techcrunch.com) 36

An anonymous reader shares a report: California's bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. California lawmakers bent slightly to that pressure Thursday, adding in several amendments suggested by AI firm Anthropic and other opponents. On Thursday the bill passed through California's Appropriations Committee, a major step toward becoming law, with several key changes, Senator Wiener's office told TechCrunch.

[...] SB 1047 still aims to prevent large AI systems from killing lots of people, or causing cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now grants California's government less power to hold AI labs to account. Most notably, the bill no longer allows California's attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. This was a suggestion from Anthropic. Instead, California's attorney general can seek injunctive relief, requesting a company to cease a certain operation it finds dangerous, and can still sue an AI developer if its model does cause a catastrophic event.

AI

Amazon-Powered AI Cameras Used To Detect Emotions of Unwitting UK Train Passengers (wired.com) 28

Thousands of people catching trains in the United Kingdom likely had their faces scanned by Amazon software as part of widespread artificial intelligence trials, new documents reveal. Wired: The image recognition system was used to predict travelers' age, gender, and potential emotions -- with the suggestion that the data could be used in advertising systems in the future. During the past two years, eight train stations around the UK -- including large stations such as London's Euston and Waterloo, Manchester Piccadilly, and other smaller stations -- have tested AI surveillance technology with CCTV cameras with the aim of alerting staff to safety incidents and potentially reducing certain types of crime.

The extensive trials, overseen by rail infrastructure body Network Rail, have used object recognition -- a type of machine learning that can identify items in videofeeds -- to detect people trespassing on tracks, monitor and predict platform overcrowding, identify antisocial behavior ("running, shouting, skateboarding, smoking"), and spot potential bike thieves. Separate trials have used wireless sensors to detect slippery floors, full bins, and drains that may overflow. The scope of the AI trials, elements of which have previously been reported, was revealed in a cache of documents obtained in response to a freedom of information request by civil liberties group Big Brother Watch. "The rollout and normalization of AI surveillance in these public spaces, without much consultation and conversation, is quite a concerning step," says Jake Hurfurt, the head of research and investigations at the group.

Moon

Navajo Nation Objects To Landing Human Remains On Moon, Prompting Last-Minute White House Meeting (cnn.com) 193

The White House has convened a last-minute meeting to discuss a private lunar mission, Peregrine Mission One, after the Navajo Nation requested a delay due to cultural concerns over the transport of human ashes for burial on the moon. "The moon holds a sacred place in Navajo cosmology," said Navajo Nation President Buu Nygren in a statement. "The suggestion of transforming it into a resting place for human remains is deeply disturbing and unacceptable to our people and many other tribal nations."

If successful, the commercial mission scheduled to launch Monday "will be the first time an American-made spacecraft has landed on the lunar surface since the end of the Apollo program in 1972," notes CNN. Longtime Slashdot reader garyisabusyguy shares the report: The private companies providing these lunar burial services, Celestis and Elysium Space, are just two of several paying customers hitching a ride to the moon on Pittsburgh-based Astrobotic Technology's Peregrine lunar lander. The uncrewed spacecraft is expected to lift off on the inaugural flight of the United Launch Alliance's Vulcan Centaur rocket from Florida's Cape Canaveral Space Force Station. Celestis' payload, called Tranquility Flight, includes 66 "memorial capsules" containing "cremated remains and DNA," which will remain on the lunar surface "as a permanent tribute to the intrepid souls who never stopped reaching for the stars," according to the company's website.

"We are aware of the concerns expressed by Mr. Nygren, but do not find them substantive," Celestis CEO Charles Chafer told CNN. "We reject the assertion that our memorial spaceflight mission desecrates the moon," Chafer said. "Just as permanent memorials for deceased are present all over planet Earth and not considered desecration, our memorial on the moon is handled with care and reverence, is a permanent monument that does not intentionally eject flight capsules on the moon. It is a touching and fitting celebration for our participants -- the exact opposite of desecration, it is a celebration." Elysium Space has not responded to CNN's request for a comment, but the company's website describes its "Lunar Memorial" as delivering "a symbolic portion of remains to the surface of the Moon, helping to create the quintessential commemoration." "I've been disappointed that this conversation came up so late in the game," John Thornton, Astrobotic Technology CEO, said. "I would have liked to have had this conversation a long time ago. We announced the first payload manifest of this nature to our mission back in 2015. A second in 2020. We really are trying to do the right thing and I hope we can find a good path forward with Navajo Nation." [...]

Friday's meeting convened by the White House is scheduled to feature representatives from NASA, the FAA, the US Department of Transportation, and the Department of Commerce. But Navajo Nation officials have little hope that they will be able to stop Monday's launch. "Based off of what we're seeing, and NASA are already having their pre-launch briefing, it doesn't look like they have any intention of stopping the launch or removing the remains," Ahasteen said.

AI

Science Fiction and Fantasy Writers Take Aim At AI Freeloading (torrentfreak.com) 73

An anonymous reader quotes a report from TorrentFreak: Members of the Science Fiction and Fantasy Writers Association have no trouble envisioning an AI-centered future, but developments over the past year are reason for concern. The association takes offense when AI models exploit the generosity of science fiction writers, who share their work without DRM and free of charge. [...] Over the past few months, we have seen a variety of copyright lawsuits, many of which were filed by writers. These cases target ChatGPT's OpenAI but other platforms are targeted as well. A key allegation in these complaints is that the AI was trained using pirated books. For example, several authors have just filed an amended complaint against Meta, alleging that the company continued to train its AI on pirated books despite concerns from its own legal team. This clash between AI and copyright piqued the interest of the U.S. Copyright Office which launched an inquiry asking the public for input. With more than 10,000 responses, it is clear that the topic is close to the hearts of many people. It's impossible to summarize all opinions without AI assistance, but one submission stood out to us in particular; it encourages the free sharing of books while recommending that AI tools shouldn't be allowed to exploit this generosity for free.

The submission was filed by the Science Fiction and Fantasy Writers Association (SFWA), which represents over 2,500 published writers. The association is particularly concerned with the suggestion that its members' works can be used for AI training under a fair use exception. SFWA sides with many other rightsholders, concluding that pirated books shouldn't be used for AI training, adding that the same applies to books that are freely shared by many Science Fiction and Fantasy writers. [...] Many of the authors strongly believe that freely sharing stories is a good thing that enriches mankind, but that doesn't automatically mean that AI has the same privilege if the output is destined for commercial activities. The SFWA stresses that it doesn't take offense when AI tools use the works of its members for non-commercial purposes, such as research and scholarship. However, turning the data into a commercial tool goes too far.

AI freeloading will lead to unfair competition and cause harm to licensing markets, the writers warn. The developers of the AI tools have attempted to tone down these concerns but the SFWA is not convinced. [...] The writers want to protect their rights but they don't believe in the extremely restrictive position of some other copyright holders. They don't subscribe to the idea that people will no longer buy books because they can get the same information from an AI tool, for example. However, authors deserve some form of compensation. SFWA argues that all stakeholders should ultimately get together to come up with a plan that works for everyone. This means fair compensation and protection for authors, without making it financially unviable for AI to flourish.
"Questions of 'how' and 'when' and 'how much money' all come later; first and foremost the author must have the right to say how their work is used," their submission reads.

"So long as authors retain the right to say 'no' we believe that equitable solutions to the thorny problems of licensing, scale, and market harm can be found. But that right remains the cornerstone, and we insist upon it," SFWA concludes.
China

Huawei's Profit Doubles With Made-in-China Chip Breakthrough (yahoo.com) 148

Bloomberg thinks they've identified the source of the advanced chips in Huawei's newest smartphone, citing to "people familiar with the matter". In a suggestion that export restrictions on Europe's most valuable tech company may have come too late to stem China's advances in chipmaking, ASML's so-called immersion deep ultraviolet machines were used in combination with tools from other companies to make the Huawei Technologies Co. chip, the people said, asking not to be identified discussing information that's not public. ASML declined to comment.

There is no suggestion that their sales violated export restrictions... ASML has never been able to sell its EUV machines to China because of export restrictions. But less advanced DUV models can be retooled with deposition and etching gear to produce 7-nanometer and possibly even more advanced chips, according to industry analysts. The process is much more expensive than using EUV, making it very difficult to scale production in a competitive market environment. In China, however, the government is willing to shoulder a significant portion of chipmaking costs.

Chinese companies have been legally stockpiling DUV gear for years — especially after the U.S. introduced its initial export controls last year before getting Japan and the Netherlands on board... According to an investor presentation published by the company last week, ASML experienced a jump in business from China this year as chipmakers there boosted orders ahead of the export controls taking full effect in 2024. China accounted for 46% of ASML's sales in the third quarter, compared with 24% in the previous quarter and 8% in the three months ending in March.

Another article from Bloomberg includes this prediction: The U.S. won't be able to stop Huawei and SMIC from making progress in chip technology, Burn J. Lin, a former Taiwan Semiconductor Manufacturing Co. vice president, told Bloomberg News. Semiconductor Manufacturing International Corp should be able to advance to the next generation at 5 nanometers with machines from ASML Holding NV that it already operates, said Lin, who at TSMC championed the lithography technology that transformed chipmaking.
The end result is that Huawei's profit "more than doubled during the quarter it revealed its biggest achievement in chip technology," the article reports, "adding to signs the Chinese tech leader is steadying a business rocked by US sanctions." The Shenzhen company reported a 118% surge in net profit to 26.4 billion yuan ($3.6 billion) in the September quarter, and a slight rise in sales to 145.7 billion yuan, according to Bloomberg News calculations from nine-month results released Friday. Those numbers included initial sales of the vastly popular Mate 60 Pro, which began shipping in late August... The gadget sold out almost instantly, spurring expectations it could rejuvenate Huawei's fortunes and potentially cut into Apple Inc.'s lead in China, given signs of a disappointing debut for the iPhone 15...

A resurgent Huawei would pose problems not just for Apple but also local brands from Xiaomi Corp. to Oppo and Vivo, all of which are fighting for sales in a shrinking market.

Slashdot Top Deals