Games

The Best ROM Hack Website is Shutting Down After Nearly 20 Years (polygon.com) 16

ROMhacking.net, a prominent platform for fan translations and modifications of classic games, is shutting down after nearly two decades of operation. The site's administrator, who goes by the name Nightcrawler, said the website will remain accessible in a read-only format, but all new submissions have been halted and the site's extensive database has been transferred to the Internet Archive for preservation.

ROMhacking.net has long served as a crucial resource for gaming enthusiasts, according to Polygon, hosting a vast array of fan-made translations, bug fixes, and modifications for classic titles, many of which never received official localizations outside their countries of origin. The site's contributions to the gaming community include fan translations of Japanese-exclusive titles and even patches for long-standing bugs in popular games like Super Mario 64. Nightcrawler said the website ran into challenges, including managing the site's exponential growth and increasing copyright pressures, which contributed to the decision to wind down operations.
Windows

Global Computer Outage Impact Vastly Underestimated, Microsoft Admits 64

Microsoft has revealed that the global computer outage caused by a faulty CrowdStrike software update, which impacted numerous major corporations, affected far more devices than initially reported, with the tech giant stating that the previously announced figure of 8.5 million affected Windows machines represents only a "subset" of the total impact. Microsoft has refrained from providing a revised estimate of the full scope of the disruption.

The revelation comes as the technology sector continues to grapple with the fallout from the incident, which occurred 10 days ago and led to widespread disruptions across various industries, prompting Microsoft to face criticism despite the root cause being traced back to a third-party cybersecurity provider's error. Microsoft clarified that the initial 8.5 million figure was derived solely from devices with enabled crash reporting features, suggesting that the true extent of the outage could be substantially higher, given that many systems do not have this optional feature activated.

Further reading: Delta Seeks Damages From CrowdStrike, Microsoft After Outage.
Open Source

Mike McQuaid on 15 Years of Homebrew and Protecting Open-Source Maintainers (thenextweb.com) 37

Despite multiple methods available across major operating systems for installing and updating applications, there remains "no real clear answer to 'which is best,'" reports The Next Web. Each system faces unique challenges such as outdated packages, high fees, and policy restrictions.

Enter Homebrew.

"Initially created as an option for developers to keep the dependencies they often need for developing, testing, and running their work, Homebrew has grown to be so much more in its 15-year history." Created in 2009, Homebrew has become a leading solution for macOS, integrating with MDM tools through its enterprise-focused extension, Workbrew, to balance user freedom with corporate security needs, while maintaining its open-source roots under the guidance of Mike McQuaid. In an interview with The Next Web's Chris Chinchilla, project leader Mike McQuaid talks about the challenges and responsibilities of maintaining one of the world's largest open-source projects: As with anything that attracts plenty of use and attention, Homebrew also attracts a lot of mixed and extreme opinions, and processing and filtering those requires a tough outlook, something that Mike has spoken about in numerous interviews and at conferences. "As a large project, you get a lot of hate from people. Either people are just frustrated because they hit a bug or because you changed something, and they didn't read the release notes, and now something's broken," Mike says when I ask him about how he copes with the constant influx of communication. "There are a lot of entitled, noisy users in open source who contribute very little and like to shout at people and make them feel bad. One of my strengths is that I have very little time for those people, and I just insta-block them or close their issues."

More crucially, an open-source project is often managed and maintained by a group of people. Homebrew has several dozen maintainers and nearly one thousand total contributors. Mike explains that all of these people also deserve to be treated with respect by users: "I'm also super protective of my maintainers, and I don't want them to be treated that way either." But despite these features and its widespread use, one thing Homebrew has always lacked is the ability to work well with teams of users. This is where Workbrew, a company Mike founded with two other Homebrew maintainers, steps in. [...] Workbrew ties together various Homebrew features with custom glue to create a workflow for setting up and maintaining Mac machines. It adds new features that core Homebrew maintainers had no interest in adding, such as admin and reporting dashboards for a computing fleet, while bringing more general improvements to the core project.

Bearing in mind Mike's motivation to keep Homebrew in the "traditional open source" model, I asked him how he intended to keep the needs of the project and the business separated and satisfied. "We've seen a lot of churn in the last few years from companies that made licensing decisions five or ten years ago, which have now changed quite dramatically and have generated quite a lot of community backlash," Mike said. "I'm very sensitive to that, and I am a little bit of an open-source purist in that I still consider the Open Source Initiative's definition of open source to be what open source means. If you don't comply with that, then you can be another thing, but I think you're probably not open source."

And regarding keeping his and his co-founders' dual roles separated, Mike states, "I'm the CTO and co-founder of Workbrew, and I'm the project leader of Homebrew. The project leader with Homebrew is an elected position." Every year, the maintainers and the community elect a candidate. "But then, with the Homebrew maintainers working with us on Workbrew, one of the things I say is that when we're working on Workbrew, I'm your boss now, but when we work on Homebrew, I'm not your boss," Mike adds. "If you think I'm saying something and it's a bad idea, you tell me it's a bad idea, right?" The company is keeping its early progress in a private beta for now, but you can expect an announcement soon. As for what's happening for Homebrew? Well, in the best "open source" way, that's up to the community and always will be.

Movies

Disney's First R-Rated Movie Opening Sets an All-Time Record: 'Deadpool & Wolverine' (hollywoodreporter.com) 70

No R-rated film has ever earned as much in its opening weekend, reports the Hollywood Reporter — a whopping $205 million. (The previous record was $133.7 million, set in 2016 by the original film Deadpool...)

It's also the very first R-rated film ever released by Disney... [Deadpool actor Ryan] Reynolds has his own theory about its success. "Disney probably doesn't want me to frame it this way, but I've always thought of Deadpool & Wolverine as the first four-quadrant, R-rated film," Reynolds tells the Hollywood Reporter. "Yes, it's rated R, but we set out to make a movie with enough laughs, action and heart to appeal to everyone, whether you're a comic book movie fan or not."

There's a reason Disney and others may bristle at labeling it a four-quadrant film, a term generally reserved for movies that play equally well to males and females over and under 25. After all, it is perhaps the most violent and bloody Deadpool movie yet. Still, here's evidence to back up Reynolds' theory that it's playing to a far broader audience than the usual Marvel Cinematic Universe movie, even if it's skewing male by anywhere from 60 to 63 percent. So far, 13.6 million people have bought tickets to see it, on par with last year's Barbie, which was rated PG-13, according to Steve Buck's leading research firm EntTelligence. That's the most foot traffic ever for an R-rated movie....

"Once thought of as a sure-fire way to limit potential box office, the R rating, when properly applied, can be the key to unlocking massive box office, and this has proven to be the secret sauce for the Deadpool franchise," says chief Comscore box office analyst Paul Dergarabedian. "The creative freedom afforded by the less restrictive rating has enabled filmmakers to push the envelope and, particularly in the case of Deadpool & Wolverine, can deliver the kind of edgy, intense, profanity-filled comedy action that modern audiences are fired up to see on the big screen...."

It's also the biggest July opening of all time, the biggest opening of 2024 so far and Marvel Studios' biggest launch since Spider-Man: No Way Home in December 2021.

ScreenRant notes that Deadpool & Wolverine has already surpassed the entire global box office for The Marvels in just three days. It's the biggest debut for a film since James Cameron's Avatar: The Way of Water in December of 2022 (according to the Hollywood Reporter). And they add that, though the figures haven't been adjusted for inflation, it's still the eighth-biggest box office opening of all time.

But at the end of the day, it's just people enjoying a movie together. "Well, I'm not saying that other people should do this, but my 9-year-old watched the movie with me and my mom, who's in her late 70s," Reynolds reportedly told the New York Times, "and it was just one of the best moments of this whole experience for me. Both of them were laughing their guts out, were feeling the emotion where I most desperately hoped people would be."
Earth

Are Earth's Forests Losing Their Ability to Absorb Carbon Dioxide? (msn.com) 112

An anonymous reader shared this report from the Washington Post: Earth's land lost much of its ability to absorb the carbon dioxide humans pumped into the air last year, according to a new study that is causing concern among climate scientists that a crucial damper on climate change underwent an unprecedented deterioration. Temperatures in 2023 were so high — and the droughts and wildfires that came with them were so severe — that forests in various parts of the world wilted and burned enough to have degraded the ability of the land to lock away carbon dioxide and act as a check on global warming, the study said.

The scientists behind the research, which focuses on 2023, caution that their findings are preliminary. But the work represents a disturbing data point — one that, if it turns into a trend, spells trouble for the planet and the people on it...

Philippe Ciais [a scientist at France's Laboratory of Climate and Environmental Sciences who co-authored the new research] and his colleagues saw that the concentration of CO2 measured at an observatory on Mauna Loa in Hawaii and elsewhere spiked in 2023, even though global fossil fuel emissions increased only modestly last year in comparison. That mismatch suggests that there was an "unprecedented weakening" in the Earth's ability to absorb carbon, the researchers wrote. The scientists then used satellite data and models for vegetative growth to try to pinpoint where the carbon sink was weakening. The team spotted abnormal losses of carbon in the drought-stricken Amazon and Southeast Asia as well as in the boreal forests of Canada, where record-breaking wildfires burned through tens of millions of acres.
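The inference works by treating the land and ocean sink as a residual in the global carbon budget: whatever is emitted but does not show up as atmospheric CO2 growth must have been absorbed somewhere. Here is a back-of-envelope sketch of that reasoning in Python. The ~2.13 GtC-per-ppm conversion is a standard figure, but the emission and growth-rate inputs below are illustrative round numbers, not the study's actual values:

```python
# Back-of-envelope carbon-budget arithmetic (illustrative numbers only).
# The land/ocean sink is inferred as a residual: whatever was emitted
# but did not show up as atmospheric CO2 growth must have been absorbed.

GTC_PER_PPM = 2.13  # standard conversion: 1 ppm atmospheric CO2 ~ 2.13 GtC

def inferred_sink(emissions_gtc: float, co2_growth_ppm: float) -> float:
    """Return the implied combined land + ocean sink in GtC."""
    atmospheric_gain = co2_growth_ppm * GTC_PER_PPM
    return emissions_gtc - atmospheric_gain

# Assumed inputs for illustration: ~11 GtC/yr of total emissions, and a
# CO2 growth rate jumping from ~2.4 to ~3.3 ppm/yr between two years.
typical_year = inferred_sink(emissions_gtc=11.0, co2_growth_ppm=2.4)
spike_year = inferred_sink(emissions_gtc=11.1, co2_growth_ppm=3.3)

print(f"typical year sink: {typical_year:.1f} GtC")  # ~5.9 GtC
print(f"spike year sink:   {spike_year:.1f} GtC")    # ~4.1 GtC
# Emissions barely moved while the sink residual dropped sharply --
# the kind of mismatch the researchers flagged as a weakening sink.
```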

AI

What Is the Future of Open Source AI? (fb.com) 22

Tuesday Meta released Llama 3.1, its largest open-source AI model to date. But just one day later, Mistral released Large 2, notes this report from TechCrunch, "which it claims to be on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics, and reasoning...

"Though Mistral is one of the newer entrants in the artificial intelligence space, it's quickly shipping AI models on or near the cutting edge." In a press release, Mistral says one of its key focus areas during training was to minimize the model's hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible. The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation...

However, it's important to note that Mistral's models are, like most others, not open source in the traditional sense — any commercial application of the model needs a paid license. And while it's more open than, say, GPT-4o, few in the world have the expertise and infrastructure to implement such a large model. (That goes double for Llama's 405 billion parameters, of course.)

Mistral's Large 2 has only 123 billion parameters, according to the article. But whichever system prevails, "Open Source AI Is the Path Forward," Mark Zuckerberg wrote this week, predicting that open-source AI will soar to the same popularity as Linux: This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency... Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data.
"As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone," Zuckerberg writes. He says that he's heard from developers, CEOs, and government officials that they want to "train, fine-tune, and distill" their own models, protecting their data with a cheap and efficient model — and without being locked into a closed vendor. But they also tell him that want to invest in an ecosystem "that's going to be the standard for the long term." Lots of people see that open source is advancing at a faster rate than closed models, and they want to build their systems on the architecture that will give them the greatest advantage long term...

One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing...

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society. There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer... [O]pen source should be significantly safer since the systems are more transparent and can be widely scrutinized...

The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone... I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here. I hope you'll join us on this journey to bring the benefits of AI to everyone in the world.

Movies

Comic-Con 2024: New Doctor Who Series, 'Star Trek' Movie, Keanu Reeves, and a Red Hulk (polygon.com) 77

As Comic-Con hits San Diego, "part of the big news in 2024 is that the con won't have a corresponding virtual or online event this year," according to Polygon, "for the first time since 2019."

But there's still some big scifi media news, according to CNET's Comic-Con coverage: Disney revealed a new Doctor Who addition to the franchise that will jump back to the 1970s with the Sea Devils, an ancient group of beings who arise from the sea. Made in partnership with the BBC, the series... will air on Disney Plus, where fans can currently stream season 14 of Doctor Who starring Ncuti Gatwa.
And there's also an upcoming Doctor Who Christmas special.

Meanwhile, Saturday night, USA Today ran a special article with late-breaking announcements about Marvel's Cinematic Universe: Marvel has already won Comic-Con, with a raucous screening of "Deadpool & Wolverine" followed by a high-tech drone show, and the box office, with the new movie on track to have one of the best openings of all time... Robert Downey Jr. returns to the MCU as Doctor Doom in Avengers: Doomsday. Kevin Feige says the Fantastic Four will be in the next two Avengers movies... And here comes the Fantastic Four [movie] a year from now. It starts filming Tuesday in the UK...
The article says Marvel's Fantastic Four presentation included "a Fantasti-Car that hovers across the stage" — and that cast members also appeared from the upcoming Thunderbolts* movie.

More geeky news:
  • Amazon Prime showed a new four-minute trailer with clips from season two of its J.R.R. Tolkien prequel, "The Rings of Power". (And there was also a three-minute blooper reel for Season 4 of Prime's superhero-themed series, "The Boys".)
  • Paramount+ showed a trailer for the Star Trek universe's first streaming movie, Section 31. There was also a trailer for season 5 of the animated comedy Star Trek: Lower Decks — plus a particularly strange clip from the fourth season of Star Trek: Strange New Worlds.
  • Next February will see the release of Captain America: Brave New World, in which the Incredible Hulk may get some competition from Harrison Ford, who's been cast as the Red Hulk.

But things got a little too real Friday when a fire at a nearby steakhouse forced the evacuation of the immersive "Penguin Lounge" — which was promoting Max's new prequel series to 2022's movie The Batman.


The Courts

Courts Close the Loophole Letting the Feds Search Your Phone At the Border (reason.com) 46

On Wednesday, Judge Nina Morrison ruled that cellphone searches at the border are "nonroutine" and require probable cause and a warrant, likening them to more invasive searches due to their heavy privacy impact. As reported by Reason, this decision closes the loophole in the Fourth Amendment's protection against unreasonable searches and seizures, which Customs and Border Protection (CBP) agents have exploited. Courts have previously ruled that the government has the right to conduct routine warrantless searches for contraband at the border. From the report: Although the interests of stopping contraband are "undoubtedly served when the government searches the luggage or pockets of a person crossing the border carrying objects that can only be introduced to this country by being physically moved across its borders, the extent to which those interests are served when the government searches data stored on a person's cell phone is far less clear," the judge declared. Morrison noted that "reviewing the information in a person's cell phone is the best approximation government officials have for mindreading," so searching through cellphone data has an even heavier privacy impact than rummaging through physical possessions. Therefore, the court ruled, a cellphone search at the border requires both probable cause and a warrant. Morrison did not distinguish between scanning a phone's contents with special software and manually flipping through it.

And in a victory for journalists, the judge specifically acknowledged the First Amendment implications of cellphone searches too. She cited reporting by The Intercept and VICE about CBP searching journalists' cellphones "based on these journalists' ongoing coverage of politically sensitive issues" and warned that those phone searches could put confidential sources at risk. Wednesday's ruling adds to a stream of cases restricting the feds' ability to search travelers' electronics. The 4th and 9th Circuits, which cover the mid-Atlantic and Western states, have ruled that border police need at least "reasonable suspicion" of a crime to search cellphones. Last year, a judge in the Southern District of New York also ruled (PDF) that the government "may not copy and search an American citizen's cell phone at the border without a warrant absent exigent circumstances."

Google

Pixel 9 AI Will Add You To Group Photos Even When You're Not There (androidheadlines.com) 54

Google's upcoming Pixel 9 smartphones are set to introduce new AI-powered features, including "Add Me," a tool that will allow users to insert themselves into group photos after those pictures have been taken, according to a leaked promotional video obtained by Android Headlines. This feature builds on the Pixel 8's "Best Take" function, which allowed face swapping in group shots.
Intel

No Fix For Intel's Crashing 13th and 14th Gen CPUs - Any Damage is Permanent 85

An anonymous reader shares a report: On Monday, it initially seemed like the beginning of the end for Intel's desktop CPU instability woes -- the company confirmed a patch is coming in mid-August that should address the "root cause" of exposure to elevated voltage. But if your 13th or 14th Gen Intel Core processor is already crashing, that patch apparently won't fix it.

Citing unnamed sources, Tom's Hardware reports that any degradation of the processor is irreversible, and an Intel spokesperson did not deny that when we asked. Intel is "confident" the patch will keep it from happening in the first place. But if your defective CPU has been damaged, your best option is to replace it instead of tweaking BIOS settings to try and alleviate the problems.

And, Intel confirms, too-high voltages aren't the only reason some of these chips are failing. Intel spokesperson Thomas Hannaford confirms it's a primary cause, but the company is still investigating. Intel community manager Lex Hoyos also revealed some instability reports can be traced back to an oxidization manufacturing issue that was fixed at an unspecified date last year.
Microsoft

Microsoft Pushes for Windows Changes After CrowdStrike Incident 86

In the wake of a major incident that affected millions of Windows PCs, Microsoft is calling for significant changes to enhance the resilience of its operating system. John Cable, Microsoft's vice president of program management for Windows servicing and delivery, said there was a need for "end-to-end resilience" in a blog post, signaling a potential shift in Microsoft's approach to third-party access to the Windows kernel.

While not explicitly detailing planned improvements, Cable pointed to recent innovations like VBS enclaves and the Azure Attestation service as examples of security measures that don't rely on kernel access. This move towards a "Zero Trust" approach could have far-reaching implications for the cybersecurity industry and Windows users worldwide, as Microsoft seeks to balance system security with the needs of its partners in the broader security community.

The comment follows a Microsoft spokesman's revelation last week that a 2009 European Commission agreement prevented the company from restricting third-party access to Windows' core functions.
Java

Chemist Explains the Chemistry Behind Decaf Coffee (theconversation.com) 81

An anonymous reader quotes a report from The Conversation, written by Michael W. Crowder, Professor of Chemistry and Biochemistry and Dean of the Graduate School at Miami University: For many people, the aroma of freshly brewed coffee is the start of a great day. But caffeine can cause headaches and jitters in others. That's why many people reach for a decaffeinated cup instead. I'm a chemistry professor who has taught lectures on why chemicals dissolve in some liquids but not in others. The processes of decaffeination offer great real-life examples of these chemistry concepts. Even the best decaffeination method, however, does not remove all of the caffeine -- about 7 milligrams of caffeine usually remain in an 8-ounce cup. Producers decaffeinating their coffee want to remove the caffeine while retaining all -- or at least most -- of the other chemical aroma and flavor compounds.

Decaffeination has a rich history, and now almost all coffee producers use one of three common methods. All these methods, which are also used to make decaffeinated tea, start with green, or unroasted, coffee beans that have been premoistened. Using roasted coffee beans would result in a coffee with a very different aroma and taste because the decaffeination steps would remove some flavor and odor compounds produced during roasting.
Here's a summary of each method discussed by Dr. Crowder:

The Carbon Dioxide Method: Developed in the early 1970s, the carbon dioxide method uses high-pressure CO2 to extract caffeine from moistened coffee beans, resulting in coffee that retains most of its flavor. The caffeine-laden CO2 is then filtered out using water or activated carbon, removing 96% to 98% of the caffeine with minimal CO2 residue.

The Swiss Water Process: First used commercially in the early 1980s, the Swiss water method uses hot water and activated charcoal filters to decaffeinate coffee, preserving most of its natural flavor. This chemical-free approach removes 94% to 96% of the caffeine by soaking the beans repeatedly until the desired caffeine level is achieved.

Solvent-Based Methods: Originating in the early 1900s, solvent-based methods use organic solvents like ethyl acetate and methylene chloride to extract caffeine from green coffee beans. These methods remove 96% to 97% of the caffeine through either direct soaking in solvent or indirect treatment of water containing caffeine, followed by steaming and roasting to ensure safety and flavor retention.

"It's chemically impossible to dissolve out only the caffeine without also dissolving out other chemical compounds in the beans, so decaffeination inevitably removes some other compounds that contribute to the aroma and flavor of your cup of coffee," writes Dr. Crowder in closing. "But some techniques, like the Swiss water process and the indirect solvent method, have steps that may reintroduce some of these extracted compounds. These approaches probably can't return all the extra compounds back to the beans, but they may add some of the flavor compounds back."
Security

Secure Boot Is Completely Broken On 200+ Models From 5 Big Device Makers (arstechnica.com) 63

An anonymous reader quotes a report from Ars Technica, written by Dan Goodin: On Thursday, researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a public GitHub repository committed in December of that year, someone working for multiple US-based device manufacturers published what's known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository was located at https://github.com/raywu-aaeon..., and it's not clear when it was taken down. The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. The disclosure of the key went largely unnoticed until January 2023, when Binarly researchers found it while investigating a supply-chain incident. Now that the leak has come to light, security experts say it effectively torpedoes the security assurances offered by Secure Boot.
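To see why a four-character password offered essentially no protection, consider the size of the search space. A quick sketch, assuming the most generous plausible case of all 95 printable ASCII characters and a deliberately modest guessing rate (both figures are assumptions for illustration):

```python
# Why a four-character password is trivially crackable: the entire
# keyspace is small enough to enumerate exhaustively in hours or less.

PRINTABLE_ASCII = 95  # assumed alphabet; a smaller one shrinks this further
LENGTH = 4

keyspace = PRINTABLE_ASCII ** LENGTH
print(f"total candidates: {keyspace:,}")  # 81,450,625

# Even at an assumed, very modest 10,000 decryption attempts per second
# (GPU-backed crackers run far faster), exhausting the space is quick:
seconds = keyspace / 10_000
print(f"worst case at 10k guesses/s: {seconds / 3600:.1f} hours")  # ~2.3 hours
```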

Binarly researchers said their scans of firmware images uncovered 215 devices that use the compromised key, which can be identified by the certificate serial number 55:fb:ef:87:81:23:00:84:47:17:0b:b3:cd:87:3a:f4. A table appearing at the end of this article lists each one. The researchers soon discovered that the compromise of the key was just the beginning of a much bigger supply-chain breakdown that raises serious doubts about the integrity of Secure Boot on more than 300 additional device models from virtually all major device manufacturers. As is the case with the platform key compromised in the 2022 GitHub leak, an additional 21 platform keys contain the strings "DO NOT SHIP" or "DO NOT TRUST." These keys were created by AMI, one of the three main providers of software development kits that device makers use to customize their UEFI firmware so it will run on their specific hardware configurations. As the strings suggest, the keys were never intended to be used in production systems. Instead, AMI provided them to customers or prospective customers for testing. For reasons that aren't clear, the test keys made their way into devices from a nearly inexhaustible roster of makers. In addition to the five makers mentioned earlier, they include Aopen, Foremelife, Fujitsu, HP, Lenovo, and Supermicro.

Cryptographic key management best practices call for credentials such as production platform keys to be unique for every product line or, at a minimum, to be unique to a given device manufacturer. Best practices also dictate that keys should be rotated periodically. The test keys discovered by Binarly, by contrast, were shared for more than a decade among more than a dozen independent device makers. The result is that the keys can no longer be trusted because the private portion of them is an open industry secret. Binarly has named its discovery PKfail in recognition of the massive supply-chain snafu resulting from the industry-wide failure to properly manage platform keys. The report is available here. Proof-of-concept videos are here and here. Binarly has provided a scanning tool here.
"It's a big problem," said Martin Smolar, a malware analyst specializing in rootkits who reviewed the Binarly research. "It's basically an unlimited Secure Boot bypass for these devices that use this platform key. So until device manufacturers or OEMs provide firmware updates, anyone can basically... execute any malware or untrusted code during system boot. Of course, privileged access is required, but that's not a problem in many cases."

Binarly founder and CEO Alex Matrosov added: "Imagine all the people in an apartment building have the same front door lock and key. If anyone loses the key, it could be a problem for the entire building. But what if things are even worse and other buildings have the same lock and the keys?"
ISS

Russia Announces It Will Create Core of New Space Station By 2030 (reuters.com) 99

"Despite its domestic space program faltering even before sanctions due to its invasion of Ukraine, and at least one very public failure on a less ambitious project, Russia has announced it will begin construction of a Russian-only replacement for the ISS and place it in a more difficult-to-access polar orbit," writes longtime Slashdot reader Baron_Yam. "Russia is motivated by military and political demands to achieve this, but whether it has the means or not seems uncertain at best." Reuters reports: Russia is aiming to create the four-module core of its planned new orbital space station by 2030, its Roscosmos space agency said on Tuesday. The head of Roscosmos, Yuri Borisov, signed off on the timetable with the directors of 19 enterprises involved in creating the new station. The agency confirmed plans to launch an initial scientific and energy module in 2027. It said three more modules would be added by 2030 and a further two between 2031 and 2033. [...]

Apart from the design and manufacture of the modules, Roscosmos said the schedule approved by Borisov includes flight-testing a new-generation crewed spacecraft and building rockets and ground-based infrastructure. The new station will enable Russia to "solve problems of scientific and technological development, national economy and national security that are not available on the Russian segment of the ISS due to technological limitations and the terms of international agreements," it said.

AI

Open Source AI Better for US as China Will Steal Tech Anyway, Zuckerberg Argues (fb.com) 37

Meta CEO Mark Zuckerberg has advocated for open-source AI development, asserting it as a strategic advantage for the United States against China. In a blog post, Zuckerberg argued that closing off AI models would not effectively prevent Chinese access, given their espionage capabilities, and would instead disadvantage U.S. allies and smaller entities. He writes: Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy, and most tech companies are far from operating in a way that would make this more difficult. It seems most likely that a world of only closed models results in a small number of big companies plus our geopolitical adversaries having access to leading models, while startups, universities, and small businesses miss out on opportunities. Plus, constraining American innovation to closed development increases the chance that we don't lead at all. Instead, I think our best strategy is to build a robust open ecosystem and have our leading companies work closely with our government and allies to ensure they can best take advantage of the latest advances and achieve a sustainable first-mover advantage over the long term.
United States

US Urges Vigilance By Tech Startups, VC Firms on Foreign Funds (yahoo.com) 24

The US is warning homegrown tech startups and venture capital firms that some foreign investments may be fronts for hostile nations seeking data and technology for their governments or to undermine American businesses. From a report: Several US intelligence agencies are spotlighting the concern in a joint bulletin Wednesday to small businesses, trade associations and others associated with the venture capital community, according to the National Counterintelligence and Security Center. "Unfortunately our adversaries continue to exploit early-stage investments in US startups to take their sensitive data," said Michael Casey, director of the NCSC. "These actions threaten US economic and national security and can directly lead to the failure of these companies."

Washington has ramped up scrutiny of investments related to countries it considers adversaries, most notably China, as advanced technologies with breakthrough commercial potential, such as artificial intelligence, can also be used to enhance military or espionage capabilities. [...] Small tech companies and venture capitalists "are not in a position to assess the national security implications of their investments," said Mark Montgomery, former executive director of the Cyberspace Solarium Commission, which was assigned to develop a US cybersecurity strategy. "There are way too many examples where what appears to be, at best, potentially only dual-use or non-military-use technology is quickly twisted and used as a national security tool."

Earth

Wealthy Western Countries Lead in Global Oil and Gas Expansion (theguardian.com) 99

A surge in new oil and gas production in 2024 threatens to unleash nearly 12 billion tonnes of planet-heating emissions, with the world's wealthiest countries -- such as the US and the UK -- leading a stampede of fossil fuel expansion in spite of their climate commitments, new data reveals. From a report: The new oil and gas field licences forecast to be awarded across the world this year are on track to generate the highest level of emissions since those issued in 2018, as heatwaves, wildfires, drought and floods cause death and destruction globally, according to analysis of industry data by the International Institute for Sustainable Development (IISD). The 11.9bn tonnes of greenhouse gas emissions -- which is roughly the same as China's annual carbon pollution -- resulting over their lifetime from all current and upcoming oil and gas fields forecast to be licensed by the end of 2024 would be greater than the past four years combined. The projection includes licences awarded as of June 2024, as well as the oil and gas blocks open for bidding, under evaluation or planned.

Meanwhile, fossil fuel firms are ploughing more money into developing new oil and gas sites than at any time since the 2015 Paris climate deal, when the world's governments agreed to take steps to cut emissions and curb global heating. The world's wealthiest countries are economically best placed -- and obliged under the Paris accords -- to lead the transition away from fossil fuels to cleaner energy sources. But these high-capacity countries with a low economic dependence on fossil fuels are spearheading the latest drilling frenzy despite dwindling easy-to-reach reserves, handing out 825 new licences in 2023, the largest number since records began.

AI

AI Adoption Creeps as Enterprises Wrestle With Costs and Use Cases (indiadispatch.com) 32

Global enterprises are grappling with the complexities of AI adoption, according to hundreds of top industry executives at a recent private software conference hosted by UBS. UBS adds: We heard:
1. The data points from a private GPU cloud infrastructure provider were a very bullish readthrough to GPU demand, Microsoft's AI infra capabilities and the ramp of enterprise/software demand for training and inference compute.
2. One F500 customer was at 1% Office Copilot roll-out, moving to perhaps 2% in a year as they a) fine-tune internal best practices and b) negotiate to get Microsoft much lower on price.
3. One private company flagged "copilot chaos," with customers having to choose between AI copilots from seemingly every tech firm (we wonder if this creates pricing pressure and/or an evaluation slowdown).
4. Popular use cases are AI apps for internal, domain-specific tasks (simple workflow automation).
5. Little evidence of AI resulting in customer headcount cuts, but headcount reduction with 3rd-party managed services providers and (India-based) SI firms.

AI

Google's New Weather Prediction System Combines AI With Traditional Physics (technologyreview.com) 56

An anonymous reader quotes a report from MIT Technology Review: Researchers from Google have built a new weather prediction model that combines machine learning with more conventional techniques, potentially yielding accurate forecasts at a fraction of the current cost. The model, called NeuralGCM and described in a paper in Nature today, bridges a divide that's grown among weather prediction experts in the last several years. While new machine-learning techniques that predict weather by learning from years of past data are extremely fast and efficient, they can struggle with long-term predictions. General circulation models, on the other hand, which have dominated weather prediction for the last 50 years, use complex equations to model changes in the atmosphere and give accurate projections, but they are exceedingly slow and expensive to run. Experts are divided on which tool will be most reliable going forward. But the new model from Google instead attempts to combine the two.

"It's not sort of physics versus AI. It's really physics and AI together," says Stephan Hoyer, an AI researcher at Google Research and a coauthor of the paper. The system still uses a conventional model to work out some of the large atmospheric changes required to make a prediction. It then incorporates AI, which tends to do well where those larger models fall flat -- typically for predictions on scales smaller than about 25 kilometers, like those dealing with cloud formations or regional microclimates (San Francisco's fog, for example). "That's where we inject AI very selectively to correct the errors that accumulate on small scales," Hoyer says. The result, the researchers say, is a model that can produce quality predictions faster with less computational power. They say NeuralGCM is as accurate as one-to-15-day forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF), which is a partner organization in the research.

But the real promise of technology like this is not in better weather predictions for your local area, says Aaron Hill, an assistant professor at the School of Meteorology at the University of Oklahoma, who was not involved in this research. Instead, it's in larger-scale climate events that are prohibitively expensive to model with conventional techniques. The possibilities could range from predicting tropical cyclones with more notice to modeling more complex climate changes that are years away. "It's so computationally intensive to simulate the globe over and over again or for long periods of time," Hill says. That means the best climate models are hamstrung by the high costs of computing power, which presents a real bottleneck to research.
The researchers said NeuralGCM will be open source, with an implementation of less than 5,500 lines of code, compared with the nearly 377,000 lines required for the model from the National Oceanic and Atmospheric Administration (NOAA).
Graphics

Nvidia RTX 40-Series GPUs Hampered By Low-Quality Thermal Paste (pcgamer.com) 50

"Anyone who is into gaming knows your graphics card is under strain trying to display modern graphics," writes longtime Slashdot reader smooth wombat. "This results in increased power usage, which is then turned into heat. Keeping your card cool is a must to get the best performance possible."

"However, hardware tester Igor's Lab found that vendors for Nvidia RTX 40-series cards are using cheap, poorly applied thermal paste, which is leading to high temperatures and consequently, performance degradation over time. This penny-pinching has been confirmed by Nick Evanson at PC Gamer." From the report: I have four RTX 40-series cards in my office (RTX 4080 Super, 4070 Ti, and two 4070s) and all of them have quite high hotspots -- the highest temperature recorded by an individual thermal sensor in the die. In the case of the 4080 Super, it's around 11 C higher than the average temperature of the chip. I took it apart to apply some decent quality thermal paste and discovered a similar situation to that found by Igor's Lab. In the space of a few months, the factory-applied paste had separated and spread out, leaving just an oily film behind, and a few patches of the thermal compound itself. I checked the other cards and found that they were all in a similar state.

Igor's Lab examined the thermal paste used on a brand-new RTX 4080 and found it to be quite thin in nature, due to large quantities of cheap silicone oil being used, along with zinc oxide filler. There was lots of ground aluminium oxide (the material that provides the actual thermal transfer) but it was quite coarse, leading to the paste separating quite easily. Removing the factory-installed paste from another RTX 4080 graphics card, Igor's Lab applied a more appropriate amount of a high-quality paste and discovered that it lowered the hotspot temperature by nearly 30 C.
