Power

Silicon/Perovskite Solar Panels Can Reach 34% Efficiency, Researchers Show (arstechnica.com) 54

An anonymous reader shared this report from Ars Technica: [I]t might be worth spending more to get a panel that converts more of the incoming sunlight to electricity, since it allows you to get more out of the price paid to get each panel installed. But silicon panels are already pushing up against physical limits on efficiency. Which means our best chance for a major boost in panel efficiency may be to combine silicon with an additional photovoltaic material.

Right now, most of the focus is on pairing silicon with a class of materials called perovskites. Perovskite crystals can be layered on top of silicon, creating a panel with two materials that absorb different areas of the spectrum — plus, perovskites can be made from relatively cheap raw materials. Unfortunately, it has been difficult to make perovskites that are both high-efficiency and last for the decades that the silicon portion will. Lots of labs are attempting to change that, though. And two of them reported some progress this week, including a perovskite/silicon system that achieved 34 percent efficiency.

One team of researchers "sent a sample to a European test lab, which came out with an efficiency of 33.7 percent," Ars Technica notes. "The researchers have a few ideas that should boost this to 35 percent, but didn't attempt them for this paper.

"For comparison, the maximum efficiency for silicon alone is in the area of 27 percent, so that represents a very significant boost and is one of the highest perovskite/silicon combinations ever reported."
AI

AI Startup Suno Says Music Industry Suit Aims to Stifle Competition (bloomberg.com) 42

AI music startup Suno is pushing back against the world's biggest record labels, saying in a court filing that a lawsuit they filed against the company aims to stifle competition. From a report: In a filing Thursday in federal court in Massachusetts, Suno said that while the record labels argue the company infringed on their recorded music copyrights, the lawsuit actually reflects the industry's opposition to competition -- which Suno's AI software represents by making it easy for anyone to make music.

"Where Suno sees musicians, teachers, and everyday people using a new tool to create original music, the labels see a threat to their market share," the Cambridge, Massachusetts-based company wrote in the filing, which also asked the court to enter judgment in Suno's favor.

AI

Perplexity AI Will Share Revenue With Publishers After Plagiarism Accusations (cnbc.com) 11

An anonymous reader quotes a report from CNBC: Perplexity AI on Tuesday debuted a revenue-sharing model for publishers after more than a month of plagiarism accusations. Media outlets and content platforms including Fortune, Time, Entrepreneur, The Texas Tribune, Der Spiegel and WordPress.com are the first to join the company's "Publishers Program." The announcement follows an onslaught of controversy in June, when Forbes said it found a plagiarized version of its paywalled original reporting within Perplexity AI's Pages tool, with no reference to the media outlet besides a small "F" logo at the bottom of the page. Weeks later, Wired said it also found evidence of Perplexity plagiarizing Wired stories, and reported that an IP address "almost certainly linked to Perplexity and not listed in its public IP range" visited its parent company's websites more than 800 times in a three-month span.

Under the new partner program, any time a user asks a question and Perplexity generates advertising revenue from citing one of the publisher's articles in its answer, Perplexity will share a flat percentage of that revenue. That percentage applies on a per-article basis, Dmitry Shevelenko, Perplexity's chief business officer, told CNBC in an interview -- meaning that if three articles from one publisher were used in one answer, the partner would receive "triple the revenue share." Shevelenko confirmed that the flat rate is a double-digit percentage but declined to provide specifics. He told CNBC that more than a dozen publishers, including "major newspaper dailies and companies that own them," had reached out with interest less than two hours after the program debuted. The company's goal, he said, is to have 30 publishers enrolled by the end of the year, and Perplexity is looking to partner with some of the publishers' ad sales teams so they can sell ads "against all Perplexity inventory."
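That mechanism is simple enough to sketch in code. Below is a toy Python model of the per-article split described above; the flat rate is a placeholder (Perplexity has only said it's a double-digit percentage), and the publisher and article names are made up:

```python
from collections import Counter

# Toy model of the per-article revenue share described above.
# FLAT_RATE is a placeholder: the real figure is an undisclosed
# double-digit percentage.
FLAT_RATE = 0.10

def publisher_payouts(ad_revenue, cited_articles):
    """Each cited article earns its publisher one flat share of the
    answer's ad revenue, so three citations pay triple the share."""
    counts = Counter(publisher for publisher, _article in cited_articles)
    return {publisher: ad_revenue * FLAT_RATE * n for publisher, n in counts.items()}

# Hypothetical answer citing three Fortune articles and one Time article:
citations = [("fortune", "a1"), ("fortune", "a2"), ("fortune", "a3"), ("time", "b1")]
print(publisher_payouts(1.00, citations))  # fortune earns ~0.30, time ~0.10
```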

"When Perplexity earns revenue from an interaction where a publisher's content is referenced, that publisher will also earn a share," Perplexity wrote in a blog post, adding that the company will offer publishers API credits and also work with ScalePost.ai to provide analytics to provide "deeper insights into how Perplexity cites their content." Shevelenko told CNBC that Perplexity began engaging with publishers in January and solidified ideas for how its revenue-sharing program would work later in the first quarter of 2024. He said five Perplexity employees were dedicated to working on the program. "Some of it grew out of conversations we were having with publishers about integrating Perplexity APIs and technology into their products," Shevelenko said.

Windows

Global Computer Outage Impact Vastly Underestimated, Microsoft Admits 64

Microsoft has revealed that the global computer outage caused by a faulty CrowdStrike software update, which impacted numerous major corporations, affected far more devices than initially reported. The previously announced figure of 8.5 million affected Windows machines, the company now says, represents only a "subset" of the total impact. Microsoft has refrained from providing a revised estimate of the full scope of the disruption.

The revelation comes as the technology sector continues to grapple with the fallout from the incident, which occurred 10 days ago and led to widespread disruptions across various industries. Microsoft has faced criticism despite the root cause being traced back to a third-party cybersecurity provider's error. The company clarified that the initial 8.5 million figure was derived solely from devices with crash reporting enabled, suggesting that the true extent of the outage could be substantially higher, given that many systems do not have this optional feature activated.

Further reading: Delta Seeks Damages From CrowdStrike, Microsoft After Outage.
Earth

Are Earth's Forests Losing Their Ability to Absorb Carbon Dioxide? (msn.com) 112

An anonymous reader shared this report from the Washington Post: Earth's land lost much of its ability to absorb the carbon dioxide humans pumped into the air last year, according to a new study that is causing concern among climate scientists that a crucial damper on climate change underwent an unprecedented deterioration. Temperatures in 2023 were so high — and the droughts and wildfires that came with them were so severe — that forests in various parts of the world wilted and burned enough to have degraded the ability of the land to lock away carbon dioxide and act as a check on global warming, the study said.

The scientists behind the research, which focuses on 2023, caution that their findings are preliminary. But the work represents a disturbing data point — one that, if it turns into a trend, spells trouble for the planet and the people on it...

Philippe Ciais [a scientist at France's Laboratory of Climate and Environmental Sciences who co-authored the new research] and his colleagues saw that the concentration of CO2 measured at an observatory on Mauna Loa in Hawaii and elsewhere spiked in 2023, even though global fossil fuel emissions increased only modestly last year in comparison. That mismatch suggests that there was an "unprecedented weakening" in the Earth's ability to absorb carbon, the researchers wrote. The scientists then used satellite data and models for vegetative growth to try to pinpoint where the carbon sink was weakening. The team spotted abnormal losses of carbon in the drought-stricken Amazon and Southeast Asia as well as in the boreal forests of Canada, where record-breaking wildfires burned through tens of millions of acres.
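The logic here is a carbon-budget residual: whatever emitted carbon doesn't show up as an atmospheric increase must have been absorbed by land and ocean. A back-of-the-envelope sketch of that inference, using illustrative placeholder numbers rather than the study's data:

```python
# Carbon-budget residual: sink = emissions - atmospheric increase.
# The emissions and growth figures below are illustrative placeholders,
# not the study's data.

PPM_TO_GTC = 2.124  # ~2.124 gigatonnes of carbon per 1 ppm of atmospheric CO2

emissions_gtc = 11.0    # hypothetical fossil + land-use emissions for the year
atm_growth_ppm = 3.3    # hypothetical observed CO2 rise (Mauna Loa et al.)

atm_growth_gtc = atm_growth_ppm * PPM_TO_GTC
inferred_sink_gtc = emissions_gtc - atm_growth_gtc
print(f"Inferred combined land+ocean sink: {inferred_sink_gtc:.1f} GtC")

# If emissions rise only modestly while the atmospheric increase spikes,
# this residual shrinks -- the "unprecedented weakening" described above.
```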

AI

What Is the Future of Open Source AI? (fb.com) 22

Tuesday Meta released Llama 3.1, its largest open-source AI model to date. But just one day later, Mistral released Large 2, notes this report from TechCrunch, "which it claims to be on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics, and reasoning...

"Though Mistral is one of the newer entrants in the artificial intelligence space, it's quickly shipping AI models on or near the cutting edge." In a press release, Mistral says one of its key focus areas during training was to minimize the model's hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible. The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation...

However, it's important to note that Mistral's models are, like most others, not open source in the traditional sense — any commercial application of the model needs a paid license. And while it's more open than, say, GPT-4o, few in the world have the expertise and infrastructure to implement such a large model. (That goes double for Llama's 405 billion parameters, of course.)

Mistral's Large 2 has only 123 billion parameters, according to the article. But whichever system prevails, "Open Source AI Is the Path Forward," Mark Zuckerberg wrote this week, predicting that open-source AI will soar to the same popularity as Linux: This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency... Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data.
"As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone," Zuckerberg writes. He says that he's heard from developers, CEOs, and government officials that they want to "train, fine-tune, and distill" their own models, protecting their data with a cheap and efficient model — and without being locked into a closed vendor. But they also tell him that want to invest in an ecosystem "that's going to be the standard for the long term." Lots of people see that open source is advancing at a faster rate than closed models, and they want to build their systems on the architecture that will give them the greatest advantage long term...

One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing...

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society. There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer... [O]pen source should be significantly safer since the systems are more transparent and can be widely scrutinized...

The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone... I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here. I hope you'll join us on this journey to bring the benefits of AI to everyone in the world.

Power

Fracking for Heat: A New Source of Clean Energy? (msn.com) 37

Southern California Edison — one of America's largest power companies — will buy power from 7-year-old fracking startup Fervo, reports the Washington Post.

"But instead of oil and gas, Fervo is hunting heat, a more abundant resource that neither pollutes the air nor contributes to global warming." The heat will fuel a new type of power plant: an enhanced geothermal plant... [C]onventional geothermal power plants capture steam from natural underground hot springs in places such as Iceland or the Geysers in Northern California. These require a rare combination of geologic conditions — heat, underground water and porous rock. Enhanced geothermal plants use technology pioneered by oil and gas drillers to reproduce the conditions of a conventional geothermal well. This makes it possible to extract heat in many more places.

When completed in 2028, the new enhanced geothermal plant will add 400 megawatts of carbon-free electricity to the power grid (Southern California Edison has agreed to buy 320 megawatts; the rest will go to smaller power providers.) That is less than one-fifth of the generating capacity of the Diablo Canyon nuclear power plant, which by itself provides nearly a tenth of California's electricity. But as the first power purchasing agreement between an electric utility and an enhanced geothermal company, the deal represents a milestone in the effort to limit global warming. "It's a big deal," said Fervo founder and CEO Tim Latimer. "It shows the important role that geothermal is going to play on the grid as a 24/7 carbon-free energy resource...."

Fracking for heat releases no greenhouse gases. But to meaningfully contribute to emissions cuts, enhanced geothermal will need to expand quickly.

The article includes an interesting statistic about the original impact of fracking. "Between 2005 and 2021, cheaper natural gas replaced so much coal that it drove a larger reduction in U.S. CO2 emissions than replacing coal with emissions-free electricity sources such as wind and solar." (Though burning natural gas still emits greenhouse gases, and "some scientists now say that so much methane leaks during fracking that natural gas warms the planet as much as coal does.")

And while fracking for oil still has some strong critics, U.S. presidential candidate Kamala Harris "will not seek to ban fracking if she's elected," the Hill reported Friday, citing confirming comments from a campaign official.
Privacy

Data From Deleted GitHub Repos May Not Actually Be Deleted, Researchers Claim (theregister.com) 23

Thomas Claburn reports via The Register: Researchers at Truffle Security have found, or arguably rediscovered, that data from deleted GitHub repositories (public or private) and from deleted copies (forks) of repositories isn't necessarily deleted. Joe Leon, a security researcher with the outfit, said in an advisory on Wednesday that being able to access deleted repo data -- such as API keys -- represents a security risk. And he proposed a new term to describe the alleged vulnerability: Cross Fork Object Reference (CFOR). "A CFOR vulnerability occurs when one repository fork can access sensitive data from another fork (including data from private and deleted forks)," Leon explained.

For example, the firm showed how one can fork a repository, commit data to it, delete the fork, and then access the supposedly deleted commit data via the original repository. The researchers also created a repo, forked it, and showed how data not synced with the fork continues to be accessible through the fork after the original repo is deleted. You can watch that particular demo [here].
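The mechanics stem from how GitHub stores a fork network: commits live in a shared object pool and remain addressable by hash from any repository in the network. A minimal sketch of checking this yourself via GitHub's public REST API; the repository name and commit SHA below are hypothetical placeholders:

```python
import requests

# Hypothetical placeholders: an upstream repo, and the SHA of a commit
# that was pushed to a fork which has since been deleted.
UPSTREAM = "example-org/example-repo"
SHA = "0123456789abcdef0123456789abcdef01234567"

# Request the commit through the *upstream* repo, not the deleted fork.
resp = requests.get(f"https://api.github.com/repos/{UPSTREAM}/commits/{SHA}")

if resp.status_code == 200:
    print("Commit from the deleted fork is still reachable via the fork network:")
    print(resp.json()["commit"]["message"])
else:
    print(f"Not accessible: HTTP {resp.status_code}")
```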

According to Leon, this scenario came up last week with the submission of a critical vulnerability report to a major technology company involving a private key for an employee GitHub account that had broad access across the organization. The key had been publicly committed to a GitHub repository. Upon learning of the blunder, the tech biz nuked the repo thinking that would take care of the leak. "They immediately deleted the repository, but since it had been forked, I could still access the commit containing the sensitive data via a fork, despite the fork never syncing with the original 'upstream' repository," Leon explained. Leon added that after reviewing three widely forked public repos from large AI companies, Truffle Security researchers found 40 valid API keys from deleted forks.

GitHub said it considers this situation a feature, not a bug: "GitHub is committed to investigating reported security issues. We are aware of this report and have validated that this is expected and documented behavior inherent to how fork networks work. You can read more about how deleting or changing visibility affects repository forks in our [documentation]."

Truffle Security argues that GitHub should reconsider its position "because the average user expects there to be a distinction between public and private repos in terms of data security, which isn't always true," reports The Register. "And there's also the expectation that the act of deletion should remove commit data, which again has been shown to not always be the case."
Power

US Solar Production Soars By 25 Percent In Just One Year (arstechnica.com) 194

Yesterday, the Energy Information Administration (EIA) released electricity generation numbers for the first five months of 2024, revealing that solar power generation increased by 25% compared to the same period last year. Ars Technica's John Timmer reports: The EIA breaks down solar production according to the size of the plant. Large grid-scale facilities have their production tracked, giving the EIA hard numbers. For smaller installations, like rooftop solar on residential and commercial buildings, the agency has to estimate the amount produced, since the hardware often resides behind the metering equipment and so only shows up as lower-than-expected consumption.

In terms of utility-scale production, the first five months of 2024 saw it rise by 29 percent compared to the same period in the year prior. Small-scale solar was "only" up by 18 percent, with the combined number rising by 25.3 percent. Most other generating sources were largely flat, year over year. This includes coal, nuclear, and hydroelectric, all of which changed by 2 percent or less. Wind was up by 4 percent, while natural gas rose by 5 percent. Because natural gas is the largest single source of energy on the grid, however, its 5 percent rise represents a lot of electrons -- slightly more than the total increase in wind and solar.
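As an aside, those three growth figures pin down roughly how last year's solar generation split between the two categories: a weighted average of 29 and 18 percent only lands at 25.3 percent for one particular mix. A quick back-of-the-envelope calculation (ignoring rounding in the reported percentages):

```python
# Solve 29*(1 - x) + 18*x = 25.3 for x, the small-scale share of the
# prior-year solar baseline implied by the EIA's three growth figures.
utility_growth = 29.0
small_growth = 18.0
combined_growth = 25.3

x = (utility_growth - combined_growth) / (utility_growth - small_growth)
print(f"Implied small-scale share of 2023 solar generation: {x:.1%}")
# ~34%, i.e. roughly a third of US solar output was behind-the-meter
```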

Overall, energy use was up by about 4 percent compared to the same period in 2023. This could simply be a matter of changing weather conditions that required more heating or cooling. But there have been several trends that should increase electricity usage: the rise of bitcoin mining, growth of data centers, and the electrification of appliances and transport. So far, that hasn't shown up in the actual electricity usage in the US, which has stayed largely flat for decades. It could be possible that 2024 is the year where usage starts going up again.

Since the findings are based on data from before some of the most productive months of the year for solar power, solar production for the year as a whole could increase by much more than 25%. Overall, the EIA predicts solar production could rise by as much as 42% in 2024.
AI

AI Models Face Collapse If They Overdose On Their Own Output 106

According to a new study published in Nature, researchers found that training AI models using AI-generated datasets can lead to "model collapse," where models produce increasingly nonsensical outputs over generations. "In one example, a model started with a text about European architecture in the Middle Ages and ended up -- in the ninth generation -- spouting nonsense about jackrabbits," writes The Register's Lindsay Clark. From the report: [W]ork led by Ilia Shumailov, a Google DeepMind and Oxford post-doctoral researcher, found that an AI may fail to pick up less common lines of text in training datasets, which means subsequent models trained on the output cannot carry forward those nuances. Training new models on the output of earlier models in this way turns into a recursive loop. In an accompanying article, Emily Wenger, assistant professor of electrical and computer engineering at Duke University, illustrated model collapse with the example of a system tasked with generating images of dogs. "The AI model will gravitate towards recreating the breeds of dog most common in its training data, so might over-represent the Golden Retriever compared with the Petit Basset Griffon Vendéen, given the relative prevalence of the two breeds," she said.

"If subsequent models are trained on an AI-generated data set that over-represents Golden Retrievers, the problem is compounded. With enough cycles of over-represented Golden Retriever, the model will forget that obscure dog breeds such as Petit Basset Griffon Vendeen exist and generate pictures of just Golden Retrievers. Eventually, the model will collapse, rendering it unable to generate meaningful content." While she concedes an over-representation of Golden Retrievers may be no bad thing, the process of collapse is a serious problem for meaningful representative output that includes less-common ideas and ways of writing. "This is the problem at the heart of model collapse," she said.
Google

Google DeepMind's AI Systems Can Now Solve Complex Math Problems (technologyreview.com) 40

Google DeepMind has announced that its AI systems, AlphaProof and AlphaGeometry 2, have achieved silver medal performance at the 2024 International Mathematical Olympiad (IMO), solving four out of six problems and scoring 28 out of 42 possible points in a significant breakthrough for AI in mathematical reasoning. This marks the first time an AI system has reached such a high level of performance in this prestigious competition, which machine learning researchers have long treated as a benchmark for advanced mathematical reasoning.

AlphaProof, a system that combines a pre-trained language model with reinforcement learning techniques, demonstrated its new capability by solving two algebra problems and one number theory problem, including the competition's most challenging question. Meanwhile, AlphaGeometry 2 successfully tackled a complex geometry problem, Google wrote in a blog post. The systems' solutions were formally verified and scored by prominent mathematicians, including Fields Medal winner Prof Sir Timothy Gowers and IMO Problem Selection Committee Chair Dr Joseph Myers, lending credibility to the achievement.

The development of these AI systems represents a significant step forward in bridging the gap between natural language processing and formal mathematical reasoning, the company argued. By fine-tuning a version of Google's Gemini model to translate natural language problem statements into formal mathematical language, the researchers created a vast library of formalized problems, enabling AlphaProof to train on millions of mathematical challenges across various difficulty levels and topic areas. While the systems' performance is impressive, challenges remain, particularly in the field of combinatorics where both AI models were unable to solve the given problems. Researchers at Google DeepMind continue to investigate these limitations, the company said, aiming to further improve the systems' capabilities across all areas of mathematics.
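For a concrete sense of what "formal mathematical language" means here: AlphaProof works in the Lean proof assistant, where a statement is a type and a proof is a term the kernel checks mechanically. The actual IMO formalizations are vastly more involved, but a toy Lean 4 example of turning a natural-language claim into checkable form looks like this:

```lean
-- Natural language: "for every natural number n, n + 0 = n."
-- Formalized, the claim becomes a type; `rfl` is a proof term the
-- Lean kernel verifies mechanically, the same way AlphaProof's
-- candidate proofs are checked.
theorem add_zero_example (n : Nat) : n + 0 = n := by
  rfl
```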
AI

Nvidia and Mistral's New Model 'Mistral-NeMo' Brings Enterprise-Grade AI To Desktop Computers (venturebeat.com) 23

Nvidia and French startup Mistral AI jointly announced today the release of a new language model designed to bring powerful AI capabilities directly to business desktops. From a report: The model, named Mistral-NeMo, boasts 12 billion parameters and an expansive 128,000 token context window, positioning it as a formidable tool for businesses seeking to implement AI solutions without the need for extensive cloud resources. Bryan Catanzaro, vice president of applied deep learning research at Nvidia, emphasized the model's accessibility and efficiency in a recent interview with VentureBeat. "We're launching a model that we jointly trained with Mistral. It's a 12 billion parameter model, and we're launching it under Apache 2.0," he said. "We're really excited about the accuracy of this model across a lot of tasks."

The collaboration between Nvidia, a titan in GPU manufacturing and AI hardware, and Mistral AI, a rising star in the European AI scene, represents a significant shift in the AI industry's approach to enterprise solutions. By focusing on a more compact yet powerful model, the partnership aims to democratize access to advanced AI capabilities. Catanzaro elaborated on the advantages of smaller models. "The smaller models are just dramatically more accessible," he said. "They're easier to run, the business model can be different, because people can run them on their own systems at home. In fact, this model can run on RTX GPUs that many people have already."
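If the weights ship under Apache 2.0 as stated, running the model locally follows the usual open-weights pattern. A sketch using Hugging Face transformers; the model identifier below is an assumption based on the announcement, not a confirmed name, and even a 12B model wants a recent GPU with ample VRAM:

```python
# Sketch: loading an open-weights 12B model locally with transformers.
# The model id is an assumption -- substitute the official identifier
# from the Mistral/Nvidia release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"  # assumed identifier
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available GPUs/CPU
    torch_dtype="auto",  # use the checkpoint's native precision
)

prompt = "Explain what a 128,000-token context window lets a model do."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=80)
print(tok.decode(out[0], skip_special_tokens=True))
```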

Security

Ransomware Continues To Pile on Costs For Critical Infrastructure Victims 21

Costs associated with ransomware attacks on critical national infrastructure (CNI) organizations skyrocketed in the past year. From a report: According to Sophos' latest figures, released today, the median ransom payment rose to $2.54 million -- a whopping 41 times last year's sum of $62,500. The mean payment for 2024 is even higher at $3.225 million, although this represents a less dramatic 6x increase. IT, tech, and telecoms were the least likely to pay mega bucks to cybercriminals, with an average payment of $330,000, while lower education and federal government orgs reported the highest average payments at $6.6 million.

The numbers are based only on ransomware victims that were willing to disclose the details of their blunders, so they do not present the complete picture. On the topic of ransom payments, only 86 of the 275 CNI organizations involved in the survey offered data. The numbers could well look different if all of the CNI ransomware victims polled were entirely transparent with their figures. Costs to recover from ransomware attacks are also significantly up compared to the researchers' report last year, with some CNI sectors' costs quadrupling to a median of $3 million per incident. While the mean cost across oil, gas, energy, and utilities dropped slightly to $3.12 million from $3.17 million last year, the energy and water sectors saw the sharpest increase in recovery costs. The new figure for just these two sectors is now four times the global cross-sector median of $750,000, Sophos said.
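A quick sanity check on the reported multiples, using only the figures given above (the prior-year mean isn't stated directly; the second line just shows what the "6x" claim implies):

```python
# Sanity-checking the reported ransomware multiples.
median_2024 = 2_540_000   # this year's median payment
median_2023 = 62_500      # last year's median payment
print(median_2024 / median_2023)  # ~40.6 -> the "41 times" figure

mean_2024 = 3_225_000
print(mean_2024 / 6)              # ~537,500 -> prior-year mean implied by "6x"
```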
Transportation

Amid Whistleblower Complaints, Boeing Buys Spirit, Ending Outsourcing of Key Work on Planes (apnews.com) 35

Monday Boeing announced plans to acquire its key supplier, Spirit AeroSystems, for $4.7 billion, according to the Associated Press — "a move that it says will improve plane quality and safety amid increasing scrutiny by Congress, airlines and the Department of Justice. Boeing previously owned Spirit, and the purchase would reverse a longtime Boeing strategy of outsourcing key work on its passenger planes."

But meanwhile, an anonymous reader shared this report from Newsweek: More than a hundred Boeing whistleblowers have contacted the U.S. aviation watchdog since the start of the year, Newsweek can reveal. Official figures show that the Federal Aviation Administration's (FAA) whistleblowing hotline has seen a huge surge of calls from workers concerned about safety problems. Since January, the watchdog has received a total of 126 such reports via various channels. In 2023, there were just 11....

After a visit from FAA Administrator Mike Whitaker to a Boeing factory earlier in the year, Boeing CEO Dave Calhoun agreed to share details of the hotline with all Boeing employees. The FAA told Newsweek that the number of Boeing employees coming forward was a "sign of a healthy culture".... Newsweek also spoke to Jon Holden, president of the 751 District for the International Association of Machinists, Boeing's largest union which represents more than 32,000 aerospace workers. Holden said that numerous whistleblowers had complained to the FAA over Boeing's attempt to cut staff and reduce inspections in an effort to "speed up the rate" at which planes went out the door...

Holden's union is currently in contract negotiations with Boeing, and is attempting to secure a 40% pay rise alongside a 50-year guarantee of work security for its members.

CNN also reports on new allegations Wednesday from a former Boeing quality-control manager: that "for years workers at its 787 Dreamliner factory in Everett, Washington, routinely took parts that were deemed unsuitable to fly out of an internal scrap yard and put them back on factory assembly lines." In his first network TV interview, Merle Meyers, a 30-year veteran of Boeing, described to CNN what he says was an elaborate off-the-books practice that Boeing managers at the Everett factory used to meet production deadlines, including taking damaged and improper parts from the company's scrapyard, storehouses and loading docks... Meyers claims the lapses he witnessed were intentional, organized efforts designed to thwart quality control processes in an effort to keep up with demanding production schedules. Meyers estimates that, beginning in the early 2000s and continuing for more than a decade, about 50,000 parts "escaped" quality control and were used to build aircraft. Those parts include everything from small items like screws to more complex assemblies like wing flaps. A single Boeing 787 Dreamliner, for example, has approximately 2.3 million parts...

Based on conversations Meyers says he had with current Boeing workers in the time since he left the company, he believes that while employees no longer remove parts from the scrapyard, the practice of using other unapproved parts in assembly lines continues. "Now they're back to taking parts of body sections — everything — right when it arrives at the Everett site, bypassing quality, going right to the airplane," Meyers said.

Company emails going back years show that Meyers repeatedly flagged the issue to Boeing's corporate investigations team, pointing out what he says were blatant violations of Boeing's safety rules. But investigators routinely failed to enforce those rules, Meyers says, even ignoring "eye witness observations and the hard work done to ensure the safety of future passengers and crew," he wrote in an internal 2022 email provided to CNN.

Power

ITER Fusion Reactor To See Further Delays, With Operations Pushed To 2034 (arstechnica.com) 112

John Timmer reports via Ars Technica: On Tuesday, the people managing the ITER experimental fusion reactor announced (PDF) that a combination of delays and altered priorities meant that its first-of-its-kind hardware wouldn't see plasma until 2036, with the full-energy deuterium-tritium fusion pushed back to 2039. The latter represents a four-year delay relative to the previous roadmap. While the former is also a delay, it's due in part to changing priorities.

ITER is an attempt to build a fusion reactor that's capable of sustaining plasmas that allow it to operate well beyond the break-even point, where the energy released by fusion reactions significantly exceeds the energy required to create the conditions that enable those reactions. It's meant to hit that milestone by scaling up a well-understood design called a tokamak. But the project has been plagued by delays and cost overruns nearly from its start. In the early stages, many of these stemmed from design changes necessitated by an improved understanding of plasmas held at extreme pressures and temperatures, itself the product of better modeling capabilities and observations of plasma behavior in smaller reactors.

The latest delays are due to more prosaic reasons. One of them is the product of the international nature of the collaboration, which sees individual components built by different partner organizations before assembly at the reactor site in France. The pandemic, unsurprisingly, severely disrupted the production of a lot of these components, and the project's structure meant that alternate suppliers couldn't be used (assuming alternate suppliers of one-of-a-kind hardware existed in the first place). The second problem relates to the location of the reactor in France. The country's nuclear safety regulator had concerns about the assembly of some of the components and halted construction on the reactor.

Science

Get Ready For Nuclear Clocks (arxiv.org) 50

Long-time Slashdot reader jrronimo says JILA physicist Jun Ye's group "has made a breakthrough towards the next stage of precision timekeeping."

From their paper recently published to arXiv: Optical atomic clocks use electronic energy levels to precisely keep track of time. A clock based on nuclear energy levels promises a next-generation platform for precision metrology and fundamental physics studies.... These results mark the start of nuclear-based solid-state optical clocks and demonstrate the first comparison of nuclear and atomic clocks for fundamental physics studies. This work represents a confluence of precision metrology, ultrafast strong field physics, nuclear physics, and fundamental physics.
Bitcoin

Linux Foundation Announces Intent to Form LF Decentralized Trust (linuxfoundation.org) 9

This week the Linux Foundation announced a new organization for decentralized systems and technologies, with an aim of "fostering innovation and collaboration" in both their development and deployment.

It will build on existing Linux Foundation blockchain and digital identity projects, according to the announcement, while supporting "a rapidly growing decentralized technology landscape." To foster this broader ecosystem, LF Decentralized Trust will encompass the growing portfolio of Hyperledger projects and host new open source software, communities, standards, and specifications that are critical to the macro shift toward decentralized systems of distributed trust....

LF Decentralized Trust's expanded project and member ecosystem will be essential both to emerging tokenized asset classes and networks and to modernizing the core infrastructure for finance, trade, government, healthcare, and more. LF Decentralized Trust will serve as a neutral home for the open development of a broad range of ledger, identity, security, interoperability, scale, implementation, and related technologies... LF Decentralized Trust will also include new directed funding models that will drive strategic investments by members into individual projects and project resources.

"With LF Decentralized Trust, we're expanding our commitment to open source innovation by embracing a wider array of decentralized technologies," said Jim Zemlin, Executive Director of the Linux Foundation. "This new, elevated foundation will enable the community to build a more robust ecosystem that drives forward transparency, security, and efficiency in global infrastructure."

"After eight years of advancing the development of blockchain, decentralized identity and related technologies via the Hyperledger community, the time has come to broaden our effort and impact," said Daniela Barbosa, General Manager, Blockchain and Identity, the Linux Foundation. "Ledgers and ledger technologies are but one component of the decentralized systems that will underpin a digital-first global economy. LF Decentralized Trust is where we will gather and grow an expanded community and portfolio of technologies to deliver the transparency, reliability, security and efficiency needed to successfully upgrade critical systems around the world."

The announcement includes quotes of support from numerous companies including Oracle, Siemens, Visa, Accenture, Citi, and Hitachi. Some highlights:
  • "The formation of the LF Decentralized Trust reflects the growing demand for open source resources that are critical to the management and functionality of decentralized systems." — CEO of Digital Asset
  • "The adoption of decentralized infrastructure is at an inflection point, reflecting the increasing demand from both enterprises and consumers for more secure and transparent digital transactions. As the industry leader for onchain data, blockchain abstraction, and interoperability, we're excited to see the formation of the LF Decentralized Trust and to expand our collaboration with leading financial institutions on advancing tokenized assets and the onchain economy at large." — CMO at Chainlink Labs.
  • "As a founding member of the Hyperledger Foundation, and given our unique position in the financial markets, we recognize the vast potential for open-source innovation and decentralized technologies when it comes to reducing risk, increasing resiliency and improving security. The expansion of Hyperledger Foundation into LF Decentralized Trust represents an exciting opportunity to continue expanding these groundbreaking technologies." — a managing director at DTCC

Robotics

Public Servants Uneasy As Government 'Spy' Robot Prowls Federal Offices (www.cbc.ca) 72

An anonymous reader quotes a report from CBC News: A device federal public servants call "the little robot" began appearing in Gatineau office buildings in March. It travels through the workplace to collect data using about 20 sensors and a 360-degree camera, according to Yahya Saad, co-founder of GlobalDWS, which created the robot. "Using AI on the robot, the camera takes the picture, analyzes and counts the number of people and then discards the image," he said. Part of a platform known as VirBrix, the robot also gathers information on air quality, light levels, noise, humidity, temperature and even measures CO2, methane and radon gas. The aim is to create a better work environment for humans -- one that isn't too hot, humid or dim. Saad said that means more comfortable and productive employees. The technology can also help reduce heating, cooling and hydro costs, he said. "All these measures are done to save on energy and reduce the carbon footprint," Saad explained. After the pilot program in March, VirBrix is set to return in July and October, and the government hasn't ruled out extending its use. It's paying $39,663 to lease the robot for two years.

Bruce Roy, national president of the Government Services Union, called the robot's presence in federal workplaces "intrusive" and "insulting." "People feel observed all the time," he said in French. "It's a spy. The robot is a spy for management." Roy, whose union represents more than 12,000 federal workers across several departments, said the robot is unnecessary because the employer already has ways of monitoring employee attendance and performance. "We believe that one of the robot's tasks is to monitor who is there and who is not," he said. "Folks say, why is there a robot here? Doesn't my employer trust that I'm here and doing my work properly?" [...] Jean-Yves Duclos, the minister of public services and procurement, said the government is instead using the technology as it looks to cut its office space footprint in half over the coming years. "These robots, as we call them, these sensors observe the utilization of office space and will be able to give us information over the next few years to better provide the kind of workplace employees need to do their job," Duclos said in French. "These are totally anonymous methods that allow us to evaluate which spaces are the most used and which spaces are not used, so we can better arrange them."
"In those cases we keep the images, but the whole body, not just the face, the whole body of the person is blurred," said Saad. "These are exceptional cases where we need to keep images and then the images would be handed over to the client."

The data is then stored on a server on Canadian soil, according to GlobalDWS.
The Internet

An Effort To Fund an Internet Subsidy Program Just Got Thwarted Again (theverge.com) 18

Bipartisan agreement on government internet subsidies seems unlikely as Democrats and Republicans propose conflicting bills to reauthorize the FCC's spectrum auctions. The Democratic bill aims to fund the now-defunct Affordable Connectivity Program, while the Republican version does not. "While some Republicans supported earlier efforts to extend the subsidy program, those efforts did not go through in time to keep it from ending," notes The Verge. From the report: The Senate Commerce Committee canceled a Tuesday morning markup meeting in which it was set to consider the Spectrum and National Security Act, led by committee chair Maria Cantwell (D-WA). When she introduced it in April, Cantwell said the bill would provide $7 billion to continue funding the Affordable Connectivity Program (ACP), the pandemic-era internet subsidy for low-income Americans that officially ran out of money and ended at the end of May. The main purpose of the bill is to reauthorize the Federal Communications Commission's authority to run auctions for spectrum. The proceeds from spectrum auctions are often used to fund other programs. In addition to the ACP, Cantwell's bill would also fund programs including incentives for domestic chip manufacturing and a program that seeks to replace telecommunications systems that have been deemed national security concerns. The markup had already been postponed several times.

Cantwell blamed Sen. Ted Cruz (R-TX), the top Republican on the Senate Commerce Committee, for standing in the way of the legislation. "We had a chance to secure affordable broadband for millions of Americans, but Senator Cruz said 'no,'" Cantwell said in a statement late Monday. "He said 'no' to securing a lifeline for millions of Americans who rely on the Affordable Connectivity Program to speak to their doctors, do their homework, connect to their jobs, and stay in touch with loved ones -- including more than one million Texas families." In remarks on the Senate floor on Tuesday, Cantwell said her Republican colleagues on the committee offered amendments to limit the ACP funding in the bill. She said the ACP shouldn't be a partisan issue and stressed the wide range of Americans who've relied on the program for high-speed connections, including elderly people living on fixed incomes and many military families. "I hope my colleagues will stop with obstructing and get back to negotiating on important legislation that will deliver these national security priorities and help Americans continue to have access to something as essential as affordable broadband," she said.

Cruz has his own spectrum legislation with Sen. John Thune (R-SD) that would reauthorize the FCC's spectrum auction authority, with a focus on expanding commercial access to mid-band spectrum, commonly used for 5G. But it doesn't have the same ACP funding mechanism. Some large telecom industry players prefer Cruz's bill, in part because it allows for exclusive licensing. Wireless communications trade group CTIA's SVP of government affairs, Kelly Cole, told Fierce Network that the Cruz bill "is a better approach because it follows the historical precedent set by prior bipartisan legislation to extend the FCC's auction authority." But other tech groups like the Information Technology Industry Council (ITI), which represents companies including Amazon, Apple, Google, and Meta, support Cantwell's bill, in part because of the programs it seeks to fund.

Education

87% in New Poll Say Cost an Important Reason For Halting Studies (thehill.com) 167

A new Gallup survey released Tuesday found cost and work conflicts are the top reasons Americans choose to discontinue their higher education. From a report: In the poll, 87 percent said cost was a "very" or "moderately" important reason for halting their studies, while 81 percent pointed to work conflicts. The other two leading reasons were the time it takes to complete a degree at 73 percent and lack of remote options at 70 percent. Cost tops the list among all demographic groups, including across racial and ethnic lines.

"For many of these Americans, their time enrolled in these courses represents significant opportunity costs and financial investment. Given that they lack a degree or credential to show for their time enrolled, they are often worse off than if they never enrolled to begin with," Gallup said. Colleges prices have been surging for decades, with some estimating a 180 percent increase between 1980 and 2020. The cost of Ivy League schools is nearing $90,000 a year, and the average student debt held in the U.S. sits around $30,000. "Today, approximately 41.9 million Americans have some college experience but no degree or credential. The percentage of Americans who have taken some college courses, but who have stopped out and not completed their degree or credential, has increased significantly over the past five years," Gallup found.
