Operating Systems

Intel, NVIDIA, AMD GPU Drivers Finally Play Nice With ReactOS (x.com) 21

ReactOS aims to be compatible with programs and drivers developed for Windows Server 2003 and later versions of Microsoft Windows. And Slashdot reader jeditobe reports that the project has now "announced significant progress in achieving compatibility with proprietary graphics drivers." ReactOS now supports roughly 90% of GPU drivers for Windows XP and Windows Server 2003, thanks to a series of fixes and the implementation of the KMDF (Kernel-Mode Driver Framework) and WDDM (Windows Display Driver Model) subsystems. Prior to these changes, many proprietary drivers either failed to launch or exhibited unstable behavior. In the latest nightly builds of the 0.4.16 branch, drivers from a variety of manufacturers — including Intel, NVIDIA, and AMD — are running reliably.

The project demonstrated ReactOS running on real hardware, including booting with installed drivers for graphics cards such as Intel GMA 945, NVIDIA GeForce 8800 GTS and GTX 750 Ti, and AMD Radeon HD 7530G. They also highlighted successful operation on mobile GPUs like the NVIDIA Quadro 1000M, with 2D/3D acceleration, audio, and network connectivity all functioning correctly. Further tests confirmed support on less common or older configurations, including a laptop with a Radeon Xpress 1100, as well as high-performance cards like the NVIDIA GTX Titan X.

A key contribution came from a patch merged into the main branch for the memory management subsystem, which improved driver stability and reduced crashes during graphics adapter initialization.

Android

Android's pKVM Becomes First Globally Certified Software to Achieve SESIP Level 5 Security Certification (googleblog.com) 32

Protected KVM (pKVM), the hypervisor powering the Android Virtualization Framework, has officially achieved SESIP Level 5 certification (in testing by cybersecurity lab Dekra against the TrustCB SESIP scheme).

Google's security blog called the certification "a watershed moment" and a "new benchmark" for both open-source security and the future of consumer electronics. "It provides a single, open-source, and exceptionally high-quality firmware base that all device manufacturers can build upon." This makes pKVM the first software security system designed for large-scale deployment in consumer electronics to meet this assurance bar. The implications for the future of secure mobile technology are profound. With this level of security assurance, Android is now positioned to securely support the next generation of high-criticality isolated workloads. This includes vital features, such as on-device AI workloads that can operate on ultra-personalized data, with the highest assurances of privacy and integrity...

Achieving Security Evaluation Standard for IoT Platforms (SESIP) Level 5 is a landmark because it incorporates AVA_VAN.5, the highest level of vulnerability analysis and penetration testing under the ISO 15408 (Common Criteria) standard. A system certified to this level has been evaluated to be resistant to highly skilled, knowledgeable, well-motivated, and well-funded attackers who may have insider knowledge and access. This certification is the cornerstone of the next generation of Android's multi-layered security strategy. Many of the TEEs (Trusted Execution Environments) used in the industry have not been formally certified or have only achieved lower levels of security assurance... Looking ahead, Android device manufacturers will be required to use isolation technology that meets this same level of security for various security operations that the device relies on. Protected KVM ensures that every user can benefit from a consistent, transparent, and verifiably secure foundation.

"This achievement represents just one important aspect of the immense, multi-year dedication from the Linux and KVM developer communities and multiple engineering teams at Google developing pKVM and AVF," the post concludes.

"We look forward to seeing the open-source community and Android ecosystem continue to build on this foundation, delivering a new era of high-assurance mobile technology for users."
Programming

The Toughest Programming Question for High School Students on This Year's CS Exam: Arrays 65

America's nonprofit College Board lets high school students take college-level classes — including a computer programming course that culminates with a 90-minute test. But students did better on questions about If-Then statements than they did on questions about arrays, according to the head of the program. Long-time Slashdot reader theodp explains: Students exhibited "strong performance on primitive types, Boolean expressions, and If statements; 44% of students earned 7-8 of these 8 points," says program head Trevor Packard. But students were challenged by "questions on Arrays, ArrayLists, and 2D Arrays; 17% of students earned 11-12 of these 12 points."

"The most challenging AP Computer Science A free-response question was #4, the 2D array number puzzle; 19% of students earned 8-9 of the 9 points possible."

You can see that question here. ("You will write the constructor and one method of the SumOrSameGame class... Array elements are initialized with random integers between 1 and 9, inclusive, each with an equal chance of being assigned to each element of puzzle...") Although to be fair, it was the last question on the test — appearing on page 16 — so maybe some students just didn't get to it.
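The setup the prompt describes is straightforward to sketch. Note that the exam itself is in Java; the following Python sketch (with a hypothetical `make_puzzle` helper name) only mirrors the initialization step quoted above, not the graded constructor or method of the actual SumOrSameGame class:

```python
import random

def make_puzzle(rows, cols):
    """Build the kind of 2D array the question describes: each
    element is a random integer between 1 and 9, inclusive, with
    each value equally likely for each element of the puzzle."""
    return [[random.randint(1, 9) for _ in range(cols)]
            for _ in range(rows)]
```

The full question also asks students to implement game logic on top of this grid, which is where most of the lost points presumably came from.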

theodp shares a sample Java solution and an Excel VBA solution (which includes a visual presentation).

There are tests in 38 subjects — but CS and Statistics are the subjects where the highest number of students earned the test's lowest-possible score (1 out of 5). That end of the graph also includes notoriously difficult subjects like Latin, Japanese Language, and Physics.

There's also a table showing scores for the last 23 years, with fewer than 67% of students achieving a passing grade (3+) for the first 11 years. But in 2013 and 2017, more than 67% of students achieved that passing grade, and the percentage has stayed above that line ever since (except for 2021), vacillating between 67% and 70.4%.

2018: 67.8%
2019: 69.6%
2020: 70.4%
2021: 65.1%
2022: 67.6%
2023: 68.0%
2024: 67.2%
2025: 67.0%
Programming

Apple Migrates Its Password Monitoring Service to Swift from Java, Gains 40% Performance Uplift (infoq.com) 109

Meta and AWS have used Rust, and Netflix uses Go, reports the programming news site InfoQ. But using another language, Apple recently "migrated its global Password Monitoring service from Java to Swift, achieving a 40% increase in throughput, and significantly reducing memory usage."

This freed up nearly 50% of their previously allocated Kubernetes capacity, according to the article, and even "improved startup time, and simplified concurrency." In a recent post, Apple engineers detailed how the rewrite helped the service scale to billions of requests per day while improving responsiveness and maintainability... "Swift allowed us to write smaller, less verbose, and more expressive codebases (close to 85% reduction in lines of code) that are highly readable while prioritizing safety and efficiency."

Apple's Password Monitoring service, part of the broader Password app's ecosystem, is responsible for securely checking whether a user's saved credentials have appeared in known data breaches, without revealing any private information to Apple. It handles billions of requests daily, performing cryptographic comparisons using privacy-preserving protocols. This workload demands high computational throughput, tight latency bounds, and elastic scaling across regions... Apple's previous Java implementation struggled to meet the service's growing performance and scalability needs. Garbage collection caused unpredictable pause times under load, degrading latency consistency. Startup overhead from JVM initialization, class loading, and just-in-time compilation slowed the system's ability to scale in real time. Additionally, the service's memory footprint, often reaching tens of gigabytes per instance, reduced infrastructure efficiency and raised operational costs.

Originally developed as a client-side language for Apple platforms, Swift has since expanded into server-side use cases.... Swift's deterministic memory management, based on reference counting rather than garbage collection (GC), eliminated latency spikes caused by GC pauses. This consistency proved critical for a low-latency system at scale. After tuning, Apple reported sub-millisecond 99.9th percentile latencies and a dramatic drop in memory usage: Swift instances consumed hundreds of megabytes, compared to tens of gigabytes with Java.

"While this isn't a sign that Java and similar languages are in decline," concludes InfoQ's article, "there is growing evidence that at the uppermost end of performance requirements, some are finding that general-purpose runtimes no longer suffice."
Space

Six More Humans Successfully Carried to the Edge of Space by Blue Origin (space.com) 74

An anonymous reader shared this report from Space.com: Three world travelers, two Space Camp alums and an aerospace executive whose last name aptly matched their shared adventure traveled into space and back Saturday, becoming the latest six people to fly with Blue Origin, the spaceflight company founded by billionaire Jeff Bezos.

Mark Rocket joined Jaime Alemán, Jesse Williams, Paul Jeris, Gretchen Green and Amy Medina Jorge on board the RSS First Step — Blue Origin's first of two human-rated New Shepard capsules — for a trip above the Kármán Line, the 62-mile-high (100-kilometer) internationally recognized boundary between Earth and space...

Mark Rocket became the first New Zealander to reach space on the mission. His connection to aerospace goes beyond his apt name and today's flight; he's currently the CEO of Kea Aerospace and previously helped lead Rocket Lab, a competing space launch company to Blue Origin that sends most of its rockets up from New Zealand. Alemán, Williams and Jeris each traveled the world extensively before briefly leaving the planet today. An attorney from Panama, Alemán is now the first person to have visited all 193 countries recognized by the United Nations, traveled to the North and South Poles, and been into space. For Williams, an entrepreneur from Canada, Saturday's flight continued his record of achieving high altitudes; he has summited Mt. Everest and five of the six other highest mountains across the globe.

"For about three minutes, the six NS-32 crewmates experienced weightlessness," the article points out, "and had an astronaut's-eye view of the planet..."

On social media Blue Origin notes it's their 12th human spaceflight, "and the 32nd flight of the New Shepard program."
Businesses

Labor Arbitrage RIP (indiadispatch.com) 56

An anonymous reader shares a report: For decades, India's economic promise has rested on its demographic dividend -- the competitive edge of a massive, young, and increasingly educated workforce. Economists and policymakers have routinely cited the country's population profile as its ticket to economic superpower status, with projections of reaching $10 trillion in GDP and achieving high-income status by 2047. These forecasts depend heavily on a critical assumption: that roughly 500 million Indians currently aged 5-24 will find productive employment as they enter the workforce over the next two decades. But a sobering new analysis from Bernstein suggests this fundamental premise may be crumbling under the weight of rapid advances in AI.

"The advent of AI threatens to erode all the advantages of India's rich demographic dividend," write Bernstein analysts Venugopal Garre and Nikhil Arela, who characterize their assessment as a potential "doomsday scenario" for a nation that has hitched its economic wagon to services-led growth. At stake is India's $350 billion services export sector -- a sprawling ecosystem of IT outsourcing, business process management, and offshore knowledge centers that employs over 10 million workers, mostly in jobs that place them in the top 25% of the country's income distribution.

While India's IT giants have successfully navigated previous technological shifts -- from basic call centers in the late 1980s to cloud computing and data analytics more recently -- AI poses a fundamentally different challenge. Unlike earlier transitions that required human adaptation, today's AI systems threaten to replace rather than complement the workforce. "AI subscriptions that come at a fraction of the costs of India's entry level engineers can be deployed to perform tasks at higher precision and speed," the report notes.

Science

Have We Reached Peak Human Life Span? (nytimes.com) 75

The oldest human on record, Jeanne Calment of France, lived to the age of 122. What are the odds that the rest of us get there, too? Not high, barring a transformative medical breakthrough, according to research published this week in the journal Nature Aging. From a report: The study looked at data on life expectancy at birth collected between 1990 and 2019 from some of the places where people typically live the longest: Australia, France, Italy, Hong Kong, Japan, South Korea, Spain, Sweden and Switzerland. Data from the United States was also included, though the country's life expectancy is lower.

The researchers found that while average life expectancies increased during that time in all of the locations, the rates at which they rose slowed down. The one exception was Hong Kong, where life expectancy did not decelerate. The data suggests that after decades of life expectancy marching upward thanks to medical and technological advancements, humans could be closing in on the limits of what's possible for average life span. "We're basically suggesting that as long as we live now is about as long as we're going to live," said S. Jay Olshansky, a professor of epidemiology and biostatistics at the University of Illinois Chicago, who led the study. He predicted maximum life expectancy will end up around 87 years -- approximately 84 for men, and 90 for women -- an average age that several countries are already close to achieving.

Math

Researchers Claim New Technique Slashes AI Energy Use By 95% (decrypt.co) 115

Researchers at BitEnergy AI, Inc. have developed Linear-Complexity Multiplication (L-Mul), a technique that reduces AI model power consumption by up to 95% by replacing energy-intensive floating-point multiplications with simpler integer additions. This method promises significant energy savings without compromising accuracy, but it requires specialized hardware to fully realize its benefits. Decrypt reports: L-Mul tackles the AI energy problem head-on by reimagining how AI models handle calculations. Instead of complex floating-point multiplications, L-Mul approximates these operations using integer additions. So, for example, instead of multiplying 123.45 by 67.89, L-Mul breaks it down into smaller, easier steps using addition. This makes the calculations faster and uses less energy, while still maintaining accuracy. The results seem promising. "Applying the L-Mul operation in tensor processing hardware can potentially reduce 95% energy cost by element wise floating point tensor multiplications and 80% energy cost of dot products," the researchers claim. Without getting overly complicated, what that means is simply this: If a model used this technique, it would require 95% less energy to think, and 80% less energy to come up with new ideas, according to this research.

The algorithm's impact extends beyond energy savings. L-Mul outperforms current 8-bit standards in some cases, achieving higher precision while using significantly less bit-level computation. Tests across natural language processing, vision tasks, and symbolic reasoning showed an average performance drop of just 0.07% -- a negligible tradeoff for the potential energy savings. Transformer-based models, the backbone of large language models like GPT, could benefit greatly from L-Mul. The algorithm seamlessly integrates into the attention mechanism, a computationally intensive part of these models. Tests on popular models such as Llama, Mistral, and Gemma even revealed some accuracy gain on certain vision tasks.

At an operational level, L-Mul's advantages become even clearer. The research shows that multiplying two float8 numbers (the way AI models would operate today) requires 325 operations, while L-Mul uses only 157 -- less than half. "To summarize the error and complexity analysis, L-Mul is both more efficient and more accurate than fp8 multiplication," the study concludes. But nothing is perfect, and this technique has a major Achilles' heel: it requires a special type of hardware, so current hardware isn't optimized to take full advantage of it. Plans for specialized hardware that natively supports L-Mul calculations may already be in motion. "To unlock the full potential of our proposed method, we will implement the L-Mul and L-Matmul kernel algorithms on hardware level and develop programming APIs for high-level model design," the researchers say.
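The core trick is easier to see in code. A floating-point number is (1 + f) · 2^e, so a product's mantissa is (1 + fx)(1 + fy) = 1 + fx + fy + fx·fy; L-Mul drops the one remaining multiplication, fx·fy, and substitutes a small constant offset. The paper's actual method manipulates fp8 bit patterns in hardware; the rough Python sketch below (the `offset_bits` parameter and the `math.frexp` decomposition are illustrative choices, not the paper's exact formulation) only demonstrates that idea:

```python
import math

def l_mul(x, y, offset_bits=4):
    """Approximate x * y with additions on the exponent/mantissa
    decomposition (simplified L-Mul-style sketch)."""
    if x == 0 or y == 0:
        return 0.0
    sign = math.copysign(1.0, x) * math.copysign(1.0, y)
    mx, ex = math.frexp(abs(x))   # abs(x) == mx * 2**ex, mx in [0.5, 1)
    my, ey = math.frexp(abs(y))
    fx, fy = 2 * mx - 1, 2 * my - 1   # rewrite as (1 + f) * 2**(e - 1)
    # (1+fx)(1+fy) = 1 + fx + fy + fx*fy; drop the fx*fy product and
    # replace it with a constant offset 2**-offset_bits (additions only;
    # mantissa carry/normalization handling is omitted in this sketch)
    mantissa = 1 + fx + fy + 2.0 ** -offset_bits
    # the powers of two here are exponent additions (bit shifts) in hardware
    return sign * mantissa * 2.0 ** (ex - 1 + ey - 1)
```

For example, `l_mul(3.0, 5.0)` yields 14.5 against a true product of 15, an error of a few percent, while the mantissa path uses no multiplications at all.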

China

How China Built Tech Prowess: Chemistry Classes and Research Labs (nytimes.com) 44

Stressing science education, China is outpacing other countries in research fields like battery chemistry, crucial to its lead in electric vehicles. From a report: China's domination of electric cars, which is threatening to start a trade war, was born decades ago in university laboratories in Texas, when researchers discovered how to make batteries with minerals that were abundant and cheap. Companies from China have recently built on those early discoveries, figuring out how to make the batteries hold a powerful charge and endure more than a decade of daily recharges. They are inexpensively and reliably manufacturing vast numbers of these batteries, producing most of the world's electric cars and many other clean energy systems.

Batteries are just one example of how China is catching up with -- or passing -- advanced industrial democracies in its technological and manufacturing sophistication. It is achieving many breakthroughs in a long list of sectors, from pharmaceuticals to drones to high-efficiency solar panels. Beijing's challenge to the technological leadership that the United States has held since World War II is evidenced in China's classrooms and corporate budgets, as well as in directives from the highest levels of the Communist Party.

A considerably larger share of Chinese students major in science, math and engineering than students in other big countries do. That share is rising further, even as overall higher education enrollment has increased more than tenfold since 2000. Spending on research and development has surged, tripling in the past decade and moving China into second place after the United States. Researchers in China lead the world in publishing widely cited papers in 52 of 64 critical technologies, recent calculations by the Australian Strategic Policy Institute reveal.

Robotics

Google DeepMind Develops a 'Solidly Amateur' Table Tennis Robot (techcrunch.com) 20

An anonymous reader quotes a report from TechCrunch: In a newly published paper titled "Achieving Human Level Competitive Robot Table Tennis," Google's DeepMind Robotics team is showcasing its own work on the game. The researchers have effectively developed a "solidly amateur human-level player" when pitted against a human opponent. During testing, the table tennis bot was able to beat all of the beginner-level players it faced. With intermediate players, the robot won 55% of matches. It's not ready to take on pros, however. The robot lost every time it faced an advanced player. All told, the system won 45% of the 29 games it played. "This is the first robot agent capable of playing a sport with humans at human level and represents a milestone in robot learning and control," the paper claims. "However, it is also only a small step towards a long-standing goal in robotics of achieving human level performance on many useful real world skills. A lot of work remains in order to consistently achieve human-level performance on single tasks, and then beyond, in building generalist robots that are capable of performing many useful tasks, skillfully and safely interacting with humans in the real world."

The robot's biggest trouble areas are responding to fast balls, as well as to high and low balls. It also struggles with backhand shots and with reading the spin on an incoming ball. Here's how the researchers plan to address the issue with fast balls: "To address the latency constraints that hinder the robot's reaction time to fast balls, we propose investigating advanced control algorithms and hardware optimizations. These could include exploring predictive models to anticipate ball trajectories or implementing faster communication protocols between the robot's sensors and actuators."
Stats

What's the 'Smartest' City in America - Based on Tech Jobs, Connectivity, and Sustainability? (newsweek.com) 66

Seattle is the smartest city in America, with Miami and then Austin close behind. That's according to a promotional study from smart-building tools company ProptechOS. Newsweek reports: The evaluation of tech infrastructure and connectivity was based on several factors, including the number of free Wi-Fi hot spots, the quantity and density of AI and IoT companies, average broadband download speeds, median 5G coverage per network provider, and the number of airports. Meanwhile, green infrastructure was assessed based on air quality, measured by exposure to PM2.5, tiny particles in the air that can harm health. Other factors include 10-year changes in tree coverage, both loss and gain; the number of electric vehicle charging points and their density per 100,000 people; and the number of LEED-certified green buildings. The tech job market was evaluated on the number of tech jobs advertised per 100,000 people.

Seattle came in first in an assessment of 16 key indicators across connectivity/infrastructure, sustainability, and tech jobs — "boasting 34 artificial intelligence companies and 13 Internet of Things companies per 100,000 residents." In terms of sustainability, Seattle has enhanced its tree coverage by 13,700 hectares from 2010 to 2020 and has established the equivalent of 10 electric vehicle charging points per 100,000 residents. Seattle has edged out last year's top city, Austin, to claim the title of the smartest city in the U.S., with an overall score of 75.7 out of 100. Miami wasn't far behind, achieving a score of 75.4. However, Austin still came out on top for smart city infrastructure, scoring 86.2 out of 100. This is attributed to its high broadband download speed of 275.60 Mbps — well above the U.S. average of 217.14 Mbps — and its concentration of 337 AI companies, or 35 per 100,000 people.

You can see the full listings here. The article notes that the same study also ranked Paris as the smartest city in Europe — edging ahead of London — thanks to Paris's 99.5% 5G coverage, plus "the second-highest number of AI companies in Europe and the third-highest number of free Wi-Fi hot spots. Paris is also recognized for its traffic management systems, which monitor noise levels and air quality."

Newsweek also shares this statement from ProptechOS's founder/chief ecosystem officer. "Advancements in smart cities and future technologies such as next-generation wireless communication and AI are expected to reduce environmental impacts and enhance living standards."

In April CNBC reported on an alternate list of the smartest cities in the world, created from research by the World Competitiveness Center. It defined smart cities as "an urban setting that applies technology to enhance the benefits and diminish the shortcomings of urbanization for its citizens." And CNBC reported that based on the list, "Smart cities in Europe and Asia are gaining ground globally while North American cities have fallen down the ranks... Of the top 10 smart cities on the list, seven were in Europe." Here are the top 10 smart cities, according to the 2024 Smart City Index.

- Zurich, Switzerland
- Oslo, Norway
- Canberra, Australia
- Geneva, Switzerland
- Singapore
- Copenhagen, Denmark
- Lausanne, Switzerland
- London, England
- Helsinki, Finland
- Abu Dhabi, United Arab Emirates

Notably, for the first time since the index's inception in 2019, there is an absence of North American cities in the top 20... The highest-ranking U.S. city this year is New York City, which ranked 34th, followed by Boston at 36th and Washington, D.C., at 50th.

Transportation

Hopes For Sustainable Jet Fuel Not Realistic, Report Finds (theguardian.com) 170

An anonymous reader quotes a report from The Guardian: Hopes that replacement fuels for airplanes will slash carbon pollution are misguided and support for these alternatives could even worsen the climate crisis, a new report has warned. There is currently "no realistic or scalable alternative" to standard kerosene-based jet fuels, and touted "sustainable aviation fuels" are well off track to replace them in a timeframe needed to avert dangerous climate change, despite public subsidies, the report by the Institute for Policy Studies, a progressive thinktank, found. "While there are kernels of possibility, we should bring a high level of skepticism to the claims that alternative fuels will be a timely substitute for kerosene-based jet fuels," the report said. [...]

In the U.S., Joe Biden's administration has set a goal for 3 billion gallons of sustainable aviation fuel, which is made from non-petroleum sources such as food waste, woody biomass and other feedstocks, to be produced by 2030, which it said will cut aviation's planet-heating emissions by 20%. [...] Burning sustainable aviation fuels still emits some carbon dioxide, while the land use changes needed to produce the fuels can also lead to increased pollution. Ethanol biofuel, made from corn, is used in these fuels, and meeting the Biden administration's production goal, the report found, would require 114m acres of corn in the U.S., about a 20% increase in current land area given over to the crop. In the UK, meanwhile, 50% of all agricultural land would have to be given up to sustain current flight passenger levels if jet fuel were entirely replaced. "Agricultural land use changes could threaten global food security as well as nature-based carbon sequestration solutions such as the preservation of forests and wetlands," the report states. "As such, SAF production may actively undermine the Paris agreement goal of achieving greatly reduced emissions by 2050."
Chuck Collins, co-author of the report, said: "To bring these fuels to the scale needed would require massive subsidies, the trade-offs would be unacceptable and would take resources away from more urgent decarbonization priorities."

"It's a huge greenwashing exercise by the aviation industry. It's magical thinking that they will be able to do this."

Phil Ansell, director of the Center for Sustainable Aviation at the University of Illinois, added: "There's an underappreciation of how big the energy problem is for aviation. We are still many years away from zero pollution flights. But it's true that the industry has been slow to pick things up. We are now trying to find solutions, but we are working at this problem and realizing it's a lot harder than we thought. We are late to the game. We are in the dark ages in terms of sustainability, compared to other sectors."
Power

Could Atomically Thin Layers Bring A 19x Energy Jump In Battery Capacitors? (popularmechanics.com) 27

Researchers believe they've discovered a new material structure that can improve the energy storage of capacitors. The structure allows for storage while improving the efficiency of ultrafast charging and discharging. The new find needs optimization but has the potential to help power electric vehicles.

An anonymous reader shared this report from Popular Mechanics: In a study published in Science, lead author Sang-Hoon Bae, an assistant professor of mechanical engineering and materials science, demonstrates a novel heterostructure that curbs energy loss, enabling capacitors to store more energy and charge rapidly without sacrificing durability... Within capacitors, ferroelectric materials offer high maximum polarization. That's useful for ultra-fast charging and discharging, but it can limit the effectiveness of energy storage or the "relaxation time" of a conductor. "This precise control over relaxation time holds promise for a wide array of applications and has the potential to accelerate the development of highly efficient energy storage systems," the study authors write.

Bae makes the change — one he unearthed while working on something completely different — by sandwiching 2D and 3D materials in atomically thin layers, using chemical and nonchemical bonds between each layer. He says a thin 3D core is inserted between two outer 2D layers to produce a stack that's only 30 nanometers thick, about 1/10th that of an average virus particle... The sandwich structure isn't quite fully conductive or nonconductive.

This semiconducting material, then, allows for energy storage with a density up to 19 times higher than commercially available ferroelectric capacitors, while still achieving 90 percent efficiency — also better than what's currently available.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
AI

China Puts Trust in AI To Maintain Largest High-Speed Rail Network on Earth 17

China is using AI in the operation of its 45,000km (28,000-mile) high-speed rail network, with the technology achieving several milestones, according to engineers involved in the project. From a report: An AI system in Beijing is processing vast amounts of real-time data from across the country and can alert maintenance teams of abnormal situations within 40 minutes, with an accuracy as high as 95 per cent, they said in a peer-reviewed paper. "This helps on-site teams conduct reinspections and repairs as quickly as possible," wrote Niu Daoan, a senior engineer at the China State Railway Group's infrastructure inspection centre, in the paper published by the academic journal China Railway.

In the past year, none of China's operational high-speed railway lines received a single warning that required speed reduction due to major track irregularity issues, while the number of minor track faults decreased by 80 per cent compared to the previous year. According to the paper, the amplitude of rail movement caused by strong winds also decreased -- even on massive valley-spanning bridges -- with the application of AI technology. [...] According to the paper, after years of effort Chinese railway scientists and engineers have "solved challenges" in comprehensive risk perception, equipment evaluation, and precise trend predictions in engineering, power supply and telecommunications. The result was "scientific support for achieving proactive safety prevention and precise infrastructure maintenance for high-speed railways," the engineers said.
Education

U. of Texas at Austin Will Return To Standardized Test Requirement (nytimes.com) 93

The University of Texas at Austin said Monday that it would again require standardized tests for admissions (non-paywalled source), becoming the latest selective university to reinstate requirements for SAT or ACT scores that were abandoned during the pandemic. From a report: A few years ago, about 2,000 colleges across the country began to move away from requiring test scores, at least temporarily, amid concerns they helped fuel inequality. But a growing number of those schools have reversed those policies, including Brown, Yale, Dartmouth, M.I.T., Georgetown and Purdue, with several announcing the changes in recent months.

U.T. Austin, which admits a cross-section of high-achieving Texas students under a plan designed to increase opportunity in the state, cited a slightly different reason than the other schools in returning to test requirements. Without requiring test scores, officials said, they were hampered in placing the admitted students in programs they would be most suited for and in determining which ones needed extra help. After making test scores optional for the past few years, the university will again require applicants to submit SAT or ACT scores beginning Aug. 1, when applications open for fall 2025 admission.

In an interview, Jay Hartzell, the U.T. president, said that the decision followed an analysis of students who did not submit scores. "We looked at our students and found that, in many ways, they weren't faring as well," Dr. Hartzell said. Those against testing requirements have long said that standardized tests are unfair because many students from affluent families use tutors and coaches to bolster their scores. But recent data has raised questions about the contention. In reinstating test requirements, some universities have said that making scores optional had the unintended effect of harming prospective students from low-income families.

Power

Engineers Use AI To Wrangle Fusion Power For the Grid (princeton.edu) 69

An anonymous reader quotes a report from Princeton Engineering: In the blink of an eye, the unruly, superheated plasma that drives a fusion reaction can lose its stability and escape the strong magnetic fields confining it within the donut-shaped fusion reactor. These getaways frequently spell the end of the reaction, posing a core challenge to developing fusion as a non-polluting, virtually limitless energy source. But a Princeton-led team composed of engineers, physicists, and data scientists from the University and the Princeton Plasma Physics Laboratory (PPPL) has harnessed the power of artificial intelligence to predict -- and then avoid -- the formation of a specific plasma problem in real time.

In experiments at the DIII-D National Fusion Facility in San Diego, the researchers demonstrated that their model, trained only on past experimental data, could forecast potential plasma instabilities known as tearing mode instabilities up to 300 milliseconds in advance. While that is barely enough time for a slow human blink, it was plenty of time for the AI controller to change certain operating parameters to avoid what would have developed into a tear within the plasma's magnetic field lines, upsetting its equilibrium and opening the door for a reaction-ending escape.

"By learning from past experiments, rather than incorporating information from physics-based models, the AI could develop a final control policy that supported a stable, high-powered plasma regime in real time, at a real reactor," said research leader Egemen Kolemen, associate professor of mechanical and aerospace engineering and the Andlinger Center for Energy and the Environment, as well as staff research physicist at PPPL. The research opens the door for more dynamic control of a fusion reaction than current approaches, and it provides a foundation for using artificial intelligence to solve a broad range of plasma instabilities, which have long been obstacles to achieving a sustained fusion reaction.
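The predict-and-avoid loop described above can be sketched in a few lines. The real controller is a neural network trained on DIII-D data and drives real actuators; the predictor, trip point, and adjustment below are toy stand-ins for illustration only.

```python
# Minimal sketch of a predict-and-avoid control loop: forecast the risk of
# a tearing mode a short window ahead, and nudge actuators before it forms.
def control_step(state, predict_tearing_risk, adjust):
    """One control-window iteration (~300 ms in the experiments)."""
    risk = predict_tearing_risk(state)  # estimated probability of a tearing mode
    if risk > 0.5:                      # hypothetical trip point
        state = adjust(state)           # e.g. lower beam power, reshape plasma
    return state

# Toy stand-ins: risk grows with plasma pressure ("beta"); lowering beam
# power lowers beta, which lowers risk.
def toy_risk(state):
    return min(1.0, state["beta"] / 3.0)

def toy_adjust(state):
    return {**state, "beam_power": state["beam_power"] * 0.9,
            "beta": state["beta"] * 0.9}

state = {"beta": 2.4, "beam_power": 10.0}
for _ in range(5):  # simulate a few control windows
    state = control_step(state, toy_risk, toy_adjust)
print(toy_risk(state) < 0.5)  # the loop has steered back below the trip point
```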
The team published their findings in the journal Nature.
Communications

Starlink's Laser System Is Beaming 42 Million GB of Data Per Day (pcmag.com) 97

SpaceX revealed that it's delivering over 42 petabytes of data for customers per day, according to engineer Travis Brashears. "We're passing over terabits per second [of data] every day across 9,000 lasers," Brashears said today at SPIE Photonics West, an event in San Francisco focused on the latest advancements in optics and light. "We actually serve over lasers all of our users on Starlink at a given time in like a two-hour window." PCMag reports: Although Starlink uses radio waves to beam high-speed internet to customers, SpaceX has also been outfitting the company's satellites with a "laser link" system to help drive down latency and improve the system's global coverage. The lasers, which can sustain a 100Gbps connection per link, are especially crucial to helping the satellites fetch data when no SpaceX ground station is near, like over the ocean or Antarctic. Instead, the satellite can transmit the data to and from another Starlink satellite in Earth's orbit, forming a mesh network in space.
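As a sanity check, 42 petabytes per day does work out to the "terabits per second" figure quoted above (assuming PB here means 10^15 bytes):

```python
# 42 PB/day converted to bits per second
bytes_per_day = 42e15
bits_per_second = bytes_per_day * 8 / 86_400  # seconds in a day
print(round(bits_per_second / 1e12, 2), "Tbps")  # → 3.89 Tbps
```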

Tuesday's talk from Brashears revealed the laser system is quite robust, even as the equipment is flying onboard thousands of Starlink satellites constantly circling the Earth. Despite the technical challenges, the company has achieved a laser "link uptime" of over 99%. The satellites are constantly forming laser links, resulting in about 266,141 "laser acquisitions" per day, according to Brashears' presentation. But in some cases, the links can also be maintained for weeks at a time, and even reach transmission rates of up to 200Gbps.

Brashears also said Starlink's laser system was able to connect two satellites over 5,400 kilometers (3,355 miles) apart. The link was so long "it cut down through the atmosphere, all the way down to 30 kilometers above the surface of the Earth," he said, before the connection broke. "Another really fun fact is that we held a link all the way down to 122 kilometers while we were de-orbiting a satellite," he said. "And we were able to downstream the video." During his presentation, Brashears also showed a slide depicting how the laser system can deliver data to a Starlink dish in Antarctica through about seven different paths. "We can dynamically change those routes within milliseconds. So as long as we have some path to the ground [station], you're going to have 99.99% uptime. That's why it's important to get as many nodes up there as possible," he added.
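The multi-path failover idea can be sketched as shortest-path routing over a link graph: when a laser link drops, the route is simply recomputed over the remaining edges. The topology and latencies below are a toy example, not Starlink's actual constellation or routing protocol.

```python
# Dijkstra over a toy satellite link graph, with failover after a link drops
from heapq import heappush, heappop

def shortest_path(links, src, dst):
    """links: {node: {neighbor: latency_ms}}. Returns (total_ms, path)."""
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, ms in links.get(node, {}).items():
            if nbr not in seen:
                heappush(queue, (cost + ms, nbr, path + [nbr]))
    return None

links = {
    "dish": {"sat1": 5.0, "sat2": 7.0},
    "sat1": {"sat3": 10.0},
    "sat2": {"sat3": 9.0},
    "sat3": {"ground": 5.0},
}
best = shortest_path(links, "dish", "ground")      # cheapest route, via sat1
del links["sat1"]["sat3"]                          # a laser link drops...
fallback = shortest_path(links, "dish", "ground")  # ...reroute via sat2
```

As long as any path to a ground station survives, the recomputation succeeds, which is the point Brashears makes about wanting as many nodes up as possible.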

Networking

Ceph: a Journey To 1 TiB/s (ceph.io) 16

It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.

And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale from production starts with Clyso assisting "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]... I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness...

Ultimately they decided to go with a Dell architecture we designed, which came in at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still a comfortable 12GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery....
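The "smaller nodes" point is simple arithmetic: the fraction of cluster data that must be re-replicated after a single node failure scales inversely with node count. The capacity and node counts below are illustrative, not the actual cluster's.

```python
# Fraction of a cluster's data affected when one node dies
def failure_impact(total_capacity_tb, node_count):
    per_node = total_capacity_tb / node_count
    return per_node / total_capacity_tb  # simplifies to 1 / node_count

big_nodes = failure_impact(10_000, 34)    # fewer, larger nodes
small_nodes = failure_impact(10_000, 68)  # twice as many, half-sized nodes
print(small_nodes * 2 == big_nodes or abs(small_nodes * 2 - big_nodes) < 1e-12)
```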

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8-OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...

For over a week, we looked at everything from bios settings, NVMe multipath, low-level NVMe debugging, changing kernel/Ubuntu versions, and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There were one minor and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:
  • Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states."
  • Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."
  • Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950 GiB/s — then tried some more performance optimizations...
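Some quick arithmetic on those figures (the totals are from the post; the per-node breakdown is mine):

```python
# 950 GiB/s across 63 nodes: per-node rate and distance from the 1 TiB/s goal
total_gib_s = 950
nodes = 63
per_node = total_gib_s / nodes           # ≈ 15.1 GiB/s from each node
shortfall = (1024 - total_gib_s) / 1024  # ≈ 7% short of 1 TiB/s (1024 GiB/s)
print(round(per_node, 1), round(shortfall * 100, 1))
```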


NASA

US Commits To Landing an International Astronaut On the Moon (arstechnica.com) 49

During a meeting of the National Space Council, Vice President Kamala Harris said an international astronaut will land on the Moon during one of NASA's Artemis missions. "Today, in recognition of the essential role that our allies and partners play in the Artemis program, I am proud to announce that alongside American astronauts, we intend to land an international astronaut on the surface of the Moon by the end of the decade," Harris said. Ars Technica reports: Although the National Space Council is useful in aggregating disparate interests across the US government to help form more cohesive space policies, public meetings like the one Wednesday can seem perfunctory. Harris departed the stage soon after her speech, and other government officials read from prepared remarks during the rest of the event. Nevertheless, Harris' announcement highlighted the role the space program plays in elevating the soft power of the United States. It was widely assumed an international astronaut would eventually land on the Moon with NASA. Harris put a deadline on achieving this goal.

NASA has long included astronauts from its international partners on human spaceflight missions, dating back to the ninth flight of the space shuttle in 1983, when West German astronaut Ulf Merbold joined five Americans on a flight to low-Earth orbit. This was seen by US government officials as a way to foster closer relations with like-minded countries. The inclusion of foreign astronauts on US missions also repays partner nations who make financial commitments to US-led space projects with a high-profile flight opportunity for one of their citizens.

Among the international partners contributing to Artemis, it seems most likely a European astronaut would get the first slot for a landing with NASA. ESA funded the development of the service modules used on NASA's Orion spacecraft, which will ferry astronauts from Earth to the Moon and back. These modules provide power and propulsion for Orion. ESA is also developing refueling and communications infrastructure for the Gateway mini-space station to be constructed in orbit around the Moon.

A Japanese astronaut might also have a shot at getting a seat on an Artemis landing. Japan's government has committed to providing the life-support system for the Gateway's international habitation module, along with resupply services to deliver cargo to Gateway. Japan is also interested in building a pressurized rover for astronauts to drive across the lunar surface. In recognition of Japan's contributions, NASA last year committed to flying a Japanese astronaut aboard Gateway. Canada is building a robotic arm for Gateway, but a Canadian astronaut already has a seat on NASA's first crewed Artemis mission, albeit without a trip to the lunar surface.

Movies

68 Years After His Death, James Dean Is Reportedly Starring in a New Movie - Thanks to AI (bbc.com) 64

Nearly seven decades after he died, James Dean "has been cast as the star in a new, upcoming movie," reports the BBC: A digital clone of the actor — created using artificial intelligence technology similar to that used to generate deepfakes — will walk, talk and interact on screen with other actors in the film...

This is the second time Dean's digital clone has been lined up for a film. In 2019, it was announced he would be resurrected in CGI for a film called Finding Jack, but it was later cancelled. Travis Cloyd, chief executive of immersive media agency WorldwideXR (WXR), confirmed to the BBC, however, that Dean will instead star in Back to Eden, a science fiction film in which "an out of this world visit to find truth leads to a journey across America with the legend James Dean". The digital cloning of Dean also represents a significant shift in what is possible. Not only will his AI avatar be able to play a flat-screen role in Back to Eden and a series of subsequent films, but also to engage with audiences in interactive platforms including augmented reality, virtual reality and gaming.

The technology goes far beyond passive digital reconstruction or deepfake technology that overlays one person's face over someone else's body. It raises the prospect of actors — or anyone else for that matter — achieving a kind of immortality that would have been otherwise impossible, with careers that go on long after their lives have ended. But it also raises some uncomfortable questions. Who owns the rights to someone's face, voice and persona after they die? What control can they have over the direction of their career after death — could an actor who made their name starring in gritty dramas suddenly be made to appear in a goofball comedy or even pornography? What if they could be used for gratuitous brand promotions in adverts...? Dean's image is one of hundreds represented by WXR and its sister licensing company CMG Worldwide — including Amelia Earhart, Bettie Page, Malcolm X and Rosa Parks...

Voice actors, in particular, have been leading the conversation and working across acting guilds to form a unified front in protecting the rights and careers of actors... Cloyd acknowledges the potential for fewer acting opportunities but offers a "glass-half-full" perspective toward employing dead actors. "At the end of the day, it creates lots of jobs," he says, referring to the other technical and film industry jobs the technology could generate. "So even though it could be jeopardising one person's role or job, at the same time, it's creating hundreds of jobs in regards to what it takes to do this at a high level."

If the dead — or rather, their digital clones — are damned to an eternity of work, who benefits financially? And do the dead have any rights? Simply put, the rules are murky and, in some regions of the world, non-existent.

In June Rolling Stone published this advice from Samuel L. Jackson. "Future actors should do what I always do when I get a contract and it has the words 'in perpetuity' and 'known and unknown' on it: I cross that shit out. It's my way of saying, 'No, I do not approve of this.'"

Slashdot Top Deals