Space

Startups Are Building Balloons To Hoist Tourists Into the Stratosphere (cnbc.com) 66

An anonymous reader quotes a report from CNBC: CNBC spoke to three startups -- France-based Zephalto, Florida-based Space Perspective and Arizona-based World View -- that aim to hoist tourists to the stratosphere using pressurized capsules and massive gas-filled balloons. "The capsule itself is designed to carry eight customers and two crew into the stratosphere," said Ryan Hartman, CEO of World View. "There will be a center bar where people can gather, and then, of course, there will be a bathroom aboard the capsule." The balloon rides will last around 6 hours, but will not take passengers all the way to space. Most will reach heights of 15 to 19 miles above the earth's surface, flying in an area known as the stratosphere. The start of space is generally accepted by the U.S. government to be around 80 kilometers, or about 50 miles, above the earth's surface.

Jane Poynter, founder and co-CEO of Space Perspective, has a differing view. "There is no universal definition of space," Poynter said. "We are regulated as a spaceship. If we go over 98,000 feet, we are a spaceship. Outside the capsule, it's essentially a vacuum. We're above 99% of Earth's atmosphere, which is why the sky is so deep black." Compared to rocket-powered space tourism, the physical sensation that passengers will experience on a stratospheric balloon ride is more comparable to being on an airplane. Passengers will not experience weightlessness. "We don't need any physical requirements to board the balloon," said Vincent Farret d'Asties, the founder and chief pilot at Zephalto. "If you can board a standard plane, you can board the balloon."

All three companies told CNBC that they were pleased with consumer interest. World View says it has sold 1,250 tickets so far, while Space Perspective has sold 1,800. Zephalto did not tell CNBC how many tickets it has sold, but said its initial flights were fully booked. Ticket prices range from $50,000 per seat with World View to around $184,000 with Zephalto. Space Perspective sells tickets to its experience for $125,000 per seat. That's all assuming commercial service gets off the ground. Only Zephalto has performed crewed tests so far, though not at the company's target altitude of about 15 miles above the earth's surface.

Power

Cutting-Edge Technology Could Massively Reduce the Amount of Energy Used For Air Conditioning (wired.com) 75

An anonymous reader quotes a report from Wired, written by Chris Baraniuk: The buses struggling in China's muggy weather gave [Matt Jore, CEO of Montana Technologies] and his colleagues an idea. If they could make dehumidification more efficient somehow, then they could make air conditioning as a whole much more efficient, too. They headed back to the US wondering how to make this happen. [...] "I have here 50-gallon barrels of this stuff. It comes in a special powder," says Jore, referring to the moisture-loving material that coats components inside his firm's novel dehumidifier system, AirJoule. This is the result of years of research and development that followed his team's trip to China. The coating is a type of highly porous material called a metal-organic framework, and the pores are sized so that they fit around water molecules extremely well. It makes for a powerful desiccant, or drying device. "Just one kilogram can take up half or more than half -- in our case 55 percent -- of its own weight in water vapor," says Jore.

The AirJoule system consists of two chambers, each one containing surfaces coated with this special material. They take turns at dehumidifying a flow of air. One chamber is always drying air that is pushed through the system while the other gradually releases the moisture it previously collected. A little heat from the drying chamber gets applied to the moisture-saturated coating in the other, since that helps to encourage the water to drip away for removal. These two cavities swap roles every 10 minutes or so, says Jore. This process doesn't cool the air, but it does make it possible to feed dry air to a more traditional air conditioning device, drastically cutting how much energy that secondary device will use. And Jore claims that AirJoule consumes less than 100 watt-hours per liter of water vapor removed -- potentially cutting the energy required for dehumidification by as much as 90 percent compared to a traditional dehumidifier.
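A quick back-of-envelope check of that energy claim. The baseline figure below is an assumption, not from the article: conventional residential dehumidifiers typically remove roughly 1.5 to 2 liters of water per kWh, i.e. on the order of 500 to 650 Wh per liter.

```python
# Sketch comparing AirJoule's claimed energy use against an assumed
# conventional-dehumidifier baseline (the 550 Wh/L figure is our
# assumption, not a number from the article).
airjoule_wh_per_liter = 100    # claimed: under 100 Wh per liter removed
baseline_wh_per_liter = 550    # assumed typical conventional unit

savings = 1 - airjoule_wh_per_liter / baseline_wh_per_liter
print(f"Estimated reduction vs. assumed baseline: {savings:.0%}")
```

With this assumed baseline the reduction comes out near 82 percent; the article's "as much as 90 percent" figure implies comparison against a less efficient unit, around 1,000 Wh per liter.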

Montana Technologies wants to sell the components for its AirJoule system to established HVAC firms rather than attempt to build its own consumer products and compete with those firms directly -- it calls the approach AirJoule Inside. The firm is also working on a system for the US military, based on the same technology, that can harvest drinkable water from the air. Handy for troops stationed in the desert, one imagines. However, AirJoule is still at the prototype and testing stages. "We're building several of these pilot preproduction units for potential customers and partners," says Jore. "Think rooftops on big-box retailers."
Montana Technologies isn't the only firm using cutting-edge technology to make air conditioning units more efficient. Rival firm Blue Frontier has developed a desiccant-based dehumidifying system that uses a liquid salt solution and links to a secondary air-conditioning process; the system, already installed at various U.S. locations, regenerates its desiccant during off-peak hours to reduce peak electricity demand.

Then there's Nostromo Energy's IceBrick system, installed in California hotels, which freezes water capsules during off-peak hours and uses the stored coolth during peak times. This system can reduce cooling costs by up to 30 percent and emissions by up to 80 percent, according to Wired.
EU

OW2: 'The European Union Must Keep Funding Free Software' (ow2.org) 15

OW2, the non-profit international consortium dedicated to developing open-source middleware, published an open letter to the European Commission today. They're urging the European Union to continue funding free software after noticing that the Next Generation Internet (NGI) programs were no longer mentioned in Cluster 4 of the 2025 Horizon Europe funding plans.

OW2 argues that discontinuing NGI funding would weaken Europe's technological ecosystem, leaving many projects under-resourced and jeopardizing Europe's position in the global digital landscape. The letter reads, in part: NGI programs have shown their strength and importance to support the European software infrastructure, as a generic funding instrument to fund digital commons and ensure their long-term sustainability. We find this transformation incomprehensible, moreover when NGI has proven efficient and economical to support free software as a whole, from the smallest to the most established initiatives. This ecosystem diversity backs the strength of European technological innovation, and maintaining the NGI initiative to provide structural support to software projects at the heart of worldwide innovation is key to enforce the sovereignty of a European infrastructure. Contrary to common perception, technical innovations often originate from European rather than North American programming communities, and are mostly initiated by small-scaled organizations.

The previous Cluster 4 allocated 27 million euros to:
- "Human centric Internet aligned with values and principles commonly shared in Europe";
- "A flourishing internet, based on common building blocks created within NGI, that enables better control of our digital life";
- "A structured eco-system of talented contributors driving the creation of new internet commons and the evolution of existing internet commons."

In the name of these challenges, more than 500 projects received NGI funding in the first 5 years, backed by 18 organizations managing these European funding consortia.

Earth

To Avoid Sea Level Rise, Some Researchers Propose Barriers Around the World's Vulnerable Glaciers (science.org) 57

"Researchers are proposing a new way to battle the effects of climate change..." writes Science magazine: slowing the rise of sea levels with "glacial geoengineering". (That is, "building flexible barriers around [vulnerable glaciers] or drilling deep into them to slow their slippage into the sea.") Geoengineering proponents say it would be better to begin research now on how to staunch sea level rise at its source, rather than spending billions and billions of dollars to wall off coastal cities. "At some point you have to think, 'Well, is there anything else we can do?'" asks glaciologist John Moore of the University of Lapland, an author on the white paper, which was sponsored by the University of Chicago. One idea researched by Moore and covered in the report is to build buoyant "curtains," moored to the sea floor beyond the edge of ice shelves and glaciers, to block natural currents of warm water that erode ice sheets from below. (Especially in Antarctica, warming ocean water is a bigger threat to glaciers than warming air.)

Early designs called for plastic, but natural fibers such as canvas and sisal are now being considered to avoid pollution concerns. According to the white paper, initial modeling studies show that curtain heights stretching only partway up from the sea floor off the coast of western Antarctica could reduce glacial melting by a factor of 10 in some locations. Another intervention some scientists are contemplating would slow the slippage of ice sheets by drilling holes to their bases and pumping out water or heat.

Such massive engineering efforts would surely be some of the most expensive ever undertaken by humanity. At a workshop at the University of Chicago in October 2023, researchers suggested it might cost $88 billion to build 80 kilometers of curtains around Antarctic glaciers. Interventions would also require international political support, which some glaciologists view as an even bigger hurdle than the price tag. Twila Moon, a glaciologist at the U.S. National Snow and Ice Data Center, says such projects would require fleets of icebreakers, extensive shipping and supply chain needs, and significant personnel to construct, maintain, and guard the final structures — in ocean conditions she calls "eye-poppingly difficult." The projects could also incur unintended consequences, potentially disrupting ocean circulation patterns or endangering wildlife. Furthermore, it would take decades to find out whether the interventions were working.

Even if the engineering and logistics were possible, that "does not answer the question of whether it should be pursued," says Moon, who opposes even preliminary studies on the concepts.

"The report, which also stresses the importance of emissions reductions, takes pains to say it 'does not advocate for intervention; rather, it advocates for research into whether any interventions may be viable'..."
China

China Building Two-Thirds of World's Wind and Solar Projects (theguardian.com) 123

An anonymous reader quotes a report from The Guardian: The amount of wind and solar power under construction in China is now nearly twice as much as the rest of the world combined, a report has found. Research published on Thursday by Global Energy Monitor (GEM), an NGO, found that China has 180 gigawatts (GW) of utility-scale solar power under construction and 159GW of wind power. That brings the total of wind and solar power under construction to 339GW, well ahead of the 40GW under construction in the US. The researchers only looked at solar farms with a capacity of 20MW or more, which feed directly into the grid. That means that the total volume of solar power in China could be much higher, as small-scale solar farms account for about 40% of China's solar capacity.

Between March 2023 and March 2024, China installed more solar than it had in the previous three years combined, and more than the rest of the world combined for 2023, the GEM analysts found. China is on track to reach 1,200GW of installed wind and solar capacity by the end of 2024, six years ahead of the government's target. "The unabated wave of construction guarantees that China will continue leading in wind and solar installation in the near future, far ahead of the rest of the world," the report said. Earlier analysis suggests that China will need to install between 1,600GW and 1,800GW of wind and solar energy by 2030 to meet its target of producing 25% of all energy from non-fossil sources. Between 2020 and 2023, only 30% of the growth in energy consumption was met by renewable sources, compared with the target of 50%.
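The figures above can be sanity-checked with some quick arithmetic, using only the numbers quoted from the GEM report:

```python
# Quick arithmetic on the capacity figures quoted from the GEM report.
china_under_construction_gw = 339   # wind + solar under construction in China
us_under_construction_gw = 40

ratio = china_under_construction_gw / us_under_construction_gw
print(f"China's pipeline is ~{ratio:.1f}x the US pipeline")  # ~8.5x

installed_by_end_2024 = 1200        # GW, six years ahead of the government target
needed_by_2030 = (1600, 1800)       # GW range from the earlier analysis cited
gap_low = needed_by_2030[0] - installed_by_end_2024
gap_high = needed_by_2030[1] - installed_by_end_2024
print(f"Remaining build-out for the 2030 target: {gap_low}-{gap_high} GW, "
      f"roughly {gap_low // 6}-{gap_high // 6} GW per year over 2025-2030")
```

The implied pace, on the order of 66 to 100 GW of new wind and solar per year through 2030, is well below the 339 GW currently under construction, which is why the report calls continued Chinese leadership all but guaranteed.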
Power

Amazon Says It Now Runs On 100% Clean Power. Employees Say It's More Like 22% (fastcompany.com) 90

Today, Amazon announced that it reached its 100% renewable energy goal seven years ahead of schedule. However, as Fast Company's Adele Peters reports, "a group of Amazon employees argues that the company's math is misleading." From the report: A report (PDF) from the group, Amazon Employees for Climate Justice, argues that only 22% of the company's data centers in the U.S. actually run on clean power. The employees looked at where each data center was located and the mix of power on the regional grids -- how much was coming from coal, gas, or oil versus solar or wind. Amazon, like many other companies, buys renewable energy credits (RECs) for a certain amount of clean power that's produced by a solar plant or wind farm. In theory, RECs are supposed to push new renewable energy to get built. In reality, that doesn't always happen. The employee research found that 68% of Amazon's RECs are unbundled, meaning that they didn't fund new renewable infrastructure, but gave credit for renewables that already existed or were already going to be built.

As new data centers are built, they can mean that fossil-fuel-dependent grids end up building new fossil fuel power plants. "Dominion Energy, which is the utility in Virginia, is expanding because of demand, and Amazon is obviously one of their largest customers," says Eliza Pan, a representative from Amazon Employees for Climate Justice and a former Amazon employee. "Dominion's expansion is not renewable expansion. It's more fossil fuels." Amazon also doesn't buy credits that are specifically tied to the grids powering its data centers. The company might purchase RECs from Canada or Arizona, for example, to offset electricity used in Virginia. The credits also aren't tied to the time that the energy was used; data centers run all day and night, but most renewable energy is only available some of the time. The employee group argues that the company should follow the approach that Google takes. Google aims to use carbon-free energy, 24/7, on every grid where it operates.

Japan

Tokyo Residents Seek To Block Building of Massive Data Centre (usnews.com) 22

A group of residents in Tokyo said on Wednesday they were aiming to block construction of a massive logistics and data centre planned by Singaporean developer GLP, in a worrying sign for businesses looking to Japan to meet growing demand for data centres. From a report: The petition by more than 220 residents of Akishima city in western Tokyo follows a successful bid in December in Nagareyama city to quash a similar data-centre plan. The Akishima residents were concerned the centre would threaten wildlife, cause pollution, drive a spike in electricity usage, and drain the city's water supply, which comes solely from groundwater. They filed a petition to audit the urban planning procedure that approved GLP's 3.63-million-megawatt data centre, which GLP estimated would likely emit about 1.8 million tons of carbon dioxide a year. "One company will be responsible for ruining Akishima. That's what this development is," Yuji Ohtake, a representative of the residents' group, told a press conference. Global tech firms such as Microsoft, Amazon and Oracle also have plans to build data centres in Japan. The residents estimated that 3,000 of 4,800 trees on the site would have to be cut down, threatening the area's Eurasian goshawk birds and badgers.
AI

AWS App Studio Promises To Generate Enterprise Apps From a Written Prompt (techcrunch.com) 36

Amazon Web Services is the latest entrant to the generative AI game with the announcement of App Studio, a tool the company says can build complex software applications from simple written prompts. TechCrunch's Ron Miller reports: "App Studio is for technical folks who have technical expertise but are not professional developers, and we're enabling them to build enterprise-grade apps," Sriram Devanathan, GM of Amazon Q Apps and AWS App Studio, told TechCrunch. Amazon defines enterprise apps as having multiple UI pages with the ability to pull from multiple data sources, perform complex operations like joins and filters, and embed business logic in them. It is aimed at IT professionals, data engineers and enterprise architects, even product managers who might lack coding skills but have the requisite company knowledge to understand what kinds of internal software applications they might need. The company is hoping to enable these employees to build applications by describing the application they need and the data sources they wish to use.

Examples of the types of applications include an inventory-tracking system or claims approval process. The user starts by entering the name of an application, selecting the data sources and then describing the application they want to build. The system comes with some sample prompts to help, but users can enter an ad hoc description if they wish. It then builds a list of requirements for the application and what it will do, based on the description. The user can refine these requirements by interacting with the generative AI. In that way, it's not unlike a lot of no-code tools that preceded it, but Devanathan says it is different. [...] Once the application is complete, it goes through a mini DevOps pipeline where it can be tested before going into production. In terms of identity, security and governance, and other requirements any enterprise would have for applications being deployed, the administrator can link to existing systems when setting up the App Studio. When it gets deployed, AWS handles all of that on the back end for the customer, based on the information entered by the admin.

Transportation

New Research Finds America's EV Chargers Are Just 78% Reliable (and Underfunded) (hbs.edu) 220

Harvard Business School has an "Institute for Business in Global Society" that explores the societal impacts of business. And they've recently published some new AI-powered research about EV charging infrastructure, according to the Institute's blog, conducted by climate fellow Omar Asensio.

"Asensio and his team, supported by Microsoft and National Science Foundation awards, spent years building models and training AI tools to extract insights and make predictions," using the reviews drivers left (in more than 72 languages) on the smartphone apps drivers use to pay for charging. And ultimately this research identified "a significant obstacle to increasing electric vehicle (EV) sales and decreasing carbon emissions in the United States: owners' deep frustration with the state of charging infrastructure, including unreliability, erratic pricing, and lack of charging locations..." [C]harging stations in the U.S. have an average reliability score of only 78%, meaning that about one in five don't work. They are, on average, less reliable than regular gas stations, Asensio said. "Imagine if you go to a traditional gas station and two out of 10 times the pumps are out of order," he said. "Consumers would revolt...." EV drivers often find broken equipment, making charging unreliable at best and simply not as easy as the old way of topping off a tank of gas. The reason? "No one's maintaining these stations," Asensio said.
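The one-in-five figure understates how much site design matters. Treating the 78% average reliability as an independent per-charger probability (a simplification of the study's station-level score), a short sketch shows why multi-charger sites blunt the problem:

```python
# Probability a driver finds at least one working charger at a site,
# assuming each charger independently works with probability 0.78
# (an illustrative simplification of the 78% average reliability score).
p_working = 0.78

for chargers_at_site in (1, 2, 4):
    p_at_least_one = 1 - (1 - p_working) ** chargers_at_site
    print(f"{chargers_at_site} charger(s): "
          f"{p_at_least_one:.1%} chance at least one works")
# 1 charger -> 78.0%, 2 -> 95.2%, 4 -> 99.8%
```

Redundancy helps drivers, but it does not fix the underlying maintenance gap Asensio identifies: every broken unit still sits idle until someone repairs it.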
One problem? Another blog post by the Institute notes that America's approach to public charging has differed sharply from those in other countries: In Europe and Asia, governments started making major investments in public charging infrastructure years ago. In America, the initial thinking was that private companies would fill the public's need by spending money to install charging stations at hotels, shopping malls and other public venues. But that decentralized approach failed to meet demand and the Biden administration is now investing heavily to grow the charging network and facilitate EV sales... "No single market actor has sufficient incentive to build out a national charging network at a pace that meets our climate goals," the report declared. Citing research and the experience of other countries, it noted that "policies that increase access to charging stations may be among the best policies to increase EV sales." But the U.S. is far behind other countries.
Thanks to Slashdot reader NoWayNoShapeNoForm for sharing the article.
China

Is China Building Spy Bases in Cuba? (msn.com) 47

"Images captured from space show the growth of Cuba's electronic eavesdropping stations," reported the Wall Street Journal this week, citing a new report from the Center for Strategic and International Studies, a Washington-based think tank.

But they added that the stations "are believed to be linked to China," including previously-unreported construction about 70 miles from the U.S. naval base at Guantanamo Bay. (The Journal had previously reported China and Cuba were "negotiating closer defense and intelligence ties, including establishing a new joint military training facility on the island and an eavesdropping facility.") At the time, the Journal reported that Cuba and China were already jointly operating eavesdropping stations on the island, according to U.S. officials, who didn't disclose their locations. It couldn't be determined which, if any, of those are included in the sites covered by the CSIS report.

The concern about the stations, former officials and analysts say, is that China is using Cuba's geographical proximity to the southeastern U.S. to scoop up sensitive electronic communications from American military bases, space-launch facilities, and military and commercial shipping. Chinese facilities on the island "could also bolster China's use of telecommunications networks to spy on U.S. citizens," said Leland Lazarus, an expert on China-Latin America relations at Florida International University... Authors of the CSIS report, after analyzing years' worth of satellite imagery, found that Cuba has significantly upgraded and expanded its electronic spying facilities in recent years and pinpointed four sites — at Bejucal, El Salao, Wajay and Calabazar... "These are active locations with an evolving mission set," said Matthew Funaiole, a senior fellow at CSIS and the report's chief author.

The CSIS website shows some of the satellite images. "Pinpointing the specific targets of these assets is nearly impossible," they add — but since Cuba has no space program, "the types of space-tracking capabilities observed are likely intended to monitor the activities of other nations (like the United States) with a presence in orbit." While China's own satellites could also benefit from a North America-based ground station for communications, the Cuban facilities "would also provide the ability to monitor radio traffic and potentially intercept data delivered by U.S. satellites as they pass over highly sensitive military sites across the southern United States."

The think tank points out that one possibly-installed system would be within range to monitor rocket launches from Cape Canaveral and NASA's Kennedy Space Center. "Studying these launches — particularly those of SpaceX's Falcon 9 and Falcon Heavy reusable first-stage booster rocket systems — is likely of keen interest to China as it attempts to catch up to U.S. leadership in space launch technology."
Google

Google Paper: AI Potentially Breaking Reality Is a Feature Not a Bug (404media.co) 82

An anonymous reader shares a report: Generative AI could "distort collective understanding of socio-political reality or scientific consensus," and in many cases is already doing that, according to a new research paper from Google, one of the biggest companies in the world building, deploying, and promoting generative AI. The paper, "Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data," [PDF] was co-authored by researchers at Google's artificial intelligence research laboratory DeepMind, its security think tank Jigsaw, and its charitable arm Google.org, and aims to classify the different ways generative AI tools are being misused by analyzing about 200 incidents of misuse as reported in the media and research papers between January 2023 and March 2024.

Unlike self-serving warnings from OpenAI CEO Sam Altman or Elon Musk about the "existential risk" artificial general intelligence poses to humanity, Google's research focuses on real harm that generative AI is currently causing and could get worse in the future. Namely, that generative AI makes it very easy for anyone to flood the internet with generated text, audio, images, and videos. Much like another Google research paper about the dangers of generative AI I covered recently, Google's methodology here likely undercounts instances of AI-generated harm. But the most interesting observation in the paper is that the vast majority of these harms and how they "undermine public trust," as the researchers say, are often "neither overtly malicious nor explicitly violate these tools' content policies or terms of service." In other words, that type of content is a feature, not a bug.

AI

MIT Robotics Pioneer Rodney Brooks On Generative AI 41

An anonymous reader quotes a report from TechCrunch: When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor of Robotics Emeritus at MIT, he also co-founded three key companies: Rethink Robotics, iRobot and his current endeavor, Robust.ai. Brooks also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997. In fact, he likes to make predictions about the future of AI and keeps a scorecard on his blog of how well he's doing. He knows what he's talking about, and he thinks maybe it's time to put the brakes on the screaming hype that is generative AI. Brooks thinks it's impressive technology, but maybe not quite as capable as many are suggesting. "I'm not saying LLMs are not important, but we have to be careful [with] how we evaluate them," he told TechCrunch.

He says the trouble with generative AI is that, while it's perfectly capable of performing a certain set of tasks, it can't do everything a human can, and humans tend to overestimate its capabilities. "When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that," Brooks said. "And they're usually very over-optimistic, and that's because they use a model of a person's performance on a task." He added that the problem is that generative AI is not human or even human-like, and it's flawed to try and assign human capabilities to it. He says people see it as so capable they even want to use it for applications that don't make sense.

Brooks offers his latest company, Robust.ai, a warehouse robotics system, as an example of this. Someone suggested to him recently that it would be cool and efficient to tell his warehouse robots where to go by building an LLM for his system. In his estimation, however, this is not a reasonable use case for generative AI and would actually slow things down. It's instead much simpler to connect the robots to a stream of data coming from the warehouse management software. "When you have 10,000 orders that just came in that you have to ship in two hours, you have to optimize for that. Language is not gonna help; it's just going to slow things down," he said. "We have massive data processing and massive AI optimization techniques and planning. And that's how we get the orders completed fast."
"People say, 'Oh, the large language models are gonna make robots be able to do things they couldn't do.' That's not where the problem is. The problem with being able to do stuff is about control theory and all sorts of other hardcore math optimization," he said.

"It's not useful in the warehouse to tell an individual robot to go out and get one thing for one order, but it may be useful for eldercare in homes for people to be able to say things to the robots," he said.
Communications

Two of the German Military's New Spy Satellites Appear To Have Failed In Orbit (arstechnica.com) 34

Ars Technica's Eric Berger writes: On the day before Christmas last year, a Falcon 9 rocket launched from California and put two spy satellites into low-Earth orbit for the armed forces of Germany, which are collectively called the Bundeswehr. Initially, the mission appeared successful. The German satellite manufacturer, OHB, declared that the two satellites were "safely in orbit." The addition of the two SARah satellites completed a next-generation constellation of three reconnaissance satellites, the company said. However, six months later, the two satellites have yet to become operational. According to the German publication Der Spiegel, the antennas on the satellites cannot be unfolded. Engineers with OHB have tried to resolve the issue by resetting the flight software and performing maneuvers to vibrate or shake the antennas loose, all to no avail. As a result, last week, German lawmakers were informed that the two new satellites will probably not go into operation as planned.

The three-satellite constellation known as SARah -- the SAR is a reference to the synthetic aperture radar capability of the satellites -- was ordered in 2013 at a cost of $800 million. The first of the three satellites, SARah 1, launched in June 2022 on a Falcon 9 rocket. This satellite was built by Airbus in southern Germany, and it has since gone into operation without any problems. The two smaller satellites built by OHB, flying with passive synthetic aperture radar reflectors, were intended to complement the SARah 1 satellite, which carries an active phased-array radar antenna. [...] According to the Der Spiegel report, the Bundeswehr says the two SARah satellites built by OHB remain the property of the German company and would only be turned over to the military once they were operational. As a result, the military says OHB will be responsible for building two replacement satellites. Shockingly, the German publication says that its sources indicated OHB did not fully test the functionality and deployment of the satellite antennas on the ground. This could not be confirmed.

Earth

Canned Water Made From Air and Sunlight To Hit US Stores in September (newscientist.com) 101

Canned water distilled from the air will be available to buy in the US later this year, in an effort to promote solar-powered "hydropanels" that provide an off-grid method of producing drinking water. New Scientist adds: The panels, created by Arizona-based firm Source, use solar energy to power fans, which draw water vapour from the air. A water-absorbing substance, known as a desiccant, traps the moisture, before solar energy from the panel releases the moisture into a collection area within the panel. The distilled water is then sent to a pressurised tank, where the pH is tweaked and minerals like calcium and magnesium are added.

Each panel can produce up to 3 litres of drinking water a day, about the average daily intake for one person. The process works effectively even in hot, arid conditions such as Arizona, says Cody Friesen, the firm's founder. Source, which launched in 2014 as Zero Mass Water, already has hydropanels installed in 56 countries around the world. The panels can be installed as ground arrays, or on rooftops, linked into a building's drinking water pipes. Many sites serve off-grid communities without easy access to potable water, says Friesen. Most of the panels, which retail at almost $3000 apiece, are purchased by governments or development banks, although households can also install panels privately.
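Those two numbers, $3,000 per panel and up to 3 litres a day, imply a rough capital cost per litre. The lifetime below is an assumption for illustration; the article does not state one.

```python
# Rough cost-per-litre sketch from the article's figures, plus an
# assumed 15-year panel lifetime (our assumption, not the article's).
panel_cost_usd = 3000
litres_per_day = 3            # "up to", so this is the best case
assumed_lifetime_years = 15

lifetime_litres = litres_per_day * 365 * assumed_lifetime_years
cost_per_litre = panel_cost_usd / lifetime_litres
print(f"Lifetime output: {lifetime_litres:,} L; "
      f"capital cost ~${cost_per_litre:.2f}/L")
```

Under these assumptions the panel yields roughly 16,400 litres at about $0.18 per litre before maintenance: far cheaper than bottled water, but orders of magnitude above municipal tap rates, which is consistent with the focus on off-grid sites.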

Google

Google Might Abandon ChromeOS Flex (zdnet.com) 59

An anonymous reader shares a report: ChromeOS Flex extends the lifespan of older hardware and contributes to reducing e-waste, making it an environmentally conscious choice. Unfortunately, recent developments hint at a potential end for ChromeOS Flex. As detailed in a June 12 blog post by Prajakta Gudadhe, senior director of engineering for ChromeOS, and Alexander Kuscher, senior director of product management for ChromeOS, Google's announcement about integrating ChromeOS with Android to enhance AI capabilities suggests that Flex might not be part of this future.

Google's plan, as detailed, suggests that ChromeOS Flex could be phased out, leaving its current users in a difficult position. The ChromiumOS community around ChromeOS Flex may attempt to adjust to these changes if Google open sources ChromeOS Flex, but this is not a guarantee. In the meantime, users may want to consider alternatives, such as various Linux distributions, to keep their older hardware functional.

AI

Amazon, Built by Retail, Invests in Its AI Future (wsj.com) 26

An anonymous reader shares a report: Amazon built a $2 trillion company through years of aggressive spending on its retail and logistics businesses. Its future gains will likely be determined by the billions designated to fund its artificial-intelligence push. Amazon is planning to spend more than $100 billion over the next decade on data centers, an impressive level of investment even for a company known for its spending ways. The Seattle company is now devoting more investment money to its cloud computing and AI infrastructure than to its sprawling network of e-commerce warehouses.

Amazon Web Services, the arm that manages Amazon's cloud business, has opened data centers for years, but executives said there is a surge in investment now to meet demand triggered by the excitement around AI. "We have to dive in. We have to figure it out," said John Felton, who took over as AWS's chief financial officer this year after spending most of his career in Amazon's retail fulfillment operations. The company's financial commitment reflects the importance and high costs of AI. Felton said building for AI today feels like building that massive delivery network in years past. "It's a little uncertain," he said. AWS is expanding in Virginia, Ohio and elsewhere.

Sci-Fi

William Gibson's 'Neuromancer' to Become a Series on Apple TV+ 149

It's been adapted into a graphic novel, a videogame, a radio play, and an opera, according to Wikipedia — which also describes years of trying to adapt Neuromancer into a movie. "The landmark 1984 cyberpunk novel has been on Hollywood's wishlist for decades," writes Gizmodo, "with multiple filmmakers attempting to bring it to the big screen." (Back in 2010, Slashdot's CmdrTaco even posted an update with the headline "Neuromancer Movie In Your Future?" with a 2011 story promising the movie deal was "moving forward....")

But now Deadline reports it's becoming a 10-episode series on Apple TV+ (co-produced by Apple Studios) starring Callum Turner and Brianna Middleton: Created for television by Graham Roland and JD Dillard, Neuromancer follows a damaged, top-rung super-hacker named Case (Turner) who is thrust into a web of digital espionage and high-stakes crime with his partner Molly (Middleton), a razor-girl assassin with mirrored eyes, aiming to pull a heist on a corporate dynasty with untold secrets.
More from Gizmodo: "We're incredibly excited to be bringing this iconic property to Apple TV+," Roland and Dillard said in a statement. "Since we became friends nearly 10 years ago, we've looked for something to team up on, so this collaboration marks a dream come true. Neuromancer has inspired so much of the science fiction that's come after it and we're looking forward to bringing television audiences into Gibson's definitive 'cyberpunk' world."
The novel launched Gibson's "Sprawl" trilogy of novels (building on the dystopia in his 1982 short story "Burning Chrome"), also resurrecting the "Molly Millions" character from Johnny Mnemonic — an even earlier short story from 1981...
Cloud

Could We Lower The Carbon Footprint of Data Centers By Launching Them Into Space? (cnbc.com) 114

The Wall Street Journal reports that a European initiative studying the feasibility of data centers in space "has found that the project could be economically viable" — while reducing the data center's carbon footprint.

And they add that according to coordinator Thales Alenia Space, the project "could also generate a return on investment of several billion euros between now and 2050." The study — dubbed Ascend, short for Advanced Space Cloud for European Net zero emission and Data sovereignty — was funded by the European Union and sought to compare the environmental impacts of space-based and Earth-based data centers, the company said. Moving forward, the company plans to consolidate and optimize its results. Space data centers would be powered by solar energy outside the Earth's atmosphere, aiming to contribute to the European Union's goal of achieving carbon neutrality by 2050, the project coordinator said... Space data centers wouldn't require water to cool them, the company said.
The 16-month study came to a "very encouraging" conclusion, project manager Damien Dumestier told CNBC. With some caveats... The facilities that the study explored launching into space would orbit at an altitude of around 1,400 kilometers (869.9 miles) — about three times the altitude of the International Space Station. Dumestier explained that ASCEND would aim to deploy 13 space data center building blocks with a total capacity of 10 megawatts in 2036, in order to achieve the starting point for cloud service commercialization... The study found that, in order to significantly reduce CO2 emissions, a new type of launcher that is 10 times less emissive would need to be developed. ArianeGroup, one of the 12 companies participating in the study, is working to speed up the development of such reusable and eco-friendly launchers. The target is to have the first eco-launcher ready by 2035 and then to allow for 15 years of deployment in order to have the huge capacity required to make the project feasible, said Dumestier...

Michael Winterson, managing director of the European Data Centre Association, acknowledges that a space data center would benefit from increased efficiency from solar power without the interruption of weather patterns — but the center would require significant amounts of rocket fuel to keep it in orbit. Winterson estimates that even a small 1 megawatt center in low earth orbit would need around 280,000 kilograms of rocket fuel per year at a cost of around $140 million in 2030 — a calculation based on a significant decrease in launch costs, which has yet to take place. "There will be specialist services that will be suited to this idea, but it will in no way be a market replacement," said Winterson. "Applications that might be well served would be very specific, such as military/surveillance, broadcasting, telecommunications and financial trading services. All other services would not competitively run from space," he added in emailed comments.
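Winterson's estimate implies a launch cost of about $500 per kilogram of station-keeping propellant. The arithmetic, using only the figures quoted in the article:

```python
# Implied launch cost behind Winterson's station-keeping estimate
# (figures taken directly from the article).

fuel_kg_per_year = 280_000      # propellant for a 1 MW center in low earth orbit
annual_cost_usd = 140_000_000   # projected cost per year at assumed 2030 prices

implied_cost_per_kg = annual_cost_usd / fuel_kg_per_year
print(f"Implied launch cost: ${implied_cost_per_kg:,.0f}/kg")  # $500/kg
```

That $500/kg assumption is the "significant decrease in launch costs, which has yet to take place" the article flags.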

[Merima Dzanic, head of strategy and operations at the Danish Data Center Industry Association] also signaled some skepticism around security risks, noting, "Space is being increasingly politicised and weaponized amongst the different countries. So obviously, there is a security implications on what type of data you send out there."

It's not the only study looking at the potential of orbital data centers, notes CNBC. "Microsoft, which has previously trialed the use of a subsea data center that was positioned 117 feet deep on the seafloor, is collaborating with companies such as Loft Orbital to explore the challenges in executing AI and computing in space."

The article also points out that the total global electricity consumption from data centers could exceed 1,000 terawatt-hours in 2026. "That's roughly equivalent to the electricity consumption of Japan, according to the International Energy Agency."
Toys

Lego Bricks Made From Meteorite Dust 3D Printed by Europe's Space Agency (engadget.com) 27

Lego teamed up with the European Space Agency to make Lego pieces from actual meteorite dust, writes Engadget.

"It's a proof of concept to show how astronauts could use moondust to build lunar structures." Consider the sheer amount of energy and money required to haul up building materials from Earth to the Moon. It would be a game changer to, instead, build everything from pre-existing lunar materials. There's a layer of rock and mineral deposits at the surface of the Moon, which is called lunar regolith...

However, there isn't too much lunar regolith here on Earth for folks to experiment with. ESA scientists made their own regolith by grinding up a really old meteorite. [4.5 billion years, according to Lego's site, discovered in Africa in 2000.] The dust from this meteorite was turned into a mixture that was used to 3D print the Lego pieces. Voila. Moon bricks. They click together just like regular Lego bricks, though they only come in one color (space gray, obviously).

"The result is amazing," says ESA Science Officer Aidan Cowley on the Lego site (though "the bricks may look a little rougher than usual. Importantly the clutch power still works, enabling us to play and test our designs.")

"Nobody has built a structure on the Moon," Cowley said in an ESA statement. "So it was great to have the flexibility to try out all kinds of designs and building techniques with our space bricks." And the bricks will also be "helping to inspire the next generation of space engineers," according to the ESA's announcement — since they'll be on display in select Lego stores in the U.S., Canada, the U.K., Spain, France, Germany, the Netherlands, and Australia through September 20th.
The Matrix

Researchers Upend AI Status Quo By Eliminating Matrix Multiplication In LLMs 72

Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a new method to run AI language models more efficiently by eliminating matrix multiplication, potentially reducing the environmental impact and operational costs of AI systems. Ars Technica's Benj Edwards reports: Matrix multiplication (often abbreviated to "MatMul") is at the center of most neural network computational tasks today, and GPUs are particularly good at executing the math quickly because they can perform large numbers of multiplication operations in parallel. [...] In the new paper, titled "Scalable MatMul-free Language Modeling," the researchers describe creating a custom 2.7 billion parameter model without using MatMul that features similar performance to conventional large language models (LLMs). They also demonstrate running a 1.3 billion parameter model at 23.8 tokens per second on a GPU that was accelerated by a custom-programmed FPGA chip that uses about 13 watts of power (not counting the GPU's power draw). The implication is that a more efficient FPGA "paves the way for the development of more efficient and hardware-friendly architectures," they write.
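The trick the paper leans on is constraining weights to ternary values, so a dense layer's matrix-vector product collapses into additions and subtractions. A toy NumPy sketch of that idea (not the authors' code; the shapes and values are illustrative):

```python
import numpy as np

# Toy illustration of the idea behind MatMul-free layers: if weights are
# constrained to {-1, 0, +1} (ternary), a dense layer's matrix-vector
# product needs no multiplications -- each output is just a sum of
# selected inputs minus a sum of others.

rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8))   # ternary weight matrix
x = rng.standard_normal(8)             # input activations

# Standard dense layer: y = W @ x (uses multiply-accumulate)
y_matmul = W @ x

# Multiplication-free equivalent: add where w = +1, subtract where w = -1
y_addsub = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

assert np.allclose(y_matmul, y_addsub)
print("ternary layer computed without multiplications")
```

Addition is far cheaper than multiplication in silicon, which is why hardware like the researchers' custom FPGA can exploit this so aggressively.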

The paper doesn't provide power estimates for conventional LLMs, but this post from UC Santa Cruz estimates about 700 watts for a conventional model. However, in our experience, you can run a 2.7B parameter version of Llama 2 competently on a home PC with an RTX 3060 (that uses about 200 watts peak) powered by a 500-watt power supply. So, if you could theoretically completely run an LLM in only 13 watts on an FPGA (without a GPU), that would be a 38-fold decrease in power usage. The technique has not yet been peer-reviewed, but the researchers -- Rui-Jie Zhu, Yu Zhang, Ethan Sifferman, Tyler Sheaves, Yiqiao Wang, Dustin Richmond, Peng Zhou, and Jason Eshraghian -- claim that their work challenges the prevailing paradigm that matrix multiplication operations are indispensable for building high-performing language models. They argue that their approach could make large language models more accessible, efficient, and sustainable, particularly for deployment on resource-constrained hardware like smartphones. [...]
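For what it's worth, the 38-fold figure comes from comparing the home PC's 500-watt power supply against the 13-watt FPGA measurement (not the 700-watt or 200-watt numbers also mentioned):

```python
# Source of the "38-fold decrease" claim: whole-PC power budget vs. the
# FPGA measurement (GPU power excluded from the 13 W figure).

psu_watts = 500   # home PC power supply rating
fpga_watts = 13   # FPGA power draw reported by the researchers

print(f"{psu_watts / fpga_watts:.0f}x reduction")  # prints "38x reduction"
```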

The researchers say that scaling laws observed in their experiments suggest that the MatMul-free LM may also outperform traditional LLMs at very large scales. The researchers project that their approach could theoretically intersect with and surpass the performance of standard LLMs at scales around 10^23 FLOPS, which is roughly equivalent to the training compute required for models like Meta's Llama-3 8B or Llama-2 70B. However, the authors note that their work has limitations. The MatMul-free LM has not been tested on extremely large-scale models (e.g., 100 billion-plus parameters) due to computational constraints. They call for institutions with larger resources to invest in scaling up and further developing this lightweight approach to language modeling.
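The ~10^23 figure is consistent with the widely used "6 × parameters × tokens" approximation for training compute; a rough check (this is not the paper's own calculation, and the token counts are the publicly reported figures for each model):

```python
# Rough check of the ~10^23 FLOPs crossover using the common
# approximation: training compute ~= 6 * parameters * tokens.

def train_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

llama2_70b = train_flops(70e9, 2.0e12)   # Llama-2 70B: ~2T training tokens
llama3_8b = train_flops(8e9, 15e12)      # Llama-3 8B: ~15T training tokens

print(f"Llama-2 70B: ~{llama2_70b:.1e} FLOPs")  # ~8.4e+23
print(f"Llama-3 8B:  ~{llama3_8b:.1e} FLOPs")   # ~7.2e+23
```

Both land within an order of magnitude of the 10^23 FLOPs scale the researchers project for the crossover.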
