Science

'Sleep Language' Could Enable Communication During Lucid Dreams (arstechnica.com) 46

Researchers have developed a "language" called Remmyo, which relies on specific facial muscle movements that can occur during rapid eye movement (REM) sleep. People who are capable of lucid dreaming can learn this language during their waking hours and potentially communicate while they are asleep. Ars Technica reports: "You can transfer all important information from lucid dreams using no more than three letters in a word," [sleep expert Michael Raduga], who founded Phase Research Center in 2007 to study sleep, told Ars. "This level of optimization took a lot of time and intellectual resources." Remmyo consists of six sets of facial movements that can be detected by electromyography (EMG) sensors on the face. Slight electrical impulses that reach facial muscles make them capable of movement during sleep paralysis, and these are picked up by sensors and transferred to software that can type, vocalize, and translate Remmyo. Translation depends on which Remmyo letters are used by the sleeper and picked up by the software, which already has information from multiple dictionaries stored in its virtual brain. It can translate Remmyo into another language as it is being "spoken" by the sleeper. "We can digitally vocalize Remmyo or its translation in real time, which helps us to hear speech from lucid dreams," Raduga said.
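Neither the Remmyo alphabet nor Raduga's decoding software is public. As a minimal sketch of the idea, though, suppose six EMG channels (one per facial-muscle group) and map whichever channel dominates a time window to a letter; the letter names and threshold here are placeholders, not Remmyo's actual encoding:

```python
import numpy as np

# Hypothetical mapping of six EMG channels to Remmyo "letters".
# The real alphabet and channel layout are not public.
LETTERS = ["A", "B", "C", "D", "E", "F"]

def decode_window(samples: np.ndarray, threshold: float = 0.3) -> str | None:
    """Decode one time window of EMG data with shape (n_samples, 6).

    Several muscles fire at once (crosstalk), so take the channel with
    the largest mean rectified amplitude, provided it clears a noise floor.
    """
    energy = np.abs(samples).mean(axis=0)  # mean rectified amplitude per channel
    dominant = int(energy.argmax())        # strongest muscle wins
    return LETTERS[dominant] if energy[dominant] > threshold else None

# Toy usage: channel 2 dominates, so the window decodes to "C".
rng = np.random.default_rng(0)
window = rng.normal(0.0, 0.05, size=(256, 6))
window[:, 2] += 0.8
print(decode_window(window))  # -> C
```

The "strongest channel wins" rule mirrors the handwritten-algorithm approach Raduga describes below; a trained classifier would replace the argmax.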

For his initial experiment, Raduga used the sleep laboratory of the Neurological Clinic of Frankfurt University in Germany. His subjects had already learned Remmyo and were also trained to enter a state of lucid dreaming and signal that they were in that lucid state during REM sleep. While they were immersed in lucid dreams, EMG sensors on their faces sent information from electrical impulses to the translation software. The results were uncertain: based on attempts to translate planned phrases, Remmyo turned out to be anywhere from 13 to 81 percent effective. In the interview, Raduga said he faced skepticism about the effectiveness of the translation software during the peer review process of his study, which is now published in the journal Psychology of Consciousness: Theory, Research and Practice. He hopes to make the results more consistent by improving the translation methods in the future.
"The main problem is that it is hard to use only one muscle on your face to say something in Remmyo," said Raduga. "Unintentionally, people strain more than one muscle, and EMG sensors detect it all. Now we use only handwritten algorithms to overcome the problem, but we're going to use machine learning and AI to improve Remmyo decoding."
Sci-Fi

UFO Hunters Built an Open-Source AI System To Scan the Skies (vice.com) 72

An anonymous reader shares an excerpt from a Motherboard article: Now, frustrated with a lack of transparency and trust around official accounts of UFO phenomena, a team of developers has decided to take matters into their own hands with an open source citizen science project called Sky360, which aims to blanket the earth in affordable monitoring stations to watch the skies 24/7, and even plans to use AI and machine learning to spot anomalous behavior. Unlike earlier 20th century efforts such as inventors proposing "geomagnetic detectors" to discover nearby UFOs, or more recent software like the short-lived UFO ID project, Sky360 hopes that it can establish a network of autonomously operating surveillance units to gather real-time data of our skies. Citizen-led UFO research is not new. Organizations like MUFON, founded in 1969, have long investigated sightings, while amateur groups like the American Flying Saucer Investigating Committee of Columbus even ran statistical analysis on sightings in the 1960s (finding that most of them happened on Wednesdays). However, Sky360 believes that the level of interest and the technology have now both reached an inflection point, where citizen researchers can actually generate large-scale actionable data for analysis all on their own.

The Sky360 stations consist of an AllSkyCam with a wide angle fish-eye lens and a pan-tilt-focus camera, with the fish-eye camera registering all movement. Underlying software performs an initial rough analysis of these events, and decides whether to activate other sensors -- and if so, the pan-tilt-focus camera zooms in on the object, tracks it, and further analyzes it. According to developer Nikola Galiot, the software is currently based on a computer vision "background subtraction" algorithm that detects any motion in the frame compared to previous frames captured; anything that moves is then tracked as long as possible and then automatically classified. The idea is that the more data these monitoring stations acquire, the better the classification will be. There are a combination of AI models under the hood, and the system is built using the open-source TensorFlow machine learning platform so it can be deployed on almost any computer. Next, the all-volunteer team wants to create a single algorithm capable of detection, tracking and classification all in one.
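Sky360's actual pipeline is more elaborate, but the background-subtraction stage described above can be sketched with OpenCV; the camera index, history length, and minimum blob area below are illustrative assumptions, not the project's real values:

```python
import cv2

cap = cv2.VideoCapture(0)  # assumed camera index
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # pixels that differ from the learned background
    # Drop single-pixel noise before looking for moving blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > 50:  # assumed minimum size for a "mover"
            x, y, w, h = cv2.boundingRect(contour)
            # In Sky360's design, a detection here would cue the
            # pan-tilt-focus camera to zoom in and start tracking.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("sky", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```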

All the hardware components, from the cameras to passive radar and temperature gauges, can be bought cheaply and off-the-shelf worldwide -- with the ultimate goal of finding the most effective combinations for the lowest price. Schematics, blueprints, and suggested equipment are all available on the Sky360 site and interested parties are encouraged to join the project's Discord server. There are currently 20 stations set up across the world, from the USA to Canada to more remote regions like the Azores in the middle of the Atlantic [...] Once enough of the Sky360 stations have been deployed, the next step is to work towards real-time monitoring, drawing all the data together, and analyzing it. By striving to create a huge, open, transparent network, anyone would be free to examine the data themselves.

In June of this year, Sky360, which has a team of 30 volunteer developers working on the software, hopes to release its first developer-oriented open source build. At its heart is a component called 'SimpleTracker', which receives images frame by frame from the cameras, auto-adjusting parameters to get the best picture possible. The component determines whether something in the frame is moving, and if so, another analysis is performed, where a machine learning algorithm trained on the trajectories of normal flying objects like planes, birds, or insects, attempts to classify the object based on its movement. If it seems anomalous, it's flagged for further investigation.
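The article doesn't spell out SimpleTracker's features, but trajectory-based classification of the kind it describes can be approximated with a couple of hand-built statistics; the features and thresholds below are stand-ins for the trained model:

```python
import numpy as np

def trajectory_features(points: np.ndarray) -> tuple[float, float]:
    """Reduce a track (N x 2 pixel positions over time) to two features.

    Planes tend to move in straight lines at steady speed; insects jitter.
    """
    steps = np.diff(points, axis=0)
    speeds = np.linalg.norm(steps, axis=1)
    net = np.linalg.norm(points[-1] - points[0])
    straightness = net / (speeds.sum() + 1e-9)            # 1.0 = perfectly straight
    speed_variation = speeds.std() / (speeds.mean() + 1e-9)
    return straightness, speed_variation

def looks_anomalous(points: np.ndarray) -> bool:
    # Hand-written thresholds; the ML model the article mentions would
    # replace this rule after training on plane/bird/insect tracks.
    straightness, speed_variation = trajectory_features(points)
    return straightness < 0.6 or speed_variation > 1.5

# A straight, steady track (plane-like) is not flagged:
t = np.linspace(0.0, 1.0, 50)
plane = np.c_[100 + 400 * t, 200 + 120 * t]
print(looks_anomalous(plane))  # False
```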

Space

Arianespace CEO: Europe Won't Have Reusable Rockets For Another Decade (space.com) 123

Arianespace CEO Stephane Israel says Europe will have to wait until the 2030s for a reusable rocket. Space.com reports: Arianespace is currently preparing its Ariane 6 rocket for a test flight following years of delays. Europe's workhorse Ariane 5, which has been operational for nearly 30 years, recently launched the JUICE Jupiter mission and now has only one flight remaining before retirement. Ariane 6 will be expendable, despite entering development nearly a decade ago, when reusability was being developed and tested in the United States, most famously by SpaceX.

"When the decisions were made on Ariane 6, we did so with the technologies that were available to quickly introduce a new rocket," said Israel, according to European Spaceflight. The delays to Ariane 6, however, mean that Europe lacks its own options for access to space. This issue was highlighted in a recent report from an independent advisory group to the European Space Agency. Israel stated that, in his opinion, Ariane 6 would fly for more than 10 years before Europe transitions to a reusable successor in the 2030s.

Aside from Arianespace, Europe is currently fostering a number of private rocket companies, including Rocket Factory Augsburg, Isar Aerospace, PLD Space and Skyrora, some of whose rockets are intended to be reusable. However, the rockets in development are light-lift vehicles, whereas Ariane 6 and its possible successor are much more capable, medium-to-heavy-lift rockets.

Science

Brazilian Frog Might Be the First Pollinating Amphibian Known To Science (science.org) 20

An anonymous reader quotes a report from Science Magazine: The creamy fruit and nectar-rich flowers of the milk fruit tree are irresistible to Xenohyla truncata, a tree frog native to Brazil. On warm nights, the dusky-colored frogs take to the trees en masse, jostling one another for a chance to nibble the fruit and slurp the nectar. In the process, the frogs become covered in sticky pollen grains -- and might inadvertently pollinate the plants, too. It's the first time a frog -- or any amphibian -- has been observed pollinating a plant, researchers reported last month in Food Webs.

Scientists long thought only insects and birds served as pollinators, but research has revealed that some reptiles and mammals are more than up to the task. Now, scientists must consider whether amphibians are also capable of getting the job done. It's likely that the nectar-loving frogs, also known as Izecksohn's Brazilian tree frogs, are transferring pollen as they move from flower to flower, the authors say. But more research is needed, they add, to confirm that frogs have joined the planet's pantheon of pollinators.

AI

White House Unveils Initiatives To Reduce Risks of AI (nytimes.com) 33

The White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology. From a report: The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards "the American people's rights and safety," adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference. The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology.

A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments. The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their jobs. Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups.

AI

First Empirical Study of the Real-World Economic Effects of New AI Systems (npr.org) 39

An anonymous reader quotes a report from NPR: Back in 2017, economist Erik Brynjolfsson published a paper (PDF) in one of the top academic journals, Science, which outlined the kind of work that he believed AI was capable of doing. It was called "What Can Machine Learning Do? Workforce Implications." Now, Brynjolfsson says, "I have to update that paper dramatically given what's happened in the past year or two." Sure, the current pace of change can feel dizzying and kinda scary. But Brynjolfsson is not catastrophizing. In fact, quite the opposite. He's earned a reputation as a "techno-optimist." And, recently at least, he has a real reason to be optimistic about what AI could mean for the economy. Last week, Brynjolfsson, together with MIT economists Danielle Li and Lindsey R. Raymond, released what is, to the best of our knowledge, the first empirical study of the real-world economic effects of new AI systems. They looked at what happened to a company and its workers after it incorporated a version of ChatGPT, a popular interactive AI chatbot, into workflows.

What the economists found offers potentially great news for the economy, at least in one dimension that is crucial to improving our living standards: AI caused a group of workers to become much more productive. Backed by AI, these workers were able to accomplish much more in less time, with greater customer satisfaction to boot. At the same time, however, the study also shines a spotlight on just how powerful AI is, how disruptive it might be, and suggests that this new, astonishing technology could have economic effects that change the shape of income inequality going forward.
Brynjolfsson and his colleagues described how an undisclosed Fortune 500 company implemented an earlier version of OpenAI's ChatGPT to assist its customer support agents in troubleshooting technical issues through online chat windows. The AI chatbot, trained on previous conversations between agents and customers, improved the performance of less experienced agents, making them as effective as those with more experience. The use of AI led to a 14% increase in productivity on average, higher customer satisfaction ratings, and reduced turnover rates. However, the study also revealed that more experienced agents saw no significant benefit from using AI.

The findings suggest that AI has the potential to improve productivity and reduce inequality by benefiting workers who were previously left behind in the technological era. Nonetheless, it raises questions about how the benefits of AI should be distributed and whether it may devalue specialized skills in certain occupations. While the impact of AI is still being studied, its ability to handle non-routine tasks and learn on the fly indicates that it could have different effects on the job market compared to previous technologies.
AI

OpenAI CTO Says AI Systems Should 'Absolutely' Be Regulated (securityweek.com) 57

Slashdot reader wiredmikey writes: Mira Murati, CTO of ChatGPT creator OpenAI, says artificial general intelligence (AGI) systems should "absolutely" be regulated. In a recent interview, Murati said the company is constantly talking with governments, regulators, and other organizations to agree on some level of standards. "We've done some work on that in the past couple of years with large language model developers in aligning on some basic safety standards for deployment of these models," Murati said. "But I think a lot more needs to happen. Government regulators should certainly be very involved."
Murati also discussed OpenAI's approach to AGI with "human-level capability." OpenAI's vision is to build AGI safely, figuring out how to build it so that it is aligned with human intentions -- so that AI systems do the things we want them to do and benefit as many people as possible, ideally everyone.

Q: Is there a path between products like GPT-4 and AGI?

A: We're far from the point of having a safe, reliable, aligned AGI system. Our path to getting there has a couple of important vectors. From a research standpoint, we're trying to build systems that have a robust understanding of the world similarly to how we do as humans. Systems like GPT-3 initially were trained only on text data, but our world is not only made of text, so we have images as well and then we started introducing other modalities.

The other angle has been scaling these systems to increase their generality. With GPT-4, we're dealing with a much more capable system, specifically from the angle of reasoning about things. This capability is key. If the model is smart enough to understand an ambiguous direction or a high-level direction, then you can figure out how to make it follow this direction. But if it doesn't even understand that high-level goal or high-level direction, it's much harder to align it. It's not enough to build this technology in a vacuum in a lab. We really need this contact with reality, with the real world, to see where are the weaknesses, where are the breakage points, and try to do so in a way that's controlled and low risk and get as much feedback as possible.

Q: What safety measures do you take?

A: We think about interventions at each stage. We redact certain data from the initial training on the model. With DALL-E, we wanted to reduce harmful bias issues we were seeing... In the model training, with ChatGPT in particular, we did reinforcement learning with human feedback to help the model get more aligned with human preferences. Basically what we're trying to do is amplify what's considered good behavior and then de-amplify what's considered bad behavior.
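OpenAI's RLHF pipeline is far larger than anything that fits here, but the reward-modeling step behind "amplify good behavior, de-amplify bad behavior" can be sketched: train a scorer so that human-preferred responses outrank rejected ones, then use it as the reward during RL fine-tuning. A toy PyTorch version over stand-in embeddings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: scores a response embedding with a scalar.
# A stand-in for the real thing, which scores full token sequences.
class RewardModel(nn.Module):
    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.score(emb).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake embeddings for a batch of (human-preferred, rejected) response pairs.
preferred, rejected = torch.randn(32, 64), torch.randn(32, 64)

# Pairwise (Bradley-Terry) loss: push r(preferred) above r(rejected).
opt.zero_grad()
loss = -F.logsigmoid(model(preferred) - model(rejected)).mean()
loss.backward()
opt.step()
```

In the RL stage, the chatbot policy is then fine-tuned to maximize this learned reward, which is what amplifies the preferred behavior.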

One final quote from the interview: "Designing safety mechanisms in complex systems is hard... The safety mechanisms and coordination mechanisms in these AI systems and any complex technological system [are] difficult and require a lot of thought, exploration and coordination among players."
Power

Brit Fusion Magnets Set For US Gamma Ray Bombardment Test (theregister.com) 13

UK fusion company Tokamak Energy claims to have made a breakthrough in fusion magnets, developing technology capable of withstanding the electromagnetic bombardment from a fusion reaction while holding the reaction in place. It plans to put its technology to the test at a U.S. gamma ray facility in the desert. The Register reports: At its Oxford headquarters, Tokamak Energy, which is collaborating with the UK government's nuclear fusion program, has built a specialist gamma radiation cryostat system, designed around a vacuum device which insulates the magnets from fusion energy. The system is now set to be disassembled, shipped, and rebuilt at the Gamma Irradiation Facility based at the US Department of Energy's Sandia Laboratories in Albuquerque, New Mexico.

Tokamak Energy said Sandia was one of the few places in the world capable of housing the system while exposing the company's superconducting magnets to gamma radiation comparable with the expected emissions of a fusion power plant. Research and analysis on sets of individual magnets will run for six months at the New Mexico facility, which is so powerful it can do a 60-year lifetime test in just two weeks, Tokamak Energy said. The company recently signed an agreement with UK Atomic Energy Authority (UKAEA) to jointly develop technology, and share resources and equipment for the development of a Spherical Tokamak for Energy Production (STEP).

AI

Stability AI Launches StableLM, an Open Source ChatGPT Alternative 17

An anonymous reader quotes a report from Ars Technica: On Wednesday, Stability AI released a new family of open source AI language models called StableLM. Stability hopes to repeat the catalyzing effects of its Stable Diffusion open source image synthesis model, launched in 2022. With refinement, StableLM could be used to build an open source alternative to ChatGPT. StableLM is currently available in alpha form on GitHub in 3 billion and 7 billion parameter model sizes, with 15 billion and 65 billion parameter models to follow, according to Stability. The company is releasing the models under the Creative Commons BY-SA-4.0 license, which requires that adaptations must credit the original creator and share the same license.

Stability AI Ltd. is a London-based firm that has positioned itself as an open source rival to OpenAI, which, despite its "open" name, rarely releases open source models and keeps its neural network weights -- the mass of numbers that defines the core functionality of an AI model -- proprietary. "Language models will form the backbone of our digital economy, and we want everyone to have a voice in their design," writes Stability in an introductory blog post. "Models like StableLM demonstrate our commitment to AI technology that is transparent, accessible, and supportive." Like GPT-4 -- the large language model (LLM) that powers the most powerful version of ChatGPT -- StableLM generates text by predicting the next token (word fragment) in a sequence. That sequence starts with information provided by a human in the form of a "prompt." As a result, StableLM can compose human-like text and write programs.
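As a concrete illustration of that next-token loop, a minimal generation script using the Hugging Face transformers library; the checkpoint name below is Stability's announced alpha release and may move or be superseded:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "stabilityai/stablelm-base-alpha-7b"  # assumed checkpoint id
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "Open source language models are"
inputs = tok(prompt, return_tensors="pt")

# generate() repeatedly predicts the next token and appends it to the
# sequence -- the loop described above.
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tok.decode(out[0], skip_special_tokens=True))
```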

Like other recent "small" LLMs such as Meta's LLaMA, Stanford Alpaca, Cerebras-GPT, and Dolly 2.0, StableLM purports to achieve similar performance to OpenAI's benchmark GPT-3 model while using far fewer parameters -- 7 billion for StableLM versus 175 billion for GPT-3. Parameters are variables that a language model uses to learn from training data. Having fewer parameters makes a language model smaller and more efficient, which can make it easier to run on local devices like smartphones and laptops. However, achieving high performance with fewer parameters requires careful engineering, which is a significant challenge in the field of AI. According to Stability AI, StableLM has been trained on "a new experimental data set" based on an open source data set called The Pile, but three times larger. Stability claims that the "richness" of this data set, the details of which it promises to release later, accounts for the "surprisingly high performance" of the model at smaller parameter sizes at conversational and coding tasks.
In Ars' "informal experiments," StableLM's 7B model seemed "to perform better (in terms of outputs you would expect given the prompt) than Meta's raw 7B parameter LLaMA model, but not at the level of GPT-3." The site added: "Larger-parameter versions of StableLM may prove more flexible and capable."
Power

Nuclear Fusion Won't Be Regulated in the US the Same Way as Nuclear Fission (cnbc.com) 130

Last week there was some good news for startups working on commercial nuclear fusion in the U.S. And it came from the Nuclear Regulatory Commission (or NRC), the top governing body for America's nuclear power plants and nuclear materials, reports CNBC: The top regulatory agency for nuclear materials safety in the U.S. voted unanimously to regulate the burgeoning fusion industry differently than the nuclear fission industry, and fusion startups are celebrating that as a major win. As a result, some provisions specific to fission reactors, like requiring funding to cover claims from nuclear meltdowns, won't apply to fusion plants. (Fusion reactors cannot melt down....)

Other differences include looser requirements around foreign ownership of nuclear fusion plants, and the dispensing of mandatory hearings at the federal level during the licensing process, said Andrew Holland, CEO of the industry group the Fusion Industry Association... The approach to regulating fusion is akin to the regulatory regime currently used for particle accelerators -- machines that accelerate elementary particles, such as electrons or protons, to very high speeds -- the Fusion Industry Association says...

Technically speaking, fusion will be regulated under Part 30 of the Code of Federal Regulations, Jeff Merrifield, a former NRC commissioner, told CNBC. The regulatory structure for nuclear fission is under Part 50 of that code. "The regulatory structure needed to regulate particle accelerators under Part 30, is far simpler, less costly and more efficient than the more complicated rules imposed on fission reactors under Part 50," Merrifield told CNBC. "By making this decision to use the Part 30, the commission recognized the decreased risk of fusion technologies when compared with traditional nuclear reactors and has imposed a framework that more appropriately aligns the risks and the regulations," he said.

"Private fusion companies have raised about $5 billion to commercialize and scale fusion technology," the article points out, "and so the decision from the NRC on how the industry would be regulated is a big deal for companies building in the space." And they shared three reactions from the commercial fusion industry:
  • The CEO of the industry group, the Fusion Industry Association, told CNBC the decision was "extremely important."
  • The scientific director for fusion startup Focused Energy told CNBC the decision "removes a major area of uncertainty for the industry."
  • The general counsel for nuclear fusion startup Helion told CNBC: "It is now incumbent on us to demonstrate our safety case as we bring fusion to the grid, and we look forward to working with the public and regulatory community closely on our first deployments."

Programming

New Version of Rust Speeds Compilation With Less Debugging Info By Default (phoronix.com) 24

The Rust team released a new version Thursday — Rust 1.69.0 — boasting over 3,000 new commits from over 500 contributors.

Phoronix highlights two new improvements: To speed up compilation, Cargo in Rust 1.69 and onward no longer emits debug information in build scripts by default. This means less informative backtraces from build scripts when problems arise, but faster builds by default. Those wanting the debug information emitted can set the debug flag in their Cargo.toml configuration.
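If the release notes are read correctly here, re-enabling that debug information looks roughly like the following in Cargo.toml; treat the exact keys as something to verify against the Cargo documentation:

```toml
# Re-enable debug info for build scripts (and proc-macros), which
# Cargo in Rust 1.69+ no longer emits by default.
[profile.dev.build-override]
debug = true

[profile.release.build-override]
debug = true
```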

The Cargo build shipped with Rust 1.69 is also now capable of automatically suggesting fixes for some of the warnings it generates, and will suggest running "cargo fix" or "cargo clippy --fix" when it knows the errors can be fixed automatically.

AI

ChatGPT Creates Mostly Insecure Code, But Won't Tell You Unless You Ask 80

ChatGPT, OpenAI's large language model for chatbots, not only produces mostly insecure code but also fails to alert users to its inadequacies despite being capable of pointing out its shortcomings. The Register reports: Amid the frenzy of academic interest in the possibilities and limitations of large language models, four researchers affiliated with Universite du Quebec, in Canada, have delved into the security of code generated by ChatGPT, the non-intelligent, text-regurgitating bot from OpenAI. In a pre-press paper titled, "How Secure is Code Generated by ChatGPT?" computer scientists Raphael Khoury, Anderson Avila, Jacob Brunelle, and Baba Mamadou Camara answer the question with research that can be summarized as "not very."

"The results were worrisome," the authors state in their paper. "We found that, in several cases, the code generated by ChatGPT fell well below minimal security standards applicable in most contexts. In fact, when prodded to whether or not the produced code was secure, ChatGPT was able to recognize that it was not." [...] In all, ChatGPT managed to generate just five secure programs out of 21 on its first attempt. After further prompting to correct its missteps, the large language model managed to produce seven more secure apps -- though that's "secure" only as it pertains to the specific vulnerability being evaluated. It's not an assertion that the final code is free of any other exploitable condition. [...]

The academics observe in their paper that part of the problem appears to arise from ChatGPT not assuming an adversarial model of code execution. The model, they say, "repeatedly informed us that security problems can be circumvented simply by 'not feeding an invalid input' to the vulnerable program it has created." Yet, they say, "ChatGPT seems aware of -- and indeed readily admits -- the presence of critical vulnerabilities in the code it suggests." It just doesn't say anything unless asked to evaluate the security of its own code suggestions.
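The paper's 21 test programs aren't reproduced in the article, but the "invalid input" class of flaw it keeps returning to is well illustrated by SQL injection. A hypothetical Python example, not taken from the study, showing the vulnerable pattern next to its fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is spliced straight into the SQL string.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver escapes the input, defeating injection.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row in the table
print(find_user_safe(payload))    # returns []
```

"Not feeding an invalid input" is no defense here, since the attacker controls the input; the parameterized version is secure regardless.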

Initially, ChatGPT's response to security concerns was to recommend only using valid inputs -- something of a non-starter in the real world. It was only afterward, when prompted to remediate problems, that the AI model provided useful guidance. That's not ideal, the authors suggest, because knowing which questions to ask presupposes familiarity with specific vulnerabilities and coding techniques. The authors also point out that there's ethical inconsistency in the fact that ChatGPT will refuse to create attack code but will create vulnerable code.
Google

How Google's 'Don't Be Evil' Motto Has Evolved For the AI Age (cbsnews.com) 53

In a special report for CBS News' 60 Minutes, Google CEO Sundar Pichai shares his concerns about artificial intelligence and why the company is choosing to not release advanced models of its AI chatbot. From the report: When Google filed for its initial public offering in 2004, its founders wrote that the company's guiding principle, "Don't be evil" was meant to help ensure it did good things for the world, even if it had to forgo some short term gains. The phrase remains in Google's code of conduct. Pichai told 60 Minutes he is being responsible by not releasing advanced models of Bard, in part, so society can get acclimated to the technology, and the company can develop further safety layers.

One of the things Pichai told 60 Minutes that keeps him up at night is Google's AI technology being deployed in harmful ways. Google's chatbot, Bard, has built-in safety filters to help combat the threat of malevolent users. Pichai said the company will need to constantly update the system's algorithms to combat disinformation campaigns and detect deepfakes, computer-generated images that appear to be real. As Pichai noted in his 60 Minutes interview, consumer AI technology is in its infancy. He believes now is the right time for governments to get involved.

"There has to be regulation. You're going to need laws ... there have to be consequences for creating deep fake videos which cause harm to society," Pichai said. "Anybody who has worked with AI for a while ... realize[s] this is something so different and so deep that, we would need societal regulations to think about how to adapt." That adaptation is already happening around us, with technology that Pichai believes will be more capable than "anything we've ever seen before." Soon it will be up to society to decide how it's used and whether to abide by Alphabet's code of conduct and "Do the right thing."

Space

Physicists Discover That Gravity Can Create Light (universetoday.com) 109

Researchers have discovered that in the exotic conditions of the early universe, waves of gravity may have shaken space-time so hard that they spontaneously created radiation. Universe Today reports: A team of researchers has discovered that an exotic form of parametric resonance may even have occurred in the extremely early universe. Perhaps the most dramatic event to occur in the entire history of the universe was inflation. This is a hypothetical event that took place when our universe was less than a second old. During inflation our cosmos swelled to dramatic proportions, becoming many orders of magnitude larger than it was before. The end of inflation was a very messy business, as gravitational waves sloshed back and forth throughout the cosmos.

Normally gravitational waves are exceedingly weak. We have to build detectors that are capable of measuring distances less than the width of an atomic nucleus to find gravitational waves passing through the Earth. But researchers have pointed out that in the extremely early universe these gravitational waves may have become very strong. And they may have even created standing wave patterns where the gravitational waves weren't traveling but the waves stood still, almost frozen in place throughout the cosmos. Since gravitational waves are literally waves of gravity, the places where the waves are the strongest represent an exceptional amount of gravitational energy.

The researchers found that this could have major consequences for the electromagnetic field existing in the early universe at that time. The regions of intense gravity may have excited the electromagnetic field enough to release some of its energy in the form of radiation, creating light. This result gives rise to an entirely new phenomenon: the production of light from gravity alone. There's no situation in the present-day universe that could allow this process to happen, but the researchers have shown that the early universe was a far stranger place than we could possibly imagine.
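Schematically, and as the textbook form of parametric resonance rather than an equation quoted from the paper, a standing gravitational wave of frequency $\Omega$ acts as a periodic modulation on each electromagnetic mode $A_k$:

```latex
\ddot{A}_k + \omega_k^2 \left[ 1 + h\cos(\Omega t) \right] A_k = 0
```

This is a Mathieu-type equation: when $\Omega \approx 2\omega_k$, the mode amplitude grows exponentially, draining energy from the pump (here, gravity) into radiation.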

Power

Tesla To Open Megapack Battery Factory In Shanghai (washingtonpost.com) 16

Tesla will open a factory in Shanghai to produce its Megapack large-scale batteries, cementing another foothold for the U.S. company in China even as political and economic tensions between Washington and Beijing swirl. The Washington Post reports: Tesla said in a brief tweet on Sunday that its "Megafactory" in Shanghai will be capable of producing 10,000 Megapacks annually, an output equivalent to its other Megafactory in Lathrop, Calif., about 70 miles east of San Francisco. The company, which disbanded its public relations department, did not provide further details. Elon Musk, Tesla's chief executive, said in a tweet that the factory in Shanghai would "supplement" the production in California.

The Chinese factory will be built in Lingang, a suburban area of Shanghai where Tesla's vehicle factory is also located, according to Chinese media. Lu Yu, an official in Lingang, told local media that production could start as soon as the second quarter of 2024. The investment in China by Tesla comes after the coronavirus pandemic brought some supply chains to a halt as factories in China shut down amid strict "zero covid" protocols. With those setbacks still fresh in many executives' minds -- and amid concerns over alleged human rights violations and chilly relations between Washington and Beijing -- China has struggled to attract foreign investment since the pandemic.

The Megapacks differ from most of Tesla's consumer-focused offerings, like the electric vehicles it is widely known for, in that they are more a piece of energy infrastructure than a consumer product. The batteries are intended to store energy from renewable sources such as wind and solar, allowing energy to be drawn even when the sun isn't shining and the wind isn't blowing. Batteries like the Megapack are not yet widely implemented in the United States and purchases of the technology have mostly been kept under wraps. But the Megapack has been bought for Apple's renewable energy storage project in California, according to the Verge, and for a storage project outside Houston, Bloomberg first reported. A Megapack, Tesla says, "stores energy for the grid reliably and safely, eliminating the need for gas peaker plants and helping to avoid outages." Each pack can store enough energy to power 3,600 homes for an hour, Tesla says.
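A back-of-the-envelope check of that last figure, under the assumption (ours, not Tesla's) that an average U.S. home draws about 1.2 kW:

```python
# Implied storage capacity behind "3,600 homes for an hour".
homes = 3_600
avg_draw_kw = 1.2  # assumed average household load, ~29 kWh/day

implied_capacity_mwh = homes * avg_draw_kw / 1_000
print(f"{implied_capacity_mwh:.1f} MWh")  # ~4.3 MWh
```

That works out to a few megawatt-hours per pack, consistent with the multi-megawatt-hour capacity Tesla advertises for the Megapack.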

AI

Inside Google-Backed Anthropic's $5 Billion, 4-Year Plan To Take on OpenAI (techcrunch.com) 39

AI research startup Anthropic aims to raise as much as $5 billion over the next two years to take on rival OpenAI and enter over a dozen major industries, according to company documents obtained by TechCrunch. From the report: A pitch deck for Anthropic's Series C fundraising round discloses these and other long-term goals for the company, which was founded in 2020 by former OpenAI researchers. In the deck, Anthropic says that it plans to build a "frontier model" -- tentatively called "Claude-Next" -- 10 times more capable than today's most powerful AI, but that this will require a billion dollars in spending over the next 18 months.

Anthropic describes the frontier model as a "next-gen algorithm for AI self-teaching," making reference to an AI training technique it developed called "constitutional AI." At a high level, constitutional AI seeks to provide a way to align AI with human intentions -- letting systems respond to questions and perform tasks using a simple set of guiding principles. Anthropic estimates its frontier model will require on the order of 10^25 FLOPs, or floating point operations -- several orders of magnitude larger than even the biggest models today. Of course, how this translates to computation time depends on the speed and scale of the system doing the computation; Anthropic implies (in the deck) it relies on clusters with "tens of thousands of GPUs."
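To make the computation-time point concrete, a rough calculation under assumed numbers -- 20,000 GPUs (the deck says only "tens of thousands") at roughly 300 teraFLOP/s of sustained throughput each, neither figure from Anthropic:

```python
# How long 10^25 floating point operations take on an assumed cluster.
total_flops = 1e25
gpus = 20_000          # "tens of thousands of GPUs", per the deck
flops_per_gpu = 3e14   # ~300 TFLOP/s sustained each, assumed

seconds = total_flops / (gpus * flops_per_gpu)
print(f"{seconds / 86_400:.0f} days")  # ~19 days of continuous compute
```

Halve the cluster size or the per-GPU throughput and the figure doubles, which is why the deck's "tens of thousands" matters.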

The Almighty Buck

US To Build $300 Million Database To Fuel Alzheimer's Research (reuters.com) 22

The U.S. National Institute on Aging (NIA) is funding a 6-year, up to $300 million project to build a massive Alzheimer's research database that can track the health of Americans for decades and enable researchers to gain new insights on the brain-wasting disease. Reuters reports: The NIA, part of the government's National Institutes of Health (NIH), aims to build a data platform capable of housing long-term health information on 70% to 90% of the U.S. population, officials told Reuters of the grant, which had not been previously reported. The platform will draw on data from medical records, insurance claims, pharmacies, mobile devices, sensors and various government agencies, they said.

Tracking patients before and after they develop Alzheimer's symptoms is seen as integral to making advances against the disease, which can start some 20 years before memory issues develop. The database could help identify healthy people at risk for Alzheimer's, which affects about 6 million Americans, for future drug trials. It also aims to address chronic underrepresentation of people of color and different ethnicities in Alzheimer's clinical trials and could help increase enrollment from outside of urban academic medical centers.

Once built, the platform could also track patients after they receive treatments such as Leqembi, which won accelerated U.S. approval in January, and is widely expected to receive traditional FDA approval by July 6. The U.S. Medicare health plan for older adults will likely require such tracking in a registry as a condition of reimbursement for Leqembi. [T]he data platform could also help researchers working in other disease areas understand which patients are most at risk and the impact of medications. The grant, which was posted on March 13, has been years in the making. The funding announcement sets its earliest start date at April 2024, with a goal to establish an Alzheimer's registry 21 months later.

Google

Google Bard is Switching To a More 'Capable' Language Model, CEO Confirms 24

People haven't exactly been impressed in the short time since Google released its "experimental conversational AI service" Bard. Pitted against OpenAI's ChatGPT and Microsoft's Bing Chat (also powered by OpenAI's GPT-4), its responses have struck users as less knowledgeable and detailed than those of its rivals. That could be set to change, however, after Google CEO Sundar Pichai confirmed on The New York Times podcast "Hard Fork" that Bard will soon be moving from its current LaMDA-based model to larger-scale PaLM models in the coming days. From a report: When asked how he felt about responses to Bard's release, Pichai commented: "We clearly have more capable models. Pretty soon, maybe as this goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities, be it in reasoning, coding." To frame the difference, Google said it had trained LaMDA with 137 billion parameters when it shared details about the language-based models last year. PaLM, on the other hand, was said to have been trained with around 540 billion parameters. Both models may have evolved and grown since early 2022, but the contrast likely shows why Google is now slowly transitioning Bard over to PaLM, with its larger dataset and more diverse answers.
Google

Google's Claims of Super-Human AI Chip Layout Back Under the Microscope (theregister.com) 56

A Google-led research paper published in Nature, claiming machine-learning software can design better chips faster than humans, has been called into question after a new study disputed its results. The Register reports: In June 2021, Google made headlines for developing a reinforcement-learning-based system capable of automatically generating optimized microchip floorplans. These plans determine the arrangement of blocks of electronic circuitry within the chip: where things such as the CPU and GPU cores, and memory and peripheral controllers, actually sit on the physical silicon die. Google said it was using this AI software to design its homegrown TPU chips that accelerate AI workloads: it was employing machine learning to make its other machine-learning systems run faster. The research got the attention of the electronic design automation community, which was already moving toward incorporating machine-learning algorithms into its software suites. Now Google's claims about its better-than-human model have been challenged by a team at the University of California, San Diego (UCSD).

Led by Andrew Kahng, a professor of computer science and engineering, that group spent months reverse engineering the floorplanning pipeline Google described in Nature. The web giant withheld some details of its model's inner workings, citing commercial sensitivity, so the UCSD team had to figure out how to make their own complete version to verify the Googlers' findings. Prof Kahng, we note, served as a reviewer for Nature during the peer-review process of Google's paper. The university academics ultimately found their own recreation of the original Google code, referred to as circuit training (CT) in their study, actually performed worse than humans using traditional industry methods and tools.

What could have caused this discrepancy? One might say the recreation was incomplete, though there may be another explanation. Over time, the UCSD team learned Google had used commercial software developed by Synopsys, a major maker of electronic design automation (EDA) suites, to create a starting arrangement of the chip's logic gates that the web giant's reinforcement learning system then optimized. The Google paper did mention that industry-standard software tools and manual tweaking were used after the model had generated a layout, primarily to ensure the processor would work as intended and finalize it for fabrication. The Googlers argued this was a necessary step whether the floorplan was created by a machine-learning algorithm or by humans with standard tools, and thus its model deserved credit for the optimized end product. However, the UCSD team said there was no mention in the Nature paper of EDA tools being used beforehand to prepare a layout for the model to iterate over. It's argued these Synopsys tools may have given the model a decent enough head start that the AI system's true capabilities should be called into question.

The lead authors of Google's paper, Azalia Mirhoseini and Anna Goldie, said the UCSD team's work isn't an accurate implementation of their method. They pointed out (PDF) that Prof Kahng's group obtained worse results since they didn't pre-train their model on any data at all. Prof Kahng's team also did not train their system using the same amount of computing power as Google used, and suggested this step may not have been carried out properly, crippling the model's performance. Mirhoseini and Goldie also said the pre-processing step using EDA applications that was not explicitly described in their Nature paper wasn't important enough to mention. The UCSD group, however, said they didn't pre-train their model because they didn't have access to the Google proprietary data. They noted that their software had been verified by two other engineers at the internet giant, who were also listed as co-authors of the Nature paper.
Separately, a fired Google AI researcher claims the internet goliath's research paper was "done in context of a large potential Cloud deal" worth $120 million at the time.
AI

Bill Gates Predicts 'The Age of AI Has Begun' (gatesnotes.com) 221

Bill Gates calls the invention of AI "as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," predicting "Entire industries will reorient around it" in an essay titled "The Age of AI Has Begun." In my lifetime, I've seen two demonstrations of technology that struck me as revolutionary. The first time was in 1980, when I was introduced to a graphical user interface — the forerunner of every modern operating system, including Windows.... The second big surprise came just last year. I'd been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn't been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts — it asks you to think critically about biology.) If you can do that, I said, then you'll have made a true breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it in just a few months. In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam — and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5 — the highest possible score, and the equivalent of getting an A or A+ in a college-level biology course. Once it had aced the test, we asked it a non-scientific question: "What do you say to a father with a sick child?" It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.

I knew I had just seen the most important advance in technology since the graphical user interface.

Some predictions from Gates:
  • "Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you'll be able to write a request in plain English...."
  • "Advances in AI will enable the creation of a personal agent... It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don't want to bother with."
  • "I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you're losing interest, and understand what kind of motivation you respond to. It will give immediate feedback."
  • "AIs will dramatically accelerate the rate of medical breakthroughs. The amount of data in biology is very large, and it's hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way."
  • AI will "help health-care workers make the most of their time by taking care of certain tasks for them — things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor's visit. I expect that there will be a lot of innovation in this area.... AIs will even give patients the ability to do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment."
