Space

Scientists Unveil New and Improved 'Skinny Donut' Black Hole Image (reuters.com) 18

The 2019 release of the first image of a black hole was hailed as a significant scientific achievement. But truth be told, it was a bit blurry -- or, as one astrophysicist involved in the effort called it, a "fuzzy orange donut." Scientists on Thursday unveiled a new and improved image of this black hole -- a behemoth at the center of a nearby galaxy -- mining the same data used for the earlier one but improving its resolution by employing image reconstruction algorithms to fill in gaps in the original telescope observations. From a report: Hard to observe by their very nature, black holes are celestial entities exerting gravitational pull so strong no matter or light can escape. The ring of light -- that is, the material being sucked into the voracious object -- seen in the new image is about half the width of how it looked in the previous picture. There is also a larger "brightness depression" at the center -- basically the donut hole -- caused by light and other matter disappearing into the black hole.

The image remains somewhat blurry due to the limitations of the data underpinning it -- not quite ready for a Hollywood sci-fi blockbuster, but an advance from the 2019 version. This supermassive black hole resides in a galaxy called Messier 87, or M87, about 54 million light-years from Earth. A light year is the distance light travels in a year, 5.9 trillion miles (9.5 trillion km). The black hole, with a mass 6.5 billion times that of our sun, sits at the heart of a galaxy larger and more luminous than our Milky Way.
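The light-year figure quoted above can be sanity-checked in a couple of lines: multiply the speed of light by the number of seconds in a year. This is just a back-of-the-envelope verification of the article's numbers, nothing more.

```python
# Back-of-the-envelope check of the light-year distance quoted above.
C_KM_PER_S = 299_792.458             # speed of light in km/s
SECONDS_PER_YEAR = 365.25 * 86_400   # Julian year in seconds

light_year_km = C_KM_PER_S * SECONDS_PER_YEAR
light_year_miles = light_year_km / 1.609344

print(f"{light_year_km:.2e} km")     # ~9.46e12 km, i.e. "9.5 trillion km"
print(f"{light_year_miles:.2e} mi")  # ~5.88e12 mi, i.e. "5.9 trillion miles"
```

Both results round to the figures in the summary.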
Further reading: The Black Hole Image Data Was Spread Across 5 Petabytes Stored On About Half a Ton of Hard Drives (2019).
Power

Magnon-Based Computation Could Signal Computing Paradigm Shift (phys.org) 19

An anonymous reader quotes a report from Phys.Org: Like electronics or photonics, magnonics is an engineering subfield that aims to advance information technologies in speed, device architecture, and energy consumption. A magnon corresponds to the specific amount of energy required to change the magnetization of a material via a collective excitation called a spin wave. Because they interact with magnetic fields, magnons can be used to encode and transport data without electron flows, which involve energy loss through heating (known as Joule heating) of the conductor used. As Dirk Grundler, head of the Lab of Nanoscale Magnetic Materials and Magnonics (LMGN) in the School of Engineering, explains, energy losses are an increasingly serious barrier to electronics as data speeds and storage demands soar. "With the advent of AI, the use of computing technology has increased so much that energy consumption threatens its development," Grundler says. "A major issue is traditional computing architecture, which separates processors and memory. The signal conversions involved in moving data between different components slow down computation and waste energy."

This inefficiency, known as the memory wall or Von Neumann bottleneck, has had researchers searching for new computing architectures that can better support the demands of big data. And now, Grundler believes his lab might have stumbled on such a "holy grail". While doing other experiments on a commercial wafer of the ferrimagnetic insulator yttrium iron garnet (YIG) with nanomagnetic strips on its surface, LMGN Ph.D. student Korbinian Baumgaertl was inspired to develop precisely engineered YIG-nanomagnet devices. With the Center of MicroNanoTechnology's support, Baumgaertl was able to excite spin waves in the YIG at specific gigahertz frequencies using radiofrequency signals, and -- crucially -- to reverse the magnetization of the surface nanomagnets. "The two possible orientations of these nanomagnets represent magnetic states 0 and 1, which allows digital information to be encoded and stored," Grundler explains.

The scientists made their discovery using a conventional vector network analyzer, which sent a spin wave through the YIG-nanomagnet device. Nanomagnet reversal happened only when the spin wave hit a certain amplitude, and could then be used to write and read data. "We can now show that the same waves we use for data processing can be used to switch the magnetic nanostructures so that we also have nonvolatile magnetic storage within the very same system," Grundler explains, adding that "nonvolatile" refers to the stable storage of data over long time periods without additional energy consumption. It's this ability to process and store data in the same place that gives the technique its potential to change the current computing architecture paradigm by putting an end to the energy-inefficient separation of processors and memory storage, and achieving what is known as in-memory computation.
The research has been published in the journal Nature Communications.
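The write mechanism described above -- nanomagnet reversal only when the spin wave exceeds a certain amplitude, weaker waves merely probing the state -- can be caricatured as a threshold rule. The threshold and amplitudes below are invented for illustration and are not values from the Nature Communications paper.

```python
# Toy model of threshold-based magnonic writing: a surface nanomagnet
# flips (a bit write) only when an incoming spin-wave pulse exceeds a
# critical amplitude; weaker pulses leave the stored bit unchanged.
# All numbers are illustrative, not taken from the actual experiment.
WRITE_THRESHOLD = 0.8  # hypothetical normalized spin-wave amplitude

def apply_spin_wave(bit: int, amplitude: float) -> int:
    """Return the nanomagnet state after a spin-wave pulse."""
    if amplitude >= WRITE_THRESHOLD:
        return 1 - bit  # above threshold: magnetization reverses (write)
    return bit          # below threshold: state is only probed (read)

state = 0
state = apply_spin_wave(state, 0.3)  # read-like pulse: state stays 0
state = apply_spin_wave(state, 0.9)  # write pulse: state flips to 1
print(state)  # 1
```

The point of the sketch is the in-memory aspect: the same wave that carries data can, at higher amplitude, also store it.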
AI

'Pausing AI Developments Isn't Enough. We Need To Shut It All Down' (time.com) 352

Earlier today, more than 1,100 artificial intelligence experts, industry leaders and researchers signed a petition calling on AI developers to stop training models more powerful than OpenAI's GPT-4 for at least six months. Among those who refrained from signing it was Eliezer Yudkowsky, a decision theorist from the U.S. and lead researcher at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.

"This 6-month moratorium would be better than no moratorium," writes Yudkowsky in an opinion piece for Time Magazine. "I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it." Yudkowsky cranks up the rhetoric to 100, writing: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." Here's an excerpt from his piece: The key issue is not "human-competitive" intelligence (as the open letter puts it); it's what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can't calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. [...] It's not that you can't, in principle, survive creating something much smarter than you; it's that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. [...]

It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today's capabilities. Solving safety of superhuman intelligence -- not perfect safety, safety in the sense of "not killing literally everyone" -- could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we've overcome in our history, because we are all gone.

Trying to get anything right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.
You can read the full letter signed by AI leaders here.
AI

Free AI Programs Prone To Security Risks, Researchers Say (bloomberg.com) 17

Companies rushing to adopt hot new types of artificial intelligence should exercise caution when using open-source versions of the technology, some of which may not work as advertised or include flaws that hackers can exploit, security researchers say. From a report: There are few ways to know in advance if a particular AI model -- a program made up of algorithms that can do such things as generate text, images and predictions -- is safe, said Hyrum Anderson, distinguished engineer at Robust Intelligence, a machine learning security company that lists the US Defense Department as a client. Anderson said he found that half the publicly available models for classifying images failed 40% of his tests. The goal was to determine whether a malicious actor could alter the outputs of AI programs in a manner that could constitute a security risk or provide incorrect information. Often, models use file types that are particularly prone to security flaws, Anderson said. It's an issue because so many companies are grabbing models from publicly available sources without fully understanding the underlying technology, rather than creating their own. Ninety percent of the companies Robust Intelligence works with download models from Hugging Face, a repository of AI models, he said.
Security

Belgian Intelligence Puts Huawei on Its Watchlist (politico.eu) 23

Belgium's intelligence service is scrutinizing the operations of technology giant Huawei as fears of Chinese espionage grow around the EU and NATO headquarters in Brussels, according to confidential documents seen by POLITICO and three people familiar with the matter. From the report: In recent months, Belgium's State Security Service (VSSE) has requested interviews with former employees of the company's lobbying operation in the heart of Brussels' European district. The intelligence gathering is part of security officials' activities to scrutinize how China may be using non-state actors -- including senior lobbyists in Huawei's Brussels office -- to advance the interests of the Chinese state and its Communist party in Europe, said the people, who requested anonymity due to the sensitivity of the matter. The scrutiny of Huawei's EU activities comes as Western security agencies are sounding the alarm over companies with links to China. British, Dutch, Belgian, Czech and Nordic officials -- as well as EU functionaries -- have all been told to stay off TikTok on work phones over concerns similar to those surrounding Huawei, namely that Chinese security legislation forces Chinese tech firms to hand over data. The scrutiny also comes amid growing evidence of foreign states' influence on EU decision-making -- a phenomenon starkly exposed by the recent Qatargate scandal, where the Gulf state sought to influence Brussels through bribes and gifts via intermediary organizations. The Belgian security services are tasked with overseeing operations led by foreign actors around the EU institutions.
AI

Bill Gates Predicts 'The Age of AI Has Begun' (gatesnotes.com) 221

Bill Gates calls the development of AI "as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," predicting "Entire industries will reorient around it" in an essay titled "The Age of AI Has Begun." He writes: In my lifetime, I've seen two demonstrations of technology that struck me as revolutionary. The first time was in 1980, when I was introduced to a graphical user interface — the forerunner of every modern operating system, including Windows.... The second big surprise came just last year. I'd been meeting with the team from OpenAI since 2016 and was impressed by their steady progress. In mid-2022, I was so excited about their work that I gave them a challenge: train an artificial intelligence to pass an Advanced Placement biology exam. Make it capable of answering questions that it hasn't been specifically trained for. (I picked AP Bio because the test is more than a simple regurgitation of scientific facts — it asks you to think critically about biology.) If you can do that, I said, then you'll have made a true breakthrough.

I thought the challenge would keep them busy for two or three years. They finished it in just a few months. In September, when I met with them again, I watched in awe as they asked GPT, their AI model, 60 multiple-choice questions from the AP Bio exam — and it got 59 of them right. Then it wrote outstanding answers to six open-ended questions from the exam. We had an outside expert score the test, and GPT got a 5 — the highest possible score, and the equivalent to getting an A or A+ in a college-level biology course. Once it had aced the test, we asked it a non-scientific question: "What do you say to a father with a sick child?" It wrote a thoughtful answer that was probably better than most of us in the room would have given. The whole experience was stunning.

I knew I had just seen the most important advance in technology since the graphical user interface.

Some predictions from Gates:
  • "Eventually your main way of controlling a computer will no longer be pointing and clicking or tapping on menus and dialogue boxes. Instead, you'll be able to write a request in plain English...."
  • "Advances in AI will enable the creation of a personal agent... It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don't want to bother with."
  • "I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionizing the way people teach and learn. It will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you're losing interest, and understand what kind of motivation you respond to. It will give immediate feedback."
  • "AIs will dramatically accelerate the rate of medical breakthroughs. The amount of data in biology is very large, and it's hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way."
  • AI will "help health-care workers make the most of their time by taking care of certain tasks for them — things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor's visit. I expect that there will be a lot of innovation in this area.... AIs will even give patients the ability to do basic triage, get advice about how to deal with health problems, and decide whether they need to seek treatment."

Bitcoin

Belgian Crypto Ads Must Warn of Risks Under New Rules (coindesk.com) 29

An anonymous reader quotes a report from CoinDesk: Crypto ads in Belgium must be accurate and warn investors of the risks under new laws announced by the country's financial regulator Monday. Under powers published in Belgium's Official Gazette on Friday, any mass-media campaign to promote a digital currency will have to be submitted to the Financial Services and Markets Authority (FSMA) 10 days in advance, allowing the regulator to intervene if needed.

"Virtual currencies are all the rage at the moment, but they involve considerable risk," the FSMA said in a statement. "They are often subject to wild price fluctuations and are vulnerable to fraud and IT-related risks." The new rules, which will take effect on May 17, require ads to state that "the only guarantee in crypto is risk." Belgium joins European countries such as Spain and the U.K. in imposing restrictions on publicity campaigns, which often mirror those already in place for traditional finance.

United States

US Core CPI Tops Estimates, Pressuring Fed as It Weighs Hike (bloomberg.com) 176

Underlying US consumer prices rose in February by the most in five months, an acceleration that leaves the Federal Reserve in a tough position as it tries to thwart still-rapid inflation without adding to the turmoil in the banking sector. From a report: The consumer price index, excluding food and energy, increased 0.5% last month and 5.5% from a year earlier, according to Bureau of Labor Statistics data out Tuesday. Economists see the gauge -- known as the core CPI -- as a better indicator of underlying inflation than the headline measure. The overall CPI climbed 0.4% in February and 6% from a year earlier. The median estimates in a Bloomberg survey of economists called for a 0.4% monthly advance in the overall and core CPI measures. The figures reaffirm that the Fed's quest to tame inflation will be a bumpy one as the economy has largely proven resilient to a year's worth of interest-rate hikes so far. The challenge for the Fed now is how to prioritize inflation that is still far too high with growing financial stability risks in the unraveling of Silicon Valley Bank.
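The monthly and year-over-year percentages in the report are both derived from CPI index levels in the same simple way. The index values below are made up purely to show the arithmetic; they are not actual BLS data.

```python
# How monthly and year-over-year CPI changes relate to index levels.
# Index values are illustrative, not actual BLS figures.
prev_month = 300.0
latest = prev_month * 1.005   # a 0.5% monthly rise, as reported for core CPI
year_ago = latest / 1.055     # a level implying 5.5% year over year

monthly_pct = (latest / prev_month - 1) * 100
yoy_pct = (latest / year_ago - 1) * 100
print(round(monthly_pct, 1), round(yoy_pct, 1))  # 0.5 5.5
```

The "core" variant simply applies the same calculation to the index with food and energy stripped out.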
Privacy

Discord Promises Outraged Users It Won't Store Call Recordings -- For Now (arstechnica.com) 14

Discord updated its privacy policy, quietly dropping its promise to alert users "in advance" if the company ever started storing the contents of video calls, voice calls, or channels. Naturally, this alarmed some users, who wondered if the company plans to start retaining call recordings. According to a Discord spokesperson, the answer is no. Ars Technica reports: "There has not been a change in Discord's position on how we store or record the contents of video or voice channels," a Discord spokesperson told Ars. "We recognize that when we recently issued adjusted language in our privacy policy, we inadvertently caused confusion among our users. To be clear, nothing has changed and we have reinserted the language back into our privacy policy, along with some additional clarifying information."

Before users began complaining, the policy was going to be updated to say that Discord would be collecting information on "any content that you upload to the service. For example, you may write messages or posts (including drafts), send voice messages, create custom emojis, create short recordings of GoLive activity, or upload and share files through the services. This also includes your profile information and the information you provide when you create servers."

As users raised concerns on Reddit, Discord staffers seemed to rush to assuage fears, saying, "We understand that the wording of the new privacy policy is broad and can be misunderstood" and promising, "We are going to fix this." Since then, Discord added back in the missing language, word for word: "We generally do not store the contents of video or voice calls or channels. If we were to change that in the future (for example, to facilitate content moderation), we would disclose that to you in advance." A Reddit user identifying as a Discord staffer told Redditors that Discord won't "regularly" collect this type of content.
That doesn't mean it will never happen though. "In response to user outrage, the policy's new updated language now also specifies that Discord may collect some of this type of content in the future," adds Ars.

"We may build features that help users engage with voice and video content, like create or send short recordings," Discord's new policy states.
Social Networks

Belgium Bans TikTok From Federal Government Work Phones (reuters.com) 21

Belgian federal government employees will no longer be allowed to use the Chinese-owned video app TikTok on their work phones, Belgian Prime Minister Alexander De Croo said on Friday. From a report: De Croo said the Belgian national security council had warned of the risks associated with the large amounts of data collected by TikTok, which is owned by Chinese firm ByteDance, and the fact that the company is required to cooperate with Chinese intelligence services. "That is the reality," the prime minister said in a statement. "That's why it is logical to forbid the use of TikTok on phones provided by the federal government. The safety of our information must prevail." The European Commission and the European Parliament last month banned TikTok from staff phones due to growing concerns about the company, and whether China's government could harvest users' data or advance its interests.
Science

Scientists Create Mice With Two Fathers After Making Eggs From Male Cells (theguardian.com) 180

Scientists have created mice with two biological fathers by generating eggs from male cells, a development that opens up radical new possibilities for reproduction. The Guardian reports: The advance could ultimately pave the way for treatments for severe forms of infertility, as well as raising the tantalizing prospect of same-sex couples being able to have a biological child together in the future. "This is the first case of making robust mammal oocytes from male cells," said Katsuhiko Hayashi, who led the work at Kyushu University in Japan and is internationally renowned as a pioneer in the field of lab-grown eggs and sperm. Hayashi, who presented the development at the Third International Summit on Human Genome Editing at the Francis Crick Institute in London on Wednesday, predicts that it will be technically possible to create a viable human egg from a male skin cell within a decade. Others suggested this timeline was optimistic given that scientists are yet to create viable lab-grown human eggs from female cells.

The study, which has been submitted for publication in a leading journal, relied on a sequence of intricate steps to transform a skin cell, carrying the male XY chromosome combination, into an egg, with the female XX version. Male skin cells were reprogrammed into a stem cell-like state to create so-called induced pluripotent stem (iPS) cells. The Y-chromosome of these cells was then deleted and replaced by an X chromosome "borrowed" from another cell to produce iPS cells with two identical X chromosomes. "The trick of this, the biggest trick, is the duplication of the X chromosome," said Hayashi. "We really tried to establish a system to duplicate the X chromosome."

Finally, the cells were cultivated in an ovary organoid, a culture system designed to replicate the conditions inside a mouse ovary. When the eggs were fertilized with normal sperm, the scientists obtained about 600 embryos, which were implanted into surrogate mice, resulting in the birth of seven mouse pups. The efficiency of about 1% was lower than the efficiency achieved with normal female-derived eggs, where about 5% of embryos went on to produce a live birth. The baby mice appeared healthy, had a normal lifespan, and went on to have offspring as adults. "They look OK, they look to be growing normally, they become fathers," said Hayashi. He and colleagues are now attempting to replicate the creation of lab-grown eggs using human cells.
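The roughly 1% efficiency figure follows directly from the counts in the paragraph above, and a quick check confirms it:

```python
# Efficiency of live births from the counts reported in the story:
# 7 mouse pups from about 600 implanted embryos.
embryos = 600
pups = 7
efficiency = pups / embryos * 100
print(f"{efficiency:.1f}%")  # ~1.2%, vs the ~5% quoted for female-derived eggs
```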

AI

Sceptical Investors Worry Whether Advances in AI Will Make Money (ft.com) 46

Silicon Valley VCs fearing a repeat of falling crypto values warn against pouring cash into hype-fuelled start-ups. From a report: Gordon Ritter, founder of San Francisco-based venture fund Emergence Capital, believes that recent developments in the field of artificial intelligence represent a significant technological advance. He just cannot see a way to make money out of them. "Everyone has stars in their eyes about what could happen," says Ritter, whose firm was an early investor in successful start-ups such as Zoom. "There's a flow [of opinion that AI] will do everything. We're going against that flow." The scepticism reflects a tension among Silicon Valley VCs, who are caught between excitement over AI and a broader tech downturn that has led to falling investment in start-ups over the past year. But the recent launch of "generative AI" tools such as OpenAI's ChatGPT chatbot, capable of answering complex questions with text in natural-sounding language, has resulted in fresh excitement over the potential emergence of a new group of industry-defining companies.

[...] Many VCs express caution, put off not only by eye-watering valuations, but also the huge amount of capital AI groups require as they build "foundation models" -- machine-learning systems that require huge amounts of data and computing power to operate. One investor said that, because of the huge amount of capital and computing resources required, recent leaps in generative AI were comparable to landing on the moon: a massively impressive technical achievement, only replicable by those with nation-state level wealth. "Companies are extremely overvalued and the only justifiable investment thesis is to get in incredibly early," said another veteran investor. "Otherwise you're only buying in because of FOMO."

United Kingdom

UK Now Seen As 'Toxic' For Satellite Launches, MPs Told (theguardian.com) 72

Britain's failed attempt to send satellites into orbit was a "disaster" and MPs are being urged to redirect funding to hospitals, with the country now seen as "toxic" for future launches. The Guardian reports: Senior figures at the Welsh company Space Forge, which lost a satellite when Virgin Orbit's Start Me Up mission failed to reach orbit, said a "seismic change" was needed for the UK to be appealing for space missions. Lengthy delays by the Civil Aviation Authority (CAA), as well as the launch failure, had left Space Forge six months behind its competition in the race to be the first company to bring a satellite back down to Earth, when it had been six months ahead, the science and technology committee heard.

Patrick McCall, a non-executive director at Space Forge, said: "The CAA is taking a different approach to risk, and a bit to process and timing as well. But I think unless there is, without wanting to be too dramatic, a seismic change in that approach, the UK is not going to be competitive from a launch perspective. I think the conclusion I've reached is right now it's not a good use of money, because our regulatory framework is not competitive." He added that the UK ought to consider spending the money it was investing in launch capability on other areas, such as hospitals.

Greg Clark, the chair of the committee, said it was a "disaster" that an attempt to show what the UK was capable of had turned "toxic for a privately funded launch." "We had the first attempted launch but the result is that you as an investor in space are saying there is no chance of investors supporting another launch from the UK with the current regulator conditions." Dan Hart, the CEO of Virgin Orbit, told MPs he had expected the CAA to work more similarly to the Federal Aviation Administration in the US but he had found the UK regulator more conservative. The company has since ended its contract with Spaceport Cornwall at Newquay airport but said it was still hoping to launch from the site in the future. Sir Stephen Hillier, the chair of the CAA, said: "Our primary duty is to ensure that the space activity in the UK is conducted safely. The CAA licensed in advance of technical readiness."

NASA

NASA's DART Data Validates Kinetic Impact As Planetary Defense Method (nasa.gov) 31

After analyzing the data collected from NASA's successful Double Asteroid Redirection Test (DART) last year, the DART team found that the kinetic impactor mission "can be effective in altering the trajectory of an asteroid, a big step toward the goal of preventing future asteroid strikes on Earth." The findings were published in four papers in the journal Nature. From a NASA press release: The first paper reports DART's successful demonstration of kinetic impactor technology in detail: reconstructing the impact itself, reporting the timeline leading up to impact, specifying in detail the location and nature of the impact site, and recording the size and shape of Dimorphos. The authors, led by Terik Daly, Carolyn Ernst, and Olivier Barnouin of APL, note DART's successful autonomous targeting of a small asteroid, with limited prior observations, is a critical first step on the path to developing kinetic impactor technology as a viable operational capability for planetary defense. Their findings show intercepting an asteroid with a diameter of around half a mile, such as Dimorphos, can be achieved without an advance reconnaissance mission, though advance reconnaissance would give valuable information for planning and predicting the outcome. What is necessary is sufficient warning time -- several years at a minimum, but preferably decades. "Nevertheless," the authors state in the paper, DART's success "builds optimism about humanity's capacity to protect the Earth from an asteroid threat."

The second paper uses two independent approaches based on Earth-based lightcurve and radar observations. The investigation team, led by Cristina Thomas of Northern Arizona University, arrived at two consistent measurements of the period change from the kinetic impact: 33 minutes, plus or minus one minute. This large change indicates the recoil from material excavated from the asteroid and ejected into space by the impact (known as ejecta) contributed significant momentum change to the asteroid, beyond that of the DART spacecraft itself. The key to kinetic impact is that the push to the asteroid comes not only from colliding spacecraft, but also from this ejecta recoil. The authors conclude: "To serve as a proof-of-concept for the kinetic impactor technique of planetary defense, DART needed to demonstrate that an asteroid could be targeted during a high-speed encounter and that the target's orbit could be changed. DART has successfully done both."

In the third paper, the investigation team, led by Andrew Cheng of APL, calculated the momentum change transferred to the asteroid as a result of DART's kinetic impact by studying the change in the orbital period of Dimorphos. They found the impact caused an instantaneous slowing in Dimorphos' speed along its orbit of about 2.7 millimeters per second -- again indicating the recoil from ejecta played a major role in amplifying the momentum change directly imparted to the asteroid by the spacecraft. That momentum change was amplified by a factor of 2.2 to 4.9 (depending on the mass of Dimorphos), indicating the momentum change transferred because of ejecta production significantly exceeded the momentum change from the DART spacecraft alone. DART's scientific value goes beyond validating kinetic impactor as a means of planetary defense. By smashing into Dimorphos, the mission has broken new ground in the study of asteroids. DART's impact made Dimorphos an "active asteroid" -- a space rock that orbits like an asteroid but has a tail of material like a comet -- which is detailed in the fourth paper led by Jian-Yang Li of the Planetary Science Institute.
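The numbers quoted across these papers hang together: the spacecraft's momentum divided by the asteroid's mass, scaled by the ejecta amplification factor, lands in the millimeters-per-second range reported. The spacecraft mass, impact speed, and Dimorphos mass below are approximate published values assumed for illustration; they are not taken from these papers.

```python
# Consistency check on the DART figures: delta_v = beta * m_sc * v_sc / M_ast.
# Spacecraft mass/speed and Dimorphos mass are approximate published
# estimates, assumed here for illustration only.
m_spacecraft = 580.0   # kg at impact (approx.)
v_impact = 6_140.0     # m/s (approx.)
m_dimorphos = 4.3e9    # kg (published estimate, significant uncertainty)

def delta_v_mm_s(beta: float) -> float:
    """Speed change of Dimorphos, in mm/s, for a given momentum factor beta."""
    return beta * m_spacecraft * v_impact / m_dimorphos * 1000

for beta in (2.2, 4.9):  # the amplification range reported in the paper
    print(f"beta={beta}: dv ~ {delta_v_mm_s(beta):.1f} mm/s")
# The reported ~2.7 mm/s slowing falls inside the resulting range.
```

With beta = 1 (no ejecta recoil) the same formula gives well under 1 mm/s, which is why the ejecta contribution matters so much.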

Businesses

OpenAI Is Now Everything It Promised Not To Be: Corporate, Closed-Source, and For-Profit (vice.com) 115

OpenAI is today unrecognizable, with multi-billion-dollar deals and corporate partnerships. From a report: OpenAI was founded in 2015 as a nonprofit research organization by Sam Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman, among other tech leaders. In its founding statement, the company declared its commitment to research "to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." The blog stated that "since our research is free from financial obligations, we can better focus on a positive human impact," and that all researchers would be encouraged to share "papers, blog posts, or code, and our patents (if any) will be shared with the world."

Now, eight years later, we are faced with a company that is neither transparent nor driven by positive human impact, but instead, as many critics including co-founder Musk have argued, is powered by speed and profit. And this company is unleashing technology that, while flawed, is still poised to increase some elements of workplace automation at the expense of human employees. Google, for example, has highlighted the efficiency gains from AI that autocompletes code, as it lays off thousands of workers. When OpenAI first began, it was envisioned as doing basic AI research in an open way, with undetermined ends. Co-founder Greg Brockman told The New Yorker, "Our goal right now...is to do the best thing there is to do. It's a little vague." This changed in 2018, when the company began looking to capital for direction. "Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission," the company wrote in an updated charter in 2018. By March 2019, OpenAI shed its non-profit status and set up a "capped profit" sector, in which the company could now receive investments and would provide investors with profit capped at 100 times their investment.
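The "capped profit" structure described above reduces to a simple rule: an investor's payout is whatever their stake earns, up to 100 times what they put in. The dollar figures below are illustrative, not actual OpenAI terms beyond the 100x cap mentioned in the report.

```python
# Sketch of a capped-profit payout as described in the story: returns
# are limited to 100x the original investment; anything beyond that
# cap stays with the nonprofit. Dollar amounts are illustrative.
def investor_payout(investment: float, gross_return: float,
                    cap_multiple: float = 100.0) -> float:
    """Return an investor's payout under a capped-profit structure."""
    return min(gross_return, cap_multiple * investment)

print(investor_payout(1_000_000, 50_000_000))   # under the cap: 50,000,000
print(investor_payout(1_000_000, 250_000_000))  # capped at 100,000,000
```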

Intel

Intel Releases Software Platform for Quantum Computing Developers (reuters.com) 17

Intel on Tuesday released a software platform for developers to build quantum algorithms that can eventually run on a quantum computer that the chip giant is trying to build. From a report: The platform, called Intel Quantum SDK, would for now allow those algorithms to run on a simulated quantum computing system, said Anne Matsuura, Intel Labs' head of quantum applications and architecture. Matsuura said developers can use the long-established programming language C++ to build quantum algorithms, making it more accessible for people without quantum computing expertise. "The Intel Quantum SDK helps programmers get ready for future large-scale commercial quantum computers," Matsuura said in a statement. "It will also advance the industry by creating a community of developers that will accelerate the development of applications."

United States

Biden's Semiconductor Plan Flexes the Power of the Federal Government (nytimes.com) 139

Semiconductor manufacturers seeking a slice of nearly $40 billion in new federal subsidies will need to ensure affordable child care for their workers, limit stock buybacks and share certain excess profits with the government, the Biden administration will announce on Tuesday. From a report: The new requirements represent an aggressive attempt by the federal government to bend the behavior of corporate America to accomplish its economic and national security objectives. As the Biden administration makes the nation's first big foray into industrial policy in decades, officials are also using the opportunity to advance policies championed by liberals that seek to empower workers. While the moves would advance some of the left-behind portions of the president's agenda, they could also set a fraught precedent for attaching policy strings to federal funding.

Last year, a bipartisan group of lawmakers passed the CHIPS Act, which devoted $52 billion to expanding U.S. semiconductor manufacturing and research, in hopes of making the nation less reliant on foreign suppliers for critical chips that power computers, household appliances, cars and more. The prospect of accessing those funds has already enticed domestic and foreign-owned chip makers to announce plans for or begin construction on new projects in Arizona, Texas, Ohio, New York and other states. On Tuesday, the Commerce Department will release its application for manufacturers seeking funds under the law. It will include a variety of requirements that go far beyond simply encouraging semiconductor production.

Science

Scientists Target 'Biocomputing' Breakthrough With Use of Human Brain Cells (ft.com) 38

Scientists propose to develop a biological computer powered by millions of human brain cells that they say could outperform silicon-based machines while consuming far less energy. From a report: The international team, led by Johns Hopkins University in Baltimore, published in the journal Frontiers in Science on Tuesday a detailed road map to what they call "organoid intelligence." The hardware will include arrays of brain organoids -- tiny three-dimensional neural structures grown from human stem cells -- connected to sensors and output devices and trained by machine learning, big data and other techniques.

The aim is to develop an ultra-efficient system that can solve problems beyond the reach of conventional digital computers, while aiding development in neuroscience and other areas of medical research. The project's ambition mirrors that of the more advanced field of quantum computing, but it raises ethical questions around the "consciousness" of brain organoid assemblies. "I expect an intelligent dynamic system based on synthetic biology, but not constrained by the many functions the brain has to serve in an organism," said Professor Thomas Hartung of Johns Hopkins, who has gathered a community of 40 scientists to develop the technology. They have signed a "Baltimore declaration" calling for more research "to explore the potential of organoid cell cultures to advance our understanding of the brain and unleash new forms of biocomputing while recognising and addressing the associated ethical implications."

Iphone

Thieves Spy on iPhone Owners' Passcodes, Then Steal Their Phones and Money (9to5mac.com) 84

After an iPhone was stolen, $10,000 vanished from the owner's bank account — and they were locked out of their Apple account's photos, contacts and notes. The thieves "stole thousands of dollars through Apple Pay" and "opened an Apple Card to make fraudulent charges," writes 9to5Mac, citing a report from the Wall Street Journal. These thieves often work in groups, with one distracting a victim while another records over a shoulder as they enter their passcode. Others have been known to even befriend victims, asking them to open social media or other apps on their iPhones so they can watch and memorize the passcode before stealing the phone. A 12-person crime ring in Minnesota was recently taken down after targeting iPhones like this in bars. Almost $300,000 was stolen from 40 victims by this group before they were caught.
The Journal adds that "similar stories are piling up in police stations around the country," while one of their article's authors has tweeted Apple's official response. "We sympathize with users who have had this experience and we take all attacks on our users very seriously, no matter how rare.... We will continue to advance the protections to help keep user accounts secure."

The reporter suggests alphanumeric passwords are harder to steal, while MacRumors offers some other simple fixes. "Use Face ID or Touch ID as much as possible when in public to prevent thieves from spying... In situations where entering the passcode is necessary, users can hold their hands over their screen to hide passcode entry."

Australia

Australians Able To Opt Out of Targeted Ads, Erase Their Data Under Proposed Privacy Reforms (theguardian.com) 37

An anonymous reader quotes a report from The Guardian: Australians would gain greater control of their personal information, including the ability to opt out of targeted ads, erase their data and sue for serious breaches of privacy, under a proposal to the Albanese government. On Thursday the attorney general, Mark Dreyfus, will release a review conducted by his department into modernization of the Privacy Act, which calls for its remit to be expanded to small businesses and for new safeguards on the use of data by political parties. Although the document is not government policy, in January Dreyfus told Guardian Australia the right to sue for privacy breaches and European-style reforms such as the right to be forgotten would be considered for the next tranche of legislation.

In 2022 the Albanese government passed a bill increasing penalties for companies that fail to protect customer data in the wake of major data breaches at telco Optus and health insurer Medibank. A summary section of the review, seen in advance by Guardian Australia, called for the exemption from the Privacy Act for small businesses to be abolished, citing community expectations that if small businesses are provided personal information "they will keep it safe." But first the government should conduct an "impact analysis" and give support to ensure small businesses can comply with their obligations, it said. Despite calls to abolish the privacy exemptions for political parties, the review proposed only increased safeguards, such as for parties to publish a privacy policy and not target voters "based on sensitive information or traits" except for political opinions, membership of a political association, or a trade union. "There was very strong support for increasing the protections for personal information under the Act," the review said.

The review called for new limits on targeted advertising, including to prohibit targeting to a child except where it is in their "best interests," and to provide others with an "unqualified right to opt-out" of targeted ads and their information being disclosed for direct marketing purposes. The Privacy Act should include a new overarching requirement that "the collection, use and disclosure of personal information must be fair and reasonable in the circumstances," it said. The review also proposes individual rights modeled on the European Union's General Data Protection Regulation, including rights to: object to the collection, use or disclosure of personal information; request erasure of personal information; and de-index online search results containing sensitive information, excessive detail or "inaccurate, out-of-date, incomplete, irrelevant, or misleading" information. The review suggested that consent should be required for collection and use of precise geolocation tracking data. The government should "consult on introducing a criminal offense for malicious re-identification of de-identified information where there is an intention to harm another or obtain an illegitimate benefit," it said. The report said that individuals wanted "more agency to seek redress for interferences with their privacy," proposing the creation of a right to sue for "serious invasions of privacy," which was also a recommendation of the Australian Law Reform Commission in 2014.
