AI

Google Makes Its Text-To-Music AI Public (techcrunch.com) 16

An anonymous reader quotes a report from TechCrunch: Google [on Wednesday] released MusicLM, a new experimental AI tool that can turn text descriptions into music. Available in the AI Test Kitchen app on the web, Android or iOS, MusicLM lets users type in a prompt like "soulful jazz for a dinner party" or "create an industrial techno sound that is hypnotic" and have the tool create several versions of the song. Users can specify instruments like "electronic" or "classical," as well as the "vibe, mood, or emotion" they're aiming for, as they refine their MusicLM-generated creations.

When Google previewed MusicLM in an academic paper in January, it said that it had "no immediate plans" to release it. The coauthors of the paper noted the many ethical challenges posed by a system like MusicLM, including a tendency to incorporate copyrighted material from training data into the generated songs. But in the intervening months, Google says it's been working with musicians and hosting workshops to "see how [the] technology can empower the creative process." One of the outcomes? The version of MusicLM in AI Test Kitchen won't generate music with specific artists or vocals. Make of that what you will. It seems unlikely, in any case, that the broader challenges around generative music will be easily remedied.
You can sign up to try MusicLM here.
Medicine

Pancreatic Cancer Vaccine Shows Promise In Small Trial 91

A personalized cancer vaccine made by BioNTech, the German company that produced the Pfizer-BioNTech COVID-19 vaccine, has shown promising results against pancreatic cancer. The vaccine, which teaches patients' immune systems to attack their tumors, provoked an immune response in half of the 16 patients treated, and those patients did not experience relapses of their cancer during the study. The New York Times reports: Researchers at Memorial Sloan Kettering Cancer Center in New York, led by Dr. Vinod Balachandran, extracted patients' tumors and shipped samples of them to Germany. There, scientists at BioNTech, the company that made a highly successful COVID vaccine with Pfizer, analyzed the genetic makeup of certain proteins on the surface of the cancer cells. Using that genetic data, BioNTech scientists then produced personalized vaccines designed to teach each patient's immune system to attack the tumors. Like BioNTech's COVID shots, the cancer vaccines relied on messenger RNA. In this case, the vaccines instructed patients' cells to make some of the same proteins found on their excised tumors, potentially provoking an immune response that would come in handy against actual cancer cells.

The study was small: Only 16 patients, all of them white, were given the vaccine, part of a treatment regimen that also included chemotherapy and a drug intended to keep tumors from evading people's immune responses. And the study could not entirely rule out factors other than the vaccine having contributed to better outcomes in some patients. [...] But the simple fact that scientists could create, quality-check and deliver personalized cancer vaccines so quickly -- patients began receiving the vaccines intravenously roughly nine weeks after having their tumors removed -- was a promising sign, experts said.

In patients who did not appear to respond to the vaccine, the cancer tended to return around 13 months after surgery. Patients who did respond, though, showed no signs of relapse during the roughly 18 months they were tracked. Intriguingly, one patient showed evidence of a vaccine-activated immune response in the liver after an unusual growth developed there. The growth later disappeared in imaging tests. "It's anecdotal, but it's nice confirmatory data that the vaccine can get into these other tumor regions," said Dr. Nina Bhardwaj, who studies cancer vaccines at the Icahn School of Medicine at Mount Sinai.
"This is the first demonstrable success -- and I will call it a success, despite the preliminary nature of the study -- of an mRNA vaccine in pancreatic cancer," said Dr. Anirban Maitra, a specialist in the disease at the University of Texas MD Anderson Cancer Center, who was not involved in the study. "By that standard, it's a milestone."

The study has been published in the journal Nature.
Google

Google Announces PaLM 2, Its Next Generation Language Model (blog.google) 6

Google, in a blog post: PaLM 2 is a state-of-the-art language model with improved multilingual, reasoning and coding capabilities.

Multilinguality: PaLM 2 [PDF] is more heavily trained on multilingual text, spanning more than 100 languages. This has significantly improved its ability to understand, generate and translate nuanced text -- including idioms, poems and riddles -- across a wide variety of languages, a hard problem to solve. PaLM 2 also passes advanced language proficiency exams at the "mastery" level.
Reasoning: PaLM 2's wide-ranging dataset includes scientific papers and web pages that contain mathematical expressions. As a result, it demonstrates improved capabilities in logic, common sense reasoning, and mathematics.
Coding: PaLM 2 was pre-trained on a large quantity of publicly available source code datasets. This means that it excels at popular programming languages like Python and JavaScript, but can also generate specialized code in languages like Prolog, Fortran and Verilog.

Even as PaLM 2 is more capable, it's also faster and more efficient than previous models -- and it comes in a variety of sizes, which makes it easy to deploy for a wide range of use cases. We'll be making PaLM 2 available in four sizes from smallest to largest: Gecko, Otter, Bison and Unicorn. Gecko is so lightweight that it can work on mobile devices and is fast enough for great interactive applications on-device, even when offline. This versatility means PaLM 2 can be fine-tuned to support entire classes of products in more ways, to help more people.

At I/O today, we announced over 25 new products and features powered by PaLM 2. That means that PaLM 2 is bringing the latest in advanced AI capabilities directly into our products and to people -- including consumers, developers, and enterprises of all sizes around the world. Here are some examples:

PaLM 2's improved multilingual capabilities are allowing us to expand Bard to new languages, starting today. Plus, it's powering our recently announced coding update.
Workspace features to help you write in Gmail and Google Docs, and help you organize in Google Sheets are all tapping into the capabilities of PaLM 2 at a speed that helps people get work done better, and faster.
Med-PaLM 2, trained by our health research teams with medical knowledge, can answer questions and summarize insights from a variety of dense medical texts. It achieves state-of-the-art results in medical competency, and was the first large language model to perform at "expert" level on U.S. Medical Licensing Exam-style questions. We're now adding multimodal capabilities to synthesize information like x-rays and mammograms to one day improve patient outcomes. Med-PaLM 2 will open up to a small group of Cloud customers for feedback later this summer to identify safe, helpful use cases.

IT

The Problem With Weather Apps (theatlantic.com) 57

An anonymous reader shares a report: Weather apps are not all the same. There are tens of thousands of them, from the simply designed Apple Weather to the expensive, complex, data-rich Windy.App. But all of these forecasts are working off of similar data, which are pulled from places such as the National Oceanic and Atmospheric Administration (NOAA) and the European Centre for Medium-Range Weather Forecasts. Traditional meteorologists interpret these models based on their training as well as their gut instinct and past regional weather patterns, and different weather apps and services tend to use their own secret sauce of algorithms to divine their predictions. On an average day, you're probably going to see a similar forecast from app to app and on television. But when it comes to how people feel about weather apps, these edge cases -- which usually take place during severe weather events -- are what stick in a person's mind. "Eighty percent of the year, a weather app is going to work fine," Matt Lanza, a forecaster who runs Houston's Space City Weather, told me. "But it's that 20 percent where people get burned that's a problem."

No people on the planet have a more tortured and conflicted relationship with weather apps than those who interpret forecasting models for a living. "My wife is married to a meteorologist, and she will straight up question me if her favorite weather app says something different than my forecast," Lanza told me. "That's how ingrained these services have become in most people's lives." The basic issue with weather apps, he argues, is that many of them remove a crucial component of a good, reliable forecast: a human interpreter who can relay caveats about models or offer a range of outcomes instead of a definitive forecast. [...] What people seem to be looking for in a weather app is something they can justify blindly trusting and letting into their lives -- after all, it's often the first thing you check when you roll over in bed in the morning. According to the 56,400 ratings of Carrot in Apple's App Store, its die-hard fans find the app entertaining and even endearing. "Love my psychotic, yet surprisingly accurate weather app," one five-star review reads. Although many people need reliable forecasting, true loyalty comes from a weather app that makes people feel good when they open it.

Space

A Growing Number of Scientists Are Convinced the Future Influences the Past (vice.com) 200

An anonymous reader quotes a report from Motherboard: Have you ever found yourself in a self-imposed jam and thought, "Well, if it isn't the consequences of my own actions"? It's a common refrain that exposes a deeper truth about the way we humans understand time and causality. Our actions in the past are correlated to our experience of the future, whether that's a good outcome, like acing a test because you prepared, or a bad one, like waking up with a killer hangover. But what if this forward causality could somehow be reversed in time, allowing actions in the future to influence outcomes in the past? This mind-bending idea, known as retrocausality, may seem like science fiction grist at first glance, but it is starting to gain real traction among physicists and philosophers, among other researchers, as a possible solution to some of the most intractable riddles underlying our reality.

In other words, people are becoming increasingly "retro-curious," said Kenneth Wharton, a professor of physics at San Jose State University who has published research about retrocausality, in a call with Motherboard. Even though it may feel verboten to consider a future that affects the past, Wharton and others think it could account for some of the strange phenomena observed in quantum physics, which exists on the tiny scale of atoms.

"We have instincts about all sorts of things, and some are stronger than others," said Wharton, who recently co-authored an article about retrocausality with Huw Price, a distinguished professor emeritus at the University of Bonn and an emeritus fellow of Trinity College, Cambridge. "I've found our instincts of time and causation are our deepest, strongest instincts that physicists and philosophers -- and humans -- are loath to give up," he added. Scientists, including Price, have speculated about the possibility that the future might influence the past for decades, but the renewed curiosity about retrocausality is driven by more recent findings about quantum mechanics. [...] While there are a range of views about the mechanics and consequences of retrocausal theories, a growing community of researchers think this concept has the potential to answer fundamental questions about the universe.
"The problem facing physics right now is that our two pillars of successful theories don't talk to each other," Wharton explained. "One is based in space and time, and one has left space and time aside for this giant quantum wave function."

"The solution to this, as everyone seems to have agreed without discussing it, is that we've got to quantize gravity," he continued. "That's the goal. Hardly anyone has said, 'what if things really are in space and time, and we just have to make sense of quantum theory in space and time'? That will be a whole new way to unify everything that people are not looking into."

Price agreed that this retrocausality could provide a new means to finally "eliminate the tension" between quantum mechanics and classical physics (including special relativity). "Another possible big payoff is that retrocausality supports the so-called 'epistemic' view of the wave function in the usual quantum mechanics description -- the idea that it is just an encoding of our incomplete knowledge of the system," he continued. "That makes it much easier to understand the so-called collapse of the wave function, as a change in information, as folk such as Einstein and Schrödinger thought, in the early days. In this respect, I think it gets rid of some more of the (apparently) non-classical features of quantum mechanics, by saying that they don't amount to anything physically real."
Supercomputing

UK To Invest 900 Million Pounds In Supercomputer In Bid To Build Own 'BritGPT' (theguardian.com) 35

An anonymous reader quotes a report from The Guardian: The UK government is to invest 900 million pounds in a cutting-edge supercomputer as part of an artificial intelligence strategy that includes ensuring the country can build its own "BritGPT". The Treasury outlined plans to spend around 900 million pounds on building an exascale computer, which would be several times more powerful than the UK's biggest computers, and establishing a new AI research body. An exascale computer can be used for training complex AI models, but also has other uses across science, industry and defense, including modeling weather forecasts and climate projections. The Treasury said the investment will "allow researchers to better understand climate change, power the discovery of new drugs and maximize our potential in AI."

An exascale computer is one that can carry out more than one billion billion simple calculations a second, a metric known as an "exaflops". Only one such machine is known to exist, Frontier, which is housed at America's Oak Ridge National Laboratory and used for scientific research -- although supercomputers have such important military applications that it may be the case that others already exist but are not acknowledged by their owners. Frontier, which cost about 500 million pounds to produce and came online in 2022, is more than twice as powerful as the next fastest machine.
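For scale, "one billion billion" is 10^18 operations per second. A quick back-of-envelope sketch (the laptop figure below is an assumed round number, not a benchmark) shows what that means in everyday terms:

```python
# Illustrative comparison only; LAPTOP_FLOPS is an assumed round figure
# (~100 gigaflops), not a measured benchmark.
EXAFLOPS = 1e18          # "one billion billion" simple calculations per second
LAPTOP_FLOPS = 1e11      # hypothetical modern laptop, ~100 gigaflops

# How long would a laptop take to do one second of exascale work?
ratio = EXAFLOPS / LAPTOP_FLOPS   # laptop-seconds per exascale-second
days = ratio / (60 * 60 * 24)
print(f"1 s of exascale work ~= {ratio:.0e} laptop-seconds (~{days:.0f} days)")
```

Under those assumptions, a single second of exascale computation corresponds to months of laptop time, which is why machines like Frontier matter for large model training and climate simulation.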

The Treasury said it would award a 1 million-pound prize every year for the next 10 years to the most groundbreaking AI research. The award will be called the Manchester Prize, in memory of the so-called Manchester Baby, a forerunner of the modern computer built at the University of Manchester in 1948. The government will also invest 2.5 billion pounds over the next decade in quantum technologies. Quantum computing is based on quantum physics -- which looks at how the subatomic particles that make up the universe work -- and quantum computers are capable of computing their way through vast numbers of different outcomes.

Medicine

How Medicare Advantage Plans Use Algorithms To Cut Off Care For Seniors In Need (statnews.com) 92

An anonymous reader quotes a report from STAT News: Health insurance companies have rejected medical claims for as long as they've been around. But a STAT investigation found artificial intelligence is now driving their denials to new heights in Medicare Advantage, the taxpayer-funded alternative to traditional Medicare that covers more than 31 million people. Behind the scenes, insurers are using unregulated predictive algorithms, under the guise of scientific rigor, to pinpoint the precise moment when they can plausibly cut off payment for an older patient's treatment. The denials that follow are setting off heated disputes between doctors and insurers, often delaying treatment of seriously ill patients who are neither aware of the algorithms, nor able to question their calculations. Older people who spent their lives paying into Medicare, and are now facing amputation, fast-spreading cancers, and other devastating diagnoses, are left to either pay for their care themselves or get by without it. If they disagree, they can file an appeal, and spend months trying to recover their costs, even if they don't recover from their illnesses.

The algorithms sit at the beginning of the process, promising to deliver personalized care and better outcomes. But patient advocates said in many cases they do the exact opposite -- spitting out recommendations that fail to adjust for a patient's individual circumstances and conflict with basic rules on what Medicare plans must cover. "While the firms say [the algorithm] is suggestive, it ends up being a hard-and-fast rule that the plan or the care management firms really try to follow," said David Lipschutz, associate director of the Center for Medicare Advocacy, a nonprofit group that has reviewed such denials for more than two years in its work with Medicare patients. "There's no deviation from it, no accounting for changes in condition, no accounting for situations in which a person could use more care."

STAT's investigation revealed these tools are becoming increasingly influential in decisions about patient care and coverage. The investigation is based on a review of hundreds of pages of federal records, court filings, and confidential corporate documents, as well as interviews with physicians, insurance executives, policy experts, lawyers, patient advocates, and family members of Medicare Advantage beneficiaries. It found that, for all of AI's power to crunch data, insurers with huge financial interests are leveraging it to help make life-altering decisions with little independent oversight. AI models used by physicians to detect diseases such as cancer, or suggest the most effective treatment, are evaluated by the Food and Drug Administration. But tools used by insurers in deciding whether those treatments should be paid for are not subjected to the same scrutiny, even though they also influence the care of the nation's sickest patients.

Space

Texas Is Planning To Make a Huge Public Investment In Space (arstechnica.com) 103

An anonymous reader quotes a report from Ars Technica: As part of the state's biennial budget process, Texas Governor Greg Abbott has called on the state legislature to provide $350 million to create and fund a Texas Space Commission for the next two years. "With companies seeking to expand space travel in coming years, continued development of the space industry in the state will ensure Texas remains at the forefront not only in the United States, but the entire world," Abbott stated in his budget document for the 88th Legislature. "Further investment will cement Texas as the preeminent location for innovation and development in this rapidly growing industry. Due to increased competition from other states and internationally, further planning and coordination is needed to keep Texas at the cutting edge." Texas has a historic budget surplus this year due to oil prices, inflation, and other factors driving economic growth. The state is projected to have $188.2 billion available in general revenue for funding the business of the state over the 2024-2025 period, a surplus of $32.7 billion over spending during the previous two years.

In their initial drafts, both the House and the Senate budget bills for this legislative session include the full $350 million in funding for a space commission. The initiative is being led by the chair of the House Appropriations Committee, Texas Rep. Greg Bonnen, whose district just south of Houston is adjacent to NASA's Johnson Space Center. A source said the bill "has all of the support it needs to pass" from leaders in both the House and Senate. Bonnen's office did not specify what the Texas Space Commission will address, including how the money would be spent. A second source in the Texas Legislature told Ars that details about the commission's funding priorities were expected to be worked out later in the legislative session, which ends on May 29.

However, the framework for the proposed space commission appears to have been prepared by a Houston-based workforce-development organization called TexSpace, which published an annual report in December calling for the creation of such a commission. According to this document, the commission would "focus on policy and arranging statewide strategy by monitoring local, state, and federal policies and opportunities and establishing an economic ecosystem for Texas' space enterprises." It would include 15 members, including those appointed by political officials, as well as an appointee each from SpaceX and Blue Origin. [...] The commission will likely seek to ensure that SpaceX and Blue Origin continue to grow their presence in the state and to nurture other, smaller startups.
"Compared to the Texas proposal, Space Florida has a modest annual budget of $12.5 million," notes Ars.

"Florida leaders made the brilliant decision to invest in the commercial space industry years ago, and that investment has paid off," Anna Alexopoulos Farrar, a vice president of communications for Space Florida, told Ars. "Space Florida alone had a $5.9 billion economic impact on the state over the past 15 years, and we project a $1.1 billion impact every year starting this year. It's not surprising that other states want to emulate our proven model, and we welcome the challenge from our friends in Texas -- competition yields the best outcomes for both businesses and taxpayers."
Science

Higher Risks of Stroke and Heart Disease Linked to Added Sugars (cnn.com) 77

A new study on added sugars (also known as "free sugars") concluded they're bad for your health, reports NBC News.

"The research, published in the journal BMC Medicine, found that diets higher in free sugars — a category that includes sugar added to processed foods and sodas, as well as that found in fruit juice and syrups — raise one's risk of heart disease and stroke." The study relied on data about the eating habits of more than 110,000 people ages 37 to 73 in the United Kingdom, whose health outcomes were then tracked over about nine years. The results suggested that each 5% increase in the share of a person's total energy intake that comes from free sugars was associated with a 6% higher risk of heart disease and a 10% higher risk of stroke.
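The study reports its associations per 5% increment of energy from free sugars. If one assumes those increments compound multiplicatively (an assumption on my part; the study itself only reports per-increment figures), the implied relative risks can be sketched as:

```python
# Hedged sketch: compounds the per-5%-of-energy relative risks reported
# in the study. Multiplicative scaling across increments is an assumption,
# not something the study asserts.
def relative_risk(extra_pct_energy, rr_per_5pct):
    """Relative risk for a given extra share of energy from free sugars."""
    increments = extra_pct_energy / 5.0
    return rr_per_5pct ** increments

# Reported figures: +6% heart disease risk, +10% stroke risk per 5% increment
rr_heart = relative_risk(10, 1.06)   # 10 percentage points more energy from sugar
rr_stroke = relative_risk(10, 1.10)
print(f"heart disease: {rr_heart:.3f}x, stroke: {rr_stroke:.3f}x")
```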

An author of the study, Cody Watling, a doctoral student at the University of Oxford, said the most common forms of sugar the study participants ate were "preserves and confectionary," with the latter category including cookies, sugary pastries and scones. Fruit juice, sugar-sweetened beverages and desserts were also common, he added.... The people found to have the highest risk of heart disease or stroke consumed about 95 grams of free sugar per day, or 18% of their daily energy intake, Watling said. By comparison, U.S. guidelines suggest that added sugars should make up no more than 10% of one's daily calories.
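The 95-grams-equals-18%-of-energy figure can be sanity-checked with the standard ~4 kcal per gram of carbohydrate (the implied total daily intake below is derived arithmetic, not a number from the study):

```python
# Sanity check of the article's figures using the standard ~4 kcal/g
# for sugar; the implied daily intake is derived, not reported.
SUGAR_G = 95       # grams of free sugar per day (highest-risk group)
KCAL_PER_G = 4     # standard energy density of carbohydrate
SHARE = 0.18       # 18% of daily energy intake

sugar_kcal = SUGAR_G * KCAL_PER_G        # energy from free sugar
implied_daily_kcal = sugar_kcal / SHARE  # total intake implied by the 18% share
print(round(sugar_kcal), round(implied_daily_kcal))
```

The numbers are internally consistent: 380 kcal of sugar against a roughly 2,100 kcal daily intake is about 18%, nearly double the 10% ceiling in the U.S. guidelines.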

"Avoiding sugar-sweetened beverages is probably the single most important thing we can be doing," said Walter Willett, a professor of epidemiology and nutrition at Harvard University who was not involved in the study. Willett added that although there are some health benefits to drinking a small glass of orange juice occasionally, its sugar content means "a glass of fruit juice is the same thing as Coke...."

Unlike with sugar intake, the Oxford researchers found a positive relationship when it comes to fiber: Consuming 5 grams of fiber a day was associated with a 4% lower risk of heart disease, the study suggested, although that did not hold true when researchers controlled for participants' body-mass indexes.... The study, Watling said, demonstrates that the types of carbs people choose to eat may matter more than the total amount. "What's really important for overall general health and well-being is that we're consuming carbohydrates that are rich in whole grains," he said, while "minimizing the consumption of sugar-sweetened beverages, as well as any kind of confectionary products that have added sugars."

It's a point underscored by CNN: After over nine years of follow-up, the researchers found total carbohydrate intake wasn't associated with cardiovascular disease. But when they analyzed how outcomes differed depending on the types and sources of carbohydrates eaten, they found higher free sugar intake was associated with a higher risk for cardiovascular disease and greater waist circumference. The more free sugars some participants consumed, the greater their risk of cardiovascular disease, heart disease and stroke was....

"This study provides much needed nuance to public health discussions about the health effects of dietary carbohydrates," said Dr. Maya Adam, director of Health Media Innovation and clinical assistant professor of pediatrics at Stanford University School of Medicine, via email. Adam wasn't involved in the study. "The main takeaways are that all carbs are not created equal...."

CNN adds that the mechanism seems to be that sugar intake "can promote inflammation," according to an assistant cardiology professor at Columbia's medical center. "This can cause stress on the heart and blood vessels, which can lead to increased blood pressure..."
Medicine

Wearable Ultrasound Patch Images the Heart In Real Time (ieee.org) 5

A wearable ultrasound imager for the heart that is roughly the size of a postage stamp, can be worn for up to 24 hours, and works even during exercise may one day help doctors spot cardiac problems that current medical technology might miss, a new study finds. IEEE Spectrum reports: Now scientists have developed a wearable ultrasound device that can enable safe, continuous, real-time, long-term, and highly detailed imaging of the heart. They detailed their findings online on January 25 in the journal Nature. "Potential applications include continuously monitoring the heart in daily life, during exercise, during surgery, and much more," says study coauthor Ray Wu, a nanoengineer at UC San Diego. "This will open up the possibility to detect previously undetectable symptoms of disease, identify symptoms in their very early stages, and greatly improve patient outcomes."

The new device is a patch 1.9 centimeters long by 2.2 cm wide and only 0.9 millimeters thick. It uses an array of piezoelectric transducers to send and receive ultrasound waves in order to generate a constant stream of images of the structure and function of the heart. The researchers were able to get such images even during exercise on a stationary bike. No skin irritation or allergy was seen after 24 hours of continuous wear. "The most exciting result is that our patch performs well when an individual is moving," Hu says. "Our patch allows us to evaluate heart performance throughout exercise, providing valuable information about the heart when it is under high stress." The new patch is about as flexible as human skin. It can also stretch up to 110 percent of its size, which means it can survive far more strain than typically experienced on human skin. These features help it stick onto the body, something not possible with the rigid equipment often used for cardiac imaging.

In the new study, the researchers focused on imaging the left ventricle, the largest of the heart's four chambers "and strongly considered to be the most important in terms of cardiovascular health, as it is responsible for pumping oxygenated blood to the entire body," Wu says. Cardiac imaging generally focuses on the left ventricle, but the new device can image all of the heart's four chambers simultaneously, "so it may be possible for future research to focus on other or multiple chambers," he adds. In addition, "the imager can be applied to image various other organs, such as the stomach, kidney, or liver." Traditional cardiac ultrasound imaging constantly rotates an ultrasound probe to analyze the heart in multiple dimensions. To eliminate the need for this rotation, the array of ultrasound sensors and emitters in the new device is shaped like a cross so that ultrasonic waves can travel at right angles to each other. The scientists developed a custom deep-learning AI model that can analyze the data from the patch and automatically and continuously estimate vital details, such as the percentage of blood pumped out of the left ventricle with each beat, and the volume of blood the heart pumps out with each beat and every minute. The root of most heart problems is the heart not pumping enough blood, issues that often manifest only when the body is moving, the researchers note.
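The "percentage of blood pumped out of the left ventricle with each beat" is the ejection fraction, and the quantities the patch's model estimates follow standard cardiology formulas (the formulas below are textbook definitions, not specific to this device; the example volumes are hypothetical):

```python
# Standard cardiology formulas with hypothetical example values; the
# device's deep-learning model estimates these quantities from images.
def stroke_volume(edv_ml, esv_ml):
    """Blood ejected per beat: end-diastolic minus end-systolic volume (mL)."""
    return edv_ml - esv_ml

def ejection_fraction(edv_ml, esv_ml):
    """Fraction of the left ventricle's blood ejected each beat (normal ~0.55-0.70)."""
    return stroke_volume(edv_ml, esv_ml) / edv_ml

def cardiac_output(edv_ml, esv_ml, heart_rate_bpm):
    """Blood pumped per minute, in liters."""
    return stroke_volume(edv_ml, esv_ml) * heart_rate_bpm / 1000.0

# Hypothetical resting values: 120 mL end-diastolic, 50 mL end-systolic, 70 bpm
print(f"EF = {ejection_fraction(120, 50):.2f}")          # ~0.58
print(f"CO = {cardiac_output(120, 50, 70):.1f} L/min")   # ~4.9 L/min
```

A continuous imager is valuable precisely because these values shift under exertion, which single-visit ultrasound exams can miss.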

Robotics

Automation Caused More than Half America's Income Inequality Since 1980, Study Claims (scitechdaily.com) 287

A newly published study co-authored by MIT economist Daron Acemoglu "quantifies the extent to which automation has contributed to income inequality in the U.S.," reports SciTechDaily, "simply by replacing workers with technology — whether self-checkout machines, call-center systems, assembly-line technology, or other devices." Over the last four decades, the income gap between more- and less-educated workers has grown significantly; the study finds that automation accounts for more than half of that increase. "This single one variable ... explains 50 to 70 percent of the changes or variation between group inequality from 1980 to about 2016," Acemoglu says....

Acemoglu and Pascual Restrepo, an assistant professor of economics at Boston University, used U.S. Bureau of Economic Analysis statistics on the extent to which human labor was used in 49 industries from 1987 to 2016, as well as data on machinery and software adopted in that time. The scholars also used data they had previously compiled about the adoption of robots in the U.S. from 1993 to 2014. In previous studies, Acemoglu and Restrepo have found that robots have by themselves replaced a substantial number of workers in the U.S., helped some firms dominate their industries, and contributed to inequality.
At the same time, the scholars used U.S. Census Bureau metrics, including its American Community Survey data, to track worker outcomes during this time for roughly 500 demographic subgroups... By examining the links between changes in business practices alongside changes in labor market outcomes, the study can estimate what impact automation has had on workers.

Ultimately, Acemoglu and Restrepo conclude that the effects have been profound. Since 1980, for instance, they estimate that automation has reduced the wages of men without a high school degree by 8.8 percent and women without a high school degree by 2.3 percent, adjusted for inflation.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Crime

San Jose Police Announce Three Stolen Vehicles Recovered Using Automatic License Plate Reader (kron4.com) 114

Saturday night in the Silicon Valley city of San Jose, the assistant police chief tweeted out praise for their recently-upgraded Automatic License Plate Readers: Officers in Air3 [police helicopter], monitoring the ALPR system, got alerted to 3 stolen cars. They directed ground units to the cars. All 3 drivers in custody! No dangerous vehicle pursuits occurred, nor were they needed.

2 drivers tried to run away. But, you can't outrun a helicopter!

There are photos — one of the vehicles appears to be a U-Haul pickup truck — and the tweet drew exactly one response, from San Jose mayor Matt Mahan: "Nice job...! Appreciate the excellent police work and great to see ALPRs having an impact. Don't steal cars in San Jose!"
Some context: The San Jose Spotlight (a nonprofit local news site) noted that prior to last year license plate readers had been mounted exclusively on police patrol cars (and in use since 2006). But last year the San Jose Police Department launched a new "pilot program" with four cameras mounted at a busy intersection, that "captured nearly 300,000 plate scans in just the last month, according to city data."

By August this had led to plans for 150 more stationary ALPR cameras, a local TV station reported. "Just this week, police said they solved an armed robbery and arrested a suspected shooter thanks to the cameras." During a forum to update the community, San Jose police also mentioned success stories in other cities like Vallejo where they've reported a 100% increase in identifying stolen vehicles. San Jose is now installing hundreds around the city and the first batch is coming in the next two to three months....

The biggest concern among those attending Wednesday's virtual forum was privacy. But the city made it clear the data is only shared with trained police officers and certain city staff, no out-of-state or federal agencies. "Anytime that someone from the San Jose Police Department accesses the ALPR system, they have to input a reason, the specific plates they are looking for and all of that information is logged so that we can keep track of how many times it's being used and what it's being used for," said Albert Gehami, Digital Privacy Officer for San Jose.
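The access-control scheme Gehami describes — a required reason, the specific plates, and a log entry for every query — can be sketched in a few lines. This is a hypothetical illustration only, not SJPD's actual software; all function and field names here are invented:

```python
import datetime

AUDIT_LOG = []  # in practice this would be an append-only audit store

def alpr_lookup(officer_id, reason, plates, database):
    """Hypothetical audited ALPR query: refuse to run without a
    stated reason, and record who searched for what, and why."""
    if not reason or not plates:
        raise ValueError("a reason and at least one plate are required")
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "officer": officer_id,
        "reason": reason,
        "plates": list(plates),
    })
    # return only sightings matching the requested plates
    return [hit for hit in database if hit["plate"] in set(plates)]

# usage: a query for a single stolen-vehicle plate
db = [{"plate": "8ABC123", "seen": "1st & Santa Clara"},
      {"plate": "7XYZ999", "seen": "Tully Rd"}]
hits = alpr_lookup("1234", "stolen vehicle report", ["8ABC123"], db)
```

The point of the design is that the lookup and the logging are inseparable: there is no code path that returns plate data without first writing an audit record.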

More privacy concerns were raised in September, reports the San Jose Spotlight: The San Jose City Council unanimously approved a policy Tuesday that formally bans the police department from selling any license plate data, using that information for investigating a person's immigration status or for monitoring legally protected activities like protests or rallies.

Even with these new rules, some privacy advocates and community groups are still opposed to the technology. Victor Sin, chair of the Santa Clara Valley Chapter of ACLU of Northern California, expressed doubt that the readers are improving public safety. He made the comments in a letter to the council from himself and leaders of four other community organizations. "Despite claims that (automated license plate reader) systems can reduce crime, researchers have expressed concerns about the rapid acquisition of this technology by law enforcement without evidence of its efficacy," the letter reads. Groups including the Asian Law Alliance and San Jose-Silicon Valley NAACP also said the city should reduce the amount of time it keeps license plate data on file from one year...

Then-Mayor Sam Liccardo said he's already convinced the readers are useful, but added the council should try to find a way to measure their effect. "It's probably not a bad idea for us to decide what are the outcomes we're trying to achieve, and if there is some reasonable metric that captures that outcome in a meaningful way," Liccardo said. "Was this used to actually help us arrest anybody, or solve a crime or prevent an accident?"

An EFF position paper argues that "ALPR data is gathered indiscriminately, collecting information on millions of ordinary people.... By plotting vehicle times and locations and tracing past movements, police can use stored data to paint a very specific portrait of drivers' lives, determining past patterns of behavior and possibly even predicting future ones — in spite of the fact that the vast majority of people whose license plate data is collected and stored have not even been accused of a crime.... [ALPR technology] allows officers to track everyone..."
Maybe the assistant police chief's tweet was meant to boost public support for the technology? It's already led to a short report from another local news station: San Jose police recovered three stolen cars using their automated license-plate recognition technology (ALPR) on Saturday, according to officials with the San Jose Police Department.

Officers inside of Air3, one of SJPD's helicopters, spotted three stolen cars using ALPR before directing ground units their way. Police say no pursuits occurred, though two of the drivers tried to run away.

United States

US National Cyber Strategy To Stress Biden Push on Regulation (washingtonpost.com) 29

The Biden administration is set to unveil a national strategy that for the first time calls for comprehensive cybersecurity regulation of the nation's critical infrastructure, explicitly recognizing that years of a voluntary approach have failed to secure the nation against cyberattacks, according to senior administration officials. From a report: The strategy builds on the first-ever oil and gas pipeline regulations imposed last year by the administration after a hack of one of the country's largest pipelines led to a temporary shutdown, causing long lines at gas stations and fears of a fuel shortage. The attack on Colonial Pipeline by Russian-speaking criminals elevated ransomware to an issue of national security. The strategy, drawn up by the White House Office of the National Cyber Director (ONCD), is moving through the final stages of interagency approval -- involving more than 20 departments and agencies -- and is expected to be signed by President Biden in the coming weeks, according to the officials, who spoke on the condition of anonymity because the document is not yet public.

"It's a break from the previous strategies, which focused on information sharing and public-private partnership as the solution," said James Lewis, a cybersecurity expert at the Center for Strategic and International Studies think tank. "This goes well beyond that. It says things that others have been afraid to say." For instance, according to a draft copy of the strategy, one of the stated goals is: "Use Regulation to support National Security and Public Safety." Under that, it says that regulation "can level the playing field" to meet the needs of national security, according to two individuals familiar with the draft. It also states that "while voluntary approaches to critical infrastructure cybersecurity have produced meaningful improvements, the lack of mandatory requirements has too often resulted in inconsistent and, in many cases inadequate, outcomes."

Privacy

Meet the Spy Tech Companies Helping Landlords Evict People (vice.com) 263

schwit1 shares an excerpt from a Motherboard article: Some renters may savor the convenience of "smart home" technologies like keyless entry and internet-connected doorbell cameras. But tech companies are increasingly selling these solutions to landlords for a more nefarious purpose: spying on tenants in order to evict them or raise their rent. "You CAN raise rents in NYC!" reads the headline of one promotional email sent to landlords. It was a sales pitch from Teman, a tech company that makes surveillance systems for apartment buildings. Teman's sales pitch proposes a solution to a frustration for many New York City landlords, who have tenants living in older apartments that are protected by a myriad of rent control and stabilization laws. The company's email suggests a workaround: "3 Simple Steps to Re-Regulate a Unit." First, use one of Teman's automated products to catch a tenant breaking a law or violating their lease, such as by having unapproved subletters or loud parties. Then, "vacate" them and merge their former apartment with one next door or above or below, creating a "new" unit that's not eligible for rent protections. "Combine a $950/mo studio and $1400/mo one-bedroom into a $4200/mo DEREGULATED two-bedroom," the email enticed. Teman's surveillance systems can even "help you identify which units are most-likely open to moving out (or being evicted!)." [...]

Erin McElroy, a professor of American Studies at the University of Texas at Austin who tracks eviction trends, also says that digital surveillance of residential buildings is increasing, particularly in New York City, which she calls the "landlord tech epicenter." Any camera system can document possibly eviction-worthy behavior, but McElroy identified two companies, Teman and Reliant Safety, that use the biometrics of tenants with the explicit goal of facilitating evictions. These companies are part of an expanding industry known as "proptech," encompassing all the technology used for acquiring and managing real estate. A report by Future Market Insights predicts that proptech will quadruple its current value, becoming an $86.5 billion industry by 2032. It is also sprouting start-ups to ease all aspects of the business -- including the unsavory ones. [...]

Reliant Safety, which claims to watch over 20,000 apartment units nationwide, has a less colorful corporate pedigree. It is owned by the Omni Organization, a private developer founded in 2004 that "acquires, rehabilitates, builds and manages quality affordable housing throughout the United States," according to its website. The company claims it has acquired and managed more than 17,000 affordable housing units. Many of the properties it lists are in New York City. Omni's website features spotless apartment complexes under blue skies and boasts about sponsorship of after-school programs, food giveaways, and homeless transition programs. Reliant's website features videos that depict various violations detected by its surveillance cameras. The website has a page of "Lease Violations" it says its system has detected, which include things such as "pet urination in hallway," "hallway fistfight," "improper mattress disposal," "tenant slips in hallway," as well as several alleged assaults, videos of fistfights in hallways, drug sales at doorways and break-ins through smashed windows. Almost all of them show Black or brown people and almost all are labeled as being from The Bronx -- where, in 2016, Omni opened a 140-unit affordable housing building at 655 Morris Avenue that boasted about "state-of-the-art facial recognition building access" running on ubiquitous cameras in common areas. Reliant presents these as "case studies" and lists outcomes that include arrest and eviction. Part of its package of services is "illegal sublet detection" using biometrics submitted by tenants to suss out anyone not authorized to be there. While Reliant claims its products are rooting out illegal and dangerous activity, the use of surveillance and biometrics to further extend policing into minority communities is a major cause for concern to privacy advocates.

Businesses

Amazon To Lay Off Over 18,000 Employees (seekingalpha.com) 54

Longtime Slashdot reader walterbyrd shares a report from Seeking Alpha: Amazon is reducing its headcount by more than 18K employees, the company confirmed on Wednesday. In a statement, Amazon CEO Andy Jassy confirmed plans to eliminate "just over 18K roles" between the reductions made in November and the latest round. The company was previously projected to cut about 10K roles, but has accelerated layoffs due to economic uncertainty. The bulk of the roles due for elimination are concentrated in the Amazon Stores and PXT organizations, per the statement.

Jassy said that he was hoping to notify all impacted employees prior to his statement, but was forced to comment due to a report in the Wall Street Journal regarding the staff reductions. "We typically wait to communicate about these outcomes until we can speak with the people who are directly impacted. However, because one of our teammates leaked this information externally, we decided it was better to share this news earlier so you can hear the details directly from me," he said. "We intend on communicating with impacted employees (or where applicable in Europe, with employee representative bodies) starting on January 18."

Technology

DeepMind AI Topples Experts at Complex Game Stratego 21

Game-playing AIs that interact with humans are laying important groundwork for real-world applications. From a report: Another game long considered extremely difficult for artificial intelligence (AI) to master has fallen to machines. An AI called DeepNash, made by London-based company DeepMind, has matched expert humans at Stratego, a board game that requires long-term strategic thinking in the face of imperfect information. The achievement, described in Science on 1 December, comes hot on the heels of a study reporting an AI that can play Diplomacy, in which players must negotiate as they cooperate and compete. "The rate at which qualitatively different game features have been conquered -- or mastered to new levels -- by AI in recent years is quite remarkable," says Michael Wellman at the University of Michigan in Ann Arbor, a computer scientist who studies strategic reasoning and game theory. "Stratego and Diplomacy are quite different from each other, and also possess challenging features notably different from games for which analogous milestones have been reached."

Stratego has characteristics that make it much more complicated than chess, Go or poker, all of which have been mastered by AIs (the latter two games in 2015 and 2019). In Stratego, two players place 40 pieces each on a board, but cannot see what their opponent's pieces are. The goal is to take turns moving pieces to eliminate those of the opponent and capture a flag. Stratego's game tree -- the graph of all possible ways in which the game could go -- has 10^535 states, compared with Go's 10^360. In terms of imperfect information at the start of a game, Stratego has 10^66 possible private positions, which dwarfs the 10^6 such starting situations in two-player Texas hold'em poker. "The sheer complexity of the number of possible outcomes in Stratego means algorithms that perform well on perfect-information games, and even those that work for poker, don't work," says Julien Perolat, a DeepMind researcher based in Paris.
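The 10^66 figure can be checked with a short back-of-the-envelope calculation. Assuming the standard Stratego piece mix (1 flag, 6 bombs, 1 spy, 8 scouts, 5 miners, 4 each of sergeants, lieutenants and captains, 3 majors, 2 colonels, 1 general, 1 marshal), each player's 40 pieces can be deployed in a multinomial number of ways, and since the two players' hidden setups are independent, the joint private information is the product:

```python
from math import factorial

# standard Stratego piece counts, 40 pieces per player (assumed mix)
piece_counts = [1, 6, 1, 8, 5, 4, 4, 4, 3, 2, 1, 1]
assert sum(piece_counts) == 40

# distinct deployments for one player: the multinomial coefficient
# 40! / (1! * 6! * 1! * 8! * ...); each partial division is exact
one_player = factorial(40)
for c in piece_counts:
    one_player //= factorial(c)

both_players = one_player ** 2  # the two hidden setups are independent

print(len(str(one_player)) - 1)   # order of magnitude per player: 33
print(len(str(both_players)) - 1) # jointly: 66, matching the article
```

This ignores which of the board squares each arrangement occupies (the pieces fill a fixed 4x10 deployment zone, so only the ordering matters), and it is only one way to arrive at a number of that size, but it reproduces the 10^66 order of magnitude quoted above.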

[...] For two weeks in April, DeepNash competed with human Stratego players on online game platform Gravon. After 50 matches, DeepNash was ranked third among all Gravon Stratego players since 2002. "Our work shows that such a complex game as Stratego, involving imperfect information, does not require search techniques to solve it," says team member Karl Tuyls, a DeepMind researcher based in Paris. "This is a really big step forward in AI." "The results are impressive," agrees Noam Brown, a researcher at Meta AI, headquartered in New York City, and a member of the team that in 2019 reported the poker-playing AI Pluribus.
Math

Computer Program For Particle Physics At Risk of Obsolescence (quantamagazine.org) 105

"Maintenance of the software that's used for the hardest physics calculations rests almost entirely with a retiree," reports Quanta magazine, saying the situation "reveals the problematic incentive structure of academia." Particle physicists use some of the longest equations in all of science. To look for signs of new elementary particles in collisions at the Large Hadron Collider, for example, they draw thousands of pictures called Feynman diagrams that depict possible collision outcomes, each one encoding a complicated formula that can be millions of terms long. Summing formulas like these with pen and paper is impossible; even adding them with computers is a challenge. The algebra rules we learn in school are fast enough for homework, but for particle physics they are woefully inefficient.
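FORM has its own programming language and a streaming term store, and nothing below reflects its actual design. But as a toy illustration of why computer algebra beats the "algebra rules we learn in school": if a huge sum is stored as a map from monomials to coefficients, then adding expressions is a single merging pass, and like terms combine (or cancel) automatically instead of being matched up by hand:

```python
from collections import defaultdict

def add_terms(*expressions):
    """Sum polynomial-like expressions, each given as a dict mapping
    a monomial (a tuple of exponents) to a numeric coefficient.
    Like terms merge; terms whose coefficients cancel disappear."""
    total = defaultdict(int)
    for expr in expressions:
        for monomial, coeff in expr.items():
            total[monomial] += coeff
    # drop terms that summed to zero
    return {m: c for m, c in total.items() if c != 0}

# e.g. (3*x^2*y + 2*y) + (-3*x^2*y + 5), monomials as (exp_x, exp_y)
a = {(2, 1): 3, (0, 1): 2}
b = {(2, 1): -3, (0, 0): 5}
print(add_terms(a, b))  # the x^2*y terms cancel, leaving 2*y + 5
```

Each term is visited once, so summing an expression with millions of terms costs time roughly linear in the number of terms — the kind of efficiency the passage says pen-and-paper (or naive tree-rewriting) algebra cannot offer.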

Programs called computer algebra systems strive to handle these tasks. And if you want to solve the biggest equations in the world, for 33 years one program has stood out: FORM. Developed by the Dutch particle physicist Jos Vermaseren, FORM is a key part of the infrastructure of particle physics, necessary for the hardest calculations. However, as with surprisingly many essential pieces of digital infrastructure, FORM's maintenance rests largely on one person: Vermaseren himself. And at 73, Vermaseren has begun to step back from FORM development. Due to the incentive structure of academia, which prizes published papers, not software tools, no successor has emerged. If the situation does not change, particle physics may be forced to slow down dramatically...

Without ongoing development, FORM will get less and less usable — only able to interact with older computer code, and not aligned with how today's students learn to program. Experienced users will stick with it, but younger researchers will adopt alternative computer algebra programs like Mathematica that are more user-friendly but orders of magnitude slower. In practice, many of these physicists will decide that certain problems are off-limits — too difficult to handle. So particle physics will stall, with only a few people able to work on the hardest calculations.

In April, Vermaseren is holding a summit of FORM users to plan for the future. They will discuss how to keep FORM alive: how to maintain and extend it, and how to show a new generation of students just how much it can do. With luck, hard work and funding, they may preserve one of the most powerful tools in physics.

Thanks to long-time Slashdot reader g01d4 for submitting the story.
The Almighty Buck

Remittances Grow 5% in 2022, Despite Global Headwinds (worldbank.org) 22

Remittances to low- and middle-income countries (LMICs) withstood global headwinds in 2022, growing an estimated 5% to $626 billion. This is sharply lower than the 10.2% increase in 2021, according to the latest World Bank Migration and Development Brief. World Bank: Remittances are a vital source of household income for LMICs. They alleviate poverty, improve nutritional outcomes, and are associated with increased birth weight and higher school enrollment rates for children in disadvantaged households. Studies show that remittances help recipient households to build resilience, for example through financing better housing and to cope with the losses in the aftermath of disasters.

Remittance flows to developing regions were shaped by several factors in 2022. A reopening of host economies as the COVID-19 pandemic receded supported migrants' employment and their ability to continue helping their families back home. Rising prices, on the other hand, adversely affected migrants' real incomes. Also influencing the value of remittances is the appreciation of the ruble, which translated into higher value, in U.S. dollar terms, of outward remittances from Russia to Central Asia. In the case of Europe, a weaker euro had the opposite effect of reducing the U.S. dollar valuation of remittance flows to North Africa and elsewhere. In countries that experienced scarcity of foreign exchange and multiple exchange rates, officially recorded remittance flows declined as flows shifted to alternative channels offering better rates.
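The valuation effect described here is simple arithmetic: a transfer fixed in the sender's currency buys more dollars when that currency appreciates against the dollar, and fewer when it depreciates. A toy example with made-up exchange rates (these are illustrative numbers, not the actual 2022 rates):

```python
def usd_value(amount_local, local_per_usd):
    """Convert a local-currency amount to U.S. dollars at the given
    exchange rate, expressed as units of local currency per dollar."""
    return amount_local / local_per_usd

# hypothetical rates: the same 100,000-ruble transfer is worth more
# dollars after the ruble appreciates (fewer rubles per dollar)
before = usd_value(100_000, 80)  # 1250.0 USD
after = usd_value(100_000, 60)   # ~1666.67 USD
print(round(after - before, 2))  # the USD-denominated flow rises
```

The same mechanism run in reverse explains the euro case: a weaker euro means more euros per dollar, so euro-denominated remittances to North Africa translate into fewer dollars.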

IT

Is Everyone Still Getting Remote Work Wrong? (zdnet.com) 129

ZDNet asks: why is everyone getting remote working wrong?

Researchers at tech analyst Gartner believe a rigid requirement to return to offices is a mistake. But the researchers also believe so-called "hybrid" schedules often are also flawed: "Most of those work models delivered below-average outcomes," the research found, and the common factor was some kind of rigid on-site requirement. Much more successful was a "hybrid-flexible" set-up offering leaders and employees the opportunity to choose where they work from. But most successful by far were workplaces that offered this flexibility and also included elements of "intentional collaboration and empathy-based management", where bosses don't force staff to come to the office just to keep an eye on them.

How the working week is organized matters: get it right, and staff are more likely to want to stay, and more likely to perform well. Autonomy also reduces fatigue, which in turn means workers are likely to sustain good performance over time.

ZDNet also tested virtual reality meetings — concluding they're "still undeniably somewhat clunky and can make you feel a bit awkward."

But at the same time, "I was also surprised by how much benefit they could potentially deliver." Sure, a meeting with avatars that only look a bit like your colleagues, in a fantasy meeting room that wouldn't look out of place in a Bond villain's lair does feel a bit ridiculous. But it also — and this was the revelation to me — adds a level of engagement that you just don't get from a video meeting of colleagues occupying flat tiles on a screen. It provides a sense of being there (wherever 'there' was) that adds meaning beyond what you get from staring into a monitor.

I'm not saying I want to have every meeting in VR from now on: far from it. But we have to see the present state of hybrid and remote working as just the current state of the art, and to keep experimenting, and thinking, about the way we work.

United States

US Renewable Energy Will Surge Past Coal and Nuclear by Year's End 115

Renewables are on track to generate more power than coal in the United States this year. But the question is whether they can grow fast enough to meet the country's climate goals. From a report: Supply chain constraints and trade disputes have slowed wind and solar installations, raising questions about the United States' ability to meet the emission reductions sought by the Inflation Reduction Act. The Biden administration is banking on the landmark climate law cutting emissions by 40 percent below 2005 levels by 2030. Many analysts think the United States will ultimately shake off the slowdown thanks to the Inflation Reduction Act's $369 billion in clean energy investments. But it may take time for the law's impact to be felt. Tax guidance needs to be finalized before developers begin plunking down money on new facilities, and companies now face headwinds in the form of higher interest rates and the looming threat of a recession.

The Inflation Reduction Act's emission reductions hinge on the country's ability to at least double the rate of renewable installations over the record levels observed in 2020 and 2021, said John Larsen, a partner at the Rhodium Group. "Every year we don't have capacity additions beyond the record is lost ground," he said. "It's going to be that much harder to make that up over time. There is a point where we don't get to the outcomes we projected because we blew the first few years of the transition." For now, U.S. renewable output is edging higher. Wind and solar output are up 18 percent through Nov. 20 compared to the same time last year and have grown 58 percent compared to 2019, according to the U.S. Energy Information Administration. The government energy tracker predicts that wind, solar and hydro will generate 22 percent of U.S. electricity by the end of this year. That is more than coal at 20 percent and nuclear at 19 percent.
