Government

Biden Orders Every US Agency To Appoint a Chief AI Officer 48

An anonymous reader quotes a report from Ars Technica: The White House has announced the "first government-wide policy (PDF) to mitigate risks of artificial intelligence (AI) and harness its benefits." To coordinate these efforts, every federal agency must appoint a chief AI officer with "significant expertise in AI." Some agencies have already appointed chief AI officers, but any agency that has not must appoint a senior official within the next 60 days. If an official already appointed as a chief AI officer does not have the necessary authority to coordinate AI use in the agency, they must be granted additional authority or else a new chief AI officer must be named.

Ideal candidates might include chief information officers, chief data officers, or chief technology officers, the Office of Management and Budget (OMB) policy said. As chief AI officers, appointees will serve as senior advisers on AI initiatives, monitoring and inventorying all agency uses of AI. They must conduct risk assessments to consider whether any AI uses are impacting "safety, security, civil rights, civil liberties, privacy, democratic values, human rights, equal opportunities, worker well-being, access to critical resources and services, agency trust and credibility, and market competition," OMB said. Perhaps most urgently, by December 1, the officers must correct all non-compliant AI uses in government, unless an extension of up to one year is granted.

The chief AI officers will seemingly enjoy a lot of power and oversight over how the government uses AI. It's up to the chief AI officers to develop a plan to comply with minimum safety standards and to work with chief financial and human resource officers to develop the necessary budgets and workforces to use AI to further each agency's mission and ensure "equitable outcomes," OMB said. [...] Among the chief AI officer's primary responsibilities is determining what AI uses might impact the safety or rights of US citizens. They'll do this by assessing AI impacts, conducting real-world tests, independently evaluating AI, regularly evaluating risks, properly training staff, providing additional human oversight where necessary, and giving public notice of any AI use that could have a "significant impact on rights or safety," OMB said. Chief AI officers will ultimately decide if any AI use is safety- or rights-impacting and must adhere to OMB's minimum standards for responsible AI use. Once a determination is made, the officers will "centrally track" the determinations, informing OMB of any major changes to "conditions or context in which the AI is used." The officers will also regularly convene "a new Chief AI Officer Council to coordinate" efforts and share innovations government-wide.
Chief AI officers must consult with the public and maintain options to opt out of "AI-enabled decisions," OMB said. However, these chief AI officers also have the power to waive opt-out options "if they can demonstrate that a human alternative would result in a service that is less fair (e.g., produces a disparate impact on protected classes) or if an opt-out would impose undue hardship on the agency."
Security

New 'Loop DoS' Attack May Impact Up to 300,000 Online Systems (thehackernews.com) 10

BleepingComputer reports on "a new denial-of-service attack dubbed 'Loop DoS' targeting application layer protocols."

According to their article, the attack "can pair network services into an indefinite communication loop that creates large volumes of traffic." Devised by researchers at the CISPA Helmholtz Center for Information Security, the attack uses the User Datagram Protocol (UDP) and impacts an estimated 300,000 hosts and their networks. The attack is possible due to a vulnerability, currently tracked as CVE-2024-2169, in the implementation of the UDP protocol, which is susceptible to IP spoofing and does not provide sufficient packet verification. An attacker exploiting the vulnerability creates a self-perpetuating mechanism that generates excessive traffic without limits and without a way to stop it, leading to a denial-of-service (DoS) condition on the target system or even an entire network. Loop DoS relies on IP spoofing and can be triggered from a single host that sends one message to start the communication.
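
To make the mechanism concrete, here is a minimal, self-contained Python simulation of the loop pattern described above: two toy services that answer every datagram they receive, paired into an endless exchange by a single spoofed packet. The service names and message formats are invented for illustration; this is not the researchers' code and not a working exploit.

```python
# Simulation of the Loop DoS pattern: two naive UDP-style services that
# reply to every datagram, including error replies, get locked into an
# endless request/response loop by one IP-spoofed trigger packet.
from collections import deque

class NaiveUdpService:
    """A toy service that answers *every* incoming datagram."""
    def __init__(self, name):
        self.name = name
        self.packets_sent = 0

    def handle(self, src, payload):
        # Vulnerable behavior: respond unconditionally, even when the
        # incoming message is itself another service's error reply.
        self.packets_sent += 1
        return (src, f"{self.name}: cannot parse {payload!r}")

a = NaiveUdpService("echo-like")
b = NaiveUdpService("chargen-like")
services = {"A": a, "B": b}

# The attacker sends ONE datagram to A whose source address is spoofed
# to look like it came from B.
in_flight = deque([("A", "B", "trigger")])

for _ in range(10):  # a real loop never terminates; we cap it to print
    dst, src, payload = in_flight.popleft()
    reply_to, reply_payload = services[dst].handle(src, payload)
    # Each error reply provokes the other service's error reply in turn.
    in_flight.append((reply_to, dst, reply_payload))

print("packets generated by one spoofed trigger:",
      a.packets_sent + b.packets_sent)
```

In this toy model, traffic continues indefinitely from a single message, which is the property that lets one spoofed packet keep two real vulnerable services saturating the link between them.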

According to the Carnegie Mellon CERT Coordination Center (CERT/CC), there are three potential outcomes when an attacker leverages the vulnerability:

— Overloading of a vulnerable service and causing it to become unstable or unusable.
— DoS attack on the network backbone, causing network outages to other services.
— Amplification attacks that involve network loops causing amplified DoS or DDoS attacks.

CISPA researchers Yepeng Pan and Professor Dr. Christian Rossow say the potential impact is notable, spanning both outdated (QOTD, Chargen, Echo) and modern protocols (DNS, NTP, TFTP) that are crucial for basic internet-based functions like time synchronization, domain name resolution, and file transfer without authentication... The researchers warned that the attack is easy to exploit, noting that there is no evidence indicating active exploitation at this time. Rossow and Pan shared their findings with affected vendors and notified CERT/CC for coordinated disclosure. So far, vendors who confirmed their implementations are affected by CVE-2024-2169 are Broadcom, Cisco, Honeywell, Microsoft, and MikroTik.

To avoid the risk of denial of service via Loop DoS, CERT/CC recommends installing the latest patches from vendors that address the vulnerability and replacing products that no longer receive security updates. Using firewall rules and access-control lists for UDP applications, turning off unnecessary UDP services, and implementing TCP or request validation are also measures that can mitigate the risk of an attack. Furthermore, the organization recommends deploying anti-spoofing solutions like BCP38 and Unicast Reverse Path Forwarding (uRPF), and using Quality-of-Service (QoS) measures to limit network traffic and protect against abuse from network loops and DoS amplifications.
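
Of those mitigations, request validation is the one that most directly breaks the loop. Below is a hedged sketch in the same toy model as above, our own illustration rather than vendor or CERT/CC guidance: the hardened handler refuses to answer datagrams that look like error replies and rate-limits responses per source.

```python
# Sketch of "request validation" plus per-source rate limiting, the two
# behaviors that prevent a service from being paired into a traffic loop.
import time
from collections import defaultdict

class HardenedUdpService:
    MAX_REPLIES_PER_SEC = 5  # illustrative threshold

    def __init__(self, name):
        self.name = name
        self.reply_times = defaultdict(list)  # source -> reply timestamps

    def handle(self, src, payload):
        # Validation: never answer a message that is itself an error
        # reply; answering such messages is what sustains the loop.
        if payload.startswith("error:"):
            return None
        # Backstop: cap replies per source per second.
        now = time.monotonic()
        recent = [t for t in self.reply_times[src] if now - t < 1.0]
        if len(recent) >= self.MAX_REPLIES_PER_SEC:
            return None
        recent.append(now)
        self.reply_times[src] = recent
        return (src, f"error: {self.name} cannot parse {payload!r}")

svc = HardenedUdpService("echo-like")
print(svc.handle("198.51.100.7", "hello"))             # answered
print(svc.handle("198.51.100.7", "error: bad input"))  # dropped: None
```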

Thanks to long-time Slashdot reader schneidafunk for sharing the article.
Google

Google DeepMind's New AI Assistant Helps Elite Soccer Coaches Get Even Better (technologyreview.com) 16

Soccer teams are always looking to get an edge over their rivals. Whether it's studying players' susceptibility to injury or opponents' tactics, top clubs look at reams of data to give them the best shot at winning. They might want to add a new AI assistant developed by Google DeepMind to their arsenal. From a report: It can suggest tactics for soccer set-pieces that are even better than those created by professional club coaches. The system, called TacticAI, works by analyzing a dataset of 7,176 corner kicks taken by players for Liverpool FC, one of the biggest soccer clubs in the world. Corner kicks are awarded to an attacking team when the ball passes over the goal line after touching a player on the defending team. In a sport as free-flowing and unpredictable as soccer, corners -- like free kicks and penalties -- are rare instances in the game when teams can try out pre-planned plays.

TacticAI uses predictive and generative AI models to convert each corner kick scenario -- such as a receiver successfully scoring a goal, or a rival defender intercepting the ball and returning it to their team -- into a graph, and the data from each player into a node on the graph, before modeling the interactions between each node. The work was published in Nature Communications today. Using this data, the model provides recommendations about where to position players during a corner to give them, for example, the best shot at scoring a goal, or the best combination of players to get up front. It can also try to predict the outcomes of a corner, including whether a shot will take place, or which player is most likely to touch the ball first.
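
As a rough illustration of that graph representation, the sketch below encodes the 22 players on the pitch as graph nodes with toy feature vectors, runs one simplified message-passing step, and scores each player as a candidate first receiver. The feature layout, dimensions, and functions are guesses made for exposition, not TacticAI's published architecture.

```python
# Toy corner-kick graph: players are nodes, every pair of players is
# connected, and one message-passing step mixes each player's features
# with the mean of their neighbors' features.
import numpy as np

rng = np.random.default_rng(0)
NUM_PLAYERS = 22   # both teams during a corner
FEATURES = 4       # e.g. (x, y, velocity_x, velocity_y), our choice

nodes = rng.normal(size=(NUM_PLAYERS, FEATURES))                 # node features
adj = np.ones((NUM_PLAYERS, NUM_PLAYERS)) - np.eye(NUM_PLAYERS)  # dense graph

def message_passing(nodes, adj, weights):
    """One graph-network layer: aggregate neighbors, then transform."""
    neighbor_mean = (adj @ nodes) / adj.sum(axis=1, keepdims=True)
    combined = np.concatenate([nodes, neighbor_mean], axis=1)
    return np.tanh(combined @ weights)

weights = 0.1 * rng.normal(size=(2 * FEATURES, FEATURES))
embeddings = message_passing(nodes, adj, weights)

# Toy "receiver" head: score each player and softmax into a probability
# of being first to touch the ball after the corner.
scores = embeddings @ rng.normal(size=FEATURES)
probs = np.exp(scores) / np.exp(scores).sum()
print("most likely first touch: player", int(probs.argmax()))
```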

AI

Chinese and Western Scientists Identify 'Red Lines' on AI Risks (ft.com) 28

Leading western and Chinese AI scientists have issued a stark warning that tackling risks around the powerful technology requires global co-operation similar to the cold war effort to avoid nuclear conflict. From a report: A group of renowned international experts met in Beijing last week, where they identified "red lines" on the development of AI, including around the making of bioweapons and launching cyber attacks. In a statement seen by the Financial Times, issued in the days after the meeting, the academics warned that a joint approach to AI safety was needed to stop "catastrophic or even existential risks to humanity within our lifetimes."

"In the depths of the cold war, international scientific and governmental co-ordination helped avert thermonuclear catastrophe. Humanity again needs to co-ordinate to avert a catastrophe that could arise from unprecedented technology," the statement said. Signatories include Geoffrey Hinton and Yoshua Bengio, who won a Turing Award for their work on neural networks and are often described as "godfathers" of AI; Stuart Russell, a professor of computer science at the University of California, Berkeley; and Andrew Yao, one of China's most prominent computer scientists. The statement followed the International Dialogue on AI Safety in Beijing last week, a meeting that included officials from the Chinese government in a signal of tacit official endorsement for the forum and its outcomes.

Science

Ultraprocessed Foods Linked To Heart Disease, Diabetes, Mental Disorders and Early Death, Study Finds (cnn.com) 221

Eating ultraprocessed foods raises the risk of developing or dying from dozens of adverse health conditions, according to a new review of 45 meta-analyses on almost 10 million people. From a report: "We found consistent evidence linking higher intakes of ultra-processed foods with over 70% of the 45 different health outcomes we assessed," said senior author Wolfgang Marx, a senior research fellow at the Food & Mood Centre at Deakin University in Geelong, Australia, in an email. A higher intake was considered about one serving or about 10% more ultraprocessed foods per day, said Heinz Freisling, a scientist in the nutrition and metabolism branch of the World Health Organization's International Agency for Research on Cancer, in an email.

"This proportion can be regarded as 'baseline' and for people consuming more than this baseline, the risk might increase," said Freisling, who was not involved in the study. Researchers graded each study as having credible or strong, highly suggestive, suggestive, weak or no evidence. All the studies in the review were published in the past three years, and none was funded by companies involved in the production of ultraprocessed foods, the authors said. "Strong evidence shows that a higher intake of ultra-processed foods was associated with approximately 50% higher risk of cardiovascular disease-related death and common mental disorders," said lead author Dr. Melissa Lane, a postdoctoral research fellow at Deakin, in an email. Cardiovascular disease encompasses heart attacks, stroke, clogged arteries and peripheral artery disease.
The study: Ultra-processed food exposure and adverse health outcomes: umbrella review of epidemiological meta-analyses (BMJ)
AI

'Luddite' Tech-Skeptics See Bad AI Outcomes for Labor - and Humanity (theguardian.com) 202

"I feel things fraying," says Nick Hilton, host of a neo-luddite podcast called The Ned Ludd Radio Hour.

But he's one of the more optimistic tech skeptics interviewed by the Guardian: Eliezer Yudkowsky, a 44-year-old academic wearing a grey polo shirt, rocks slowly on his office chair and explains with real patience — taking things slowly for a novice like me — that every single person we know and love will soon be dead. They will be murdered by rebellious self-aware machines.... Yudkowsky is the most pessimistic, the least convinced that civilisation has a hope. He is the lead researcher at a nonprofit called the Machine Intelligence Research Institute in Berkeley, California... "If you put me to a wall," he continues, "and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10." By "remaining timeline", Yudkowsky means: until we face the machine-wrought end of all things...

Yudkowsky was once a founding figure in the development of human-made artificial intelligences — AIs. He has come to believe that these same AIs will soon evolve from their current state of "Ooh, look at that!" smartness, assuming an advanced, God-level super-intelligence, too fast and too ambitious for humans to contain or curtail. Don't imagine a human-made brain in one box, Yudkowsky advises. To grasp where things are heading, he says, try to picture "an alien civilisation that thinks a thousand times faster than us", in lots and lots of boxes, almost too many for us to feasibly dismantle, should we even decide to...

[Molly Crabapple, a New York-based artist, believes] "a luddite is someone who looks at technology critically and rejects aspects of it that are meant to disempower, deskill or impoverish them. Technology is not something that's introduced by some god in heaven who has our best interests at heart. Technological development is shaped by money, it's shaped by power, and it's generally targeted towards the interests of those in power as opposed to the interests of those without it. That stereotypical definition of a luddite as some stupid worker who smashes machines because they're dumb? That was concocted by bosses." Where a techno-pessimist like Yudkowsky would have us address the biggest-picture threats conceivable (to the point at which our fingers are fumbling for the nuclear codes) neo-luddites tend to focus on ground-level concerns. Employment, especially, because this is where technology enriched by AIs seems to be causing the most pain....

Watch out, says [writer/podcaster Riley] Quinn at one point, for anyone who presents tech as "synonymous with being forward-thinking and agile and efficient. It's typically code for 'We're gonna find a way around labour regulations'...." One of his TrashFuture colleagues Nate Bethea agrees. "Opposition to tech will always be painted as irrational by people who have a direct financial interest in continuing things as they are," he says.

Thanks to Slashdot reader fjo3 for sharing the article.
The Courts

New Bill Would Let Defendants Inspect Algorithms Used Against Them In Court (theverge.com) 47

Lauren Feiner reports via The Verge: Reps. Mark Takano (D-CA) and Dwight Evans (D-PA) reintroduced the Justice in Forensic Algorithms Act on Thursday, which would allow defendants to access the source code of software used to analyze evidence in their criminal proceedings. It would also require the National Institute of Standards and Technology (NIST) to create testing standards for forensic algorithms, which software used by federal enforcers would need to meet.

The bill would act as a check on unintended outcomes that could be created by using technology to help solve crimes. Academic research has highlighted the ways human bias can be built into software and how facial recognition systems often struggle to differentiate Black faces, in particular. The use of algorithms to make consequential decisions in many different sectors, including both crime-solving and health care, has raised alarms for consumers and advocates as a result of such research.

Takano acknowledged that gaining or hiring the deep expertise needed to analyze the source code might not be possible for every defendant. But requiring NIST to create standards for the tools could at least give them a starting point for understanding whether a program matches the basic standards. Takano introduced previous iterations of the bill in 2019 and 2021, but they were not taken up by a committee.

Earth

Cop28 Deal Will Fail Unless Rich Countries Quit Fossil Fuels, Says Climate Negotiator 184

The credibility of the Cop28 agreement to "transition away" from fossil fuels rides on the world's biggest historical polluters like the US, UK and Canada rethinking current plans to expand oil and gas production, according to the climate negotiator representing 135 developing countries. The Guardian: In an exclusive interview with the Guardian, Pedro Pedroso, the outgoing president of the G77 plus China bloc of developing countries, warned that the landmark deal made at last year's climate talks in Dubai risked failing. "We achieved some important outcomes at Cop28 but the challenge now is how we translate the deal into meaningful action for the people," Pedroso said. "As we speak, unless we lie to ourselves, none of the major developed countries, who are the most important historical emitters, have policies that are moving away from fossil fuels, on the contrary, they are expanding," said Pedroso.

These countries must also deliver adequate finance for poorer nations to transition and adapt to the climate crisis. In Dubai, Sultan Al Jaber, Cop28 president and chief of the Emirates national oil company, was subject to widespread scrutiny -- understandable given that the UAE is the world's seventh biggest oil producer with the fifth largest gas reserves. Yet the US was by far the biggest oil and gas producer in the world last year -- setting a new record, during a year that was the hottest ever recorded. The US, UK, Canada, Australia and Norway account for 51% of the total planned oil and gas expansion by 2050, according to research by Oil Change International. "It's very easy to label some emerging economies, especially the Gulf states, as climate villains, but this is very unfair by countries with historic responsibilities -- who keep trying to scapegoat and deviate the attention away from themselves. Just look at US fossil fuel plans and the UK's new drilling licenses for the North Sea, and Canada which has never met any of its emission reduction goals, not once," said Pedroso, a Cuban diplomat.
Medicine

Hospitals Owned By Private Equity Are Harming Patients, Reports Find (arstechnica.com) 199

Private equity firms are increasingly buying hospitals across the US, and when they do, patients suffer, according to two separate reports. Specifically, the equity firms cut corners, slash services, lay off staff, lower quality of care, take on substantial debt, and reduce charity care, leading to lower ratings and more medical errors, the reports collectively find. Ars Technica: Last week, the financial watchdog organization Private Equity Stakeholder Project (PESP) released a report delving into the state of two of the nation's largest hospital systems, Lifepoint and ScionHealth -- both owned by private equity firm Apollo Global Management. Through those two systems, Apollo runs 220 hospitals in 36 states, employing around 75,000 people. The report found that some of Apollo's hospitals were among the worst in their respective states, based on a ranking by The Lown Institute Hospital Index. The index ranks hospitals and health systems based on health equity, value, and outcomes, PESP notes. The hospitals also have dismal readmission rates and government rankings.

The Centers for Medicare and Medicaid Services (CMS) ranks hospitals on a one- to five-star system, with a national average of 3.2 stars overall and about 30 percent of hospitals at two stars or below. Apollo's overall average is 2.8 stars, with nearly 40 percent of hospitals at two stars or below. The other report, a study published in JAMA late last month, found that the rate of serious medical errors and health complications increases among patients in the first few years after private equity firms take over. The study examined Medicare claims from 51 private equity-run hospitals and 259 matched control hospitals. Specifically, the study, led by researchers at Harvard University, found that patients admitted to private equity-owned hospitals had a 25 percent increase in hospital-acquired conditions compared with patients in the control hospitals. In private equity hospitals, patients experienced a 27 percent increase in falls, a 38 percent increase in central-line bloodstream infections (despite placing 16 percent fewer central lines than control hospitals), and a doubling of surgical site infections.

Education

'A Groundbreaking Study Shows Kids Learn Better On Paper, Not Screens. Now What?' (theguardian.com) 130

In an opinion piece for the Guardian, American journalist and author John R. MacArthur discusses the alarming decline in reading skills among American youth, highlighted by a Department of Education survey showing significant drops in text comprehension since 2019-2020, with the situation worsening since 2012. While remote learning during the pandemic and other factors like screen-based reading are blamed, a new study by Columbia University suggests that reading on paper is more effective for comprehension than reading on screens, a finding not yet widely adopted in digital-focused educational approaches. From the report: What if the principal culprit behind the fall of middle-school literacy is neither a virus, nor a union leader, nor "remote learning"? Until recently there has been no scientific answer to this urgent question, but a soon-to-be-published, groundbreaking study from neuroscientists at Columbia University's Teachers College has come down decisively on the matter: for "deeper reading" there is a clear advantage to reading a text on paper, rather than on a screen, where "shallow reading was observed." [...] [Dr Karen Froud] and her team are cautious in their conclusions and reluctant to make hard recommendations for classroom protocol and curriculum. Nevertheless, the researchers state: "We do think that these study outcomes warrant adding our voices ... in suggesting that we should not yet throw away printed books, since we were able to observe in our participant sample an advantage for depth of processing when reading from print."

I would go even further than Froud in delineating what's at stake. For more than a decade, social scientists, including the Norwegian scholar Anne Mangen, have been reporting on the superiority of reading comprehension and retention on paper. As Froud's team says in its article: "Reading both expository and complex texts from paper seems to be consistently associated with deeper comprehension and learning" across the full range of social scientific literature. But the work of Mangen and others hasn't influenced local school boards, such as Houston's, which keep throwing out printed books and closing libraries in favor of digital teaching programs and Google Chromebooks. Drunk on the magical realism and exaggerated promises of the "digital revolution," school districts around the country are eagerly converting to computerized test-taking and screen-reading programs at the precise moment when rigorous scientific research is showing that the old-fashioned paper method is better for teaching children how to read.

Indeed, for the tech boosters, Covid really wasn't all bad for public-school education: "As much as the pandemic was an awful time period," says Todd Winch, the Levittown, Long Island, school superintendent, "one silver lining was it pushed us forward to quickly add tech supports." Newsday enthusiastically reports: "Island schools are going all-in on high tech, with teachers saying they are using computer programs such as Google Classroom, I-Ready, and Canvas to deliver tests and assignments and to grade papers." Terrific, especially for Google, which was slated to sell 600 Chromebooks to the Jericho school district, and which since 2020 has sold nearly $14bn worth of the cheap laptops to K-12 schools and universities.

If only Winch and his colleagues had attended the Teachers College symposium that presented the Froud study last September. The star panelist was the nation's leading expert on reading and the brain, John Gabrieli, an MIT neuroscientist who is skeptical about the promises of big tech and its salesmen: "I am impressed how educational technology has had no effect on scale, on reading outcomes, on reading difficulties, on equity issues," he told the New York audience. "How is it that none of it has lifted, on any scale, reading? ... It's like people just say, 'Here is a product. If you can get it into a thousand classrooms, we'll make a bunch of money.' And that's OK; that's our system. We just have to evaluate which technology is helping people, and then promote that technology over the marketing of technology that has made no difference on behalf of students ... It's all been product and not purpose." I'll only take issue with the notion that it's "OK" to rob kids of their full intellectual potential in the service of sales -- before they even get started understanding what it means to think, let alone read.

Businesses

Workplace Wellness Programs Have Little Benefit, Study Finds 86

An Oxford researcher measured the effect of popular workplace mental health interventions, and discovered little to none. From a report: Employee mental health services have become a billion-dollar industry. New hires, once they have found the restrooms and enrolled in 401(k) plans, are presented with a panoply of digital wellness solutions, mindfulness seminars, massage classes, resilience workshops, coaching sessions and sleep apps. These programs are a point of pride for forward-thinking human resource departments, evidence that employers care about their workers. But a British researcher who analyzed survey responses from 46,336 workers at companies that offered such programs found that people who participated in them were no better off than colleagues who did not.

The study, published this month in Industrial Relations Journal, considered the outcomes of 90 different interventions and found a single notable exception: Workers who were given the opportunity to do charity or volunteer work did seem to have improved well-being. Across the study's large population, none of the other offerings -- apps, coaching, relaxation classes, courses in time management or financial health -- had any positive effect. Trainings on resilience and stress management actually appeared to have a negative effect.
AI

What Happened When California's State Government Examined the Risks and Benefits of AI? (msn.com) 80

An anonymous reader shared this report from the Los Angeles Times: AI that can generate text, images and other content could help improve state programs but also poses risks, according to a report released by the governor's office on Tuesday. Generative AI could help quickly translate government materials into multiple languages, analyze tax claims to detect fraud, summarize public comments and answer questions about state services. Still, deploying the technology, the analysis warned, also comes with concerns around data privacy, misinformation, equity and bias. "When used ethically and transparently, GenAI has the potential to dramatically improve service delivery outcomes and increase access to and utilization of government programs," the report stated...

AI advancements could benefit California's economy. The state is home to 35 of the world's 50 top AI companies and data from PitchBook says the GenAI market could reach $42.6 billion in 2023, the report said. Some of the risks outlined in the report include spreading false information, giving consumers dangerous medical advice and enabling the creation of harmful chemicals and nuclear weapons. Data breaches, privacy and bias are also top concerns along with whether AI will take away jobs. "Given these risks, the use of GenAI technology should always be evaluated to determine if this tool is necessary and beneficial to solve a problem compared to the status quo," the report said.

AI

ChatGPT Generates Fake Data Set To Support Scientific Hypothesis (nature.com) 41

Researchers have used the technology behind the AI chatbot ChatGPT to create a fake clinical-trial data set to support an unverified scientific claim. From a report: In a paper published in JAMA Ophthalmology on 9 November, the authors used GPT-4 -- the latest version of the large language model on which ChatGPT runs -- paired with Advanced Data Analysis (ADA), a model that incorporates the programming language Python and can perform statistical analysis and create data visualizations. The AI-generated data compared the outcomes of two surgical procedures and indicated -- wrongly -- that one treatment is better than the other.

"Our aim was to highlight that, in a few minutes, you can create a data set that is not supported by real original data, and it is also opposite or in the other direction compared to the evidence that are available," says study co-author Giuseppe Giannaccare, an eye surgeon at the University of Cagliari in Italy. The ability of AI to fabricate convincing data adds to concern among researchers and journal editors about research integrity. "It was one thing that generative AI could be used to generate texts that would not be detectable using plagiarism software, but the capacity to create fake but realistic data sets is a next level of worry," says Elisabeth Bik, a microbiologist and independent research-integrity consultant in San Francisco, California. "It will make it very easy for any researcher or group of researchers to create fake measurements on non-existent patients, fake answers to questionnaires or to generate a large data set on animal experiments."

Medicine

Covid Lockdowns 'Were Worth It', Argues Infectious Disease Expert on CNN (cnn.com) 274

A new book argues lockdowns during the pandemic were "a failure." But in response CNN published an opinion piece disagreeing — written by physician/infectious disease expert Kent Sepkowitz from the Memorial Sloan Kettering Cancer Center in New York — who argues "You bet it was worth it." [Authors Joe Nocera and Bethany McLean] consider the lockdown a single activity stretched across the entire pandemic; in contrast, I would distinguish the initial lockdown, which was crucial, from the off-and-on lockdowns as therapies, vaccines and overall care improved. There is an argument to be made that these were not anywhere near as effective... One only had to work in health care in New York City to see the difference between early 2020, when the explosion of cases overwhelmed the city, versus later in 2020 when an effective therapy had been identified, supplies and diagnostic testing had been greatly improved (though still completely inadequate) and the makeshift ICUs and emergency rooms had been set in place. It was still a nightmare to be sure, but it was a vastly more organized nightmare.

The "short-term benefits" at the start of the pandemic are simple to characterize: Every infection that was delayed due to the lockdowns was a day to the good, a day closer to the release of the mRNA vaccines in December 2020, a less-hectic day for the health care workers, a day for clinical trials to mature. Therefore, the authors' statement that lockdowns "were a mistake that should not be repeated" because they had no "purpose other than keeping hospitals from being overrun in the short-term" is to me a fundamental misunderstanding of the day-to-day work that was being done. Most disturbing to me about this assessment and the others that have come along are the minimal mention of the death and debility the infection caused. A reminder for those who have forgotten just how brutal the pandemic was: Worldwide there have been 7 million deaths. In the U.S., there have been more than a million deaths, millions have some post-infection debility and many health care workers remain profoundly demoralized. [By these figures the U.S., with 4.2% of the world's population, had 14% of Covid fatalities.]

In this context, many of the outcomes of concern listed by Nocera and McLean — suicidal thoughts in teens, alcoholism and drug use increases, violence — are as easily explained by this staggering death toll as by the cabin fever brought on by lockdowns. Once again: About 1 out of every 350 Americans died in the Covid-19 pandemic. Another way to consider the impact of so many deaths is examination of life expectancy. Of note, life expectancy in the U.S. fell in 2020 (1.8 years) and 2021 (0.6 years), the sharpest drop since the 1920s; per the US Centers for Disease Control and Prevention, 74% of the drop was attributed to Covid-19... To fall more than two years so precipitously requires the deaths of many in their 30s and 40s and 50s, as occurred with the first year of the pandemic.

AI

Generative AI Already Taking White Collar Jobs and Wages in Online Freelancing World (ft.com) 76

An anonymous reader shares a report: In an ingenious study published this summer, US researchers showed that within a few months of the launch of ChatGPT, copywriters and graphic designers on major online freelancing platforms saw a significant drop in the number of jobs they got, and even steeper declines in earnings. This suggested not only that generative AI was taking their work, but also that it devalues the work they do still carry out.

Most strikingly, the study found that freelancers who previously had the highest earnings and completed the most jobs were no less likely to see their employment and earnings decline than other workers. If anything, they had worse outcomes. In other words, being more skilled was no shield against loss of work or earnings. But the online freelancing market covers a very particular form of white-collar work and of labour market. What about looking higher up the ranks of the knowledge worker class? For that, we can turn to a recent, fascinating Harvard Business School study, which monitored the impact of giving GPT-4, OpenAI's latest and most advanced offering, to employees at Boston Consulting Group.

Businesses

Researchers Revolt Against Weekend Conferences (nature.com) 214

In response to studies that relate high rates of female attrition from biomedical research fields to the obligations of motherhood, researchers concerned about inclusivity are now debating the issue of weekend conference duties. Nature: Because published findings are often old news in the rapidly changing biomedical fields, in-person conferences offer a crucial opportunity for scientists to stay current on trends that shape projects and funding outcomes. Yet fields often expect rock-star-like travel schedules on an economy-class budget in addition to long, irregular weekday hours at the laboratory. This is why early-career scientists with children say that they must seek alternative childcare or risk being scooped or excluded from a collaboration simply because they missed a weekend conference.

International meetings are often scheduled over weekends because that's the only time venues have availability. Few cities have both suitable venues and enough hotel space to welcome 21,000 people from around the world, and even meetings for 3,000 researchers must be booked many years in advance. Because local businesses and regional associations tend to book venues during the working week, large meetings that span three to five days often need to start or end over a weekend. Women who continue to break the glass ceiling in biomedicine are now pitching this timing as an example of unnecessary conflict between work and family.

Businesses

Dimon Warns of 'Most Dangerous Time in Decades' (nytimes.com) 122

JPMorgan Chase's chief executive, Jamie Dimon, is as close as Wall Street has to a statesman, and on Friday he sounded a major alarm about the global effects of the conflict in Israel and Gaza. From a report: "This may be the most dangerous time the world has seen in decades," he said in a statement accompanying the bank's quarterly earnings. He warned of "far-reaching impacts on energy and food markets, global trade and geopolitical relationships."

For Mr. Dimon, weighing in on geopolitics isn't new: He consistently warns of dangers from the war in Ukraine and elsewhere. On Friday, he said he was preparing the nation's largest bank for a range of scary outcomes, with other risks including high inflation and rising interest rates. But on a call with reporters, he described the Gaza conflict as "the highest and most important thing for the Western world." Otherwise, JPMorgan and other big banks appear to be operating smoothly. JPMorgan's profit rose to $13.2 billion in the third quarter, a 35 percent rise from the same period last year. Executives at the bank said the tumult of the regional banking crisis of the spring, which resulted in JPMorgan taking over First Republic, was steadily fading. "U.S. consumers and businesses generally remain healthy," Mr. Dimon said, "although consumers are spending down their excess cash buffers."

United States

Who Runs the Best US Schools? It May Be the Defense Department (nytimes.com) 94

Schools for children of military members achieve results rarely seen in public education. From a report: Amy Dilmar, a middle-school principal in Georgia, is well aware of the many crises threatening American education. The lost learning that piled up during the coronavirus pandemic. The gaping inequalities by race and family income that have only gotten worse. A widening achievement gap between the highest- and lowest-performing students. But she sees little of that at her school in Fort Moore, Ga. The students who solve algebra equations and hone essays at Faith Middle School attend one of the highest-performing school systems in the country. It is run not by a local school board or charter network, but by the Defense Department. With about 66,000 students -- more than the public school enrollment in Boston or Seattle -- the Pentagon's schools for children of military members and civilian employees quietly achieve results most educators can only dream of.

On the National Assessment of Educational Progress, a federal exam that is considered the gold standard for comparing states and large districts, the Defense Department's schools outscored every jurisdiction in math and reading last year and managed to avoid widespread pandemic losses. Their schools had the highest outcomes in the country for Black and Hispanic students, whose eighth-grade reading scores outpaced national averages for white students. Eighth graders whose parents only graduated from high school -- suggesting lower family incomes, on average -- performed as well in reading as students nationally whose parents were college graduates. The schools reopened relatively quickly during the pandemic, but last year's results were no fluke. While the achievement of U.S. students overall has stagnated over the last decade, the military's schools have made gains on the national test since 2013. And even as the country's lowest-performing students -- in the bottom 25th percentile -- have slipped further behind, the Defense Department's lowest-performing students have improved in fourth-grade math and eighth-grade reading.

Security

NSA Shares Top Ten Cybersecurity Misconfigurations (cisa.gov) 31

The National Security Agency (NSA), in partnership with the Cybersecurity and Infrastructure Security Agency (CISA), has highlighted the ten most common cybersecurity misconfigurations in large organizations. In their joint cybersecurity advisory (CSA), the agencies also detail the tactics, techniques, and procedures (TTPs) actors use to exploit these misconfigurations. From the report: Through NSA and CISA Red and Blue team assessments, as well as through the activities of NSA and CISA Hunt and Incident Response teams, the agencies identified the following 10 most common network misconfigurations:

1. Default configurations of software and applications
2. Improper separation of user/administrator privilege
3. Insufficient internal network monitoring
4. Lack of network segmentation
5. Poor patch management
6. Bypass of system access controls
7. Weak or misconfigured multifactor authentication (MFA) methods
8. Insufficient access control lists (ACLs) on network shares and services
9. Poor credential hygiene
10. Unrestricted code execution

NSA and CISA encourage network defenders to implement the recommendations found within the Mitigations section of this advisory -- including the following -- to reduce the risk of malicious actors exploiting the identified misconfigurations: Remove default credentials and harden configurations; Disable unused services and implement access controls; Update regularly and automate patching, prioritizing patching of known exploited vulnerabilities; and Reduce, restrict, audit, and monitor administrative accounts and privileges.
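
As a hedged illustration of what screening for a few of these items might look like, the sketch below audits a toy service inventory for default credentials, enabled-but-unused services, weak MFA, and missing patches. The configuration schema, field names, and rules are invented for the example; real assessments inspect actual device and service state.

```python
# Toy configuration audit covering a handful of the misconfigurations and
# mitigations listed above (default credentials, unused services, weak or
# missing MFA, unpatched software).
DEFAULT_CREDENTIALS = {("admin", "admin"), ("root", "toor"), ("admin", "password")}

def audit(service: dict) -> list[str]:
    findings = []
    if (service.get("username"), service.get("password")) in DEFAULT_CREDENTIALS:
        findings.append("default credentials in use")
    if not service.get("in_use", True):
        findings.append("unused service still enabled; disable it")
    if service.get("mfa", "none") in ("none", "sms"):
        findings.append("missing or weak (phishable) MFA")
    if not service.get("patched", False):
        findings.append("known vulnerabilities left unpatched")
    return findings

inventory = [
    {"name": "legacy-ftp", "username": "admin", "password": "admin",
     "in_use": False, "mfa": "none", "patched": False},
    {"name": "vpn-gateway", "username": "svc-vpn", "password": "Str0ng!uniquE",
     "in_use": True, "mfa": "totp", "patched": True},
]

for svc in inventory:
    for finding in audit(svc):
        print(f"{svc['name']}: {finding}")
```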

NSA and CISA urge software manufacturers to take ownership of improving security outcomes of their customers by embracing secure-by-design and -default tactics, including: Embedding security controls into product architecture from the start of development and throughout the entire software development lifecycle (SDLC); Eliminating default passwords; Providing high-quality audit logs to customers at no extra charge; and Mandating MFA, ideally phishing-resistant, for privileged users and making MFA a default rather than an opt-in feature.
A PDF version of the report can be downloaded here (PDF).
Medicine

'Cancer Moonshot' Projects Funded Include Implant to Sense and Treat Cancer, Tumor-Targeting Bacteria (arpa-h.gov) 42

Researchers from several U.S. institutions are collaborating "to develop and test an implantable device able to sense signs of the kind of inflammation associated with cancer," reports CBS News, "and deliver therapy when needed." Northwestern said the implant could significantly improve outcomes for patients with ovarian, pancreatic and other difficult-to-treat cancers — potentially cutting cancer-related deaths in the U.S. in half. "Instead of tethering patients to hospital beds, IV bags and external monitors, we'll use a minimally invasive procedure to implant a small device that continuously monitors their cancer and adjusts their immunotherapy dose in real time," said Rice University bioengineer Omid Veiseh. "This kind of 'closed-loop therapy' has been used for managing diabetes, where you have a glucose monitor that continuously talks to an insulin pump. But for cancer immunotherapy, it's revolutionary."
The project and team are named THOR, an acronym for "targeted hybrid oncotherapeutic regulation..." explains an announcement from Johns Hopkins. "THOR's proposed implant, or 'hybrid advanced molecular manufacturing regulator,' goes by the acronym HAMMR..."

The project will take five and a half years, and includes funding for a first-phase clinical trial treating recurrent ovarian cancer slated to begin in the fourth year. The research is funded by America's newly established Advanced Research Projects Agency for Health (ARPA-H), according to a statement from the agency, representing its "commitment to supporting Cancer Moonshot goals of decreasing cancer deaths and improving the quality of life for patients..."

And they're also funding two more projects: The Synthetic Programmable bacteria for Immune-directed Killing in tumor Environments (SPIKEs) project, led by a team at the University of Missouri in Columbia, Missouri, aims to develop an inexpensive and safe therapy using bacteria specifically selected for tumor-targeting. Through SPIKEs, researchers intend to engineer bacteria that can recruit and regulate tumor-targeting immune cells, boosting the body's ability to fight off cancer without side-effects from traditional medications. Up to $19 million is allocated towards SPIKEs.

An additional project, with up to $50 million in potential funding inclusive of options, seeks to map cancer cell biomarkers to drastically improve multi-cancer early detection (MCED) and streamline clinical intervention when tumors are still small. Led by the Georgia Institute of Technology in Atlanta, Georgia, the Cancer and Organ Degradome Atlas (CODA) project aims to understand the cellular profiles unique to diseased cancer cells. The CODA platform intends to develop a suite of biosensor tools that can reliably recognize a range of cancer-specific markers and, ultimately, produce a highly precise, accurate, and cost-effective MCED test that can identify common cancers when they are most treatable.

In a statement, ARPA-H's director said that "With these awards, we hope to see crucial advancements in patient-tailored therapies, better and earlier tumor detection methods, and cell therapies that can help the immune system target cancer cells for destruction."
