AI

AI Researchers Analyze Similarities of Scarlett Johansson's Voice to OpenAI's 'Sky' (npr.org) 87

AI models can evaluate how similar voices are to each other. So NPR asked forensic voice experts at Arizona State University to compare the voice and speech patterns of OpenAI's "Sky" to Scarlett Johansson's... The researchers measured Sky, based on audio from demos OpenAI delivered last week, against the voices of around 600 professional actresses. They found that Johansson's voice is more similar to Sky than 98% of the other actresses.
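The article doesn't detail the lab's method, but a standard approach in speaker comparison is to map each recording to a fixed-length embedding vector and rank candidates by cosine similarity. The sketch below illustrates only the percentile-ranking idea, using random vectors as stand-ins for the embeddings a real speaker-verification model would produce (everything here is invented for illustration, not the study's data):

```python
import math
import random

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

random.seed(0)

def rand_vec(n=64):
    # Toy stand-in for a speaker embedding.
    return [random.gauss(0, 1) for _ in range(n)]

sky = rand_vec()
comparison_set = [rand_vec() for _ in range(600)]  # stand-ins for 600 voices
# A vector artificially correlated with Sky, standing in for a close match.
close_match = [s + 0.5 * random.gauss(0, 1) for s in sky]

target = cosine_similarity(sky, close_match)
percentile = 100 * sum(cosine_similarity(sky, v) < target
                       for v in comparison_set) / len(comparison_set)
print(f"More similar to Sky than {percentile:.0f}% of the comparison set")
```

The "98% of the other actresses" figure in the story is exactly this kind of percentile claim: one similarity score ranked against a pool of others.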

Yet she wasn't always the top hit in the multiple AI models that scanned the Sky voice. The researchers found that Sky was also reminiscent of other Hollywood stars, including Anne Hathaway and Keri Russell. The analysis of Sky often rated Hathaway and Russell as being even more similar to the AI than Johansson.

The lab study shows that the voices of Sky and Johansson have undeniable commonalities — something many listeners believed, and that now can be supported by statistical evidence, according to Arizona State University computer scientist Visar Berisha, who led the voice analysis in the school's College of Health Solutions and the College of Engineering. "Our analysis shows that the two voices are similar but likely not identical," Berisha said...

OpenAI maintains that Sky was not created with Johansson in mind, saying it was never meant to mimic the famous actress. "It's not her voice. It's not supposed to be. I'm sorry for the confusion. Clearly you think it is," Altman said at a conference this week. He said whether one voice is really similar to another will always be the subject of debate.

Transportation

Amazon's Drones Get Key Approval, Can Now Fly Farther to More Customers (apnews.com) 19

The Associated Press reports that U.S. federal regulators "have given Amazon key permission that will allow it to expand its drone delivery program, the company announced Thursday." In a blog post published on its website, Seattle-based Amazon said that the Federal Aviation Administration has given its Prime Air delivery service the OK to operate drones "beyond visual line of sight," removing a barrier that has prevented its drones from traveling longer distances. With the approval, Amazon pilots can now operate drones remotely without seeing them with their own eyes.

An FAA spokesperson said the approval applies to College Station, Texas, where the company launched drone deliveries in late 2022. Amazon said it's planning to immediately scale its operations in that city in an effort to reach customers in more densely populated areas. It says the approval from regulators also "lays the foundation" to scale its operations to more locations around the country...

Amazon, which has sought this permission for years, said it received approval from regulators after developing a strategy that ensures its drones could detect and avoid obstacles in the air. Furthermore, the company said it submitted other engineering information to the FAA and conducted flight demonstrations in front of federal inspectors. Those demonstrations were also done "in the presence of real planes, helicopters, and a hot air balloon to demonstrate how the drone safely navigated away from each of them," Amazon said.

The article also points out that Amazon "has a goal of delivering 500 million packages by drone every year" by the end of the decade.

To achieve this, Amazon said in its blog post, "we knew we had to design a system capable of serving highly populated areas and that was safer than driving to the store."
Education

College-Level Minecraft-Based CS Courses Approved for US High School Students 58

Long-time Slashdot reader theodp writes: "This is truly game-changing news!" exclaims Minecraft Education's Laylah Bulman in a LinkedIn post targeting high school CS educators. "We're thrilled to announce that the AP Computer Science Principles with Minecraft and MakeCode Curriculum has officially been approved by The College Board! And we are offering free professional learning for our inaugural cohort this summer...!

"Minecraft's highly engaging environment makes complex coding concepts relatable and fun, fostering a deeper understanding and encouraging broader participation. Ready to empower your students? Don't miss this opportunity!"

Recent Edsurge articles (sponsored by Minecraft Education) touted how Minecraft has found its way into computer science and other curricula in New York City and Broward County (Florida), two of the nation's largest school districts... Microsoft-backed nonprofit Code.org has also pushed Minecraft-themed CS tutorials into the nation's classrooms via its wildly popular annual Hour of Code events since 2015, a year after Microsoft paid $2.5B to buy Minecraft. ("The best way to introduce anyone to STEM or get their curiosity going on, it's Minecraft," declared Microsoft CEO Satya Nadella at the time). Minecraft-related learning initiatives have also received millions of dollars in grants from the U.S. Department of Education and the National Science Foundation.
Space

Study Confirms Einstein Prediction: Black Holes Have a 'Plunging Region' (cnn.com) 61

"Albert Einstein was right," reports CNN. "There is an area at the edge of black holes where matter can no longer stay in orbit and instead falls in, as predicted by his theory of gravity."

The proof came by combining NASA's Earth-orbiting NuSTAR telescope with the NICER telescope on the International Space Station to detect X-rays: A team of astronomers has for the first time observed this area — called the "plunging region" — in a black hole about 10,000 light-years from Earth. "We've been ignoring this region, because we didn't have the data," said research scientist Andrew Mummery, lead author of the study published Thursday in the journal Monthly Notices of the Royal Astronomical Society. "But now that we do, we couldn't explain it any other way."
Mummery — also a Fellow in Oxford's physics department — told CNN, "We went out searching for this one specifically — that was always the plan. We've argued about whether we'd ever be able to find it for a really long time. People said it would be impossible, so confirming it's there is really exciting."

Mummery described the plunging region as "like the edge of a waterfall." Unlike the event horizon, which is closer to the center of the black hole and doesn't let anything escape, including light and radiation, in the "plunging region" light can still escape, but matter is doomed by the powerful gravitational pull, Mummery explained. The study's findings could help astronomers better understand the formation and evolution of black holes. "We can really learn about them by studying this region, because it's right at the edge, so it gives us the most information," Mummery said...

According to Christopher Reynolds, a professor of astronomy at the University of Maryland, College Park, finding actual evidence for the "plunging region" is an important step that will let scientists significantly refine models for how matter behaves around a black hole. "For example, it can be used to measure the rotation rate of the black hole," said Reynolds, who was not involved in the study.

Facebook

Extremist Militias Are Coordinating In More Than 100 Facebook Groups (wired.com) 204

An anonymous reader quotes a report from Wired: "Join your local Militia or III% Patriot Group," a post urged the more than 650 members of a Facebook group called the Free American Army. Accompanied by the logo for the Three Percenters militia network and an image of a man in tactical gear holding a long rifle, the post continues: "Now more than ever. Support the American militia page." Other content and messaging in the group is similar. And despite the fact that Facebook bans paramilitary organizing and deemed the Three Percenters an "armed militia group" on its 2021 Dangerous Individuals and Organizations List, the post and group remained up until WIRED contacted Meta for comment about its existence.

Free American Army is just one of around 200 similar Facebook groups and profiles, most of which are still live, that anti-government and far-right extremists are using to coordinate local militia activity around the country. After lying low for several years in the aftermath of the US Capitol riot on January 6, militia extremists have been quietly reorganizing, ramping up recruitment and rhetoric on Facebook -- with apparently little concern that Meta will enforce its ban against them, according to new research by the Tech Transparency Project, shared exclusively with WIRED.

Individuals across the US with long-standing ties to militia groups are creating networks of Facebook pages, urging others to recruit "active patriots" and attend meetups, and openly associating themselves with known militia-related sub-ideologies like that of the anti-government Three Percenter movement. They're also advertising combat training and telling their followers to be "prepared" for whatever lies ahead. These groups are trying to facilitate local organizing, state by state and county by county. Their goals are vague, but many of their posts convey a general sense of urgency about the need to prepare for "war" or to "stand up" against many supposed enemies, including drag queens, immigrants, pro-Palestine college students, communists -- and the US government. These groups are also rebuilding at a moment when anti-government rhetoric has continued to surge in mainstream political discourse ahead of a contentious, high-stakes presidential election. And by doing all of this on Facebook, they're hoping to reach a broader pool of prospective recruits than they would on a comparatively fringe platform like Telegram.
"Many of these groups are no longer fractured sets of localized militia but coalitions formed between multiple militia groups, many with Three Percenters at the helm," said Katie Paul, director of the Tech Transparency Project. "Facebook remains the largest gathering place for extremists and militia movements to cast a wide net and funnel users to more private chats, including on the platform, where they can plan and coordinate with impunity."

Paul has been monitoring "hundreds" of these groups and profiles since 2021 and found that they have been growing "increasingly emboldened with more serious and coordinated organizing" in the past year.
Programming

The BASIC Programming Language Turns 60 (arstechnica.com) 107

ArsTechnica: Sixty years ago, on May 1, 1964, at 4 am, a quiet revolution in computing began at Dartmouth College. That's when mathematicians John G. Kemeny and Thomas E. Kurtz successfully ran the first program written in their newly developed BASIC (Beginner's All-Purpose Symbolic Instruction Code) programming language on the college's General Electric GE-225 mainframe.

Little did they know that their creation would go on to democratize computing and inspire generations of programmers over the next six decades.

Transportation

Amazon Ends California Drone Deliveries (techcrunch.com) 29

Amazon confirmed it is ending Prime Air drone delivery operations in Lockeford, California. The Central California town of 3,500 was the company's second U.S. drone delivery site, after College Station, Texas. Operations were announced in June 2022. From a report: The retail giant is not offering details around the setback, only noting, "We'll offer all current employees opportunities at other sites, and will continue to serve customers in Lockeford with other delivery methods. We want to thank the community for all their support and feedback over the past few years."

College Station deliveries will continue, along with a forthcoming site in Tolleson, Arizona set to kick off deliveries later this year. Tolleson, a city of just over 7,000, is located in Maricopa County, in the western portion of the Phoenix metropolitan area. Prime Air's arrival brings same-day deliveries to Amazon customers in the region, courtesy of a hybrid fulfillment center/delivery station. The company says it will be contacting impacted customers when the service is up and running. There's no specific information on timing beyond "this year," owing, in part, to ongoing negotiations with both local officials and the FAA required to deploy in the airspace.

Operating Systems

How CP/M Launched the Next 50 Years of Operating Systems (computerhistory.org) 80

50 years ago this week, PC software pioneer Gary Kildall "demonstrated CP/M, the first commercially successful personal computer operating system in Pacific Grove, California," according to a blog post from Silicon Valley's Computer History Museum. It tells the story of "how his company, Digital Research Inc., established CP/M as an industry standard and its subsequent loss to a version from Microsoft that copied the look and feel of the DRI software."

Kildall was a CS instructor and later associate professor at the Naval Postgraduate School (NPS) in Monterey, California... He became fascinated with Intel Corporation's first microprocessor chip and simulated its operation on the school's IBM mainframe computer. This work earned him a consulting relationship with the company to develop PL/M, a high-level programming language that played a significant role in establishing Intel as the dominant supplier of chips for personal computers.

To design software tools for Intel's second-generation processor, he needed to connect to a new 8" floppy disk-drive storage unit from Memorex. He wrote code for the necessary interface software that he called CP/M (Control Program for Microcomputers) in a few weeks, but his efforts to build the electronic hardware required to transfer the data failed. The project languished for a year. Frustrated, he called electronic engineer John Torode, a college friend then teaching at UC Berkeley, who crafted a "beautiful rat's nest of wirewraps, boards and cables" for the task.

Late one afternoon in the fall of 1974, together with John Torode, in the backyard workshop of his home at 781 Bayview Avenue, Pacific Grove, Gary "loaded my CP/M program from paper tape to the diskette and 'booted' CP/M from the diskette, and up came the prompt: *"

[...] By successfully booting a computer from a floppy disk drive, they had given birth to an operating system that, together with the microprocessor and the disk drive, would provide one of the key building blocks of the personal computer revolution... As Intel expressed no interest in CP/M, Gary was free to exploit the program on his own and sold the first license in 1975.

What happened next? Here are some highlights from the blog post:
  • "Reluctant to adapt the code for another controller, Gary worked with Glen Ewing to split out the hardware-dependent portions so they could be incorporated into a separate piece of code called the BIOS (Basic Input Output System)... The BIOS code allowed all Intel and compatible microprocessor-based computers from other manufacturers to run CP/M on any new hardware. This capability stimulated the rise of an independent software industry..."
  • "CP/M became accepted as a standard and was offered by most early personal computer vendors, including pioneers Altair, Amstrad, Kaypro, and Osborne..."
  • "[Gary's company] introduced operating systems with windowing capability and menu-driven user interfaces years before Apple and Microsoft... However, by the mid-1980s, in the struggle with the juggernaut created by the combined efforts of IBM and Microsoft, DRI had lost the basis of its operating systems business."
  • "Gary sold the company to Novell Inc. of Provo, Utah, in 1991. Ultimately, Novell closed the California operation and, in 1996, disposed of the assets to Caldera, Inc., which used DRI intellectual property assets to prevail in a lawsuit against Microsoft."
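The split described in the first bullet, putting hardware-dependent I/O behind a small fixed interface so the rest of the OS ports unchanged, is the same layering modern systems still use. Here is a hypothetical Python sketch of the pattern; the class and method names are invented for illustration (the real CP/M BIOS was an assembly-language jump table with entry points such as CONIN, CONOUT, READ, and WRITE):

```python
class BIOS:
    """Hardware-dependent layer: each machine vendor supplies its own."""
    def console_out(self, ch):
        raise NotImplementedError
    def read_sector(self, track, sector):
        raise NotImplementedError

class TeletypeBIOS(BIOS):
    """One hypothetical machine's implementation of the fixed interface."""
    def __init__(self):
        # A fake disk: (track, sector) -> raw bytes.
        self.disk = {(0, 1): b"hello from disk"}
    def console_out(self, ch):
        print(ch, end="")
    def read_sector(self, track, sector):
        return self.disk.get((track, sector), b"\x00" * 16)

class OS:
    """Hardware-independent layer: written once, runs atop any BIOS."""
    def __init__(self, bios):
        self.bios = bios
    def type_file(self, track, sector):
        for byte in self.bios.read_sector(track, sector):
            self.bios.console_out(chr(byte))

OS(TeletypeBIOS()).type_file(0, 1)  # prints: hello from disk
```

Porting to a new machine means rewriting only the BIOS subclass; the OS layer above it, like CP/M's BDOS and CCP, never changes.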

Crime

Lying to Investors? Co-Founder of Startup 'HeadSpin' Gets 18-Month Prison Sentence for Fraud (sfgate.com) 28

The co-founder of Silicon Valley-based software testing startup HeadSpin was sentenced Friday to 18 months in prison and fined $1 million, reports SFGate — for defrauding investors. Manish Lachwani pleaded guilty to two counts of wire fraud and a count of securities fraud in April 2023, after federal prosecutors accused him of, for years, lying to investors about HeadSpin's finances to raise more money. HeadSpin, founded in 2015, grew to a $1.1 billion valuation by 2020 with over $115 million in funding from investors including Google Ventures and Iconiq Capital... He had personally altered invoices, lied to the company accountant and sent slide decks with fraudulent information to investors, [according to the government's 2021 criminal complaint]...

Judge Charles Breyer, per the New York Times, rejected the argument from Lachwani's lawyer that because HeadSpin investors didn't end up losing money, he should receive a light sentence. The judge, who often oversees tech industry cases, reportedly said: "If you win, there are no serious consequences — that simply can't be the law." Still, the sentencing was far lighter than it could have been. The government's prosecuting attorneys had asked for a five-year prison term.

The New York Times reported in December that HeadSpin's financial statements had "often arrived months late, if at all, investors said in legal declarations," while the company's financial department "consisted of one external accountant who worked mostly from home using QuickBooks." And the company also had no human resources department or organizational chart... After Manish Lachwani founded the Silicon Valley software start-up HeadSpin in 2015, he inflated the company's revenue numbers by nearly fourfold and falsely claimed that firms including Apple and American Express were customers. He showed a profit where there were losses. He used HeadSpin's cash to make risky trades on tech stocks. And he created fake invoices to cover it all up.

What was especially breathtaking was how easily Mr. Lachwani, now 48, pulled all that off... [HeadSpin] had no chief financial officer, had no human resources department and was never audited. Mr. Lachwani used that lack of oversight to paint a rosier picture of HeadSpin's growth. Even though its main investors knew the start-up's financials were not accurate, according to Mr. Lachwani's lawyers, they chose to invest anyway, eventually propelling HeadSpin to a $1.1 billion valuation in 2020. When the investors pushed Mr. Lachwani to add a chief financial officer and share more details about the company's finances, he simply brushed them off. These details emerged this month in filings in U.S. District Court for the Northern District of California after Mr. Lachwani had pleaded guilty to three counts of fraud in April...

The absence of controls at HeadSpin is part of an increasingly noticeable pattern at Silicon Valley start-ups that have run into trouble. Over the past decade, investors in tech start-ups were so eager to back hot companies that many often overlooked reckless behavior and gave up key controls like board seats, all in the service of fast growth and disruption. Then when founders took the ethos of "fake it till you make it" too far, their investors were often unaware or helpless...

Now, amid a start-up shakeout, more frauds have started coming to light. The founder of the college aid company Frank has been charged, the internet connectivity start-up Cloudbrink has been sued, and the social media app IRL has been investigated and sued. Last month, Mike Rothenberg, a Silicon Valley investor, was found guilty on 21 counts of fraud and money laundering. On Monday, Trevor Milton, founder of the electric vehicle company Nikola, was sentenced to four years in prison for lying about Nikola's technological capabilities.

The Times points out that similarly, FTX only had a three-person board "with barely any influence over the company, tracked its finances on QuickBooks and used a small, little-known accounting firm." And that Theranos had no financial audits for six years.
Math

A Chess Formula Is Taking Over the World (theatlantic.com) 28

An anonymous reader quotes a report from The Atlantic: In October 2003, Mark Zuckerberg created his first viral site: not Facebook, but FaceMash. Then a college freshman, he hacked into Harvard's online dorm directories, gathered a massive collection of students' headshots, and used them to create a website on which Harvard students could rate classmates by their attractiveness, literally and figuratively head-to-head. The site, a mean-spirited prank recounted in the opening scene of The Social Network, got so much traction so quickly that Harvard shut down his internet access within hours. The math that powered FaceMash -- and, by extension, set Zuckerberg on the path to building the world's dominant social-media empire -- was reportedly, of all things, a formula for ranking chess players: the Elo system.

Fundamentally, what an Elo rating does is predict the outcome of chess matches by assigning every player a number that fluctuates based purely on performance. If you beat a slightly higher-ranked player, your rating goes up a little, but if you beat a much higher-ranked player, your rating goes up a lot (and theirs, conversely, goes down a lot). The higher the rating, the more matches you should win. That is what Elo was designed for, at least. FaceMash and Zuckerberg aside, people have deployed Elo ratings for many sports -- soccer, football, basketball -- and for domains as varied as dating, finance, and primatology. If something can be turned into a competition, it has probably been Elo-ed. Somehow, a simple chess algorithm has become an all-purpose tool for rating everything. In other words, when it comes to the preferred way to rate things, Elo ratings have the highest Elo rating. [...]
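The mechanics described above fit in a few lines of Python. This is a minimal sketch of the textbook Elo formulas, assuming the common K-factor of 32; real implementations tune K and add domain-specific refinements:

```python
def expected_score(rating_a, rating_b):
    # Modeled probability that player A beats player B.
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a, rating_b, score_a, k=32):
    # score_a: 1 for an A win, 0.5 for a draw, 0 for an A loss.
    ea = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1 - score_a) - (1 - ea))
    return new_a, new_b

# Beating a much higher-rated opponent moves ratings a lot;
# beating a lower-rated one barely moves them.
print(update(1500, 1700, 1))  # underdog wins: roughly +24 / -24
print(update(1700, 1500, 1))  # favorite wins: roughly +8 / -8
```

Note the update is zero-sum: whatever the winner gains, the loser gives up, which is part of why the formula transfers so cleanly to head-to-head sports.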

Elo ratings don't inherently have anything to do with chess. They're based on a simple mathematical formula that works just as well for any one-on-one, zero-sum competition -- which is to say, pretty much all sports. In 1997, a statistician named Bob Runyan adapted the formula to rank national soccer teams -- a project so successful that FIFA eventually adopted an Elo system for its official rankings. Not long after, the statistician Jeff Sagarin applied Elo to rank NFL teams outside their official league standings. Things really took off when the new ESPN-owned version of Nate Silver's 538 launched in 2014 and began making Elo ratings for many different sports. Some sports proved trickier than others. NBA basketball in particular exposed some of the system's shortcomings, Neil Paine, a stats-focused sportswriter who used to work at 538, told me. It consistently underrated heavyweight teams, for example, in large part because it struggled to account for the meaninglessness of much of the regular season and the fact that either team might not be trying all that hard to win a given game. The system assumed uniform motivation across every team and every game. Pretty much anything, it turns out, can be framed as a one-on-one, zero-sum game.
Arpad Emmerich Elo, creator of the Elo rating system, understood the limitations of his invention. "It is a measuring tool, not a device of reward or punishment," he once remarked. "It is a means to compare performances, assess relative strength, not a carrot waved before a rabbit, or a piece of candy given to a child for good behavior."
Google

Google To Employees: 'We Are a Workplace' 260

Google, once known for its unconventional approach to business, has taken a decisive step towards becoming a more traditional company by firing 28 employees who participated in protests against a $1.2 billion contract with the Israeli government. The move comes after sit-in demonstrations on Tuesday at Google offices in Silicon Valley and New York City, where employees opposed the company's support for Project Nimbus, a cloud computing contract they argue harms Palestinians in Gaza. Nine employees were arrested during the protests.

In a note to employees, CEO Sundar Pichai said, "We have a culture of vibrant, open discussion... But ultimately we are a workplace and our policies and expectations are clear: this is a business, and not a place to act in a way that disrupts coworkers or makes them feel unsafe, to attempt to use the company as a personal platform, or to fight over disruptive issues or debate politics."

Google also says that the Project Nimbus contract is "not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services."

Axios adds: Google prided itself from its early days on creating a university-like atmosphere for the elite engineers it hired. Dissent was encouraged in the belief that open discourse fostered innovation. "A lot of Google is organized around the fact that people still think they're in college when they work here," then-CEO Eric Schmidt told "In the Plex" author Steven Levy in the 2000s.

What worked for an organization with a few thousand employees is harder to maintain among nearly 200,000 workers. Generational shifts in political and social expectations also mean that Google's leadership and its rank-and-file aren't always aligned.
Education

Students Are Likely Writing Millions of Papers With AI 115

Amanda Hoover reports via Wired: Students have submitted more than 22 million papers that may have used generative AI in the past year, new data released by plagiarism detection company Turnitin shows. A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent of those papers may contain AI-written language in at least 20 percent of their content, with 3 percent of the total papers reviewed getting flagged for having 80 percent or more AI writing. Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.
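As a back-of-the-envelope check, the reported figures are mutually consistent:

```python
papers_reviewed = 200_000_000  # "more than 200 million papers"
frac_some_ai = 0.11            # AI language in at least 20% of content
frac_heavy_ai = 0.03           # 80% or more AI writing

# 22,000,000 -- matches the "more than 22 million papers" headline figure
print(f"{papers_reviewed * frac_some_ai:,.0f}")
# 6,000,000 papers flagged as mostly AI-written
print(f"{papers_reviewed * frac_heavy_ai:,.0f}")
```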
Education

Harvard Reinstates Standardized Testing Requirement (axios.com) 84

Harvard College is reinstating the requirement for standardized testing, reversing course on a pandemic-era policy that made such tests optional. It follows similar moves from elite universities like Yale, Dartmouth, and MIT. Axios reports: At Harvard, the mandate will be in place for students applying to begin school in fall 2025. Harvard had previously committed to a test-optional policy for applicants through the class of 2030, which would have started in fall 2026. Most students who applied since the pandemic began have submitted test scores despite the test-optional policy, the university said.

Reviewing SAT/ACT scores as part of a student's application packet helps an admissions decision be holistic, the university said in a statement. "Standardized tests are a means for all students, regardless of their background and life experience, to provide information that is predictive of success in college and beyond," Hopi Hoekstra, a Harvard dean, said in the statement. "Indeed, when students have the option of not submitting their test scores, they may choose to withhold information that, when interpreted by the admissions committee in the context of the local norms of their school, could have potentially helped their application."

Earth

March Marks Yet Another Record In Global Heat (reuters.com) 158

According to the European Union, Earth has reached its warmest March on record, capping a 10-month streak in which every month set a new temperature record. Reuters reports: Each of the last 10 months ranked as the world's hottest on record, compared with the corresponding month in previous years, the EU's Copernicus Climate Change Service (C3S) said in a monthly bulletin. The 12 months ending with March also ranked as the planet's hottest ever recorded 12-month period, C3S said. From April 2023 to March 2024, the global average temperature was 1.58 degrees Celsius above the average in the 1850-1900 pre-industrial period.

C3S' dataset goes back to 1940, which the scientists cross-checked with other data to confirm that last month was the hottest March since the pre-industrial period. Already, 2023 was the planet's hottest year in global records going back to 1850. El Nino peaked in December-January and is now weakening, which may help to break the hot streak toward the end of the year. But despite El Nino easing in March, the world's average sea surface temperature hit a record high, for any month on record, and marine air temperatures remained unusually high, C3S said.
"The main driver of the warming is fossil fuel emissions," said Friederike Otto, a climate scientist at Imperial College London's Grantham Institute. Failure to reduce these emissions will continue to drive the warming of the planet, resulting in more intense droughts, fires, heatwaves and heavy rainfall, Otto said.
Education

Professors Are Now Using AI to Grade Essays. Are There Ethical Concerns? (cnn.com) 102

A professor at Ithaca College runs part of each student's essay through ChatGPT, "asking the AI tool to critique and suggest how to improve the work," reports CNN. (The professor said "The best way to look at AI for grading is as a teaching assistant or research assistant who might do a first pass ... and it does a pretty good job at that.")

And the same professor then requires their class of 15 students to run their drafts through ChatGPT to see where they can make improvements, according to the article: Both teachers and students are using the new technology. A report by strategy consulting firm Tyton Partners, sponsored by plagiarism-detection platform Turnitin, found half of college students used AI tools in fall 2023. Faculty use was lower but growing: 22% of faculty members used AI in fall 2023, up from 9% in spring 2023.

Teachers are turning to AI tools and platforms — such as ChatGPT, Writable, Grammarly and EssayGrader — to assist with grading papers, writing feedback, developing lesson plans and creating assignments. They're also using the burgeoning tools to create quizzes, polls, videos and interactives to "up the ante" for what's expected in the classroom. Students, on the other hand, are leaning on tools such as ChatGPT and Microsoft Copilot — which is built into Word, PowerPoint and other products.

But while some schools have formed policies on how students can or can't use AI for schoolwork, many do not have guidelines for teachers. The practice of using AI for writing feedback or grading assignments also raises ethical considerations. And parents and students who are already spending hundreds of thousands of dollars on tuition may wonder if an endless feedback loop of AI-generated and AI-graded content in college is worth the time and money.

A professor of business ethics at the University of Virginia "suggested teachers use AI to look at certain metrics — such as structure, language use and grammar — and give a numerical score on those figures," according to the article. ("But teachers should then grade students' work themselves when looking for novelty, creativity and depth of insight.")

But a writer's workshop teacher at the University of Lynchburg in Virginia "also sees uploading a student's work to ChatGPT as a 'huge ethical consideration' and potentially a breach of their intellectual property. AI tools like ChatGPT use such entries to train their algorithms..."

Even the Ithaca professor acknowledged to CNN that "If teachers use it solely to grade, and the students are using it solely to produce a final product, it's not going to work."
Medicine

America's FDA Forced to Settle 'Groundless' Lawsuit Over Its Ivermectin Warnings (msn.com) 350

As a department of America's federal health agency, the Food and Drug Administration is responsible for public health rules, including prescription medicines. And the FDA "has not changed its position that currently available clinical trial data do not demonstrate that ivermectin is effective against COVID-19," the agency confirmed to CNN this week. "The agency has not authorized or approved ivermectin for use in preventing or treating COVID-19."

But there was also a lawsuit. In "one of its more popular pandemic-era social media campaigns," the agency tweeted out "You are not a horse. You are not a cow. Seriously, y'all. Stop it." The post attracted nearly 106,000 likes — and over 46,000 reposts, and was followed by another post on Instagram. "Stop it with the #ivermectin. It's not authorized for treating #COVID."

Los Angeles Times business columnist Michael Hiltzik writes that the posts triggered a "groundless" lawsuit: It was those latter two lines that exercised three physicians who had been prescribing ivermectin for patients. They sued the FDA in 2022, asserting that its advisory illegally interfered with the practice of medicine — specifically with their ability to continue prescribing the drug. A federal judge in Texas threw out their case, but the 5th Circuit Court of Appeals — the source of a series of chuckleheaded antigovernment rulings in recent years — reinstated it last year, returning it to the original judge for reconsideration.

Now the FDA has settled the case by agreeing to delete the horse post and two similar posts from its accounts on the social media platforms X, LinkedIn and Facebook. The agency also agreed to retire a consumer advisory titled "Why You Should Not Use Ivermectin to Treat or Prevent COVID-19." In defending its decision, the FDA said it "has chosen to resolve this lawsuit rather than continuing to litigate over statements that are between two and nearly four years old."

That sounds reasonable enough, but it's a major blunder. It leaves on the books the 5th Circuit's adverse ruling, in which a panel of three judges found that the FDA's advisory crossed the line from informing consumers, which they said is all right, to recommending that consumers take some action, which they said is not all right... That's a misinterpretation of the law and the FDA's actions, according to Dorit Rubinstein Reiss of UC College of the Law in San Francisco. "The FDA will seek to make recommendations against the misuse of products in the future, and having that decision on the books will be used to litigate against it," she observed after the settlement.

"A survey by Boston University and the University of Michigan estimated that Medicare and private insurers had wasted $130 million on ivermectin prescriptions for COVID in 2021 alone."
Space

Henrietta Leavitt, Cosmology Pioneer, Receives Belated Obituary (nytimes.com) 14

Longtime Slashdot reader necro81 writes: The New York Times has an occasional series called "Overlooked," whereby notable people whose deaths were overlooked at the time receive the obituary they deserve. Their latest installment eulogizes Henrietta Swan Leavitt, who passed away in 1921 at age 53. From the report: "In the early 20th century, when Henrietta Leavitt began studying photographs of distant stars at the Harvard College Observatory, astronomers had no idea how big the universe was... Leavitt, working as a poorly paid member of a team of mostly women [computers] who cataloged data for the scientists at the observatory, found a way to peer out into the great unknown and measure it."

Leavitt discovered the period-luminosity relationship for Cepheid variable stars. The relationship, now known as Leavitt's Law, is a crucial rung in the cosmic distance ladder, the methods for measuring the distance to stars, galaxies, and across the visible universe. From the report: "[Leavitt's Law] underpinned the research of other pioneering astronomers, including Edwin Hubble and Harlow Shapley, whose work in the years after World War I demolished long-held ideas about our solar system's place in the cosmos. Leavitt's Law has been used on the Hubble Telescope and the James Webb Space Telescope in making new calculations about the rate of expansion of the universe and the proximity of stars billions of light years from earth. 'She cracked into something that was not only impressive scientifically but shifted an entire paradigm of thinking...'"
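The mechanics of the distance ladder described above can be made concrete: Leavitt's Law converts a Cepheid's pulsation period into an absolute magnitude, and the distance modulus converts the gap between absolute and apparent magnitude into a distance. The calibration coefficients below are approximate values from one published V-band fit and are used here purely for illustration.

```python
# Illustrative sketch of Leavitt's Law in the cosmic distance ladder.
# Coefficients are an approximate V-band calibration, for demonstration only.
import math

def cepheid_absolute_magnitude(period_days: float) -> float:
    """Leavitt's Law: absolute magnitude from pulsation period."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Distance modulus: m - M = 5*log10(d) - 5, so d = 10**((m - M + 5)/5)."""
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# A Cepheid pulsing with a 10-day period, observed at apparent magnitude 10.0:
M = cepheid_absolute_magnitude(10.0)   # -4.05 with this calibration
d = distance_parsecs(10.0, M)          # roughly 6,500 parsecs
print(round(M, 2), round(d))
```

This is why the period-luminosity relationship matters: the period is directly observable, so it turns a star of unknown brightness into a "standard candle" whose distance follows from two measurements.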

Science

Memories Are Made By Breaking DNA - and Fixing It (nature.com) 23

When a long-term memory forms, some brain cells experience a rush of electrical activity so strong that it snaps their DNA. Then, an inflammatory response kicks in, repairing this damage and helping to cement the memory, a study in mice shows. Nature: The findings, published on 27 March in Nature, are "extremely exciting," says Li-Huei Tsai, a neurobiologist at the Massachusetts Institute of Technology in Cambridge who was not involved in the work. They contribute to the picture that forming memories is a "risky business," she says. Normally, breaks in both strands of the double helix DNA molecule are associated with diseases including cancer. But in this case, the DNA damage-and-repair cycle offers one explanation for how memories might form and last.

It also suggests a tantalizing possibility: this cycle might be faulty in people with neurodegenerative diseases such as Alzheimer's, causing a build-up of errors in a neuron's DNA, says study co-author Jelena Radulovic, a neuroscientist at the Albert Einstein College of Medicine in New York City. [...] To better understand the part these DNA breaks play in memory formation, Radulovic and her colleagues trained mice to associate a small electrical shock with a new environment, so that when the animals were once again put into that environment, they would 'remember' the experience and show signs of fear, such as freezing in place. Then the researchers examined gene activity in neurons in a brain area key to memory -- the hippocampus. They found that some genes responsible for inflammation were active in a set of neurons four days after training. Three weeks after training, the same genes were much less active.

The Courts

Apple Sues Former Employee For Leaking Journal App, Vision Pro Details (macrumors.com) 47

Apple has sued its former employee Andrew Aude for leaking information about more than a half-dozen Apple products and policies, including its then-unannounced Journal app and Vision Pro headset, product development policies, strategies for regulatory compliance, employee headcounts, and more. MacRumors reports: Aude joined Apple as an iOS software engineer in 2016, shortly after graduating college. He worked on optimizing battery performance, making him "privy to information regarding dozens of Apple's most sensitive projects," according to the complaint. In April 2023, for example, Apple alleges that Aude leaked a list of finalized features for the iPhone's Journal app to a journalist at The Wall Street Journal on a phone call. That same month, The Wall Street Journal's Aaron Tilley published a report titled "Apple Plans iPhone Journaling App in Expansion of Health Initiatives."

Using the encrypted messaging app Signal, Aude is said to have sent "over 1,400" messages to the same journalist, who Aude referred to as "Homeboy." He is also accused of sending "over 10,000 text messages" to another journalist at the website The Information, and he allegedly traveled "across the continent" to meet with her. Other leaks relate to the Vision Pro and other hardware: "As another example, an October 2020 screenshot on Mr. Aude's Apple-issued work iPhone shows that he disclosed Apple's development of products within the spatial computing space to a non-Apple employee. Mr. Aude made this disclosure even though Apple's development efforts were confidential and not known to the public. Over the following months, Mr. Aude disclosed additional Apple confidential information -- including information concerning unannounced products, and hardware information."

Apple believes that Aude's actions were "extensive and purposeful," with Aude allegedly admitting that he leaked information so he could "kill" products and features with which he took issue. The company alleges that his wrongful disclosures resulted in at least five news articles discussing the company's confidential and proprietary information. Apple says these public revelations impeded its ability to "surprise and delight" with its latest products. Apple said it learned of Aude's wrongful disclosures in late 2023, and the company fired him for his alleged misconduct in December of that year. [...] Apple is seeking both compensatory and punitive damages in an amount to be determined at trial, and it is also seeking other legal remedies.
The full complaint can be read here (PDF).
AI

Americans' Use of ChatGPT Is Ticking Up 25

Pew Research: It's been more than a year since ChatGPT's public debut set the tech world abuzz. And Americans' use of the chatbot is ticking up: 23% of U.S. adults say they have ever used it, according to a Pew Research Center survey conducted in February, up from 18% in July 2023. The February survey also asked Americans about several ways they might use ChatGPT, including for workplace tasks, for learning and for fun. While growing shares of Americans are using the chatbot for these purposes, the public is more wary than not of what the chatbot might tell them about the 2024 U.S. presidential election. About four-in-ten adults have not too much or no trust in the election information that comes from ChatGPT. By comparison, just 2% have a great deal or quite a bit of trust.

Most Americans still haven't used the chatbot, despite the uptick since our July 2023 survey on this topic. But some groups remain far more likely to have used it than others. Adults under 30 stand out: 43% of these young adults have used ChatGPT, up 10 percentage points since last summer. Use of the chatbot is also up slightly among those ages 30 to 49 and 50 to 64. Still, these groups remain less likely than their younger peers to have used the technology. Just 6% of Americans 65 and up have used ChatGPT. Highly educated adults are most likely to have used ChatGPT: 37% of those with a postgraduate or other advanced degree have done so, up 8 points since July 2023. This group is more likely to have used ChatGPT than those with a bachelor's degree only (29%), some college experience (23%) or a high school diploma or less (12%).
