The Courts

Court Rules TCL's 'QLED' TVs Aren't Truly QLED (techradar.com) 43

A German court ruled that TCL misled consumers by marketing certain TVs as "QLED" when they "do not deliver the color reproduction expected from QLED TVs." It has ordered the company to stop advertising or selling those models in Germany. TechRadar reports: The case was filed by Samsung, which claimed that TCL was running deceptive advertising, and more court cases on the same topic are coming in other countries, including the US. The lawsuits all make the same claim: that what TCL calls a QLED isn't a QLED as it's commonly understood, and that consumers are being mis-sold TVs as a result. The court found that TCL's quantum dot TVs, such as the QLED870 series available in Germany, didn't deliver the characteristics of a quantum dot LED, and that consumers were being misled as a result.

The tests were commissioned by Seoul chemicals company Hansol Chemical (which, it's worth noting, works with Samsung, a key TCL rival, and which heavily promoted the results of these tests alongside launching the court case) and carried out by Geneva's SGS and the UK's Intertek. According to ET News (via Google Translate), "no indium (In) or cadmium (Cd) was detected in three TCL QD TV models. Indium and cadmium are essential materials that cannot be omitted for QD implementation... if neither is present, QD technology cannot be said to have been applied." You can see the test results here.

TCL disputed the findings -- "The QD content may vary depending on the supplier, but it definitely contains cadmium," it responded -- and published its own tests, including a test by SGS, the same firm that conducted tests for Hansol. The results contradicted Hansol Chemical's tests, but those tests used a different methodology: where TCL's tests focused on TCL's quantum dot films, Hansol's commissioned tests were on finished TCL TVs. [...] Hansol Chemical has filed a complaint against TCL with the US Federal Trade Commission, alleging false advertising, and TCL is also facing class action lawsuits in several US states making the same claim. TCL isn't alone here: Hisense has also been targeted in the US.

AI

China Drafts World's Strictest Rules To End AI-Encouraged Suicide, Violence (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence. China's Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or "other means" to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the "planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics" at a time when companion bot usage is rising globally.

[...] Proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. The rules also dictate that all minor and elderly users must provide the contact information for a guardian when they register -- the guardian would be notified if suicide or self-harm is discussed. Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as attempts to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or instigation of a crime, as well as from slandering or insulting users. Also banned are what are termed "emotional traps" -- chatbots would additionally be prevented from misleading users into making "unreasonable decisions," a translation of the rules indicates.

Perhaps most troubling to AI developers, China's rules would also put an end to building chatbots that "induce addiction and dependence as design goals." [...] AI developers will also likely balk at annual safety tests and audits that China wants to require for any service or products exceeding 1 million registered users or more than 100,000 monthly active users. Those audits would log user complaints, which may multiply if the rules pass, as China also plans to require AI developers to make it easier to report complaints and feedback. Should any AI company fail to follow the rules, app stores could be ordered to terminate access to their chatbots in China. That could mess with AI firms' hopes for global dominance, as China's market is key to promoting companion bots, Business Research Insights reported earlier this month.

Space

What's the Best Way for Humans to Explore Space? (noemamag.com) 95

Should we leave space exploration to robots — or prioritize human spaceflight, making us a multiplanetary species?

Harvard professor Robin Wordsworth, who's researched the evolution and habitability of terrestrial-type planets, shares his thoughts: In space, as on Earth, industrial structures degrade with time, and a truly sustainable life support system must have the capability to rebuild and recycle them. We've only partially solved this problem on Earth, which is why industrial civilization is currently causing serious environmental damage. There are no inherent physical limitations to life in the solar system beyond Earth — both elemental building blocks and energy from the sun are abundant — but technological society, which developed as an outgrowth of the biosphere, cannot yet exist independently of it. The challenge of building and maintaining robust life-support systems for humans beyond Earth is a key reason why a machine-dominated approach to space exploration is so appealing...

However, it's notable that machines in space have not yet accomplished a basic task that biology performs continuously on Earth: acquiring raw materials and utilizing them for self-repair and growth. To many, this critical distinction is what separates living from non-living systems... The most advanced designs for self-assembling robots today begin with small subcomponents that must be manufactured separately beforehand. Overall, industrial technology remains Earth-centric in many important ways. Supply chains for electronic components are long and complex, and many raw materials are hard to source off-world... If we view the future expansion of life into space in a similar way as the emergence of complex life on land in the Paleozoic era, we can predict that new forms will emerge, shaped by their changed environment, while many historical characteristics will be preserved. For machine technology in the near term, evolution in a more life-like direction seems likely, with greater focus on regenerative parts and recycling, as well as increasingly sophisticated self-assembly capabilities. The inherent cost of transporting material out of Earth's gravity well will provide a particularly strong incentive for this to happen.

If building space habitats is hard and machine technology is gradually developing more life-like capabilities, does this mean we humans might as well remain Earth-bound forever? This feels hard to accept because exploration is an intrinsic part of the human spirit... To me, the eventual extension of the entire biosphere beyond Earth, rather than either just robots or humans surrounded by mechanical life-support systems, seems like the most interesting and inspiring future possibility. Initially, this could take the form of enclosed habitats capable of supporting closed-loop ecosystems, on the moon, Mars or water-rich asteroids, in the mold of Biosphere 2. Habitats would be manufactured industrially or grown organically from locally available materials. Over time, technological advances and adaptation, whether natural or guided, would allow the spread of life to an increasingly wide range of locations in the solar system.

The article ponders the benefits (and the history) of both approaches — with some fascinating insights along the way.

"If genuine alien life is out there somewhere, we'll have a much better chance of comprehending it once we have direct experience of sustaining life beyond our home planet."

Google

Google AI Fabricates Explanations For Nonexistent Idioms (wired.com) 99

Google's search AI is confidently generating explanations for nonexistent idioms, once again revealing fundamental flaws in large language models. Users discovered that entering any made-up phrase plus "meaning" triggers AI Overviews that present fabricated etymologies with unwarranted authority.

When queried about phrases like "a loose dog won't surf," Google's system produces detailed, plausible-sounding explanations rather than acknowledging these expressions don't exist. The system occasionally includes reference links, further enhancing the false impression of legitimacy.

Computer scientist Ziang Xiao from Johns Hopkins University attributes this behavior to two key LLM characteristics: prediction-based text generation and people-pleasing tendencies. "The prediction of the next word is based on its vast training data," Xiao explained. "However, in many cases, the next coherent word does not lead us to the right answer."
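Xiao's first point can be illustrated with a toy sketch (hypothetical code, in no way how Google's models actually work): even a trivial next-word predictor built from counts will fluently continue any prompt, because "that phrase doesn't exist" is never the statistically likely next word.

```python
from collections import Counter

# Toy "training corpus" (made-up data, for illustration only). A real LLM
# has billions of parameters, but the core behavior is the same: always
# emit a likely continuation, never a refusal.
corpus = (
    "the idiom means that you should wait . "
    "the phrase means that you can't rush things . "
    "the saying means that patience pays off ."
).split()

# Count word bigrams: which word tends to follow which in training.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(word):
    """Return the most frequent continuation seen in training, if any."""
    candidates = [(count, b) for (a, b), count in bigrams.items() if a == word]
    return max(candidates)[1] if candidates else None

def explain(last_word, steps=4):
    """Greedily continue a prompt -- even a nonsense one."""
    out = []
    for _ in range(steps):
        last_word = next_word(last_word)
        if last_word is None:
            break
        out.append(last_word)
    return " ".join(out)

# Whatever made-up idiom precedes "means", the model produces something
# plausible-sounding rather than flagging that the phrase doesn't exist:
print(explain("means"))  # → "that you should wait"
```

The sketch always answers, because answering is all it can do; the coherent next word, as Xiao says, does not lead to the right answer.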

Math

Leaving Money on the Table (nber.org) 54

Abstract of a paper on NBER: There is much disagreement about the extent to which financial incentives motivate study participants. We elicit preferences for being paid for completing a survey, including a one-in-twenty chance of winning a $100 electronic gift card, a guaranteed electronic gift card with the same expected value, and an option to refuse payment. More than twice as many participants chose the lottery as chose the guaranteed payment. Given that most people are risk averse, this pattern suggests that factors beyond risk preferences -- such as hassle costs -- influenced their decision-making. Almost 20 percent of participants actively refused payment, demonstrating low monetary motivation. We find both systematic and unobserved heterogeneity in the characteristics of who turned down payment. The propensity to refuse payment is more than four times as large among individuals 50 and older compared to younger individuals, suggesting a tradeoff between financially motivating participants and obtaining a representative sample. Overall, our results suggest that modest electronic gift card payments violate key requirements of Vernon Smith's induced value theory.
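The risk-aversion inference in the abstract rests on a standard expected-utility comparison. A minimal sketch (the square-root utility is a textbook choice for a risk-averse agent, not something from the paper):

```python
import math

p_win, prize = 1 / 20, 100.0
guaranteed = p_win * prize      # $5: the guaranteed card has the same expected value

# A risk-averse agent has concave utility; u(x) = sqrt(x) is a standard example.
def u(x):
    return math.sqrt(x)

lottery_utility = p_win * u(prize) + (1 - p_win) * u(0.0)   # 0.05 * 10 = 0.5
guaranteed_utility = u(guaranteed)                           # sqrt(5), about 2.24

# Any such agent should strictly prefer the sure $5, so the observed
# preference for the lottery points to something other than risk
# preferences (e.g., hassle costs) driving participants' choices.
assert guaranteed_utility > lottery_utility
```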

Science

Scientists Create 'Woolly Mice' (npr.org) 78

EmagGeek shares a report: Scientists have genetically engineered mice with some key characteristics of an extinct animal that was far larger -- the woolly mammoth. This "woolly mouse" marks an important step toward achieving the researchers' ultimate goal -- bringing a woolly mammoth-like creature back from extinction, they say.

"For us, it's an incredibly big deal," says Beth Shapiro, chief science officer at Colossal Biosciences, a Dallas company trying to resurrect the woolly mammoth and other extinct species. The company announced the creation of the woolly mice Tuesday in a news release and posted a scientific paper online detailing the achievement. Scientists implanted genetically modified embryos in female lab mice that gave birth to the first of the woolly pups in October.

My editorial: One has to wonder why it is necessary or even a great idea to bring back species that nature long ago determined were a failure.

Programming

Rust Developer Survey Finds Increasing Usage, Especially on Linux (rust-lang.org) 26

This year's "State of Rust" survey was completed by 7,310 Rust developers. DevClass note some key findings: When asked about their biggest worries for Rust's future, 45.5 percent cited "not enough usage in the tech industry," up from 42.5 percent last year, just ahead of the 45.2 percent who cited complexity as a concern... Only 18.6 percent declared themselves "not worried," though this is a slight improvement on 17.8 percent in 2023...

Another question asks whether respondents are using Rust at work. 38.2 percent claimed to use it for most of their coding [up from 34% in 2023], and 13.4 percent a few times a week, accounting for just over half of responses. At the organization level there is a similar pattern. 45.5 percent of organizations represented by respondents make "non-trivial use of Rust," up from 38.7 percent last year.

More details from I Programmer: On the up are "Using Rust helps us achieve our goals", now 82% compared to 72% in 2022; "We're likely to use Rust again in the future", up 3% to 78%; and "Using Rust has been worth the cost of adoption". Going down are "Adopting Rust has been challenging", now 34.5% compared to 38.5% in 2022; and "Overall adopting Rust has slowed down our team", down by over 2% to 7%.

"According to the survey, organizations primarily choose Rust for building correct and bug-free software (87.1%), performance characteristics (84.5%), security and safety properties (74.8%), and development enjoyment (71.2%)," writes The New Stack: Rust seems to be especially popular for creating server backends (53.4%), web and networking services, cloud technologies and WebAssembly, the report said. It also seems to be gaining more traction for embedded use cases... Regarding the preferred development environment, Linux remains the dominant development platform (73.7%).

However, although VS Code remains the leading editor, its usage dropped five percentage points, from 61.7% to 56.7%, while the Zed editor gained notable traction, from 0.7% to 8.9%. Also, "nine out of 10 Rust developers use the current stable version, suggesting strong confidence in the language's stability," the report said...

Overall, 82% of respondents report that Rust helped their company achieve its goals, and daily Rust usage increased to 53% (up four percentage points from 2023). When asked why they use Rust at work, 47% of respondents cited a need for precise control over their software, which is up from 37% when the question was asked two years ago.

AI

Arrested by AI: When Police Ignored Standards After AI Facial-Recognition Matches (msn.com) 55

A county transit police detective fed a poor-quality image to an AI-powered facial recognition program, remembers the Washington Post, leading to the arrest of "Christopher Gatlin, a 29-year-old father of four who had no apparent ties to the crime scene nor a history of violent offenses." He was unable to post the $75,000 cash bond required, and "jailed for a crime he says he didn't commit, it would take Gatlin more than two years to clear his name." A Washington Post investigation into police use of facial recognition software found that law enforcement agencies across the nation are using the artificial intelligence tools in a way they were never intended to be used: as a shortcut to finding and arresting suspects without other evidence... The Post reviewed documents from 23 police departments where detailed records about facial recognition use are available and found that 15 departments spanning 12 states arrested suspects identified through AI matches without any independent evidence connecting them to the crime — in most cases contradicting their own internal policies requiring officers to corroborate all leads found through AI. Some law enforcement officers using the technology appeared to abandon traditional policing standards and treat software suggestions as facts, The Post found. One police report referred to an uncorroborated AI result as a "100% match." Another said police used the software to "immediately and unquestionably" identify a suspected thief.

Gatlin is one of at least eight people wrongfully arrested in the United States after being identified through facial recognition... All of the cases were eventually dismissed. Police probably could have eliminated most of the people as suspects before their arrest through basic police work, such as checking alibis, comparing tattoos, or, in one case, following DNA and fingerprint evidence left at the scene.

Some statistics from the article about the eight wrongfully-arrested people:
  • In six cases police failed to check alibis
  • In two cases police ignored evidence that contradicted their theory
  • In five cases police failed to collect key pieces of evidence
  • In three cases police ignored suspects' physical characteristics
  • In six cases police relied on problematic witness statements

The article provides two examples of police departments forced to pay $300,000 settlements after wrongful arrests caused by AI mismatches. But "In interviews with The Post, all eight people known to have been wrongly arrested said the experience had left permanent scars: lost jobs, damaged relationships, missed payments on car and home loans. Some said they had to send their children to counseling to work through the trauma of watching their mother or father get arrested on the front lawn.

"Most said they also developed a fear of police."

AI

Can AI Developers Be Held Liable for Negligence? (lawfaremedia.org) 123

Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems: To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems...

I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?

The article suggests two possibilities. Classifying AI developers as ordinary employees leaves employers then sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims.") But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies." AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin.
Thanks to long-time Slashdot reader david.emery for sharing the article.

Music

Suno & Udio To RIAA: Your Music Is Copyrighted, You Can't Copyright Styles (torrentfreak.com) 85

AI music generators Suno and Udio responded to the lawsuits filed by the major recording labels, arguing that their platforms are tools for making new, original music that "didn't and often couldn't previously exist."

"Those genres and styles -- the recognizable sounds of opera, or jazz, or rap music -- are not something that anyone owns," the companies said. "Our intellectual property laws have always been carefully calibrated to avoid allowing anyone to monopolize a form of artistic expression, whether a sonnet or a pop song. IP rights can attach to a particular recorded rendition of a song in one of those genres or styles. But not to the genre or style itself." TorrentFreak reports: "[The labels] frame their concern as one about 'copies' of their recordings made in the process of developing the technology -- that is, copies never heard or seen by anyone, made solely to analyze the sonic and stylistic patterns of the universe of pre-existing musical expression. But what the major record labels really don't want is competition." The labels' position is that any competition must be legal, and the AI companies state quite clearly that the law permits the use of copyrighted works in these circumstances. Suno and Udio also make it clear that snippets of copyrighted music aren't stored as a library of pre-existing content in the neural networks of their AI models, "outputting a collage of 'samples' stitched together from existing recordings" when prompted by users.

"[The neural networks were] constructed by showing the program tens of millions of instances of different kinds of recordings," Suno explains. "From analyzing their constitutive elements, the model derived a staggeringly complex collection of statistical insights about the auditory characteristics of those recordings -- what types of sounds tend to appear in which kinds of music; what the shape of a pop song tends to look like; how the drum beat typically varies from country to rock to hip-hop; what the guitar tone tends to sound like in those different genres; and so on." These models are vast stores, the defendants say, not of copyrighted music but of information about what musical styles consist of, and it's from that information that new music is made.

Most copyright lawsuits in the music industry are about reproduction and public distribution of identified copyright works, but that's certainly not the case here. "The Complaint explicitly disavows any contention that any output ever generated by Udio has infringed their rights. While it includes a variety of examples of outputs that allegedly resemble certain pre-existing songs, the Complaint goes out of its way to say that it is not alleging that those outputs constitute actionable copyright infringement." With Udio declaring that, as a matter of law, "that key point makes all the difference," Suno's conclusion is served raw. "That concession will ultimately prove fatal to Plaintiffs' claims. It is fair use under copyright law to make a copy of a protected work as part of a back-end technological process, invisible to the public, in the service of creating an ultimately non-infringing new product." Noting that Congress enacted the first copyright law in 1791, Suno says that in the 233 years since, not a single case has ever reached a contrary conclusion.

In addition to addressing allegations unique to their individual cases, the AI companies accuse the labels of various types of anti-competitive behavior: imposing conditions to prevent streaming services from obtaining licensed music from smaller labels at lower rates, seeking to impose a "no AI" policy on licensees, and claims that they "may have responded to outreach from potential commercial counterparties by engaging in one or more concerted refusals to deal." The defendants say this type of behavior is fueled by the labels' dominant control of copyrighted works and, by extension, the overall market. Here, however, ownership of copyrighted music is trumped by the existence and knowledge of musical styles, over which nobody can claim ownership or seek control. "No one owns musical styles. Developing a tool to empower many more people to create music, by scrupulously analyzing what the building blocks of different styles consist of, is a quintessential fair use under longstanding and unbroken copyright doctrine. Plaintiffs' contrary vision is fundamentally inconsistent with the law and its underlying values."
You can read Suno and Udio's answers to the RIAA's lawsuits here (PDF) and here (PDF).

The Internet

Quantum Internet Draws Near Thanks To Entangled Memory Breakthroughs (newscientist.com) 47

An anonymous reader quotes a report from New Scientist: Efforts to build a global quantum internet have received a boost from two developments in quantum information storage that could one day make it possible to communicate securely across hundreds or thousands of kilometers. The internet as it exists today involves sending strings of digital bits, or 0s and 1s, in the form of electrical or optical signals, to transmit information. A quantum internet, which could be used to send unhackable communications or link up quantum computers, would use quantum bits instead. These rely on a quantum property called entanglement, a phenomenon in which particles can be linked and measuring one particle instantly influences the state of another, no matter how far apart they are. Sending these entangled quantum bits, or qubits, over very long distances, requires a quantum repeater, a piece of hardware that can store the entangled state in memory and reproduce it to transmit it further down the line. These would have to be placed at various points on a long-distance network to ensure a signal gets from A to B without being degraded.

Quantum repeaters don't yet exist, but two groups of researchers have now demonstrated long-lasting entanglement memory in quantum networks over tens of kilometers, which are the key characteristics needed for such a device. Can Knaut at Harvard University and his colleagues set up a quantum network consisting of two nodes separated by a loop of optical fibre that spans 35 kilometers across the city of Boston. Each node contains both a communication qubit, used to transmit information, and a memory qubit, which can store the quantum state for up to a second. "Our experiment really put us in a position where we're really close to working on a quantum repeater demonstration," says Knaut. To set up the link, Knaut and his team entangled their first node, which contains a type of diamond with an atom-sized hole in it, with a photon that they sent to their second node, which contains a similar diamond. When the photon arrives at the second diamond, it becomes entangled with both nodes. The diamonds are able to store this state for a second. A fully functioning quantum repeater using similar technology could be demonstrated in the next couple of years, says Knaut, which would enable quantum networks connecting cities or countries.

In separate work, Xiao-Hui Bao at the University of Science and Technology of China and his colleagues entangled three nodes, each separated by around 10 kilometers in the city of Hefei. Bao and his team's nodes use supercooled clouds of hundreds of millions of rubidium atoms to generate entangled photons, which they then sent across the three nodes. The central node is able to coordinate these photons to link the atom clouds, which act as a form of memory. The key advance for Bao and his team's network is matching the frequency of the photons meeting at the central node, which will be crucial for quantum repeaters connecting different nodes. While the storage time, at 100 microseconds, was shorter than that of Knaut's team, it is still long enough to perform useful operations on the transmitted information.

News

What's in a Name? The Battle of Baby T. Rex and Nanotyrannus. (nytimes.com) 20

A dinosaur fossil listed for sale in London for $20 million embodies one of the most heated debates in paleontology. From a report: When fossil hunters unearthed the remains of a dinosaur from the hills of eastern Montana five years ago, the bones carried several key characteristics of a Tyrannosaurus rex: a pair of giant legs for walking, a much smaller pair of arms for slashing prey, and a long tail stretching behind it. But unlike a full-grown T. rex, which would be about the size of a city bus, this dinosaur was more like the size of a pickup truck. The specimen, which is now listed for sale for $20 million at an art gallery in London, raises a question that has come to obsess paleontologists: Is it simply a young T. rex that died before reaching maturity, or does it represent a different but related species of dinosaur known as a Nanotyrannus?

The dispute has produced reams of scientific research and decades of debate, polarizing paleontologists along the way. Now, with dinosaur fossils increasingly fetching eye-popping prices at auction, the once-esoteric dispute has begun to ripple through auction houses and galleries, where some see the T. rex name as a valuable brand that can more easily command high prices. "It's ultimately a quite in-the-weeds question of the taxonomy and the classification of one very particular type of dinosaur," said Steve Brusatte, a paleontologist at the University of Edinburgh. "However, it involves T. rex, and the debate always gets a little bit more ferocious when the king of dinosaurs is involved."

On the internet, juvenile T. rex versus Nanotyrannus has become something of a meme, providing fuel for jokes on niche social media channels. ("I won't believe in Nanotyrannus until it shows up at my own door and devours me," a paleontology student with the handle "TheDinoBuff" joked recently on the social media site X.) The gallery selling the specimen discovered in Montana -- which is known as Chomper -- was faced with a choice. Call it a juvenile T. rex? Label it a Nanotyrannus? Or embrace the ambiguity of an unresolved scientific debate? The David Aaron gallery in London went with calling it a "rare juvenile Tyrannosaurus rex skeleton." It cited an influential 2020 paper on the subject led by Holly N. Woodward, which used an analysis of growth rings within bone samples from two disputed specimens -- which are estimated to have been similarly sized to Chomper -- to argue that they were juveniles nearing growth spurts.

Power

'What Drives This Madness On Small Modular Nuclear Reactors?' (cleantechnica.com) 331

Slashdot reader XXongo writes: Nuclear power plants have historically been built at gigawatt scale. Recently, however, multiple projects have emerged to build Small Modular Reactors ("SMRs"), funded both by billionaires and by the U.S. Department of Energy.

Recently one of the players farthest ahead in development, NuScale Power, canceled its headline project, but many other projects continue. In a lengthy analysis, Michael Barnard thinks that's crazy, and attributes the drive toward small reactors to "a tangled web that includes Bill Gates, Silicon Valley, desperate coal towns, desperate nuclear towns, the inability of the USA to build big infrastructure, the U.S. Department of Energy's budget, magical thinking and more." Due to thermal inefficiencies, small reactors are more expensive per unit of power generated, he points out, and the SMR projects ignore most of the lessons of the field's history about both the scale of reactors needed for commercial success and the conditions needed for success.

They are relying on Wright's Law, which holds that each doubling of cumulative production brings the cost per item down by 20% to 27%. But Barnard points out that the number of reactors needed to achieve enough economy of scale for SMRs to make economic sense is unrealistically optimistic. He concludes that only government programs can meet the conditions for successful deployment of nuclear power.
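Wright's Law has a closed form: if each doubling of cumulative output cuts unit cost by a learning rate r, the nth unit costs C1 * n^(-b), where 2^(-b) = 1 - r. A minimal sketch with hypothetical numbers (the $1B first unit and 20% learning rate are illustrative assumptions, not Barnard's figures) shows how slowly costs fall:

```python
import math

def unit_cost(first_unit_cost, n, learning_rate):
    """Wright's Law: cost of the nth unit, where each doubling of
    cumulative production cuts unit cost by `learning_rate`."""
    b = -math.log2(1 - learning_rate)   # experience exponent
    return first_unit_cost * n ** (-b)

# Hypothetical: a first-of-a-kind SMR at $1B with a 20% learning rate.
first = 1_000_000_000
for n in (1, 2, 4, 8, 16, 32):
    print(f"unit {n:2d}: ${unit_cost(first, n, 0.20):,.0f}")

# At 20%, cost halves only after about three doublings (8 units), and
# reaching ~10% of the first-unit cost takes roughly ten doublings --
# over a thousand reactors -- which is the heart of the volume objection.
```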

At one point Barnard dismisses the premise that "a bunch of lab technologies that have been around for decades that depend on uranium from Russia, that don't have the physical characteristics for cheap nuclear generation and don't have the conditions for success for nuclear generation will be the saviours of the nuclear industry and a key wedge in fighting climate change...

"I like nuclear generation. I know it's safe enough. I'm not concerned about radiation... I just know that it doesn't have the conditions for success to be built and scaled economically in the 21st Century, and wind, water, solar, transmission and storage do."
Facebook

Meta Designed Platforms To Get Children Addicted, Court Documents Allege (theguardian.com) 64

An anonymous reader quotes a report from The Guardian: Instagram and Facebook parent company Meta purposefully engineered its platforms to addict children and knowingly allowed underage users to hold accounts, according to a newly unsealed legal complaint. The complaint is a key part of a lawsuit filed against Meta by the attorneys general of 33 states in late October and was originally redacted. It alleges the social media company knew -- but never disclosed -- it had received millions of complaints about underage users on Instagram but only disabled a fraction of those accounts. The large number of underage users was an "open secret" at the company, the suit alleges, citing internal company documents.

In one example, the lawsuit cites an internal email thread in which employees discuss why a 12-year-old girl's four accounts were not deleted following complaints from the girl's mother stating her daughter was 12 years old and requesting that the accounts be taken down. The employees concluded that "the accounts were ignored" in part because representatives of Meta "couldn't tell for sure the user was underage." The complaint said that in 2021, Meta received over 402,000 reports of under-13 users on Instagram but that 164,000 -- far fewer than half of the reported accounts -- were "disabled for potentially being under the age of 13" that year. The complaint noted that at times Meta has had a backlog of up to 2.5 million accounts of younger children awaiting action. The complaint alleges this and other incidents violate the Children's Online Privacy Protection Act, which requires that social media companies provide notice and get parental consent before collecting data from children. The lawsuit also focuses on longstanding assertions that Meta knowingly created products that were addictive and harmful to children, brought into sharp focus by whistleblower Frances Haugen, who revealed that internal studies showed platforms like Instagram led children to anorexia-related content. Haugen also stated the company intentionally targets children under the age of 18.

Company documents cited in the complaint described several Meta officials acknowledging the company designed its products to exploit shortcomings in youthful psychology, including a May 2020 internal presentation called "teen fundamentals" which highlighted certain vulnerabilities of the young brain that could be exploited by product development. The presentation discussed teen brains' relative immaturity, and teenagers' tendency to be driven by "emotion, the intrigue of novelty and reward," and asked how these characteristics could "manifest ... in product usage." [...] One Facebook safety executive alluded to the possibility that cracking down on younger users might hurt the company's business in a 2019 email. But a year later, the same executive expressed frustration that while Facebook readily studied the usage of underage users for business reasons, it didn't show the same enthusiasm for ways to identify younger kids and remove them from its platforms.

Science

Race Cannot Be Used To Predict Heart Disease, Scientists Say (nytimes.com) 97

Doctors have long relied on a few key patient characteristics to assess risk of a heart attack or stroke, using a calculus that considers blood pressure, cholesterol, smoking and diabetes status, as well as demographics: age, sex and race. Now, the American Heart Association is taking race out of the equation. From a report: The overhaul of the widely used cardiac-risk algorithm is an acknowledgment that, unlike sex or age, race identification in and of itself is not a biological risk factor. The scientists who modified the algorithm decided from the start that race itself did not belong in clinical tools used to guide medical decision making, even though race might serve as a proxy for certain social circumstances, genetic predispositions or environmental exposures that raise the risk of cardiovascular disease.

The revision comes amid rising concern about health equity and racial bias within the U.S. health care system, and is part of a broader trend toward removing race from a variety of clinical algorithms. "We should not be using race to inform whether someone gets a treatment or doesn't get a treatment," said Dr. Sadiya Khan, a preventive cardiologist at Northwestern University Feinberg School of Medicine, who chaired the statement writing committee for the American Heart Association, or A.H.A. The statement was published on Friday [PDF] in the association's journal, Circulation. An online calculator using the new algorithm, called PREVENT, is still in development.

Science

Genetics Makes Some People More Likely To Participate In Genetic Studies (arstechnica.com) 47

An anonymous reader quotes a report from Ars Technica: Stefania Benonisdottir and Augustine Kong at Oxford's Big Data Institute have just demonstrated that we can determine if genetic studies are biased using nothing but the genes of the participants. You may wonder how this was done -- quite reasonably, since we can't very well compare the genes of participants to those of non-participants. The analysis done by Kong and his student relies on the key idea that a genetic sequence that occurs more frequently in participants than in nonparticipants will also occur more frequently in the genetic regions that are shared by two related participants. By contrast, a bit of DNA that is common in the population will show up frequently in the study, but it will still only have a 50/50 chance of showing up in the child of someone who carried a copy. If a bit of DNA makes people more likely to enroll in genetic studies, it will be more common both in the overall data and among closely related family members.

So they checked the genetic sequences shared between first-degree relatives -- either parents and children or siblings (but not twins) -- in the UK Biobank. [...] This analysis used genetic data from about 500,000 people collected between 2006 and 2010. It examined roughly 500,000 genetic regions from around 20,000 pairs of first-degree relatives. They didn't find (or look for) "a gene" that correlates with participation in a study. Rather, they compared all of the shared and not-shared genetic sequences among the pairs of first-degree relatives enrolled in the study and analyzed their relative frequencies according to the principles above. This analysis allowed them to calculate a polygenic score, a summary of how all of the genetic sequences in aggregate contribute to a trait. They deduced that genetics is positively associated with education level, with being invited to participate in further studies, and with accepting that invitation. Genetics was also associated with low BMI. Education level and BMI are both covariates that are often controlled for when using UK Biobank data. But now, no external information is needed; the ascertainment bias can be determined not from looking at other things about the participants' lives, but from their genes.
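The core enrichment effect is simple to simulate. This is a toy illustration, not the authors' method, and every number in it (allele frequency, participation rates, effect size) is invented: if an allele raises the probability of enrolling, it ends up over-represented among participants relative to the general population.

```python
# Toy ascertainment-bias simulation: a "joiner" allele nudges up the
# probability of enrolling, so its frequency among participants exceeds
# its frequency in the population. All parameters are made up.
import random

random.seed(0)

POP_FREQ = 0.30           # population frequency of the hypothetical allele
BASE_RATE = 0.05          # baseline probability of enrolling in the study
EFFECT = 0.03             # extra enrollment probability per allele copy

# Each person carries 0, 1, or 2 copies (one draw per parental chromosome).
population = [(random.random() < POP_FREQ) + (random.random() < POP_FREQ)
              for _ in range(200_000)]
participants = [c for c in population if random.random() < BASE_RATE + EFFECT * c]

pop_freq = sum(population) / (2 * len(population))
part_freq = sum(participants) / (2 * len(participants))
print(f"allele frequency: population {pop_freq:.3f}, participants {part_freq:.3f}")
```

The study's insight is that this enrichment can be detected without any population baseline, by comparing shared versus not-shared segments among related participants; the simulation above only shows why the bias exists in the first place.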

Benonisdottir, the first author of the work, explained in an email: "It has previously been reported by others that the UK Biobank is not representative with regard to many traits, including BMI and educational attainment. Thus, the fact that these traits are associated with the participation polygenic score, which does not use any information about EA and BMI but is constructed with weights from analysis using our new method of comparing shared and not-shared alleles of participating first-degree relatives, validates that our method is capturing real information about participation." This validation is essential, since their method is so new. The authors of this study propose that their methodology could be used to look for ascertainment bias using only genetic data and that taking participation data into account could help study outcomes become more accurate. They conclude by noting that "participation" is not thus just a result of someone's characteristics and traits; rather, the propensity to participate is a trait in its own right, and one with a genetic component. Being a joiner is hereditary.

AI

AI Tool Shows Promise For Treating Brain Cancer, Study Finds (bloomberg.com) 13

An artificial-intelligence tool has shown promise at helping doctors fight aggressive brain tumors by identifying characteristics that help guide surgery. From a report: The tool -- called the Cryosection Histopathology Assessment and Review Machine, or CHARM -- studies images to quickly pick out the genetic profile of a kind of tumor called glioma, a process that currently takes days or weeks, said Kun-Hsing Yu, senior author of a report released Friday in the journal Med. Surgeons use detailed diagnoses to guide them while they operate, Yu said, and the ability to get them rapidly could improve patients' outcomes and spare them from multiple surgeries. While glioma varies in severity, an aggressive form called glioblastoma can lead to death in less than six months if untreated. Only 17% of people with glioblastoma survive their second year after being diagnosed, according to the American Association of Neurological Surgeons.

Surgeons use information about the genetic profile of a glioma tumor when deciding how much tissue to remove from a patient's brain, as well as whether to implant wafers coated in a cancer-fighting drug. Getting that information, however, currently requires time-consuming testing. Yu and his team of researchers trained a machine-learning algorithm to do the work by showing it pictures of samples gathered during brain surgery, and then checking its work against those patients' diagnoses. CHARM learned to match or outperform other AI systems at identifying the genetic profile of a tumor.

Encryption

Hackers Can Steal Cryptographic Keys By Video-Recording Power LEDs 60 Feet Away (arstechnica.com) 26

An anonymous reader quotes a report from Ars Technica: Researchers have devised a novel attack that recovers the secret encryption keys stored in smart cards and smartphones by using cameras in iPhones or commercial surveillance systems to video record power LEDs that show when the card reader or smartphone is turned on. The attacks enable a new way to exploit two previously disclosed side channels, a class of attack that measures physical effects that leak from a device as it performs a cryptographic operation. By carefully monitoring characteristics such as power consumption, sound, electromagnetic emissions, or the amount of time it takes for an operation to occur, attackers can assemble enough information to recover secret keys that underpin the security and confidentiality of a cryptographic algorithm. [...]

On Tuesday, academic researchers unveiled new research demonstrating attacks that provide a novel way to exploit these types of side channels. The first attack uses an Internet-connected surveillance camera to take a high-speed video of the power LED on a smart card reader -- or of an attached peripheral device -- during cryptographic operations. This technique allowed the researchers to pull a 256-bit ECDSA key off the same government-approved smart card used in Minerva. The other allowed the researchers to recover the private SIKE key of a Samsung Galaxy S8 phone by training the camera of an iPhone 13 on the power LED of a USB speaker connected to the handset, in a similar way to how Hertzbleed pulled SIKE keys off Intel and AMD CPUs. Power LEDs are designed to indicate when a device is turned on. They typically cast a blue or violet light that varies in brightness and color depending on the power consumption of the device they are connected to.

There are limitations to both attacks that make them unfeasible in many (but not all) real-world scenarios (more on that later). Despite this, the published research is groundbreaking because it provides an entirely new way to facilitate side-channel attacks. Not only that, but the new method removes the biggest barrier holding back previously existing methods from exploiting side channels: the need to have instruments such as an oscilloscope, electric probes, or other objects touching or being in proximity to the device being attacked. In Minerva's case, the device hosting the smart card reader had to be compromised for researchers to collect precise-enough measurements. Hertzbleed, by contrast, didn't rely on a compromised device but instead took 18 days of constant interaction with the vulnerable device to recover the private SIKE key. To attack many other side channels, such as the one in the World War II encrypted teletype terminal, attackers must have specialized and often expensive instruments attached or near the targeted device. The video-based attacks presented on Tuesday reduce or completely eliminate such requirements. All that's required to steal the private key stored on the smart card is an Internet-connected surveillance camera that can be as far as 62 feet away from the targeted reader. The side-channel attack on the Samsung Galaxy handset can be performed by an iPhone 13 camera that's already present in the same room.
Videos here and here show the video-capture process of a smart card reader and a Samsung Galaxy phone, respectively, as they perform cryptographic operations. "To the naked eye, the captured video looks unremarkable," adds Ars. "But by analyzing the video frames for different RGB values in the green channel, an attacker can identify the start and finish of a cryptographic operation."
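The frame-analysis step can be sketched in a few lines. This is a hedged illustration of the general idea, not the researchers' actual pipeline: average each frame's green channel and look for the intensity shift that brackets a cryptographic operation. The frames here are synthetic; a real attack reads them from a video capture.

```python
# Sketch: per-frame green-channel intensity reveals when the LED brightens,
# i.e. when the device draws more power during a cryptographic operation.
import numpy as np

def green_intensity(frames: np.ndarray) -> np.ndarray:
    """Mean green-channel value per frame; frames shaped (n, h, w, 3) RGB."""
    return frames[..., 1].mean(axis=(1, 2))

def find_operation_window(signal: np.ndarray, threshold: float) -> tuple[int, int]:
    """Indices of the first and last frame whose green level exceeds threshold."""
    active = np.flatnonzero(signal > threshold)
    return int(active[0]), int(active[-1])

# Synthetic clip: the LED brightens during frames 30-69 while the device computes.
rng = np.random.default_rng(1)
frames = rng.integers(90, 110, size=(100, 8, 8, 3)).astype(float)
frames[30:70, :, :, 1] += 40.0    # green channel rises during the operation
start, end = find_operation_window(green_intensity(frames), threshold=120.0)
```

Recovering an actual key takes far more than locating the operation window (the published attacks correlate fine-grained brightness variation with the key-dependent computation), but the windowing step is where the green-channel analysis quoted above begins.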
Programming

GitHub Claims Source Code Search Engine Is a Game Changer (theregister.com) 39

Thomas Claburn writes via The Register: GitHub has a lot of code to search -- more than 200 million repositories -- and says the beta version of a search engine optimized for source code, launched last November, has caused a "flurry of innovation." GitHub engineer Timothy Clem explained that the company has had problems getting existing technology to work well. "The truth is from Solr to Elasticsearch, we haven't had a lot of luck using general text search products to power code search," he said in a GitHub Universe video presentation. "The user experience is poor. It's very, very expensive to host and it's slow to index." In a blog post on Monday, Clem delved into the technology used to scour just a quarter of those repos, a code search engine built in Rust called Blackbird.

Blackbird currently provides access to almost 45 million GitHub repositories, which together amount to 115TB of code and 15.5 billion documents. Sifting through that many lines of code requires something stronger than grep, a common command line tool on Unix-like systems for searching through text data. Using ripgrep on an 8-core Intel CPU to run an exhaustive regular expression query on a 13GB file in memory, Clem explained, takes about 2.769 seconds, or 0.6GB/sec/core. [...] At 0.01 queries per second, grep was not an option. So GitHub front-loaded much of the work into precomputed search indices. These are essentially maps of key-value pairs. This approach makes it less computationally demanding to search for document characteristics like the programming language or word sequences by using a numeric key rather than a text string. Even so, these indices are too large to fit in memory, so GitHub built iterators for each index it needed to access. According to Clem, these lazily return sorted document IDs that represent the rank of the associated document and meet the query criteria.
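The precomputed-index idea can be shown with a minimal sketch. This is not GitHub's Blackbird (which is Rust and far more elaborate); it only illustrates the structure described above: map each term to a sorted posting list of document IDs, then answer queries by lazily walking those lists instead of scanning raw text.

```python
# Minimal inverted index: term -> sorted list of document IDs, queried
# via lazy iterators rather than a full-text scan.
from collections import defaultdict
from typing import Iterator

docs = {
    0: "fn main rust search",
    1: "grep text search tool",
    2: "rust regex search engine",
}

index: dict[str, list[int]] = defaultdict(list)
for doc_id in sorted(docs):                   # ascending IDs keep lists sorted
    for term in set(docs[doc_id].split()):
        index[term].append(doc_id)

def postings(term: str) -> Iterator[int]:
    """Lazily yield the sorted document IDs containing the term."""
    yield from index.get(term, [])

def intersect(a: Iterator[int], b: Iterator[int]) -> list[int]:
    """Documents matching both terms (hash one side for this small demo)."""
    b_ids = set(b)
    return [doc_id for doc_id in a if doc_id in b_ids]

hits = intersect(postings("rust"), postings("search"))
```

At GitHub's scale the posting lists live on disk in shards and the iterators merge sorted streams rather than hashing one side, but the query-time work is the same shape: intersect precomputed ID lists keyed by terms.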

To keep the search index manageable, GitHub relies on sharding -- breaking the data up into multiple pieces using Git's content addressable hashing scheme -- and on delta encoding -- storing data differences (deltas) to reduce the data and metadata to be crawled. This works well because GitHub has a lot of redundant data (e.g. forks): its 115TB of data can be boiled down to 25TB through deduplication and other data-shaving techniques. The resulting system works much faster than grep -- 640 queries per second compared to 0.01 queries per second. And indexing occurs at a rate of about 120,000 documents per second, so processing 15.5 billion documents takes about 36 hours, or 18 for re-indexing, since delta (change) indexing reduces the number of documents to be crawled.
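Content-addressable deduplication, which drives most of that 115TB-to-25TB reduction, is also easy to sketch. A toy version (illustrative only, using SHA-256 where Git uses its own object hashing): identical blobs hash to the same key, so a file repeated across thousands of forks is stored and indexed once.

```python
# Toy content-addressable store: a blob's key is the hash of its bytes,
# so duplicate content (e.g. unmodified files in forks) costs nothing extra.
import hashlib

store: dict[str, bytes] = {}

def put(blob: bytes) -> str:
    """Store a blob under the SHA-256 of its content; duplicates are free."""
    key = hashlib.sha256(blob).hexdigest()
    store.setdefault(key, blob)               # keep the first copy only
    return key

k1 = put(b"fn main() {}")
k2 = put(b"fn main() {}")                     # a fork with identical content
k3 = put(b"fn main() { println!(); }")        # a modified fork
```

Because the key is derived from the content, deduplication falls out of the addressing scheme itself: no separate comparison pass is needed, and sharding by hash spreads unique blobs evenly across index shards.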

Power

Is Clean Energy Buried at the Bottom of Abandoned Oil Wells? (vox.com) 68

"The U.S. is spending millions to explore a surprising source of untapped power," reports Recode, describing a new pilot program from America's Department of Energy: Geothermal energy works on a simple premise: The Earth's core is hot, and by drilling even just a few miles underground, we can tap into that practically unlimited heat source to generate energy for our homes and businesses without creating nearly as many of the greenhouse gas emissions that come from burning fossil fuels. However, drilling doesn't come cheap — it accounts for half the cost of most geothermal energy projects — and requires specialized labor to map the subsurface, drill into the ground, and install the infrastructure needed to bring energy to the surface.

But the US, in the wake of an oil and gas boom, just so happens to have millions of oil and gas wells sitting abandoned across the country. And oil and gas wells, it turns out, happen to share many of the same characteristics as geothermal wells — namely that they are deep holes in the ground, with pipes that can bring fluids up to the surface. So, the DOE asks, why not repurpose them?

That's exactly what the agency's pilot program, called Wells of Opportunity: ReAmplify, aims to do, awarding a total of $8.4 million to four projects across the country that will each try to tap into some of those old wells to extract geothermal energy rather than gas or oil. If they work, they could be the key to not only reducing the country's use of planet-damaging fossil fuels, but also helping answer the question of how to transition many of the more than 125,000 people who work in oil and gas extraction across the country into clean-energy jobs....

[T]he next year or so will be spent on planning and assessing the feasibility of turning oil wells into geothermal resources, after which energy generation will slowly ramp up. The biggest question is just how scalable these ideas are: One megawatt is, after all, a pittance compared to the country's energy needs.

"Some European countries already rely on direct use of geothermal energy on a large scale," the article points out.

Volcanically-active Iceland, for example, "uses its vast reserves of geothermal energy to heat 90 percent of its homes."

Thanks to Slashdot reader fahrbot-bot for submitting the story.
