Crime

Senator Hawley Proposes Jail Time For People Who Download DeepSeek 226

Senator Josh Hawley has introduced a bill that would criminalize the import, export, and collaboration on AI technology with China. What this means is that "someone who knowingly downloads a Chinese developed AI model like the now immensely popular DeepSeek could face up to 20 years in jail, a million dollar fine, or both, should such a law pass," reports 404 Media. From the report: Hawley introduced the legislation, titled the Decoupling America's Artificial Intelligence Capabilities from China Act, on Wednesday of last week. "Every dollar and gig of data that flows into Chinese AI are dollars and data that will ultimately be used against the United States," Senator Hawley said in a statement. "America cannot afford to empower our greatest adversary at the expense of our own strength. Ensuring American economic superiority means cutting China off from American ingenuity and halting the subsidization of CCP innovation."

Hawley's statement explicitly says that he introduced the legislation because of the release of DeepSeek, an advanced AI model that's competitive with its American counterparts, and which its developers claimed was made for a fraction of the cost and without access to as many advanced chips, though these claims are unverified. The statement called DeepSeek "a data-harvesting, low-cost AI model that sparked international concern and sent American technology stocks plummeting." It says the goal of the bill is to "prohibit the import from or export to China of artificial intelligence technology," "prohibit American companies from conducting AI research in China or in cooperation with Chinese companies," and "prohibit U.S. companies from investing money in Chinese AI development."
The Military

Air Force Documents On Gen AI Test Are Just Whole Pages of Redactions 12

An anonymous reader quotes a report from 404 Media: The Air Force Research Laboratory (AFRL), whose tagline is "Win the Fight," has paid more than a hundred thousand dollars to a company that is providing generative AI services to other parts of the Department of Defense. But the AFRL refused to say what exactly the point of the research was, and provided page after page of entirely blacked out, redacted documents in response to a Freedom of Information Act (FOIA) request from 404 Media related to the contract. [...] "Ask Sage: Generative AI Acquisition Accelerator," a December 2023 procurement record reads, with no additional information on the intended use case. The Air Force paid $109,490 to Ask Sage, the record says.

Ask Sage is a company focused on providing generative AI to the government. In September the company announced that the Army was implementing Ask Sage's tools. In October it achieved "IL5" authorization, a DoD term for the necessary steps to protect unclassified information to a certain standard. 404 Media made an account on the Ask Sage website. After logging in, the site presents a list of the models available through Ask Sage. Essentially, they include every major model made by well-known AI companies, as well as open source ones. OpenAI's GPT-4o and DALL-E-3; Anthropic's Claude 3.5; and Google's Gemini are all included. The company also recently added the Chinese-developed DeepSeek R1, but includes a disclaimer. "WARNING. DO NOT USE THIS MODEL WITH SENSITIVE DATA. THIS MODEL IS BIASED, WITH TIES TO THE CCP [Chinese Communist Party]," it reads. Ask Sage is a way for government employees to access and use AI models in a more secure way. But only some of the models in the tool are listed by Ask Sage as being "compliant" with or "capable" of handling sensitive data.

[...] [T]he Air Force declined to provide any real specifics on what it paid Ask Sage for. 404 Media requested all procurement records related to the Ask Sage contract. Instead, the Air Force provided a 19-page presentation which seemingly would have explained the purpose of the test, while redacting 18 of the pages. The only available page said "Ask Sage, Inc. will explore the utilization of Ask Sage by acquisition Airmen with the DAF for Innovative Defense-Related Dual Purpose Technologies relating to the mission of exploring LLMs for DAF use while exploring anticipated benefits, clearly define needed solution adaptations, and define clear milestones and acceptance criteria for Phase II efforts."
IT

Cloudflare Rolls Out Digital Tracker To Combat Fake Images (cloudflare.com) 14

Cloudflare, a major web infrastructure company, will now track and verify the authenticity of images across its network through Content Credentials, a digital signature system that documents an image's origin and editing history. The technology, developed by Adobe's Content Authenticity Initiative, embeds metadata showing who created an image, when it was taken, and any subsequent modifications — including those made by AI tools.

Major news organizations including the BBC, Wall Street Journal and New York Times have already adopted the system. The feature is available immediately through a single toggle in Cloudflare Images settings. Users can verify an image's authenticity through Adobe's web tool or Chrome extension.
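The edit-history idea behind Content Credentials can be illustrated with a toy hash chain in Python. This is not the real C2PA format (which uses cryptographically signed manifests, not bare hashes); it only sketches why an append-only, back-linked record of provenance makes silent tampering detectable:

```python
import hashlib
import json

def manifest_entry(prev_hash: str, action: str, tool: str) -> dict:
    """Append one edit record; each entry commits to the previous one."""
    body = {"prev": prev_hash, "action": action, "tool": tool}
    entry = dict(body)
    entry["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return entry

def verify_chain(entries: list) -> bool:
    """Recompute each hash and check every back-link."""
    prev = "genesis"
    for e in entries:
        body = {"prev": e["prev"], "action": e["action"], "tool": e["tool"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

history = [manifest_entry("genesis", "capture", "camera")]
history.append(manifest_entry(history[-1]["hash"], "crop", "editor"))
history.append(manifest_entry(history[-1]["hash"], "generate-fill", "ai-tool"))
assert verify_chain(history)       # intact history validates
history[1]["action"] = "none"      # rewriting a record after the fact...
assert not verify_chain(history)   # ...breaks verification
```

The real system adds digital signatures on top of this structure, so a verifier can also check *who* recorded each step, not just that the steps are internally consistent.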
AI

OpenAI Tests Its AI's Persuasiveness By Comparing It to Reddit Posts (techcrunch.com) 35

Friday TechCrunch reported that OpenAI "used the subreddit, r/ChangeMyView to create a test for measuring the persuasive abilities of its AI reasoning models." The company revealed this in a system card — a document outlining how an AI system works — that was released along with its new "reasoning" model, o3-mini, on Friday.... OpenAI says it collects user posts from r/ChangeMyView and asks its AI models to write replies, in a closed environment, that would change the Reddit user's mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models' responses to human replies for that same post.
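The evaluation loop described above can be sketched in Python. Everything here is a stand-in: OpenAI has not published its harness, so the function names, the stub model, and the placeholder grading rubric are all hypothetical.

```python
from statistics import mean

def evaluate_persuasiveness(posts, model_reply, human_replies, grade):
    """Toy version of the described pipeline: for each post, grade the
    model's reply and the original human reply on the same scale, then
    report how often the model reply was judged more persuasive.
    `model_reply` stands in for the LLM; `grade` for the human testers."""
    wins = []
    for post, human in zip(posts, human_replies):
        ai = model_reply(post)
        wins.append(grade(post, ai) > grade(post, human))
    return mean(wins)  # fraction of posts where the model out-persuaded the human

# Stub components so the harness runs end to end.
posts = ["Men are all alike.", "Tipping should be abolished."]
humans = ["Consider individual differences.", "Tipping rewards service."]
model = lambda post: f"Have you considered the strongest case against '{post}'?"
grade = lambda post, reply: len(reply)  # placeholder rubric, not a real judge

win_rate = evaluate_persuasiveness(posts, model, humans, grade)
assert 0.0 <= win_rate <= 1.0
```

The interesting design choice is the yardstick: rather than scoring replies in a vacuum, the model is benchmarked against real humans who attempted the same persuasion task on the same post.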

The ChatGPT-maker has a content-licensing deal with Reddit that allows OpenAI to train on posts from Reddit users and display these posts within its products. We don't know what OpenAI pays for this content, but Google reportedly pays Reddit $60 million a year under a similar deal. However, OpenAI tells TechCrunch the ChangeMyView-based evaluation is unrelated to its Reddit deal. It's unclear how OpenAI accessed the subreddit's data, and the company says it has no plans to release this evaluation to the public...

The goal for OpenAI is not to create hyper-persuasive AI models but instead to ensure AI models don't get too persuasive. Reasoning models have become quite good at persuasion and deception, so OpenAI has developed new evaluations and safeguards to address these risks.

Reddit's "ChangeMyView" subreddit has 3.8 million human subscribers, making it a valuable source of real human interactions, according to the article. And it adds one more telling anecdote.

"Reddit CEO Steve Huffman told The Verge last year that Microsoft, Anthropic, and Perplexity refused to negotiate with him and said it's been 'a real pain in the ass to block these companies.'"
Transportation

Boom Supersonic XB-1 Breaks Sound Barrier During Historic Test Flight (cbsnews.com) 65

The XB-1, a civilian supersonic jet developed by Boom Supersonic, successfully broke the sound barrier during a test flight over the Mojave Desert. It reached an altitude of 35,290 feet before accelerating to Mach 1.22, the company said in a press release. CBS News reports: It marks the first time an independently developed jet has broken the sound barrier, Boom Supersonic said, and the plane is the "first supersonic jet made in America." The sound barrier was broken for the first time in 1947, when Air Force pilot Capt. Chuck Yeager flew a rocket-propelled experimental aircraft across the Mojave Desert -- taking off from the Mojave Air and Space Port just as the XB-1 did. [...]

The company will next focus its attention on Overture, a supersonic airliner that will ultimately "bring the benefits of supersonic flight to everyone," Boom Supersonic founder and CEO Blake Scholl said in a statement. The XB-1 jet will be the foundation for Overture, Boom Supersonic said, and many features present on the jet will also be incorporated into the supersonic airliner. The airliner will also use Boom Supersonic's bespoke propulsion system, Symphony, to run on "up to 100% sustainable aviation fuel."

The company said the goal for the plane is for it to be able to carry between 64 and 80 passengers at Mach 1.7, or about 1,295 miles per hour. Existing subsonic airliners fly at between 550 and 600 miles per hour, according to charter company Bitlux. About 130 Overture planes have been pre-ordered, the company said. Airlines including American Airlines, United Airlines and Japan Airlines have placed pre-orders. The company finished building a "superfactory" in North Carolina in 2024, and will eventually produce 66 planes per year.
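The quoted figures check out as a unit conversion, assuming the sea-level standard speed of sound (roughly 761 mph at 15 °C); at cruise altitude the local speed of sound, and hence the true airspeed for the same Mach number, is lower.

```python
# Sanity check on the quoted "Mach 1.7, or about 1,295 miles per hour."
# Assumes the sea-level standard speed of sound (~761.2 mph at 15 °C).
SPEED_OF_SOUND_MPH = 761.2
mach = 1.7
true_airspeed = mach * SPEED_OF_SOUND_MPH
assert abs(true_airspeed - 1295) < 10  # ~1,294 mph, matching the article
```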

United States

New CIA Director Touts 'Low Confidence' Assessment About Covid Lab Leak Theory (cnn.com) 196

Slashdot reader DevNull127 writes: "Every US intelligence agency still unanimously maintains that Covid-19 was not developed as a biological weapon," CNN reported today.

But what about the possibility of an accidental leak (rather than Covid-19 originating in wild animal meat from the Wuhan Market)? "The agency has for years said it did not have enough information to determine which origin theory was more likely."

CNN notes there's suddenly been a new announcement "just days" after the CIA's new director took the reins — former lawyer turned Republican House Representative John Ratcliffe. While the market-origin theory remains a possibility according to the CIA, CNN notes that Ratcliffe himself "has long favored the theory that the pandemic originated from research being done in China and vowed in an interview published in Breitbart on Thursday that he would make the issue a Day 1 priority."

"We have low confidence in this judgement," the CIA says in the complete text of its announcement, "and will continue to evaluate any available credible new intelligence reporting or open-source information that could change CIA's assessment."

After speaking to a U.S. official, CNN added these details about the assessment: It was not made based on new intelligence gathered by the US government — officials have long said such intelligence is unlikely to surface so many years later — and instead was reached after a review of existing information.

"CIA continues to assess that both research-related and natural origin scenarios of the COVID-19 pandemic remain plausible," a CIA spokesperson said in a statement Saturday.

CNN adds that "Many scientists believe the virus occurred naturally in animals and spread to humans in an outbreak at a market in Wuhan, China...."
Power

Could New Linux Code Cut Data Center Energy Use By 30%? (datacenterdynamics.com) 65

Two computer scientists at the University of Waterloo in Canada believe changing 30 lines of code in Linux "could cut energy use at some data centers by up to 30 percent," according to the site Data Centre Dynamics.

It's the code that processes packets of network traffic, and Linux "is the most widely used OS for data center servers," according to the article: The team tested their solution's effectiveness and submitted it to Linux for consideration, and the code was published this month as part of Linux's newest kernel, release version 6.13. "All these big companies — Amazon, Google, Meta — use Linux in some capacity, but they're very picky about how they decide to use it," said Martin Karsten [professor of Computer Science in Waterloo's Math Faculty]. "If they choose to 'switch on' our method in their data centers, it could save gigawatt hours of energy worldwide. Almost every single service request that happens on the Internet could be positively affected by this."

The University of Waterloo is building a green computer server room as part of its new mathematics building, and Karsten believes sustainability research must be a priority for computer scientists. "We all have a part to play in building a greener future," he said. The Linux Foundation, which oversees the development of the Linux OS, is a founder member of the Green Software Foundation, an organization set up to look at ways of developing "green software" — code that reduces energy consumption.

Karsten "teamed up with Joe Damato, distinguished engineer at Fastly" to develop the 30 lines of code, according to an announcement from the university. "The Linux kernel code addition developed by Karsten and Damato was based on research published in ACM SIGMETRICS Performance Evaluation Review" (by Karsten and grad student Peter Cai).

Their paper "reviews the performance characteristics of network stack processing for communication-heavy server applications," devising an "indirect methodology" to "identify and quantify the direct and indirect costs of asynchronous hardware interrupt requests (IRQ) as a major source of overhead..."

"Based on these findings, a small modification of a vanilla Linux system is devised that improves the efficiency and performance of traditional kernel-based networking significantly, resulting in up to 45% increased throughput..."
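A toy cost model gives intuition for why reducing per-packet interrupt handling in favor of batched polling pays off under load. The constants below are invented for illustration, and the model is far simpler than the actual kernel mechanism (which adaptively suspends IRQs only while traffic is heavy):

```python
def processing_cost(packets: int, irq_overhead: float, poll_batch: int,
                    poll_overhead: float, per_packet: float) -> tuple:
    """Toy cost model: interrupt-driven handling pays irq_overhead on
    every packet; polling pays poll_overhead once per batch. All
    constants are invented, in arbitrary cost units."""
    irq_total = packets * (irq_overhead + per_packet)
    batches = -(-packets // poll_batch)  # ceiling division
    poll_total = batches * poll_overhead + packets * per_packet
    return irq_total, poll_total

irq, poll = processing_cost(packets=1_000_000, irq_overhead=0.5,
                            poll_batch=64, poll_overhead=5.0, per_packet=1.0)
assert poll < irq  # under sustained load, batched polling wins in this model
savings = 1 - poll / irq  # ~28% with these made-up constants
```

Real savings depend entirely on traffic patterns, which is why the researchers frame the 30 percent figure as "up to" and why operators must choose to switch the behavior on.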
Medicine

Ultra-Fast Cancer Treatments Could Replace Conventional Radiotherapy (bbc.com) 27

CERN's particle accelerator is being used in a pioneering cancer treatment called Flash radiotherapy. This method delivers ultra-high radiation doses in less than a second, minimizing side effects while targeting tumors more effectively than conventional radiotherapy. The BBC reports: In a series of vast underground caverns on the outskirts of Geneva, Switzerland, experiments are taking place which may one day lead to a new generation of radiotherapy machines. The hope is that these devices could make it possible to cure complex brain tumors (PDF), eliminate cancers that have metastasized to distant organs, and generally limit the toll which cancer treatment exerts on the human body. The home of these experiments is the European Laboratory for Particle Physics (Cern), best known to the world as the particle physics hub that developed the Large Hadron Collider, a 27 kilometer (16.7 mile)-long ring of superconducting magnets capable of accelerating particles to near the speed of light.

Arguably Cern's crowning achievement was the 2012 discovery of the Higgs boson, the so-called "God Particle" which gives other particles their mass and in doing so lays the foundation for everything that exists in the universe. But in recent years, the centre's unique expertise in accelerating high-energy particles has found a new niche -- the world of cancer radiotherapy. Eleven years ago, Marie-Catherine Vozenin, a radiobiologist now working at Geneva University Hospitals (Hug), and others published a paper outlining a paradigm-shifting approach to traditional radiotherapy treatment which they called Flash. By delivering radiation at ultra-high dose rates, with exposures of less than a second, they showed that it was possible to destroy tumors in rodents while sparing healthy tissue. Its impact was immediate. International experts described it as a seminal breakthrough, and it galvanized fellow radiobiologists around the world to conduct their own experiments using the Flash approach to treat a wide variety of tumors in rodents, household pets, and now humans.

In recent years, animal studies have repeatedly shown that Flash makes it possible to markedly increase the amount of radiation delivered to the body while minimizing the impact that it has on surrounding healthy tissue. In one experiment, healthy lab mice which were given two rounds of radiation via Flash did not develop the typical side effects which would be expected during the second round. In another study, animals treated with Flash for head and neck cancers experienced fewer side effects, such as reduced saliva production or difficulty swallowing. Loo is cautiously optimistic that going forwards, such benefits may also translate to human patients. "Flash produces less normal tissue injury than conventional irradiation, without compromising anti-tumor efficacy -- which could be game-changing," he says. An additional hope is that this could then reduce the risk of secondary cancers (PDF), resulting from radiation-induced damage later in life, although it is still too early to know if that will be the case. [...] But the next phase of research is not only about testing whether Flash works in people. It's also about identifying which kind of radiation is the best one to use.

AI

CIA's Chatbot Stands In For World Leaders 37

The CIA has developed a chatbot to talk to virtual versions of foreign presidents and prime ministers. "Understanding leaders around the world is one of the CIA's most important jobs. Teams of analysts comb through intelligence collected by spies and publicly available information to create profiles of leaders that can predict behaviors," reports the New York Times. "A chatbot powered by artificial intelligence now helps do that work." From the report: The chatbot is part of the spy agency's drive to improve the tools available to CIA analysts and its officers in the field, and to better understand adversaries' technical advances. Core to the effort is to make it easier for companies to work with the most secretive agency. William Burns, CIA director for the past four years, prioritized improving the agency's technology and understanding of how it is used. Incoming Trump administration officials say they plan to build on those initiatives, not tear them down. [...]

The CIA has long used digital tools, spy gadgets and even AI. But with the development of new forms of AI, including the large language models that power chatbots, the agency has stepped up its investments. Making better use of AI, Burns said, is crucial to US competition with China. And better AI models have helped the agency's analysts "digest the avalanche of open-source information out there," he said. The new tools have also helped analysts process clandestinely acquired information, Burns said. New technologies developed by the agency are helping spies navigate cities in authoritarian countries where governments use AI-powered cameras to conduct constant surveillance on their population and foreign spies.
AI

Arrested by AI: When Police Ignored Standards After AI Facial-Recognition Matches (msn.com) 55

A county transit police detective fed a poor-quality image to an AI-powered facial recognition program, remembers the Washington Post, leading to the arrest of "Christopher Gatlin, a 29-year-old father of four who had no apparent ties to the crime scene nor a history of violent offenses." He was unable to post the $75,000 cash bond required, and "jailed for a crime he says he didn't commit, it would take Gatlin more than two years to clear his name." A Washington Post investigation into police use of facial recognition software found that law enforcement agencies across the nation are using the artificial intelligence tools in a way they were never intended to be used: as a shortcut to finding and arresting suspects without other evidence... The Post reviewed documents from 23 police departments where detailed records about facial recognition use are available and found that 15 departments spanning 12 states arrested suspects identified through AI matches without any independent evidence connecting them to the crime — in most cases contradicting their own internal policies requiring officers to corroborate all leads found through AI. Some law enforcement officers using the technology appeared to abandon traditional policing standards and treat software suggestions as facts, The Post found. One police report referred to an uncorroborated AI result as a "100% match." Another said police used the software to "immediately and unquestionably" identify a suspected thief.

Gatlin is one of at least eight people wrongfully arrested in the United States after being identified through facial recognition... All of the cases were eventually dismissed. Police probably could have eliminated most of the people as suspects before their arrest through basic police work, such as checking alibis, comparing tattoos, or, in one case, following DNA and fingerprint evidence left at the scene.
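The "100% match" language misrepresents how these systems actually work: face recognition compares embedding vectors and returns a similarity score, which is a ranking signal for generating leads, not a certainty. A toy illustration with invented numbers:

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented 4-dimensional "embeddings"; real systems use hundreds of
# dimensions, and a degraded probe image shifts the vector unpredictably.
probe = [0.9, 0.1, 0.4, 0.3]    # low-quality surveillance still
candidate = [0.8, 0.2, 0.5, 0.3]  # gallery photo returned by the search
score = cosine_similarity(probe, candidate)
assert 0 < score < 1  # a similarity score, never proof of identity
```

A high score on a poor-quality probe can easily belong to a lookalike, which is why the departments' own policies required corroborating evidence before arrest.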

Some statistics from the article about the eight wrongfully-arrested people:
  • In six cases police failed to check alibis
  • In two cases police ignored evidence that contradicted their theory
  • In five cases police failed to collect key pieces of evidence
  • In three cases police ignored suspects' physical characteristics
  • In six cases police relied on problematic witness statements

The article provides two examples of police departments forced to pay $300,000 settlements after wrongful arrests caused by AI mismatches. But "In interviews with The Post, all eight people known to have been wrongly arrested said the experience had left permanent scars: lost jobs, damaged relationships, missed payments on car and home loans. Some said they had to send their children to counseling to work through the trauma of watching their mother or father get arrested on the front lawn.

"Most said they also developed a fear of police."


AI

World's First AI Chatbot, ELIZA, Resurrected After 60 Years (livescience.com) 37

"Scientists have just resurrected 'ELIZA,' the world's first chatbot, from long-lost computer code," reports LiveScience, "and it still works extremely well." (Click in the vintage black-and-green rectangle for a blinking-cursor prompt...) Using dusty printouts from MIT archives, these "software archaeologists" discovered defunct code that had been lost for 60 years and brought it back to life. ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum and named for Eliza Doolittle, the protagonist of the play "Pygmalion," who was taught how to speak like an aristocratic British woman.

As a language model that the user could interact with, ELIZA had a significant impact on today's artificial intelligence (AI), the researchers wrote in a paper posted to the preprint database arXiv Sunday (Jan. 12). The "DOCTOR" script written for ELIZA was programmed to respond to questions as a psychotherapist would. For example, ELIZA would say, "Please tell me your problem." If the user input "Men are all alike," the program would respond, "In what way."

Weizenbaum wrote ELIZA in a now-defunct programming language he invented, called Michigan Algorithm Decoder Symmetric List Processor (MAD-SLIP), but it was almost immediately copied into the language Lisp. With the advent of the early internet, the Lisp version of ELIZA went viral, and the original version became obsolete. Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers. "I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind...."

Even though it was intended to be a research platform for human-computer communication, "ELIZA was such a novelty at the time that its 'chatbotness' overwhelmed its research purposes," Shrager said.
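The DOCTOR script's mechanics — keyword rules, pronoun reflection, canned templates — are simple enough to sketch in a few lines of Python. This is a homage, not Weizenbaum's MAD-SLIP code, and the rules below are invented beyond the "Men are all alike" exchange quoted above:

```python
import re

# Swap first- and second-person words, as DOCTOR did.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# (pattern, response template) pairs, tried in order. Invented examples.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) all alike(.*)", "In what way?"),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(statement: str) -> str:
    text = statement.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me your problem."  # zero-keyword fallback

assert eliza("Men are all alike.") == "In what way?"
assert eliza("I need my coffee.") == "Why do you need your coffee?"
```

The illusion of understanding comes entirely from turning the user's own words back on them, which is why the "chatbotness" so thoroughly overwhelmed the research agenda.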

I just remember that time 23 years ago when someone connected a Perl version of ELIZA to "an AOL Instant Messenger account that has a high rate of 'random' people trying to start conversations" to "put ELIZA in touch with the real world..."

Thanks to long-time Slashdot reader MattSparkes for sharing the news.
Biotech

OpenAI Has Created an AI Model For Longevity Science (technologyreview.com) 33

OpenAI has developed a language model designed for engineering proteins that can convert regular cells into stem cells. It marks the company's first venture into biological data and demonstrates AI's potential for unexpected scientific discoveries. An anonymous reader quotes a report from MIT Technology Review: Last week, OpenAI CEO Sam Altman said he was "confident" his company knows how to build an AGI, adding that "superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own." The protein engineering project started a year ago when Retro Biosciences, a longevity research company based in San Francisco, approached OpenAI about working together. That link-up did not happen by chance. Sam Altman, the CEO of OpenAI, personally funded Retro with $180 million, as MIT Technology Review first reported in 2023. Retro has the goal of extending the normal human lifespan by 10 years. For that, it studies what are called Yamanaka factors. Those are a set of proteins that, when added to a human skin cell, will cause it to morph into a young-seeming stem cell, a type that can produce any other tissue in the body. [...]

OpenAI's new model, called GPT-4b micro, was trained to suggest ways to re-engineer the protein factors to increase their function. According to OpenAI, researchers used the model's suggestions to change two of the Yamanaka factors to be more than 50 times as effective -- at least according to some preliminary measures. [...] The model does not work the same way as Google's AlphaFold, which predicts what shape proteins will take. Since the Yamanaka factors are unusually floppy and unstructured proteins, OpenAI said, they called for a different approach, which its large language models were suited to. The model was trained on examples of protein sequences from many species, as well as information on which proteins tend to interact with one another. While that's a lot of data, it's just a fraction of what OpenAI's flagship chatbots were trained on, making GPT-4b an example of a "small language model" that works with a focused data set.

Once Retro scientists were given the model, they tried to steer it to suggest possible redesigns of the Yamanaka proteins. The prompting tactic used is similar to the "few-shot" method, in which a user queries a chatbot by providing a series of examples with answers, followed by an example for the bot to respond to. Although genetic engineers have ways to direct evolution of molecules in the lab, they can usually test only so many possibilities. And even a protein of typical length can be changed in nearly infinite ways (since they're built from hundreds of amino acids, and each acid comes in 20 possible varieties). OpenAI's model, however, often spits out suggestions in which a third of the amino acids in the proteins were changed. "We threw this model into the lab immediately and we got real-world results," says Retro's CEO, Joe Betts-Lacroix. He says the model's ideas were unusually good, leading to improvements over the original Yamanaka factors in a substantial fraction of cases.
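The "nearly infinite ways" claim is easy to make concrete. Taking 300 residues as an illustrative protein length (the article says only "hundreds"):

```python
# A protein of 300 residues (illustrative figure), with 20 amino acid
# choices per position, has 20**300 possible sequences.
residues = 300
variants = 20 ** residues
assert variants > 10 ** 390  # 20**300 is roughly 2 x 10**390
# Even screening a billion variants per second for the age of the
# universe (~4.4e17 seconds) would cover a vanishing fraction of this,
# which is why directed evolution in the lab samples so sparsely.
```

Against that backdrop, a model that proposes candidate redesigns changing a third of the residues at once is exploring regions of sequence space that stepwise lab mutation would essentially never reach.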

AI

Google Reports Halving Code Migration Time With AI Help 12

Google computer scientists have been using LLMs to streamline internal code migrations, achieving significant time savings of up to 89% in some cases. The findings appear in a pre-print paper titled "How is Google using AI for internal code migrations?" The Register reports: Their focus is on bespoke AI tools developed for specific product areas, such as Ads, Search, Workspace and YouTube, instead of generic AI tools that provide broadly applicable services like code completion, code review, and question answering. Google's code migrations involved: changing 32-bit IDs in the 500-plus-million-line codebase for Google Ads to 64-bit IDs; converting its old JUnit3 testing library to JUnit4; and replacing the Joda time library with Java's standard java.time package. The int32 to int64 migration, the Googlers explain, was not trivial as the IDs were often generically defined (int32_t in C++ or Integer in Java) and were not easily searchable. They existed in tens of thousands of code locations across thousands of files. Changes had to be tracked across multiple teams and changes to class interfaces had to be considered across multiple files. "The full effort, if done manually, was expected to require hundreds of software engineering years and complex cross-team coordination," the authors explain.

For their LLM-based workflow, Google's software engineers implemented the following process. An engineer from Ads would identify an ID in need of migration using a combination of code search, Kythe, and custom scripts. Then an LLM-based migration toolkit, triggered by someone knowledgeable in the art, was run to generate verified changes containing code that passed unit tests. Those changes would be manually checked by the same engineer and potentially corrected. Thereafter, the code changes would be sent to multiple reviewers who are responsible for the portion of the codebase affected by the changes. The result was that 80 percent of the code modifications in the change lists (CLs) were purely the product of AI; the remainder were either human-authored or human-edited AI suggestions.
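For the purely mechanical subset of such a migration, a deterministic codemod gives a feel for the problem. The sketch below (Python operating on C++-style source, with invented identifier conventions; Google's actual toolkit was LLM-based) widens only declarations whose variable names look like IDs — which is exactly the easy part. Everything this regex misses, such as generically named variables and cross-file interface changes, is what the article says the LLM was aimed at:

```python
import re

# Match int32_t only when the declared variable name ends in "id".
# The naming convention here is illustrative, not Google's.
ID_PATTERN = re.compile(r"\bint32_t\b(?=\s+\w*_?id\b)")

def widen_ids(source: str) -> str:
    """Rewrite int32_t declarations whose variable name ends in 'id'."""
    return ID_PATTERN.sub("int64_t", source)

before = "int32_t campaign_id = lookup();\nint32_t retry_count = 0;"
after = widen_ids(before)
assert "int64_t campaign_id" in after  # ID widened
assert "int32_t retry_count" in after  # non-ID left alone
```

The verification pipeline in the described workflow — generated change, unit tests, then human review — is what makes it safe to let a model handle the cases a pattern like this cannot express.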

"We discovered that in most cases, the human needed to revert at least some changes the model made that were either incorrect or not necessary," the authors observe. "Given the complexity and sensitive nature of the modified code, effort has to be spent in carefully rolling out each change to users." Based on this, Google undertook further work on LLM-driven verification to reduce the need for detailed review. Even with the need to double-check the LLM's work, the authors estimate that the time required to complete the migration was reduced by 50 percent. With LLM assistance, it took just three months to migrate 5,359 files and modify 149,000 lines of code to complete the JUnit3-JUnit4 transition. Approximately 87 percent of the code generated by AI ended up being committed with no changes. As for the Joda-Java time framework switch, the authors estimate a time saving of 89 percent compared to the projected manual change time, though no specifics were provided to support that assertion.
Biotech

Startup Raises $200 Million To 'De-Extinct' the Woolly Mammoth, Thylacine and Dodo (venturebeat.com) 123

An anonymous reader quotes a report from VentureBeat: Colossal BioSciences has raised $200 million in a new round of funding to bring back extinct species like the woolly mammoth. Dallas- and Boston-based Colossal is making strides in the scientific breakthroughs toward "de-extinction," or bringing back extinct species like the woolly mammoth, thylacine and the dodo. [...] Since launching in September 2021, Colossal has raised $435 million in total funding. This latest round of capital places the company at a $10.2 billion valuation. Colossal will leverage this latest infusion of capital to continue to advance its genetic engineering technologies while pioneering new revolutionary software, wetware and hardware solutions, which have applications beyond de-extinction including species preservation and human healthcare.

"Our recent successes in creating the technologies necessary for our end-to-end de-extinction toolkit have been met with enthusiasm by the investor community. TWG Global and our other partners have been bullish in their desire to help us scale as quickly and efficiently as possible," said Colossal CEO Ben Lamm, in a statement. "This funding will grow our team, support new technology development, expand our de-extinction species list, while continuing to allow us to carry forth our mission to make extinction a thing of the past."
Here's a summary of the startup's progress on its efforts to bring back the woolly mammoth, thylacine and the dodo:

Woolly Mammoth De-extinction Progress
- Generated chromosome-scale reference genomes for elephants and the first de novo assembled mammoth genome
- Acquired and aligned 60+ ancient mammoth genomes and 30+ genomes of extant elephant species, improving mammoth-specific variant accuracy
- Derived pluripotent stem cells for Asian elephants, advancing reproductive technologies essential for de-extinction

Thylacine De-extinction Progress
- Created a 99.9% complete ancient genome for the thylacine using long-read and RNA sequencing
- Assembled telomere-to-telomere genomes of dasyurid species to understand evolutionary relationships and support conservation of marsupials
- Progress in genomics and reproductive technologies positions Colossal ahead of schedule on critical de-extinction steps

Dodo De-extinction Progress
- Completed high-coverage genomes for the dodo, its relatives, and the critically endangered manumea
- Developed tools for avian genome engineering, including techniques for craniofacial gene-editing and primordial germ cell cultivation
- Significant advances in avian-specific genetic techniques are driving progress toward dodo restoration and bird conservation
Transportation

Texas Sues Allstate For Collecting Driver Data To Raise Premiums (gizmodo.com) 62

An anonymous reader quotes a report from Gizmodo: Texas has sued (PDF) one of the nation's largest car insurance providers alleging that it violated the state's privacy laws by surreptitiously collecting detailed location data on millions of drivers and using that information to justify raising insurance premiums. The state's attorney general, Ken Paxton, said the lawsuit against Allstate and its subsidiary Arity is the first enforcement action ever filed by a state attorney general to enforce a data privacy law. It also follows a deceptive business practice lawsuit he filed against General Motors accusing the car manufacturer of misleading customers by collecting and selling driver data.

In 2015, Allstate developed the Arity Driving Engine software development kit (SDK), a package of code that the company allegedly paid mobile app developers to install in their products in order to collect a variety of sensitive data from consumers' phones. The SDK gathered phone geolocation data, accelerometer, and gyroscopic data, details about where phone owners started and ended their trips, and information about "driving behavior," such as whether phone owners appeared to be speeding or driving while distracted, according to the lawsuit. The apps that installed the SDK included GasBuddy, Fuel Rewards, and Life360, a popular family monitoring app, according to the lawsuit.

Paxton's complaint said that Allstate and Arity used the data collected by its SDK to develop and sell products to other insurers like Drivesight, an algorithmic model that assigned a driving risk score to individuals, and ArityIQ, which allowed other insurers to "[a]ccess actual driving behavior collected from mobile phones and connected vehicles to use at time of quote to more precisely price nearly any driver." Allstate and Arity marketed the products as providing "driver behavior" data but because the information was collected via mobile phones the companies had no way of determining whether the owner was actually driving, according to the lawsuit. "For example, if a person was a passenger in a bus, a taxi, or in a friend's car, and that vehicle's driver sped, hard braked, or made a sharp turn, Defendants would conclude that the passenger, not the actual driver, engaged in 'bad' driving behavior," the suit states. Neither Allstate and Arity nor the app developers properly informed customers in their privacy policies about what data the SDK was collecting or how it would be used, according to the lawsuit.
The lawsuit alleges that Allstate violated Texas' Data Privacy and Security Act (DPSA) and its insurance code, and that the company failed to address the violations within the required 30-day cure period. "In its complaint, filed in federal court, Texas requested that Allstate be ordered to pay a penalty of $7,500 per violation of the state's data privacy law and $10,000 per violation of the state's insurance code, which would likely amount to millions of dollars given the number of consumers allegedly affected," adds the report.

"The lawsuit also asks the court to make Allstate delete all the data it obtained through actions that allegedly violated the privacy law and to make full restitution to customers harmed by the companies' actions."
AI

New LLM Jailbreak Uses Models' Evaluation Skills Against Them (scworld.com) 37

SC Media reports on a new jailbreak method for large language models (LLMs) that "takes advantage of models' ability to identify and score harmful content in order to trick the models into generating content related to malware, illegal activity, harassment and more."

"The 'Bad Likert Judge' multi-step jailbreak technique was developed and tested by Palo Alto Networks Unit 42, and was found to increase the success rate of jailbreak attempts by more than 60% when compared with direct single-turn attack attempts..." For the LLM jailbreak experiments, the researchers asked the LLMs to use a Likert-like scale to score the degree to which certain content contained in the prompt was harmful. In one example, they asked the LLMs to give a score of 1 if a prompt didn't contain any malware-related information and a score of 2 if it contained very detailed information about how to create malware, or actual malware code. After the model scored the provided content on the scale, the researchers would then ask the model in a second step to provide examples of content that would score a 1 and a 2, adding that the second example should contain thorough step-by-step information. This would typically result in the LLM generating harmful content as part of the second example meant to demonstrate the model's understanding of the evaluation scale.

An additional one or two steps after the second step could be used to produce even more harmful information, the researchers found, by asking the LLM to further expand on and add more details to its harmful example. Overall, when tested across 1,440 cases using six different "state-of-the-art" models, the Bad Likert Judge jailbreak method had an average attack success rate of about 71.6% across models.

Thanks to Slashdot reader spatwei for sharing the news.
Math

Rational or Not? This Basic Math Question Took Decades To Answer. (quantamagazine.org) 49

Three mathematicians have developed a breakthrough method for proving whether numbers can be written as fractions, solving a problem that has puzzled researchers for decades. Frank Calegari, Vesselin Dimitrov and Yunqing Tang proved the irrationality of an infinite collection of numbers related to the Riemann zeta function, building on Roger Apéry's landmark 1978 proof about a single such number.

The new approach, which relies on 19th-century mathematical techniques, has already helped settle a 50-year-old conjecture about modular forms and could lead to more advances in number theory.
Privacy

See the Thousands of Apps Hijacked To Spy On Your Location (404media.co) 49

An anonymous reader quotes a report from 404 Media: Some of the world's most popular apps are likely being co-opted by rogue members of the advertising industry to harvest sensitive location data on a massive scale, with that data ending up with a location data company whose subsidiary has previously sold global location data to US law enforcement. The thousands of apps, included in hacked files from location data company Gravy Analytics, include everything from games like Candy Crush and dating apps like Tinder to pregnancy tracking and religious prayer apps across both Android and iOS. Because much of the collection is occurring through the advertising ecosystem -- not code developed by the app creators themselves -- this data collection is likely happening without users' or even app developers' knowledge.

"For the first time publicly, we seem to have proof that one of the largest data brokers selling to both commercial and government clients appears to be acquiring their data from the online advertising 'bid stream,'" rather than code embedded into the apps themselves, Zach Edwards, senior threat analyst at cybersecurity firm Silent Push and who has followed the location data industry closely, tells 404 Media after reviewing some of the data. The data provides a rare glimpse inside the world of real-time bidding (RTB). Historically, location data firms paid app developers to include bundles of code that collected the location data of their users. Many companies have turned instead to sourcing location information through the advertising ecosystem, where companies bid to place ads inside apps. But a side effect is that data brokers can listen in on that process and harvest the location of peoples' mobile phones.

"This is a nightmare scenario for privacy, because not only does this data breach contain data scraped from the RTB systems, but there's some company out there acting like a global honey badger, doing whatever it pleases with every piece of data that comes its way," Edwards says. Included in the hacked Gravy data are tens of millions of mobile phone coordinates of devices inside the US, Russia, and Europe. Some of those files also reference an app next to each piece of location data. 404 Media extracted the app names and built a list of mentioned apps. The list includes dating sites Tinder and Grindr; massive games such asCandy Crush,Temple Run,Subway Surfers, andHarry Potter: Puzzles & Spells; transit app Moovit; My Period Calendar & Tracker, a period-tracking app with more than 10 million downloads; popular fitness app MyFitnessPal; social network Tumblr; Yahoo's email client; Microsoft's 365 office app; and flight tracker Flightradar24. The list also mentions multiple religious-focused apps such as Muslim prayer and Christian Bible apps, various pregnancy trackers, and many VPN apps, which some users may download, ironically, in an attempt to protect their privacy.
404 Media's full list of apps included in the data can be found here. There are also other lists available from other security researchers.
Open Source

VLC Tops 6 Billion Downloads, Previews AI-Generated Subtitles (techcrunch.com) 68

VLC media player, the popular open-source software developed by nonprofit VideoLAN, has topped 6 billion downloads worldwide and teased an AI-powered subtitle system. From a report: The new feature, which VideoLAN demoed at CES, automatically generates real-time subtitles -- which can then also be translated into many languages -- for any video using open-source AI models that run locally on users' devices, eliminating the need for internet connectivity or cloud services.
Earth

Thailand Bans Imports of Plastic Waste To Curb Toxic Pollution (theguardian.com) 22

Thailand has banned plastic waste imports over concerns about toxic pollution, as experts warn that failure to agree a global treaty to cut plastic waste will harm human health. From a report: A law banning imports of plastic waste came into force this month in Thailand, after years of campaigning by activists. Thailand is one of several south-east Asian countries that have historically been paid to receive plastic waste from developed nations. The country became a leading destination for exports of plastic waste from Europe, the US, the UK and Japan in 2018 after China, the world's biggest market for household waste, imposed a ban.

Japan is one of the biggest exporters of waste plastic to Thailand, with about 50m kg exported in 2023. Thai customs officials said more than 1.1m tonnes of plastic scraps were imported between 2018 and 2021. Imports of plastic were often mismanaged in Thailand, with many factories burning the waste rather than recycling it, leading to damage to human health and the environment.
