IT

Amazon Bricks Long-Standing Fire TV Apps With New Update (arstechnica.com)

Amazon has issued an update to Fire TV streaming devices and televisions that has broken apps that let users bypass the Fire OS home screen. From a report: The tech giant claims that its latest Fire OS update is about security but has refused to detail any potential security concerns. Users and app developers have reported that numerous apps that used to work with Fire TV devices for years have suddenly stopped working. As first reported by AFTVnews, the update has made apps unable to establish local Android Debug Bridge (ADB) connections and execute ADB commands with Fire TV devices.

The update, Fire OS 7.6.6.9, affects several Fire OS-based TVs, including models from TCL, Toshiba, Hisense, and Amazon's Fire TV Omni QLED Series. Other devices running the update include Amazon's first-generation Fire TV Stick 4K Max, the third-generation Fire TV Stick, the second- and third-generation Fire TV Cubes, and the Fire TV Stick Lite. A code excerpt shared with AFTVnews by what the publication described as an "affected app developer" shows a line of code indicating that Fire TVs would no longer be allowed to make ADB connections with a local device or app. As AFTVnews points out, such apps have been used by Fire TV modders for tasks like clearing installed apps' cache and running a home screen other than the Fire OS default.
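Since local ADB is the mechanism these launcher and cleanup apps rely on, a rough sketch of the kind of on-device ADB session they perform may help. The package and activity names below are hypothetical placeholders; the adb subcommands themselves are standard:

```python
# Sketch of a local ADB session a Fire TV helper app might drive.
# "com.example.*" names are made up for illustration.

def adb_command(device: str, *args: str) -> list[str]:
    """Build the argument list for one adb call targeting a device address."""
    return ["adb", "-s", device, *args]

LOCAL = "127.0.0.1:5555"  # loopback address an on-device app uses for ADB

session = [
    ["adb", "connect", LOCAL],                     # open the local ADB bridge
    adb_command(LOCAL, "shell", "pm", "clear",     # clear another app's cache/data
                "com.example.app"),
    adb_command(LOCAL, "shell", "am", "start",     # launch an alternative home screen
                "-n", "com.example.launcher/.MainActivity"),
]
for cmd in session:
    print(" ".join(cmd))
```

On devices running Fire OS 7.6.6.9, the initial loopback `adb connect` reportedly now fails, which is why every app built on this pattern stops working at once.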

Google

Google Says Microsoft Offered To Sell Bing To Apple in 2018, But Search-quality Issues Got in the Way (cnbc.com)

Microsoft offered to sell its Bing search engine to Apple in 2018, Google said in a court filing earlier this month. The document, filed in the U.S. Justice Department's antitrust case against Google, was unsealed on Friday. From a report: In the filing, Google argued that Microsoft pitched Apple in 2009, 2013, 2015, 2016, 2018 and 2020 about making Bing the default in Apple's Safari web browser, but each time, Apple said no, citing quality issues with Bing. "In each instance, Apple took a hard look at the relative quality of Bing versus Google and concluded that Google was the superior default choice for its Safari users. That is competition," Google wrote in the filing.

The Justice Department said in its own newly unsealed filing that Microsoft has spent almost $100 billion on Bing over 20 years. The Windows and Office software maker launched Bing in 2009, following search efforts under the MSN and Windows Live brands. Today Bing has 3% global market share, according to StatCounter. In the fourth quarter, Microsoft generated $3.2 billion from search and news advertising, while Google search and other revenue totaled $48 billion. Google said in its filing that when Microsoft reached out to Apple in 2018, emphasizing gains in Bing's quality, Microsoft offered to either sell Bing to Apple or establish a Bing-related joint venture with the company.

AI

Google DeepMind Alumni Unveil Bioptimus: Aiming To Build First Universal Biology AI Model (venturebeat.com)

An anonymous reader quotes a report from VentureBeat: As the French startup ecosystem continues to boom -- think Mistral, Poolside, and Adaptive -- today the Paris-based Bioptimus, with a mission to build the first universal AI foundation model for biology, emerged from stealth following a seed funding round of $35 million. The new open science model will connect the different scales of biology with generative AI -- from molecules to cells, tissues and whole organisms. Bioptimus unites a team of Google DeepMind alumni and Owkin scientists (AI biotech startup Owkin is itself a French unicorn) who will take advantage of AWS compute and Owkin's data generation capabilities and access to multimodal patient data sourced from leading academic hospitals worldwide. According to a press release, "this all gives the power to create computational representations that establish a strong differentiation against models trained solely on public datasets and a single data modality that are not able to capture the full diversity of biology."

In an interview with VentureBeat, Jean-Philippe Vert, co-founder and CEO of Bioptimus, chief R&D Officer of Owkin and former research lead at Google Brain, said as a smaller, independent company, Bioptimus can move faster than Google DeepMind to gain direct access to the data needed to train biology models. "We have the advantage of being able to more easily and securely collaborate with partners, and have established a level of trust in our work by sharing our AI expertise and making models available to them for research," he said. "This can be hard for big tech to do. Bioptimus will also leverage some of the strongest sovereignty controls in the market today."

Rodolphe Jenatton, a former research scientist at Google DeepMind, has also joined the Bioptimus team, telling VentureBeat the Bioptimus work will be released as open source/open science, at a similar level to Mistral's model releases. "Transparency and sharing and community will be key elements for us," he said. Currently, AI models are limited to specific aspects of biology, Vert explained. "For example, several companies are starting to build language models for protein sequences," he said, adding that there are also initiatives to build a foundation model for images of cells.

However, there is no holistic view of the totality of biology: "The good news is that the AI technology is converging very quickly, with some architectures that allow to have all the data contribute together to a unified model," he explained. "So this is what we want to do. As far as I know that it does not exist yet. But I'm certain that if we didn't do it, someone else would do it in the near future." The biggest bottleneck, he said, is access to data. "It's very different from training an LLM on text on the web," he said. And that access, he pointed out, is what Bioptimus has in spades, through its Owkin partnership.

Mars

Martians Wanted: NASA Opens Call for Simulated Yearlong Mars Mission (nasa.gov)

"Would you like to live on Mars?" NASA asked Friday on social media.

"You can help us move humanity toward that goal by participating in a simulated, year-long Mars surface mission at NASA's Johnson Space Center." NASA is seeking applicants to participate in its next simulated one-year Mars surface mission to help inform the agency's plans for human exploration of the Red Planet. The second of three planned ground-based missions called CHAPEA (Crew Health and Performance Exploration Analog) is scheduled to kick off in spring 2025.

Each CHAPEA mission involves a four-person volunteer crew living and working inside a 1,700-square-foot, 3D-printed habitat based at NASA's Johnson Space Center in Houston. The habitat, called the Mars Dune Alpha, simulates the challenges of a mission on Mars, including resource limitations, equipment failures, communication delays, and other environmental stressors. Crew tasks include simulated spacewalks, robotic operations, habitat maintenance, exercise, and crop growth.

NASA is looking for healthy, motivated U.S. citizens or permanent residents who are non-smokers, 30-55 years old, and proficient in English for effective communication between crewmates and mission control. Applicants should have a strong desire for unique, rewarding adventures and interest in contributing to NASA's work to prepare for the first human journey to Mars...

As NASA works to establish a long-term presence for scientific discovery and exploration on the Moon through the Artemis campaign, CHAPEA missions provide important scientific data to validate systems and develop solutions for future missions to the Red Planet. With the first CHAPEA crew more than halfway through their yearlong mission, NASA is using research gained through the simulated missions to help inform crew health and performance support during Mars expeditions.

You can see the simulated Mars habitat in this NASA video.

The deadline for applicants is Tuesday, April 2, according to NASA. "A master's degree in a STEM field such as engineering, mathematics, or biological, physical or computer science from an accredited institution with at least two years of professional STEM experience or a minimum of one thousand hours piloting an aircraft is required."
Firefox

Firefox Maker Mozilla Is Cutting 60 Jobs After Naming New CEO

Less than a week after naming Laura Chambers as interim CEO, Firefox's maker Mozilla said it is cutting about 60 jobs, or 5% of its workforce. The cuts are primarily in the product development organization. Bloomberg reports: "We're scaling back investment in some product areas in order to focus on areas that we feel have the greatest chance of success," Mozilla said in a statement. "We intend to re-prioritize resources against products like Firefox Mobile, where there's a significant opportunity to grow and establish a better model for the industry."

Mozilla last cut a significant number of jobs four years ago at the height of the Covid-19 pandemic. The not-for-profit company, which competes with Alphabet Inc.'s Google Chrome, Apple Inc.'s Safari and Microsoft Corp.'s Edge, has been grappling with sliding market share of its Firefox web browser in recent years.
So far in 2024, the tech sector has cut 32,000 jobs.
Communications

The US Government Makes a $42 Million Bet On Open Cell Networks (theverge.com)

An anonymous reader quotes a report from The Verge: The US government has committed $42 million to further the development of the 5G Open RAN (O-RAN) standard that would allow wireless providers to mix and match cellular hardware and software, opening up a bigger market for third-party equipment that's cheaper and interoperable. The National Telecommunications and Information Administration (NTIA) grant would establish a Dallas O-RAN testing center to prove the standard's viability as a way to head off Huawei's steady cruise toward a global cellular network hardware monopoly.

Verizon global network and technology president Joe Russo promoted the funding as a way to achieve "faster innovation in an open environment." To achieve the standard's goals, AT&T vice president of RAN technology Robert Soni says that AT&T and Verizon have formed the Acceleration of Compatibility and Commercialization for Open RAN Deployments Consortium (ACCoRD), which includes a grab bag of wireless technology companies like Ericsson, Nokia, Samsung, Dell, Intel, Broadcom, and Rakuten. Japanese wireless carrier Rakuten launched the first O-RAN network in 2020. The company's then-CEO, Tareq Amin, told The Verge's Nilay Patel in 2022 that Open RAN would enable low-cost network build-outs using smaller equipment rather than massive towers -- which has long been part of the promise of 5G.

But O-RAN is about more than that; establishing interoperability means companies like Verizon and AT&T wouldn't be forced to buy all of their hardware from a single company to create a functional network. For the rest of us, that means faster build-outs and "more agile networks," according to Rakuten. In the US, Dish has been working on its own O-RAN network, under the name Project Genesis. The 5G network was creaky and unreliable when former Verge staffer Mitchell Clark tried it out in Las Vegas in 2022, but the company said in June last year that it had made its goal of covering 70 percent of the US population. Dish has struggled to become the next big cell provider in the US, though -- leading satellite communications company EchoStar, which spun off from Dish in 2008, to purchase the company in January.
The Washington Post writes that O-RAN "is Washington's anointed champion to try to unseat the Chinese tech giant Huawei Technologies" as the world's biggest supplier of cellular infrastructure gear.

According to the Post, Biden has emphasized the importance of O-RAN in conversations with international leaders over the past few years. Additionally, it notes that Congress along with the NTIA have dedicated approximately $2 billion to support the development of this standard.
Education

California Bill Would Require Computer Science For High School Graduation

At a press conference last week, a California Assemblymember joined the State Superintendent of Public Instruction in announcing a bill that, if passed, would require every public high school to teach computer science. (And establish CS as a high school graduation requirement by the 2030-31 school year.)

Long-time Slashdot reader theodp says he noticed posters with CS-education advocacy charts and stats "copied verbatim" from the tech giant-backed nonprofit Code.org. (And "a California Dept. of Education news release also echoed Code.org K-12 CS advocacy factoids.") The announcement came less than two weeks after Code.org CEO Hadi Partovi — whose goal is to make CS a HS graduation requirement in all 50 states by 2030 — was a keynote speaker at the Association of California School Administrators Superintendents' Symposium. Even back in an October 20 Facebook post, [California state assemblyman] Berman noted he'd partnered with Code.org on legislation in the past and hinted that something big was in the works on the K-12 CS education front for California. "I had the chance to attend Code.org's 10th anniversary celebration and chat with their founder, Hadi Partovi, as well as CS advocate Aloe Blacc. They've done amazing work expanding access to computer science education... and I've been proud to partner with them on legislation to do that in CA. More to come!"
United States

US To Launch $5 Billion Research Hub To Stay Ahead in Chip Race

President Joe Biden's administration plans to launch a $5 billion semiconductor research consortium to bolster chip design and hardware innovation in the US and counter China's efforts to capture the cutting edge of the industry. From a report: Officials on Friday are set to formally establish the National Semiconductor Technology Center, or NSTC, which marks the second major research and development investment from the 2022 Chips Act following a $3 billion advanced packaging initiative. The consortium plans to invest hundreds of millions of dollars into workforce development and intends to open funding applications in early March for research grants, Commerce Undersecretary for Standards and Technology Dr. Laurie E. Locascio said in an interview with Bloomberg News. Officials are working to prevent China from benefiting from NSTC-funded research while filling gaps in the US research ecosystem for key areas like packaging and hardware, she said, as electronic components have become a key US-China battleground.
EU

EU Proposes Criminalizing AI-Generated Child Sexual Abuse and Deepfakes

An anonymous reader quotes a report from TechCrunch: AI-generated imagery and other forms of deepfakes depicting child sexual abuse (CSA) could be criminalized in the European Union under plans to update existing legislation to keep pace with technology developments, the Commission announced today. It's also proposing to create a new criminal offense of livestreaming child sexual abuse. The possession and exchange of "pedophile manuals" would also be criminalized under the plan -- which is part of a wider package of measures the EU says is intended to boost prevention of CSA, including by increasing awareness of online risks and to make it easier for victims to report crimes and obtain support (including granting them a right to financial compensation). The proposal to update the EU's current rules in this area, which date back to 2011, also includes changes around mandatory reporting of offenses.

Back in May 2022, the Commission presented a separate piece of CSA-related draft legislation, aiming to establish a framework that could make it obligatory for digital services to use automated technologies to detect and report existing or new child sexual abuse material (CSAM) circulating on their platforms, and identify and report grooming activity targeting kids. The CSAM-scanning plan has proven to be highly controversial -- and it continues to split lawmakers in the parliament and the Council, as well as kicking up suspicions over the Commission's links with child safety tech lobbyists and raising other awkward questions for the EU's executive, over a legally questionable foray into microtargeted ads to promote the proposal. The Commission's decision to prioritize the targeting of digital messaging platforms to tackle CSA has attracted a lot of criticism that the bloc's lawmakers are focusing on the wrong area for combatting a complex societal problem -- which may have generated some pressure for it to come up with follow-on proposals. (Not that the Commission is saying that, of course; it describes today's package as "complementary" to its earlier CSAM-scanning proposal.)
"Fast evolving technologies are creating new possibilities for child sexual abuse online, and raises challenges for law enforcement to investigate this extremely serious and wide spread crime," said Ylva Johansson, commissioner for home affairs, in a statement. "A strong criminal law is essential and today we are taking a key step to ensure that we have effective legal tools to rescue children and bring perpetrators to justice. We are delivering on our commitments made in the EU Strategy for a more effective fight against Child sexual abuse presented in July 2020."

The final shape of the proposals will be determined by the EU's co-legislators in the Parliament and Council. "If/when there's agreement on how to amend the current directive on combating CSA, it would enter into force 20 days after its publication in the Official Journal of the EU," adds TechCrunch.
Social Networks

The Atlantic Warns of a Rising 'Authoritarian Technocracy' (theatlantic.com)

In the behavior of tech companies, the Atlantic's executive editor warns us about "a clear and coherent ideology that is seldom called out for what it is: authoritarian technocracy. As the most powerful companies in Silicon Valley have matured, this ideology has only grown stronger, more self-righteous, more delusional, and — in the face of rising criticism — more aggrieved." The new technocrats are ostentatious in their use of language that appeals to Enlightenment values — reason, progress, freedom — but in fact they are leading an antidemocratic, illiberal movement. Many of them profess unconditional support for free speech, but are vindictive toward those who say things that do not flatter them. They tend to hold eccentric beliefs.... above all, that their power should be unconstrained. The systems they've built or are building — to rewire communications, remake human social networks, insinuate artificial intelligence into daily life, and more — impose these beliefs on the population, which is neither consulted nor, usually, meaningfully informed. All this, and they still attempt to perpetuate the absurd myth that they are the swashbuckling underdogs.
The article calls out Marc Andreessen's Techno-Optimist Manifesto for saying "We believe in adventure... rebelling against the status quo, mapping uncharted territory, conquering dragons, and bringing home the spoils for our community..." (The Atlantic concludes Andreessen's position "serves only to absolve him and the other Silicon Valley giants of any moral or civic duty to do anything but make new things that will enrich them, without consideration of the social costs, or of history.")

The article notes that Andreessen "also identifies a list of enemies and 'zombie ideas' that he calls upon his followers to defeat, among them 'institutions' and 'tradition.'" But the Atlantic makes a broader critique not just of Andreessen but of other Silicon Valley elites. "The world that they have brought into being over the past two decades is unquestionably a world of reckless social engineering, without consequence for its architects, who foist their own abstract theories and luxury beliefs on all of us..." None of this happens without the underlying technocratic philosophy of inevitability — that is, the idea that if you can build something new, you must. "In a properly functioning world, I think this should be a project of governments," [Sam] Altman told my colleague Ross Andersen last year, referring to OpenAI's attempts to develop artificial general intelligence. But Altman was going to keep building it himself anyway. Or, as Zuckerberg put it to The New Yorker many years ago: "Isn't it, like, inevitable that there would be a huge social network of people? ... If we didn't do this someone else would have done it."
The article includes this damning chat log from a 2004 conversation Zuckerberg had with a friend:

Zuckerberg: If you ever need info about anyone at Harvard.
Zuckerberg: Just ask.
Zuckerberg: I have over 4,000 emails, pictures, addresses, SNS
Friend: What? How'd you manage that one?
Zuckerberg: People just submitted it.
Zuckerberg: I don't know why.
Zuckerberg: They "trust me"
Zuckerberg: Dumb fucks.

But the article also reminds us that in Facebook's early days, "Zuckerberg listed 'revolutions' among his interests." The main dangers of authoritarian technocracy are not at this point political, at least not in the traditional sense. Still, a select few already have authoritarian control, more or less, to establish the digital world's rules and cultural norms, which can be as potent as political power...

[I]n recent years, it has become clear that regulation is needed, not least because the rise of technocracy proves that Silicon Valley's leaders simply will not act in the public's best interest. Much should be done to protect children from the hazards of social media, and to break up monopolies and oligopolies that damage society, and more. At the same time, I believe that regulation alone will not be enough to meaningfully address the cultural rot that the new technocrats are spreading.... We do not have to live in the world the new technocrats are designing for us. We do not have to acquiesce to their growing project of dehumanization and data mining. Each of us has agency.

No more "build it because we can." No more algorithmic feedbags. No more infrastructure designed to make the people less powerful and the powerful more controlling. Every day we vote with our attention; it is precious, and desperately wanted by those who will use it against us for their own profit and political goals. Don't let them.
  • The article specifically recommends "challenging existing norms about the use of apps and YouTube in classrooms, the ubiquity of smartphones in adolescent hands, and widespread disregard for individual privacy. People who believe that we all deserve better will need to step up to lead such efforts."
  • "Universities should reclaim their proper standing as leaders in developing world-changing technologies for the good of humankind. (Harvard, Stanford, and MIT could invest in creating a consortium for such an effort — their endowments are worth roughly $110 billion combined.)"

Japan

Japan's Successful Moon Landing Was the Most Precise Ever (nature.com)

Japan has become the fifth country in the world to soft-land a spacecraft on the Moon, using precision technology that allowed it to touch down closer to its target landing site than any mission has before. However, the spacecraft might have survived on the lunar surface for just a few hours due to power failure. Nature: Telemetry showed that the Smart Lander for Investigating Moon, or SLIM, touched down in its target area near Shioli crater, south of the lunar equator early Saturday morning, four months after lifting off from the Tanegashima Space Centre, off the south coast of Japan. [...] According to [Hitoshi] Kuninaka (VP of the Kanagawa-based Japan Aerospace Exploration Agency), SLIM has very likely achieved its primary goal -- to land on the Moon with an unprecedented accuracy of 100 metres, which is a big leap from previous ranges of a few to dozens of kilometres. SLIM carried vision-based navigation technology, which was intended to image the surface as it flew over the Moon, and locate itself quickly by matching the images with onboard maps.
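The matching step can be illustrated with a toy example: cut a small "camera view" out of a stored terrain map, then search the map for the offset where that view fits best. This uses synthetic data and a deliberately naive brute-force matcher, not JAXA's actual algorithm:

```python
import random

random.seed(0)
N, P = 40, 6
# Stand-in for an onboard terrain map: an N x N grid of brightness values.
chart = [[random.random() for _ in range(N)] for _ in range(N)]

# Simulated downward camera image: a P x P patch taken at a known offset.
tr, tc = 15, 23
view = [row[tc:tc + P] for row in chart[tr:tr + P]]

def locate(view, chart):
    """Return the (row, col) offset in chart where view matches best,
    scored by sum of squared differences (exhaustive search)."""
    best, pos = float("inf"), (0, 0)
    p = len(view)
    for r in range(len(chart) - p + 1):
        for c in range(len(chart) - p + 1):
            score = sum((chart[r + i][c + j] - view[i][j]) ** 2
                        for i in range(p) for j in range(p))
            if score < best:
                best, pos = score, (r, c)
    return pos

print(locate(view, chart))  # recovers the offset the patch was taken from
```

A real system works against photographs with lighting changes, noise, and perspective distortion, so it needs far more robust feature matching, but the localization principle is the same.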

It remains unclear if the car-sized, 200-kilogram spacecraft actually touched down in the planned, two-step manner with its five legs. Unlike previous Moon landers, which used four legs to simultaneously reach a relatively flat area, SLIM was designed to hit a 15-degree slope outside Shioli crater first with one leg at the back, and then tip forward to stabilize on the four front legs. Observers suggest that SLIM might have rolled during its touch-down, preventing its solar cells from facing the Sun. Kuninaka said not enough data were available to establish the probe's posture or orientation. However, if some sunlight is able to reach the solar cells there is a chance that SLIM could come back to life.

NASA

NASA Postpones Plans To Send Humans To Moon (theguardian.com)

NASA has postponed its plans to send humans to the moon after delays hit its hugely ambitious Artemis programme, which aims to get spaceboots bouncing again on the lunar surface for the first time in half a century. From a report: The US space agency has announced the Artemis III mission to land four astronauts near the lunar south pole will be delayed a year until September 2026. Artemis II, a 10-day expedition to send a crew around the moon and back to test life support systems, will also be pushed back to September 2025.

NASA said the delays would allow its teams to work through development challenges associated with the programme, which partners with private companies including Elon Musk's SpaceX and Lockheed Martin and uses some largely untested spacecraft and technology. "We are returning to the moon in a way we never have before, and the safety of our astronauts is Nasa's top priority as we prepare for future Artemis missions," said the Nasa administrator Bill Nelson. Washington wants to establish a long-term human presence outside Earth's orbit, including construction of a lunar base camp as well as a space station that circles the moon. Its ultimate plans are to send people to Mars, but it has decided to return to the moon first to learn more about deep space before embarking on what would be a months-long voyage to the red planet.

Security

Russian Hackers Were Inside Ukraine Telecoms Giant For Months (reuters.com)

An anonymous reader quotes a report from Reuters: Russian hackers were inside Ukrainian telecoms giant Kyivstar's system from at least May last year in a cyberattack that should serve as a "big warning" to the West, Ukraine's cyber spy chief told Reuters. The hack, one of the most dramatic since Russia's full-scale invasion nearly two years ago, knocked out services provided by Ukraine's biggest telecoms operator for some 24 million users for days from Dec. 12. In an interview, Illia Vitiuk, head of the Security Service of Ukraine's (SBU) cybersecurity department, disclosed exclusive details about the hack, which he said caused "disastrous" destruction and aimed to land a psychological blow and gather intelligence. "This attack is a big message, a big warning, not only to Ukraine, but for the whole Western world to understand that no one is actually untouchable," he said. He noted Kyivstar was a wealthy, private company that invested a lot in cybersecurity.

The attack wiped "almost everything", including thousands of virtual servers and PCs, he said, describing it as probably the first example of a destructive cyberattack that "completely destroyed the core of a telecoms operator." During its investigation, the SBU found the hackers probably attempted to penetrate Kyivstar in March or earlier, he said in a Zoom interview on Dec. 27. "For now, we can say securely, that they were in the system at least since May 2023," he said. "I cannot say right now, since what time they had ... full access: probably at least since November." The SBU assessed the hackers would have been able to steal personal information, understand the locations of phones, intercept SMS messages and perhaps steal Telegram accounts with the level of access they gained, he said. A Kyivstar spokesperson said the company was working closely with the SBU to investigate the attack and would take all necessary steps to eliminate future risks, adding: "No facts of leakage of personal and subscriber data have been revealed."

Investigating the attack is harder because of the wiping of Kyivstar's infrastructure. Vitiuk said he was "pretty sure" it was carried out by Sandworm, a Russian military intelligence cyberwarfare unit that has been linked to cyberattacks in Ukraine and elsewhere. A year ago, Sandworm penetrated a Ukrainian telecoms operator, but was detected by Kyiv because the SBU had itself been inside Russian systems, Vitiuk said, declining to identify the company. The earlier hack has not been previously reported. Vitiuk said SBU investigators were still working to establish how Kyivstar was penetrated or what type of trojan horse malware could have been used to break in, adding that it could have been phishing, someone helping on the inside or something else. If it was an inside job, the insider who helped the hackers did not have a high level of clearance in the company, as the hackers made use of malware used to steal hashes of passwords, he said. Samples of that malware have been recovered and are being analysed, he added.

AI

'A Global Watermarking Standard Could Help Safeguard Elections In the ChatGPT Era' (thehill.com)

"To prevent disinformation from eroding democratic values worldwide, the U.S. must establish a global watermarking standard for text-based AI-generated content," writes retired U.S. Army Col. Joe Buccino in an opinion piece for The Hill. While President Biden's October executive order requires watermarking of AI-derived video and imagery, it offers no watermarking requirement for text-based content. "Text-based AI represents the greatest danger to election misinformation, as it can respond in real-time, creating the illusion of a real-time social media exchange," writes Buccino. "Chatbots armed with large language models trained with reams of data represent a catastrophic risk to the integrity of elections and democratic norms."

Joe Buccino is a retired U.S. Army colonel who serves as an A.I. research analyst with the U.S. Department of Defense's Defense Innovation Board. He served as U.S. Central Command communications director from 2021 until September 2023. Here's an excerpt from his report: Watermarking text-based AI content involves embedding unique, identifiable information -- a digital signature documenting the AI model used and the generation date -- into the metadata of the generated text to indicate its artificial origin. Detecting this digital signature requires specialized software, which, when integrated into platforms where AI-generated text is common, enables the automatic identification and flagging of such content. This process gets complicated in instances where AI-generated text is manipulated slightly by the user. For example, a high school student may make minor modifications to a homework essay created through ChatGPT-4. These modifications may drop the digital signature from the document. However, that kind of scenario is not of great concern in the most troubling cases, where chatbots are let loose in massive numbers to accomplish their programmed tasks. Disinformation campaigns require such a large volume of them that it is no longer feasible to modify their output once released.
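A minimal sketch of the signing-and-detection idea, assuming a shared signing key and a JSON metadata wrapper (both placeholders; an actual standard would specify key management and an embedding format): the model name and generation date are signed together with the text, so any edit to the text invalidates the signature -- exactly the fragility described above:

```python
import hashlib
import hmac
import json

SECRET = b"shared-standard-key"  # placeholder; a real scheme would use PKI

def watermark(text: str, model: str) -> dict:
    """Wrap generated text with signed provenance metadata (illustrative)."""
    meta = {"model": model, "generated": "2024-01-15"}  # example date
    payload = json.dumps(meta, sort_keys=True) + text
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"text": text, "meta": meta, "signature": sig}

def verify(doc: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(doc["meta"], sort_keys=True) + doc["text"]
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, doc["signature"])

doc = watermark("Example AI output.", "example-model")
assert verify(doc)          # untouched text verifies
doc["text"] += " edited"    # any modification breaks the signature
assert not verify(doc)
```

The platform-side "detection software" the column describes would amount to running `verify` over incoming content and flagging documents whose signature checks out as machine-generated.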

The U.S. should create a standard digital signature for text, then partner with the EU and China to lead the world in adopting this standard. Once such a global standard is established, the next step will follow -- social media platforms adopting the metadata recognition software and publicly flagging AI-generated text. Social media giants are sure to respond to international pressure on this issue. The call for a global watermarking standard must navigate diverse international perspectives and regulatory frameworks. A global standard for watermarking AI-generated text ahead of 2024's elections is ambitious -- an undertaking that encompasses diplomatic and legislative complexities as well as technical challenges. A foundational step would involve the U.S. publicly accepting and advocating for a standard of marking and detection. This must be followed by a global campaign to raise awareness about the implications of AI-generated disinformation, involving educational initiatives and collaborations with the giant tech companies and social media platforms.

In 2024, generative AI and democratic elections are set to collide. Establishing a global watermarking standard for text-based generative AI content represents a commitment to upholding the integrity of democratic institutions. The U.S. has the opportunity to lead this initiative, setting a precedent for responsible AI use worldwide. The successful implementation of such a standard, coupled with the adoption of detection technologies by social media platforms, would represent a significant stride towards preserving the authenticity and trustworthiness of democratic norms.

The Courts

The Humble Emoji Has Infiltrated the Corporate World (theatlantic.com) 56

An anonymous reader shares a report: A court in Washington, D.C., has been stuck with a tough, maybe impossible question: What does the full-moon-face emoji mean? Let me explain: In the summer of 2022, Ryan Cohen, a major investor in Bed Bath & Beyond, responded to a tweet about the beleaguered retailer with a side-eyed moon emoji. Later that month, Cohen -- hailed as a "meme king" for his starring role in the GameStop craze -- disclosed that his stake in the company had grown to nearly 12 percent; the stock price subsequently shot up. That week, he sold all of his shares and walked away with a reported $60 million windfall.

Now shareholders are suing him for securities fraud, claiming that Cohen misled investors by using the emoji the way meme-stock types sometimes do -- to suggest that the stock was going "to the moon." A class-action lawsuit with big money on the line has come to legal arguments such as this: "There is no way to establish objectively the truth or falsity of a tiny lunar cartoon," as Cohen's lawyers wrote in an attempt to get the emoji claim dismissed. That argument was denied, and the court held that "emojis may be actionable."

The humble emoji -- and its older cousin, the emoticon -- has infiltrated the corporate world, especially in tech. Last month, when OpenAI briefly ousted Sam Altman and replaced him with an interim CEO, the company's employees reportedly responded with a vulgar emoji on Slack. That FTX, the failed cryptocurrency exchange once run by Sam Bankman-Fried, apparently used these little icons to approve million-dollar expense reports was held up during bankruptcy proceedings as a damning example of its poor corporate controls. And in February, a judge allowed a lawsuit to move forward alleging that an NFT company called Dapper Labs was illegally promoting unregistered securities on Twitter, because "the 'rocket ship' emoji, 'stock chart' emoji, and 'money bags' emoji objectively mean one thing: a financial return on investment."

AI

AI Companies Would Be Required To Disclose Copyrighted Training Data Under New Bill (theverge.com) 42

An anonymous reader quotes a report from The Verge: Two lawmakers filed a bill requiring creators of foundation models to disclose sources of training data so copyright holders know their information was taken. The AI Foundation Model Transparency Act -- filed by Reps. Anna Eshoo (D-CA) and Don Beyer (D-VA) -- would direct the Federal Trade Commission (FTC) to work with the National Institute of Standards and Technology (NIST) to establish rules for reporting training data transparency. Companies that make foundation models would be required to report sources of training data and how the data is retained during the inference process, describe the limitations or risks of the model and how it aligns with NIST's planned AI Risk Management Framework and any other federal standards that might be established, and provide information on the computational power used to train and run the model. The bill also says AI developers must report efforts to "red team" the model to prevent it from providing "inaccurate or harmful information" around medical or health-related questions, biological synthesis, cybersecurity, elections, policing, financial loan decisions, education, employment decisions, public services, and vulnerable populations such as children.

The bill calls out the importance of training data transparency around copyright as several lawsuits have been filed against AI companies alleging copyright infringement. It specifically mentions the case of artists against Stability AI, Midjourney, and DeviantArt (which was largely dismissed in October, according to VentureBeat), and Getty Images' complaint against Stability AI. The bill still needs to be assigned to a committee and discussed, and it's unclear if that will happen before the busy election campaign season starts. Eshoo and Beyer's bill complements the Biden administration's AI executive order, which helps establish reporting standards for AI models. The executive order, however, is not law, so if the AI Foundation Model Transparency Act passes, it will make transparency requirements for training data a federal rule.

Medicine

How Two Pharmacists Figured Out That Decongestants Don't Work (scientificamerican.com) 143

In 2005, the reclassification of pseudoephedrine to behind-the-counter status led to widespread use of oral phenylephrine in OTC decongestants, despite evidence of its ineffectiveness. Randy Hatton, a clinical professor in the College of Pharmacy at the University of Florida, and his colleague worked to bring this issue to the FDA's attention, revealing loopholes in the regulatory process for older OTC drugs. Hatton writes in an opinion piece for Scientific American: Before the FDA required that drugs had to be proven effective, it determined whether OTC drugs were effective through expert panels that reviewed existing data. These OTC monographs establish what older OTC ingredients can be marketed without FDA approval. The oral decongestant monograph panel reviewed a few published studies and multiple unpublished studies for phenylephrine. Of the unpublished studies, only four studies showed oral phenylephrine was effective, while seven showed it was no better than placebo. We requested copies of all evidence used by the nasal decongestant review panel via a Freedom of Information Act request and performed a systematic review and meta-analysis ourselves. [...]

The FDA has multiple regulatory processes for different types of medicinal compounds. People are perhaps most familiar with the New Drug Application process, which leads to clinical trials for prescription drug approvals. However, many OTC or nonprescription drugs are regulated differently. In fact, a law passed in 1951, the Durham-Humphrey Amendment to the 1938 Food, Drug, and Cosmetic Act, created the categories of prescription and nonprescription drugs. In 1962, the act was amended again so that drugs had to be shown to be effective, hence the requirement for well-done clinical trials. But what about the drugs that were approved before 1962? This is the loophole that some OTC drugs fall through. For prescription drugs, FDA tried to address pre-1962 approvals through a review of over 3,000 prescription drugs. Most of those drugs have now been reviewed and addressed, but there are still unapproved prescription drugs on the market today, such as an extended-release form of oral nitroglycerin. For nonprescription drugs, FDA established the OTC monograph process 10 years after the 1962 amendment to the Food, Drug, and Cosmetic Act, which required products not proven effective to be reconsidered. FDA formed advisory panels grouping hundreds of ingredients into 26 categories based on the products' uses. After gathering all available information, both published and unpublished, from manufacturers, the advisory panels issued final reports to FDA about whether these ingredients were GRASE (generally recognized as safe and effective), not GRASE, or inconclusive. GRASE ingredients can be used in nonprescription drugs without FDA approval if the use matches the monograph.

"The oral phenylephrine example shows that FDA needs more funding to look at these old drugs," concludes Hatton. "We need public funds to support independent researchers who want to examine these products objectively. The government should be able to spend millions to save consumers billions on ineffective products. Companies that market these products have no incentive to prove they don't work. Nonprescription drugs must be effective -- not just safe."

Privacy

UK Police To Be Able To Run Face Recognition Searches on 50 Million Driving Licence Holders (theguardian.com) 24

The police will be able to run facial recognition searches on a database containing images of Britain's 50 million driving licence holders under a law change being quietly introduced by the government. From a report: Should the police wish to put a name to an image collected on CCTV, or shared on social media, the legislation would provide them with the powers to search driving licence records for a match. The move, contained in a single clause in a new criminal justice bill, could put every driver in the country in a permanent police lineup, according to privacy campaigners.

Facial recognition searches match the biometric measurements of an identified photograph, such as that contained on driving licences, to those of an image picked up elsewhere. The intention to allow the police or the National Crime Agency (NCA) to exploit the UK's driving licence records is not explicitly referenced in the bill or in its explanatory notes, prompting criticism from leading academics that the government is "sneaking it under the radar." Once the criminal justice bill is enacted, the home secretary, James Cleverly, must establish "driver information regulations" to enable the searches, but he will need only to consult police bodies, according to the bill.
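Under the hood, such searches typically compare numeric face embeddings rather than raw photos: each face is reduced to a vector of measurements, and a probe image is matched against every stored vector. The sketch below is a generic illustration -- the embedding values, threshold, and `search` helper are invented, and real systems use learned deep-network embeddings with far more dimensions:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(probe, gallery, threshold=0.9):
    """Return record IDs whose stored embedding is close enough to the probe."""
    return [record_id for record_id, emb in gallery.items()
            if cosine_similarity(probe, emb) >= threshold]

# Illustrative 3-dimensional embeddings keyed by licence record.
gallery = {
    "licence-001": [0.9, 0.1, 0.2],
    "licence-002": [0.1, 0.9, 0.3],
}
probe = [0.88, 0.12, 0.21]  # embedding extracted from, say, a CCTV still
print(search(probe, gallery))  # → ['licence-001']
```

The privacy concern in the story follows directly from this mechanic: once licence photos are enrolled as embeddings, every holder is effectively searchable against any probe image the police obtain.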

Security

Attack Discovered Against SSH (arstechnica.com) 66

jd writes: Ars Technica is reporting a newly discovered man-in-the-middle attack against SSH. This only works if you are using "ChaCha20-Poly1305" or "CBC with Encrypt-then-MAC", so it isn't a universal flaw. The CVE numbers for this vulnerability are CVE-2023-48795, CVE-2023-46445, and CVE-2023-46446.

From TFA:

At its core, Terrapin works by altering or corrupting information transmitted in the SSH data stream during the handshake -- the earliest stage of a connection, when the two parties negotiate the encryption parameters they will use to establish a secure connection. The attack targets the BPP, short for Binary Packet Protocol, which is designed to ensure that adversaries with an active position can't add or drop messages exchanged during the handshake. Terrapin relies on prefix truncation, a class of attack that removes specific messages at the very beginning of a data stream.

The Terrapin attack is a novel cryptographic attack targeting the integrity of the SSH protocol, the first-ever practical attack of its kind, and one of the very few attacks against SSH at all. The attack exploits weaknesses in the specification of SSH paired with widespread algorithms, namely ChaCha20-Poly1305 and CBC-EtM, to remove an arbitrary number of protected messages at the beginning of the secure channel, thus breaking integrity. In practice, the attack can be used to impede the negotiation of certain security-relevant protocol extensions. Moreover, Terrapin enables more advanced exploitation techniques when combined with particular implementation flaws, leading to a total loss of confidentiality and integrity in the worst case.
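The prefix-truncation idea can be illustrated with a toy model of SSH's implicit per-message sequence counters. This is a drastic simplification with no real cryptography -- the `Endpoint` class and message names are only for illustration of why injecting an ignored message lets the attacker drop a real one without desynchronizing the counters:

```python
# Toy model of SSH's implicit sequence numbers, sketching the idea behind
# Terrapin-style prefix truncation (not the real protocol or real crypto).

class Endpoint:
    def __init__(self):
        self.recv_seq = 0  # implicit counter: incremented for every message received
        self.log = []      # messages actually acted upon

    def receive(self, msg):
        self.recv_seq += 1
        if msg != "IGNORE":  # SSH_MSG_IGNORE-style messages are silently discarded
            self.log.append(msg)

handshake = ["KEXINIT", "EXT_INFO", "NEWKEYS"]

# Honest run: every handshake message is delivered.
honest = Endpoint()
for m in handshake:
    honest.receive(m)

# Attacked run: a man-in-the-middle injects an ignored message during the
# unprotected phase, then drops the security-relevant EXT_INFO message.
victim = Endpoint()
victim.receive("IGNORE")   # injected: bumps the counter by one
victim.receive("KEXINIT")
# "EXT_INFO" silently dropped by the attacker
victim.receive("NEWKEYS")

# The sequence counters agree, so an integrity check based only on them
# passes, yet the victim never saw the EXT_INFO extension negotiation.
print(honest.recv_seq == victim.recv_seq)  # True
print(victim.log)                          # ['KEXINIT', 'NEWKEYS']
```

This mirrors the article's description: the attack doesn't break the ciphers themselves, it exploits how the affected modes bind (or fail to bind) the handshake prefix, so deleting early messages goes unnoticed.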

China

China Issues Draft Contingency Plan for Data Security Incidents (reuters.com) 5

China on Friday proposed a four-tier classification to help it respond to data security incidents, highlighting Beijing's concern with large-scale data leaks and hacking within its borders. From a report: The plan, which is currently soliciting opinions from the public, proposes a four-tier, colour-coded system depending on the degree of harm inflicted upon national security, a company's online and information network, or the running of the economy.

According to the plan, incidents that involve losses surpassing 1 billion yuan ($141 million) and affect the personal information of over 100 million people, or the "sensitive" information of over 10 million people, will be classed as "especially grave," to which a red warning must be issued. The plan demands that in response to red and orange warnings, the involved companies and relevant local regulatory authorities must establish a 24-hour work rota to address the incident, and the Ministry of Industry and Information Technology (MIIT) must be notified of the data breach within ten minutes of the incident happening, among other measures.
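One reading of the top-tier thresholds can be sketched as a simple classifier. The function name, the and/or grouping of the criteria, and the catch-all lower tier are assumptions based only on the summary above -- the draft defines further colour-coded tiers not modeled here:

```python
def classify_incident(loss_yuan, personal_records, sensitive_records):
    """Hypothetical mapping of the draft's 'especially grave' (red) tier.

    Assumed reading: losses over 1 billion yuan AND either personal data of
    over 100 million people or 'sensitive' data of over 10 million people.
    """
    if loss_yuan > 1_000_000_000 and (
        personal_records > 100_000_000 or sensitive_records > 10_000_000
    ):
        return "red"  # especially grave: red warning must be issued
    return "lower tier"  # orange/yellow/blue tiers not modeled in this sketch

print(classify_incident(2_000_000_000, 150_000_000, 0))  # → red
print(classify_incident(500_000_000, 150_000_000, 0))    # → lower tier
```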
