The Courts

Supreme Court Rules Against Reexamining Section 230 (theverge.com) 58

Adi Robertson writes via The Verge: The Supreme Court has declined to consider reinterpreting foundational internet law Section 230, saying it wasn't necessary for deciding the terrorism-related case Gonzalez v. Google. The ruling came alongside a separate but related ruling in Twitter v. Taamneh, where the court concluded that Twitter had not aided and abetted terrorism. In an unsigned opinion (PDF) issued today, the court said the underlying complaints in Gonzalez were weak, regardless of Section 230's applicability. The case involved the family of a woman killed in a terrorist attack suing Google, which the family claimed had violated the law by recommending terrorist content on YouTube. They sought to hold Google liable under anti-terrorism laws.

The court dismissed the complaint largely because of its unanimous ruling (PDF) in Twitter v. Taamneh. Much like in Gonzalez, a family alleged that Twitter knowingly supported terrorists by failing to remove them from the platform before a deadly attack. In a ruling authored by Justice Clarence Thomas, however, the court declared that the claims were "insufficient to establish that these defendants aided and abetted ISIS" for the attack in question. Thomas held that Twitter's failure to police terrorist content did not satisfy the requirement of some "affirmative act" that involved meaningful participation in an illegal act. "If aiding-and-abetting liability were taken too far, then ordinary merchants could become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer," writes Thomas. That includes "those who merely deliver mail or transmit emails" becoming liable for the contents of those messages or even people witnessing a robbery becoming liable for the theft. "There are no allegations that defendants treated ISIS any differently from anyone else. Rather, defendants' relationship with ISIS and its supporters appears to have been the same as their relationship with their billion-plus other users: arm's length, passive, and largely indifferent."

For Gonzalez v. Google, "the allegations underlying their secondary-liability claims are materially identical to those at issue in Twitter," says the court. "Since we hold that the complaint in that case fails to state a claim for aiding and abetting ... it appears to follow that the complaint here likewise fails to state such a claim." Because of that, "we therefore decline to address the application of 230 to a complaint that appears to state little, if any, plausible claim for relief." [...] The Gonzalez ruling is short and declines to deal with many of the specifics of the case. But the Twitter ruling does take on a key question from Gonzalez: whether recommendation algorithms constitute actively encouraging certain types of content. Thomas appears skeptical: "To be sure, plaintiffs assert that defendants' 'recommendation' algorithms go beyond passive aid and constitute active, substantial assistance. We disagree. By plaintiffs' own telling, their claim is based on defendants' 'provision of the infrastructure which provides material support to ISIS.' Viewed properly, defendants' 'recommendation' algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS' content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants' passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS."
"The interpretation may deal a blow to one common argument for adding special liability to social media: the claim that recommendation systems go above and beyond simply hosting content and explicitly encourage that content," adds Robertson. "This ruling's reasoning suggests that simply recommending something on an 'agnostic' basis -- as opposed to, in one hypothetical from Thomas, creating a system that 'consciously and selectively chose to promote content provided by a particular terrorist group' -- isn't an active form of encouragement."
EU

'EU's Cyber Resilience Act Contains a Poison Pill for Open Source Developers' (theregister.com) 86

Veteran open source reporter Steven J. Vaughan-Nichols, writing at The Register: We can all agree that securing our software is a good thing. Thanks to one security fiasco after another -- the SolarWinds software supply chain attack, the perpetual Log4j vulnerability, and the npm maintainer protest code gone wrong -- we know we must secure our code. But the European Union's proposed Cyber Resilience Act (CRA) goes way, way too far in trying to regulate software security. At the top level, it looks good. Brussels states that before "products with digital elements" are allowed on the EU market, manufacturers must follow best practices in four areas. Secure the product over its whole life; follow a coherent cybersecurity framework; show cybersecurity transparency; and ensure customers can use products securely. Sounds great, doesn't it? But the road to hell is paved with good intentions. The devil, as always, is in the details. Some of this has nothing to do with open source software. Good luck creating any program in any way that a clueless user can't screw up.

But the EU commissioners don't have a clue about how open source software works. Or, frankly, what it is. They think that open source is the same as proprietary software with a single company behind it that's responsible for the work and then monetizes it. Nope. Open source, as I've said over and over again, is not a business model. Sure, you can build businesses around it. Who doesn't these days? But just as the AWSes, Googles, and Facebooks of the world depend on open source software, they also use programs written by Tom, Denise, and Harry from around the world. The CRA's underlying assumption is that you can just add security to software, like adding a new color option to your car's paint job. We wish!

Securing software is a long, painful process. Many open source developers have neither the revenue nor resources to secure their programs to a government standard. The notional open source developer in Nebraska, thanklessly maintaining a vital small program, may not even know where Brussels is (it's in Belgium). They can't afford to secure their software to meet EU specifications. They often have no revenue. They certainly have no control over who uses their software. It's open source, for pity's sake! As open source developer Thomas Depierre recently blogged: "We are not suppliers. All the people writing and maintaining these projects, we are not suppliers. We do not have a business relationship with all these organizations. We are volunteers, writing code and putting it online under these Licenses." Exactly.

Crime

Elizabeth Holmes Speaks (yahoo.com) 161

Elizabeth Holmes hasn't spoken to the media since 2016. Now convicted on criminal fraud charges — and counting down the days until she reports for prison — Holmes finally breaks the silence in a profile published today in the New York Times.

"I made so many mistakes," Holmes says, "and there was so much I didn't know and understand, and I feel like when you do it wrong, it's like you really internalize it in a deep way," Billy Evans, Ms. Holmes's partner and the father of their two young children, pushes a stroller with the couple's 20-month-old son, William... At one point, I tell her that I heard Jennifer Lawrence had pulled out of portraying her in a movie. She replied, almost reflectively, "They're not playing me. They're playing a character I created." So, why did she create that public persona? "I believed it would be how I would be good at business and taken seriously and not taken as a little girl or a girl who didn't have good technical ideas," said Ms. Holmes, who founded Theranos at 19. "Maybe people picked up on that not being authentic, since it wasn't..."

Her top lieutenant at Theranos, and much older boyfriend at the time, Ramesh Balwani, was found guilty of 10 counts of wire fraud and two counts of conspiracy to commit wire fraud at Theranos. He began a 13-year prison sentence last month. On Thursday, his legal team filed an appeal with the Ninth Circuit... She said Mr. Balwani did not control her every interaction or statement at Theranos, but she "deferred to him in the areas he oversaw because I believed he knew better than I did," and those areas included the problematic clinical lab... Ms. Holmes's story of how she got here — to the bright, cozy house and the supportive partner and the two babies — feels a lot like the story of someone who had finally broken out of a cult and been deprogrammed. After her relationship with Mr. Balwani ended and Theranos dissolved, Ms. Holmes said, "I began my life again."

But then I remember that Ms. Holmes was running the cult...

What does she think would have happened if she hadn't garnered so much early attention as the second coming of Silicon Valley? Ms. Holmes does not blink: "We would've seen through our vision." In other words, she thinks if she'd spent more time quietly working on her inventions and less time on a stage promoting the company, she would have revolutionized health care by now. This kind of misguided talk is the one consistent thread in my reporting on who Ms. Holmes really is. She repeatedly says that Theranos wasn't a get-rich-quick scheme for her; she never sold her shares and didn't come out of it wealthy. Ms. Holmes's parents said they borrowed $500,000 against their Washington, D.C.-area home to post Ms. Holmes's bond...

She maintains the idealistic delusion of a 19-year-old, never mind that she's 39 with a fraud conviction, telling me she is still working on health care-related inventions and would continue to do so behind bars. "I still dream about being able to contribute in that space," Ms. Holmes said. "I still feel the same calling to it as I always did and I still think the need is there." If your head is exploding at how divorced from reality this sounds, that's kind of the point. When Ms. Holmes uses the messianic vernacular of tech, I get the sense that she truly believes that she could have — and, in fact, she still could — change the world, and she doesn't much care if we believe her or not...

It's this steadfast (or unhinged?) belief that has kept Ms. Holmes fighting, even though a guilty plea would have likely helped her chances of remaining free.

AI

America's FTC Warns Businesses Not to Use AI to Harm Consumers (ftc.gov) 26

America's consumer-protecting federal agency has a division overseeing advertising practices. Its website includes a "business guidance" section with "advice on complying with FTC law," and this week one of the agency's attorneys warned that the FTC "is focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers."

The warning came in a blog post titled "The Luring Test: AI and the engineering of consumer trust." In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, resulting in the person being confined instead. The robot was designed to manipulate that person's emotions, and, oops, that's what it did. While the scenario is pure speculative fiction, companies are always looking for new ways — such as the use of generative AI tools — to better persuade people and change their behavior. When that conduct is commercial in nature, we're in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers...

As for the new wave of generative AI tools, firms are starting to use them in ways that can influence people's beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional. A tendency to trust the output of these tools also comes in part from "automation bias," whereby people may be unduly trusting of answers from machines which may seem neutral or impartial. It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when designed, say, to use personal pronouns and emojis. People could easily be led to think that they're conversing with something that understands them and is on their side.

Many commercial actors are interested in these generative AI tools and their built-in advantage of tapping into unearned human trust. Concern about their malicious use goes well beyond FTC jurisdiction. But a key FTC concern is firms using them in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment. Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases, and attempts to cancel services. Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals. Under the FTC Act, practices can be unlawful even if not all customers are harmed and even if those harmed don't comprise a class of people protected by anti-discrimination laws.

The FTC attorney also warns against paid placement within the output of a generative AI chatbot. ("Any generative AI output should distinguish clearly between what is organic and what is paid.") And in addition, "People should know if an AI product's response is steering them to a particular website, service provider, or product because of a commercial relationship. And, certainly, people should know if they're communicating with a real person or a machine..."

"Given these many concerns about the use of new AI tools, it's perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering. If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look. "

Thanks to Slashdot reader gluskabe for sharing the post.
AI

Google Has More Powerful AI, Says Engineer Fired Over Sentience Claims (futurism.com) 139

Remember that Google engineer/AI ethicist who was fired last summer after claiming the company's LaMDA LLM had become sentient?

In a new interview with Futurism, Blake Lemoine now says the "best way forward" for humankind's future relationship with AI is "understanding that we are dealing with intelligent artifacts. There's a chance that — and I believe it is the case — that they have feelings and they can suffer and they can experience joy, and humans should at least keep that in mind when interacting with them." (Although earlier in the interview, Lemoine concedes "Is there a chance that people, myself included, are projecting properties onto these systems that they don't have? Yes. But it's not the same kind of thing as someone who's talking to their doll.")

But he also thinks there's a lot of research happening inside corporations, adding that "The only thing that has changed from two years ago to now is that the fast movement is visible to the public." For example, Lemoine says Google almost released its AI-powered Bard chatbot last fall, but "in part because of some of the safety concerns I raised, they deleted it... I don't think they're being pushed around by OpenAI. I think that's just a media narrative. I think Google is going about doing things in what they believe is a safe and responsible manner, and OpenAI just happened to release something." "[Google] still has far more advanced technology that they haven't made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They've had that technology for over two years. What they've spent the intervening two years doing is working on the safety of it — making sure that it doesn't make things up too often, making sure that it doesn't have racial or gender biases, or political biases, things like that. That's what they spent those two years doing...

"And in those two years, it wasn't like they weren't inventing other things. There are plenty of other systems that give Google's AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal — not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it. That's the one that I was like, "you know this thing, this thing's awake." And they haven't let the public play with that one yet. But Bard is kind of a simplified version of that, so it still has a lot of the kind of liveliness of that model...

"[W]hat it comes down to is that we aren't spending enough time on transparency or model understandability. I'm of the opinion that we could be using the scientific investigative tools that psychology has come up with to understand human cognition, both to understand existing AI systems and to develop ones that are more easily controllable and understandable."

So how will AI and humans coexist? "Over the past year, I've been leaning more and more towards we're not ready for this, as people," Lemoine says toward the end of the interview. "We have not yet sufficiently answered questions about human rights — throwing nonhuman entities into the mix needlessly complicates things at this point in history."
NASA

NASA Seeks 'Citizen Scientists' to Listen to Space Noises (nasa.gov) 22

"Earth's magnetic environment is filled with a symphony of sound that we cannot hear," NASA wrote this month. When solar winds approach earth, "it causes the magnetic field lines and plasma around Earth to vibrate like the plucked strings of a harp, producing ultralow-frequency waves... a cacophonous operetta portraying the dramatic relationship between Earth and the Sun."

So NASA is now announcing "a new NASA-funded citizen science project called HARP — or Heliophysics Audified: Resonances in Plasmas" that has "turned those once-unheard waves into audible whistles, crunches, and whooshes..." Or, as the Washington Post puts it, "NASA wants your help listening in on the universe."

From NASA's news release: In 2007, NASA launched five satellites to fly through Earth's magnetic "harp" — its magnetosphere — as part of the THEMIS mission (Time History of Events and Macroscale Interactions during Substorms). Since then, THEMIS has been gathering a bounty of information about plasma waves across Earth's magnetosphere. "THEMIS can sample the whole harp," said Michael Hartinger, a heliophysicist at the Space Science Institute in Colorado. "And it's been out there a long time, so it has collected a lot of data."

The frequencies of the waves THEMIS measures are too low for our ears to hear, however. So the HARP team sped them up to convert them to sound waves. By using an interactive tool developed by the team, you can listen to these waves and pick out interesting features you hear in the sounds... Preliminary investigations with HARP have already started revealing unexpected features, such as what the team calls a "reverse harp" — frequencies changing in the opposite way than what scientists anticipated...
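The speed-up NASA describes is, at its core, time compression: samples recorded at a very low rate are replayed as if they were audio, which multiplies every frequency in the signal by the ratio of the two rates. A rough Python sketch of the idea; the sample rates and wave frequency below are illustrative values, not actual THEMIS parameters:

```python
import numpy as np

def sonify(samples, original_rate_hz, audio_rate_hz=44100):
    """Time-compress low-rate field measurements into audio.

    Re-interpreting samples captured at original_rate_hz as if they
    were audio at audio_rate_hz multiplies every frequency by
    audio_rate_hz / original_rate_hz, lifting ultralow-frequency
    waves into the audible band.
    """
    speedup = audio_rate_hz / original_rate_hz
    # Normalize to [-1, 1] so the result can be written out as audio.
    peak = np.max(np.abs(samples))
    audio = samples / peak if peak > 0 else samples
    return audio, speedup

# A 10 mHz plasma wave, sampled once per second for a day:
t = np.arange(86400)
wave = np.sin(2 * np.pi * 0.01 * t)
audio, speedup = sonify(wave, original_rate_hz=1.0)
# Replayed at 44.1 kHz, the 0.01 Hz wave becomes a 441 Hz tone,
# and the whole day of data lasts about two seconds.
```

The HARP tool layers an interactive spectrogram on top of this basic transformation so listeners can flag the features they hear.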

"Data sonification provides human beings with an opportunity to appreciate the naturally occurring music of the cosmos," said Robert Alexander, a HARP team member from Auralab Technologies in Michigan. "We're hearing sounds that are literally out of this world, and for me that's the next best thing to floating in a spacesuit."

To start exploring these sounds, visit the HARP website.

"Think listening to years' worth of wave patterns is a job for artificial intelligence? Think again," writes the Washington Post. In a news release, HARP team member Martin Archer of Imperial College London says humans are often better at listening than machines. "The human sense of hearing is an amazing tool," Archer says. "We're essentially trained from birth to recognize patterns and pick out different sound sources. We can innately do some pretty crazy analysis that outperforms even some of our most advanced computer algorithms."
Government

'Delete Act' Seeks To Give Californians More Power To Block Data Tracking (kqed.org) 62

On Tuesday, the Senate Judiciary Committee in Sacramento is expected to consider a new bill called "The Delete Act," or SB 362, which aims to give Californians the power to block data tracking. "The onus is on individuals to try to protect their data from an estimated 2,000-4,000 data brokers worldwide -- many of which have no other relationship with consumers beyond the trade in their data," reports KQED. "This lucrative trade is also known as surveillance advertising, or the 'ad tech' industry." From the report: EFF supports The Delete Act, or SB 362, by state Sen. Josh Becker, who represents the Peninsula. "I want to be able to hit that delete button and delete my personal information, delete the ability of these data brokers to collect and track me," said Becker, of his second attempt to pass such a bill. "These data brokers are out there analyzing, selling personal information. You know, this is a way to put a stop to it."

Tracy Rosenberg, a data privacy advocate with Media Alliance and Oakland Privacy, said she anticipates a lot of pushback from tech companies, because "making [the Delete Act] workable probably destroys their businesses as most of us, by now, don't really see the value in the aggregating and sale of our data on the open market by third parties... "It is a pretty basic-level philosophical battle about whether your personal information is, in fact, yours to share as you see appropriate and when it is personally beneficial to you, or whether it is property to be bought and sold," Rosenberg said.

AI

What Happens When You Put 25 ChatGPT-Backed Agents Into an RPG Town? (arstechnica.com) 52

"A group of researchers at Stanford University and Google have created a miniature RPG-style virtual world similar to The Sims," writes Ars Technica, "where 25 characters, controlled by ChatGPT and custom code, live out their lives independently with a high degree of realistic behavior." "Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day," write the researchers in their paper... To pull this off, the researchers relied heavily on a large language model for social interaction, specifically the ChatGPT API. In addition, they created an architecture that simulates minds with memories and experiences, then let the agents loose in the world to interact.... To study the group of AI agents, the researchers set up a virtual town called "Smallville," which includes houses, a cafe, a park, and a grocery store.... Interestingly, when the characters in the sandbox world encounter each other, they often speak to each other using natural language provided by ChatGPT. In this way, they exchange information and form memories about their daily lives.

When the researchers combined these basic ingredients together and ran the simulation, interesting things began to happen. In the paper, the researchers list three emergent behaviors resulting from the simulation. None of these were pre-programmed but rather resulted from the interactions between the agents. These included "information diffusion" (agents telling each other information and having it spread socially among the town), "relationship memory" (memory of past interactions between agents and mentioning those earlier events later), and "coordination" (planning and attending a Valentine's Day party together with other agents).... "Starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party," the researchers write, "the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time...."
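The memory-and-retrieval architecture Ars describes can be sketched in a few lines. This is an illustrative simplification, not the researchers' code: the paper's retrieval score also weighs embedding relevance to the current situation, and importance is scored by the language model rather than hand-assigned as it is here.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float  # 1-10, assigned once when the memory is stored
    time: int          # simulation timestep when it was formed

@dataclass
class Agent:
    name: str
    memories: list = field(default_factory=list)

    def remember(self, text, importance, time):
        self.memories.append(Memory(text, importance, time))

    def retrieve(self, now, k=3):
        # Rank memories by recency (exponential decay) plus importance.
        # The retrieved memories would then be pasted into the LLM
        # prompt that decides the agent's next action or utterance.
        def score(m):
            recency = 0.99 ** (now - m.time)
            return recency + m.importance / 10
        return sorted(self.memories, key=score, reverse=True)[:k]

alice = Agent("Alice")
alice.remember("Planning a Valentine's Day party", importance=8, time=1)
alice.remember("Ate breakfast", importance=1, time=5)
top = alice.retrieve(now=6, k=1)
# The important party plan outranks the more recent but trivial breakfast.
```

Behaviors like the party invitations spreading through Smallville fall out of this loop: once "Alice invited me to a party" lands in an agent's memory stream, it keeps resurfacing in prompts and shaping that agent's plans.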

To get a look at Smallville, the researchers have posted an interactive demo online through a special website, but it's a "pre-computed replay of a simulation" described in the paper and not a real-time simulation. Still, it gives a good illustration of the richness of social interactions that can emerge from an apparently simple virtual world running in a computer sandbox.

Interestingly, the researchers hired human evaluators to gauge how believable the AI agents' responses were, and found the agents more believable than when humans authored responses for the same characters.

Thanks to long-time Slashdot reader Baron_Yam for sharing the article.
Red Hat Software

Biggest Linux Company of Them All Still Pushing To Become Cloud Power (theregister.com) 23

An anonymous reader shares a report: For Red Hat, which turned 30 on March 27, it was a cause for celebration. From a business that got started in the sewing room of a co-founder's wife, it became the first billion-dollar pure-play open-source company and then the engine driving IBM. It has been a long strange trip. Sure, today, the tech world is dominated by Linux and open source software, but in 1993, Linux was merely an obscure operating system known only to enthusiasts. Red Hat played a significant role in transforming the "just a hobby" operating system into today's major IT powerhouse. Red Hat co-founder Bob Young, who previously ran a rental typewriter business, was one of those who became intrigued by Linux. In 1993, he established ACC Corporation, a catalog company that distributed Slackware Linux CDs and open-source software.

[...] In 2003, Paul Cormier, then Red Hat's vice president of engineering and now the company's chairman, spearheaded the shift from the inexpensive prosumer Red Hat Linux distribution to the full business-oriented Red Hat Enterprise Linux (RHEL). At the time, many Linux users hated the idea. Even inside Red Hat, Cormier said that many engineers were initially opposed to the new business model, causing some to leave the company while others stayed. The change also upset many users who felt Red Hat was abandoning its original customers. However, enterprise clients had a different perspective. Jim Whitehurst, who became Red Hat CEO in 2008, said, "Once RHEL was in the market, we had to fully support it to make it truly consumable for the enterprise." They succeeded, and Red Hat continued to grow. This is the model that turned Red Hat into the first billion-dollar-a-quarter pure open-source company. Impressive for a business built around an operating system once considered suitable only for the "lunatic fringe." Then, in 2018, IBM acquired Red Hat for a cool $34 billion. There was nothing crazy about that move.

[...] Another change that was already present in Red Hat, a shift towards supporting the cloud, has accelerated. Today, while RHEL remains the heart of the business, the Linux-powered cloud has become increasingly important. In particular, Red Hat OpenShift, its Kubernetes-powered hybrid cloud application platform, is more important than ever. Where does Red Hat go from here? When I last talked to Cormier and Red Hat's latest CEO, Matt Hicks, they told me that they'd keep moving forward with the hybrid cloud. After all, as Cormier pointed out, "the cloud wouldn't be here" without Linux and open source. As for Red Hat's relationship with IBM, Cormier said, "The red lines were red, and the blue lines were blue, and that will stay the same."

IT

The Problem With Weather Apps (theatlantic.com) 57

An anonymous reader shares a report: Weather apps are not all the same. There are tens of thousands of them, from the simply designed Apple Weather to the expensive, complex, data-rich Windy.App. But all of these forecasts are working off of similar data, which are pulled from places such as the National Oceanic and Atmospheric Administration (NOAA) and the European Centre for Medium-Range Weather Forecasts. Traditional meteorologists interpret these models based on their training as well as their gut instinct and past regional weather patterns, and different weather apps and services tend to use their own secret sauce of algorithms to divine their predictions. On an average day, you're probably going to see a similar forecast from app to app and on television. But when it comes to how people feel about weather apps, these edge cases -- which usually take place during severe weather events -- are what stick in a person's mind. "Eighty percent of the year, a weather app is going to work fine," Matt Lanza, a forecaster who runs Houston's Space City Weather, told me. "But it's that 20 percent where people get burned that's a problem."

No people on the planet have a more tortured and conflicted relationship with weather apps than those who interpret forecasting models for a living. "My wife is married to a meteorologist, and she will straight up question me if her favorite weather app says something different than my forecast," Lanza told me. "That's how ingrained these services have become in most people's lives." The basic issue with weather apps, he argues, is that many of them remove a crucial component of a good, reliable forecast: a human interpreter who can relay caveats about models or offer a range of outcomes instead of a definitive forecast. [...] What people seem to be looking for in a weather app is something they can justify blindly trusting and letting into their lives -- after all, it's often the first thing you check when you roll over in bed in the morning. According to the 56,400 ratings of Carrot in Apple's App Store, its die-hard fans find the app entertaining and even endearing. "Love my psychotic, yet surprisingly accurate weather app," one five-star review reads. Although many people need reliable forecasting, true loyalty comes from a weather app that makes people feel good when they open it.

Businesses

The Biggest EV Battery Recycling Plant In the US Is Open For Business (canarymedia.com) 62

Ascend Elements opened a recycling plant in Covington, Georgia in late March that it says is the largest electric-vehicle battery recycling facility in North America. "It can process 30,000 metric tons of input each year, breaking down old batteries and prepping the most valuable materials inside to be processed and turned into new batteries," reports Canary Media. "That capacity equates to breaking down the battery packs from 70,000 electric vehicles annually, said Ascend CEO Mike O'Kronley." From the report: Recycling can deliver new battery materials without the expense and environmental impact of new mining. It is extremely hard to develop new mines in the U.S., but the federal government is lavishing funds on new battery recycling plants. The revamped EV tax credits also call for increasing shares of domestically sourced batteries and battery materials. Those market and policy shifts made recycling sufficiently desirable that Ascend is paying other companies for their old batteries. At the moment, those deals are mostly with EV or battery makers that have high volumes to get rid of.

"Paying for these spent batteries keeps them from going into the landfill," O'Kronley told Canary Media. "It's better to get paid for it rather than throw them away." Ascend also accepts used consumer electronics from battery-collection programs, such as Call2Recycle. That's not to say there are enough old batteries coming in to fill the factory. Currently, 80 to 90 percent of what's going into Ascend's Covington facility is scrap materials from battery factories, including SK Battery America's plant in Commerce, Georgia.

That relationship influenced Ascend's choice of location: Covington sits in the emerging "Battery Belt," a swath of new battery factories and electric-vehicle plants opening up across the Midwest and the Carolinas, Georgia, Tennessee and Kentucky (look for all the blue icons in this White House map of new industrial investments). Fellow battery-recycling startup Redwood Materials also chose South Carolina for a forthcoming $3.5 billion recycling facility. "There will need to be a recycling plant within about an hour's drive of every single one of those [new battery gigafactories]," O'Kronley said. "You don't want to be [long-distance] shipping these very large, heavy EV batteries that are classified as Class 9 hazardous materials."

The report notes that the company's second commercial-scale facility in Hopkinsville, Kentucky will "introduce a brand-new technique for efficiently extracting cathode materials from black mass, which Ascend has dubbed 'hydro to cathode.'"

Technology

Russia Supplies Iran With Cyber Weapons as Military Cooperation Grows (wsj.com) 50

Russia is helping Iran gain advanced digital-surveillance capabilities as Tehran seeks deeper cooperation on cyberwarfare, WSJ reported Tuesday, citing people familiar with the matter, adding another layer to a burgeoning military alliance that the U.S. sees as a threat. From the report: The potential for cyberwarfare collaboration comes after Iran has, according to U.S. and Iranian officials, sold Russia drones for use in Ukraine, agreed to provide short-range missiles to Moscow and shipped tank and artillery rounds to the battlefield. Tehran is seeking the cyber help along with what U.S. and Iranian officials have said are requests for dozens of elite Russian attack helicopters and jet fighters and aid with its long-range missile program.

Russia and Iran both have sophisticated cyber capabilities and have long collaborated with each other, signing a cyber-cooperation agreement two years ago that analysts said focused mostly on cyber-defense networks. Moscow has long resisted sharing offensive digital capabilities with Iran, for fear they would later end up for sale on the dark web, the people said. Since the start of the war in Ukraine, Russia has provided Iran with communication-surveillance capabilities as well as eavesdropping devices, advanced photography devices and lie detectors, people familiar with the matter said. Moscow has likely already shared with Iran more advanced software that would allow it to hack the phones and systems of dissidents and adversaries, the people said. Russian authorities have determined that the benefits of advancing the military relationship with Iran outweigh any downsides, the people said.

AI

Google Partners With AI Startup Replit To Take on Microsoft's GitHub (bloomberg.com) 15

Alphabet's Google is striking a partnership to combine its artificial intelligence language models with software from startup Replit that helps computer programmers write code, a bid to compete with a similar product from Microsoft's GitHub and OpenAI. From a report: Replit's Ghostwriter, which has 20 million users, will rely on Google's language-generation AI to improve its ability to suggest blocks of code, complete programs and answer developer questions. Google Cloud Vice President June Yang declined to specify which language AI products Replit will use, noting that it's a customized combination of systems that address different tasks like chat and code-generation. Previously, Replit built the product with its own AI. Google "has much better technology than most people know," Replit Chief Executive Officer Amjad Masad said in an interview. The startup will also expand its use of Google's cloud services and hopes the relationship with the tech giant will help it win over larger corporate customers -- right now Replit's clients are largely individual developers and startups. Google also will distribute Replit's software as part of the partnership. GitHub, which is wholly owned by Microsoft, last year released a product called Copilot, which suggests blocks of code as a software developer types, speeding up the process and automating rote or finicky coding tasks.

Government

Instead of Banning TikTok, Should We Regulate It Aggressively? (msnbc.com) 88

"TikTok CEO Shou Zi Chew testified before the House Energy and Commerce Committee Thursday about safety and national security concerns surrounding his social media behemoth," writes MSNBC, adding "He was not well received." Given what we know about how Big Tech abuses data, about how China's authoritarian government systematically embraces surveillance as a tool of social control, and about the increasingly adversarial geopolitical relationship between the U.S. and China, it's not sinophobic to ask questions about how to guard against TikTok's misuse. It's common sense. While a ban is probably too drastic and may fail to solve all the issues at hand, regulating the company is sensible. Fortunately, one of the key ways to address some of the concerns posed by TikTok — restricting all companies' capacity to collect data on Americans — could help us solve problems with online life that extend well beyond this social media platform....

[Evan Greer, the director at Fight for the Future, a digital rights organization], believes members of Congress laser-focused on TikTok are "on a sidequest" in the scheme of a bigger crisis of surveillance of online life; Greer points to the American Data Privacy and Protection Act as a potential solution. That law would put in place strong data minimization policies, strictly limiting how and how much data companies can collect on people online. It also would deal a huge blow to the power of the algorithms of TikTok and other social media apps because their content recommendation relies on collecting huge amounts of data about their users. The passage of that act would force any company operating in the U.S., not just TikTok, to collect far less data — and reduce all social media companies' capacities to shape the flow of information through algorithmic amplification.
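In practice, data minimization usually means an explicit allow-list: a service keeps only the fields it strictly needs and discards everything else at the point of collection. A minimal sketch, with entirely hypothetical field names:

```python
# Minimal sketch of data minimization: retain only an explicit
# allow-list of fields needed for the service, drop the rest before
# storage. Field names here are hypothetical, not any real schema.
ALLOWED_FIELDS = {"user_id", "display_name"}

def minimize(record: dict) -> dict:
    """Drop every field not on the allow-list."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "user_id": "u123",
    "display_name": "alice",
    "precise_location": (40.7128, -74.0060),  # not needed -> dropped
    "contacts": ["bob", "carol"],             # not needed -> dropped
}
print(minimize(raw))
```

Recommendation algorithms of the kind Greer describes depend on exactly the kinds of fields an allow-list like this would strip, which is why minimization rules would blunt their power across all platforms, not just TikTok.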

In addition to privacy legislation, the Federal Trade Commission could play a more aggressive role in creating and enforcing rules around commercial surveillance, Greer pointed out. TikTok raises legitimately tricky questions about national security. But it's not the only social media company that does, and national security concerns aren't the only reason to rethink the freedom we've given to social media companies in our society. Any time a powerful actor has vast control over the flow of information, it should be scrutinized as a possible source of exploitation, censorship and manipulation — and, when appropriate, regulated. TikTok should serve as the springboard for that conversation, not the beginning and ending of it.

CNN points out that TikTok isn't the only Chinese-owned platform finding viral success in America. "Of the top 10 most popular free apps on Apple's U.S. app store, four were developed with Chinese technology." Besides TikTok, there's also shopping app Temu, fast fashion retailer Shein and video editing app CapCut, which is also owned by ByteDance.

Duncan Clark, chairman and founder of investment advisory BDA China, tells CNN that these apps could be next.

But writing in the New York Times, the executive director of the Knight First Amendment Institute at Columbia argues that "it's difficult to see how a ban could survive First Amendment review." The Supreme Court and lower courts have held repeatedly that the mere invocation of national security is insufficient to justify the suppression of First Amendment rights. In court, the government will have to introduce evidence that the threats it is addressing are real, not merely conjectural, and that the proposed ban would address those threats. The evidence assembled so far is not likely to be sufficient. All of this will no doubt be frustrating to some policymakers, including to some who are commendably focused on the very real risks that social media companies' practices pose to Americans' privacy and security. But the legitimacy of our democracy depends on the free trade of information and ideas, including across international borders.

Education

Should Schools Make CS/Cybersecurity a High School Graduation Requirement? 128

Long-time Slashdot reader theodp notes Microsoft's friendly relationship with North Dakota, pointing out that in 2017 Microsoft's president Brad Smith said the company would provide the state "cash grants, technology, curriculum and resources to nonprofits" and also "partner with schools to strengthen their ability to offer digital skills and computer science education to the youth they serve." "We just have such a good relationship with the community. We were also excited about Doug Burgum's election as governor. We had confidence that Doug, as governor, would bring a real focus on innovation that would focus on both changes in government and changes in technology." Before being elected Governor in 2016 (with the endorsement of Microsoft CEO Satya Nadella and financial backing from Bill Gates), former Microsoft exec Burgum sold his Fargo-based Great Plains Software business to Microsoft in 2002 for $1.1 billion and joined the software giant, where he reported directly to Steve Ballmer (a college friend) and managed Nadella (who became chief of Microsoft Business Solutions after Burgum's 2007 departure).

"We need a national movement for coding and computer science in our public schools [...] We need to influence, we need to support, we need to reform public policy as we're seeing here in North Dakota," Microsoft's Smith exhorted TEDxFargo attendees in his return to North Dakota. "We need to make sure that computer science counts towards high school graduation." Mission accomplished. On Friday, North Dakota's governor Doug Burgum and School Superintendent Kirsten Baesler celebrated the governor's signing of HB1398, the Microsoft-supported bill which requires the teaching of computer science and cybersecurity and the integration of these content standards into school coursework from kindergarten through 12th grade. (Two of the ten members of North Dakota's K-12 CS and Cybersecurity Standards Review Committee were from Microsoft).

The superintendent said North Dakota is the first state in the nation to approve legislation requiring cybersecurity education. "Today is the culmination of years of work by stakeholders from all sectors to recognize and promote the importance of cybersecurity and computer science education in our elementary, middle and high schools," superintendent Baesler said at Friday's bill signing ceremony.

Baesler said EduTech, a division of bill supporter North Dakota Information Technology that provides IT support and professional development for K-12 educators, will be developing examples of cybersecurity and computer science education integration plans that may be used to help local schools develop their own plans. EduTech is a Regional Partner of tech-backed nonprofit Code.org, which also voiced its support for HB1398. Code.org's Board of Directors includes Microsoft President Brad Smith and CTO Kevin Scott.

Burgum, who joined Code.org's Governors Partnership for K-12 Computer Science in 2017, was also among 45 of the nation's State Governors who last July signed a Compact To Expand K-12 Computer Science Education in their states in response to a public letter from the CEOs for CS (including Microsoft's Nadella and Smith), part of a campaign organized by Code.org that called for state governments and education leaders to bring more CS to K-12 students to meet the future demands of the American workforce. Code.org has set a goal to make CS a high school graduation requirement for every student in all 50 states by the end of the decade.

AI

Panera Bread Begins Scanning Its Customers' Palms (cbsnews.com) 123

Slashdot reader quonset writes: In an effort to more personalize a customer's experience, the U.S. restaurant chain Panera Bread is rolling out palm-scanning technology which will link the palm print with the customer's loyalty program. According to Panera Bread CEO Niren Chaudhary, the move will allow a "frictionless, personalized, and convenient" evolution of Panera's loyalty program, which boasts 52 million members. The claim is this will allow the company to offer menu choices based on a customer's order history, allow staff to personally greet the customer, and offer further suggestions.

Privacy advocates are not so sure. From the story:

Panera says the technology will securely store its customers' biometric data. However, digital rights activists worry that information could be tapped by federal agencies or accessed by hackers.

"Federal agencies like Customs and Border Protection have experienced devastating hacks where large databases of biometric information have been stolen," Fight for the Future told CBS MoneyWatch in an email. "Do we really expect Amazon, or Panera, to have better cybersecurity practices?"

The scanners are already installed at locations in St. Louis, Panera announced Wednesday, and scanners will "expand to additional locations in the coming months." (Panera has 2,113 locations in 48 states.) "After a simple scan of the palm, Panera associates will be able to greet guests by name, communicate their available rewards, reorder their favorite menu items, or take another order of their choice," the announcement gushes, "extending the guest experience into a true and meaningful relationship.

"When they are done ordering, guests can simply scan their palm again to pay."

It's funny.  Laugh.

Startup Invents Long-Distance Kissing Machine (theguardian.com) 50

A Chinese startup has invented a long-distance kissing machine that transmits users' kiss data collected through motion sensors hidden in silicone lips, which simultaneously move when replaying kisses received. From a report: MUA -- named after the sound people commonly make when blowing a kiss -- also captures and replays sound and warms up slightly during kissing, making the experience more authentic, said Beijing-based Siweifushe. Users can even download kissing data submitted via an accompanying app by other users. The invention was inspired by lockdown isolation. At their most severe, China's lockdowns saw authorities forbid residents to leave their apartments for months on end. "I was in a relationship back then, but I couldn't meet my girlfriend due to lockdowns," said inventor Zhao Jianbo.

Then a student at the Beijing Film Academy, he focused his graduate project on the lack of physical intimacy in video calls. He later set up Siweifushe which released MUA, its first product, on 22 January. The device is priced at 260 yuan ($38). In the two weeks after its release, the firm sold over 3,000 kissing machines and received about 20,000 orders, he said. The MUA resembles a mobile stand with colourless pursed lips protruding from the front. To use it, lovers must download an app on to their smartphones and pair their kissing machines. When they kiss the device, it kisses back.

United States

US Strengthens Tech Ties With India But Doesn't Seek Decoupling From China, Raimondo Says (techcrunch.com) 26

The U.S. government is not seeking to "decouple" from China, nor is it seeking "technological decoupling," but Washington "would like to see India achieve its aspirations to play a larger role in the electronics supply chain," U.S. Commerce Secretary Gina Raimondo said on Friday. From a report: On its part, the U.S. signed a memorandum of understanding with India on Friday to cooperate in the semiconductor sector. The semiconductor industries in both nations are beginning to assess the resiliency and gaps in the supply chain network, said Raimondo, whose department is overseeing the pouring of about $52 billion into the U.S. semiconductor industry. [...] But even as India and the U.S. tighten their tech ties, Washington is not looking to cut reliance on China, she insisted. "We see India as a trusted technology partner and we want to continue to deepen our technological relationship with India. But I also want to make it clear that the United States doesn't seek to decouple from China."

Security

Data Breach Hits 'Hundreds' of Lawmakers And Staff On Capitol Hill (nbcnews.com) 24

A top House official said that a "significant data breach" at the health insurance marketplace for Washington, D.C., on Tuesday potentially exposed personally identifiable information of hundreds of lawmakers and staff. NBC News reports: In a letter obtained by NBC News, Chief Administrative Officer Catherine L. Szpindor said Wednesday that the U.S. Capitol Police and the FBI had alerted her to a data breach at DC Health Link, the Affordable Care Act online marketplace that administers health care plans for members of Congress and certain Capitol Hill staff. "Currently, I do not know the size and scope of the breach, but have been informed by the Federal Bureau of Investigation (FBI) that account information and [personally identifiable information] of hundreds of Member and House staff were stolen," Szpindor said. "I expect to have access to the list of impacted enrollees later today and will notify you directly if your information was compromised." Szpindor added that it did not appear that House lawmakers were "the specific target of the attack" on DC Health Link.

Out of an "abundance of caution," Szpindor said, lawmakers may opt to freeze family credit at three major credit bureaus, Equifax, Experian and TransUnion. The data breach has also affected Senate offices, according to an email sent to Senate offices Wednesday afternoon that said the Senate Sergeant at Arms was informed by law enforcement about a data breach. The notice said that the "data included the full names, date of enrollment, relationship (self, spouse, child), and email address, but no other Personally Identifiable Information (PII)."

Privacy

AllTrails Data Exposes Precise Movements of Former Top Biden Official (vice.com) 47

An anonymous reader quotes a report from Motherboard: A security researcher appears to have tracked the physical location of a former top Biden administration official through his apparent usage of AllTrails, a popular hiking app with more than 30 million registered users. The AllTrails records appear to show the official visiting sensitive locations such as the White House, and also suggest the specific house where he or his family lives. By default, AllTrails users' activity is public for anyone to view, including completed trails, maps, and activities. But that convenience and focus on providing a social-network-style experience come with potential risks around national security or privacy, depending on the particular user. Whether for a public figure such as a government official or celebrity, or for someone at risk of stalking, such as a person in an abusive relationship, AllTrails' privacy settings may be something users should review.

"I found interesting results by searching near the Pentagon, NSA, CIA or White House and then looking at the user's other activity," Wojciech, the security researcher, told Motherboard in an email. Wojciech said they used their own open source intelligence platform as part of the investigative process. They said the tool supports Strava and another app called SportsTracker, and will include AllTrails itself soon. Wojciech sent Motherboard a link to what they believed to be the AllTrails profile of the former top Biden official. Motherboard is not naming the official because they did not respond to requests for comment, and their profile is still publicly accessible.

One trip to the White House in December recorded in AllTrails also shows the nearby apartment building where he ended his journey. More trips recorded that month show the official's other movements throughout Washington D.C. Much of the AllTrails activity relates to when this official was part of the administration. Motherboard searched through the official's AllTrails activity and found multiple hikes starting from the same location. Motherboard then queried public records and found this location was a house registered to the official's family, meaning AllTrails had helped identify where the official or his family may have been living. Motherboard also verified that the official does have an account on AllTrails by attempting to sign up for the service with the official's personal email address. This was not possible because the address was already registered to an account.
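The inference described above — repeated public track start points betraying a likely home location — can be sketched as a simple frequency count over coarsened coordinates. The coordinates below are invented for illustration, and real OSINT tooling would be more sophisticated, but the core idea is this small:

```python
from collections import Counter

# Sketch of the inference described in the story: if a user's public
# tracks repeatedly start at the same spot, that spot is likely near
# their home. All coordinates here are invented.
track_starts = [
    (38.9101, -77.0363),  # repeated start point
    (38.9102, -77.0362),
    (38.9100, -77.0361),
    (38.8977, -77.0366),  # one-off start elsewhere
]

def likely_home_cell(starts, precision=3):
    """Round coordinates to a coarse grid cell and return the most
    frequent cell among track start points."""
    cells = Counter((round(lat, precision), round(lon, precision))
                    for lat, lon in starts)
    return cells.most_common(1)[0][0]

print(likely_home_cell(track_starts))  # most frequent start cell
```

This is why defaulting activity to public is risky: no single track is revealing, but the aggregate of many tracks is.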
