AI

Senate Introduces Bill To Set Up Legal Framework For Ethical AI Development (techspot.com) 48

Last week, the U.S. Senate introduced a new bill to outlaw the unethical use of AI-generated content and deepfake technology. Called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), the bill would "set new federal transparency guidelines for marking, authenticating and detecting AI-generated content, protect journalists, actors and artists against AI-driven theft, and hold violators accountable for abuses." TechSpot reports: Proposed and sponsored by Democrats Maria Cantwell of Washington and Martin Heinrich of New Mexico, along with Republican Marsha Blackburn of Tennessee, the bill aims to establish enforceable transparency standards in AI development [such as through watermarking]. The legislation also wants to curb unauthorized data use in training models. The senators intend to task the National Institute of Standards and Technology with developing sensible transparency guidelines should the bill pass. [...] The senators feel that clarifying and defining what is okay and what is not regarding AI development is vital in protecting citizens, artists, and public figures from the harm that misuse of the technology could cause, particularly in creating deepfakes. The text of the bill can be read here.
The Courts

Oregon County Seeks To Hold Fossil Fuel Companies Accountable For Extreme Heat 220

An anonymous reader quotes a report from Ars Technica: Northwest Oregon had never seen anything like it. Over the course of three days in June 2021, Multnomah County -- the state's most populous county, which rests in the swayback along Oregon's northern border -- recorded highs of 108, 112, and 116 degrees Fahrenheit. Temperatures were so hot that the metal on cable cars melted and the asphalt on roadways buckled. Nearly half the homes in the county lacked cooling systems because of Oregon's typically gentle summers, where average highs top out at 81 degrees. Sixty-nine people perished from heat stroke, most of them in their homes. When scientific studies showed that the extreme temperatures were caused by heat domes, which experts say are influenced by climate change, county officials didn't just chalk it up to a random weather occurrence. They started researching the large fossil fuel companies whose emissions are driving the climate crisis -- including ExxonMobil, Shell, and Chevron -- and sued them (PDF).

"This catastrophe was not caused by an act of God," said Jeffrey B. Simon, a lawyer for the county, "but rather by several of the world's largest energy companies playing God with the lives of innocent and vulnerable people by selling as much oil and gas as they could." Now, 11 months after the suit was filed, Multnomah County is preparing to move forward with the case in Oregon state court after a federal judge in June settled (PDF) a monthslong debate over where the suit should be heard. About three dozen lawsuits have been filed by states, counties, and cities seeking damages from oil and gas companies for harms caused by climate change. Legal experts said the Oregon case is one of the first focused on public health costs related to high temperatures during a specific occurrence of the "heat dome effect." Most of the other lawsuits seek damages more generally from such ongoing climate-related impacts as sea level rise, increased precipitation, intensifying extreme weather events, and flooding. [...]

The Multnomah County lawsuit says that Exxon, Shell, Chevron, and others engaged in a range of improper practices, including negligence, creating a public nuisance, fraud, and deceit. The suit alleges that the companies were aware of the harms of fossil fuels and engaged in a "scheme to rapaciously sell fossil fuel products and deceptively promote them as harmless to the environment, while they knew that carbon pollution emitted by their products into the atmosphere would likely cause deadly extreme heat events like that which devastated Multnomah County." "We know that climate-induced weather events like the 2021 Heat Dome harm the residents of Multnomah County and cause real financial costs to our local government," Multnomah County Chair Jessica Vega Pederson said in a statement. "The Court's decision to hear this lawsuit in State Court validates our assertion that the case should be resolved here -- it's an important win for this community."
In the suit, officials in Portland's Multnomah County said that they will ultimately incur costs in excess of $1.5 billion to deal with the effects of the 2021 heat dome.

"We allege that this is just like any other kind of public health crisis and mass destruction of property that is caused by corporate wrongdoing," said Simon, partner in the law firm of Simon Greenstone Panatier. "We contend that these companies polluted the atmosphere with carbon from the burning of fossil fuels; that they foresaw that extreme environmental harm would be caused by it; that some of them, we contend, deliberately misled the public about that."
Earth

Sharp Rise in Number of Climate Lawsuits Against Companies, Report Says (theguardian.com) 44

The number of climate lawsuits filed against companies around the world is rising swiftly, a report has found, and a majority of cases that have concluded have been successful. From a report: About 230 climate-aligned lawsuits have been filed against corporations and trade associations since 2015, two-thirds of which have been initiated since 2020, according to the analysis published on Thursday by the Grantham Research Institute on Climate Change and the Environment. One of the most rapidly growing forms of litigation is over "climate-washing" -- when companies are accused of misrepresenting their progress towards environmental targets -- and the analysis found that 47 such cases were filed against companies and governments in 2023.

As climate communications are increasingly scrutinised, there has been a rise in climate-washing litigation, often with positive outcomes for those bringing the cases. Of the 140 climate-washing cases reviewed between 2016 and 2023, 77 have officially concluded, 54 of which ended with a ruling in favour of the claimant. More than 30 cases in 2023 concerned the "polluter pays" principle, whereby companies are held accountable for climate damage caused by high greenhouse gas emissions. The authors also highlighted six "turning off the taps" cases, which challenge the flow of finance to areas which hinder climate goals.

AI

AI Researcher Warns Data Science Could Face a Reproducibility Crisis (beabytes.com) 56

Long-time Slashdot reader theodp shared this warning from a long-time AI researcher arguing that data science "is due" for a reckoning over whether results can be reproduced. "Few technological revolutions came with such a low barrier of entry as Machine Learning..." Unlike Machine Learning, Data Science is not an academic discipline, with its own set of algorithms and methods... There is an immense diversity, but also disparities in skill, expertise, and knowledge among Data Scientists... In practice, depending on their backgrounds, data scientists may have large knowledge gaps in computer science, software engineering, theory of computation, and even statistics in the context of machine learning, despite those topics being fundamental to any ML project. But it's ok, because you can just call the API, and Python is easy to learn. Right...?

Building products using Machine Learning and data is still difficult. The tooling infrastructure is still very immature and the non-standard combination of data and software creates unforeseen challenges for engineering teams. But in my view, a lot of the failures come from this explosive cocktail of ritualistic Machine Learning:

- Weak software engineering knowledge and practices compounded by the tools themselves;
- Knowledge gaps in mathematical, statistical, and computational methods, encouraged by black-box APIs;
- Ill-defined range of competence for the role of data scientist, reinforced by a pool of candidates with an unusually wide range of backgrounds;
- A tendency to follow the hype rather than the science.


What can you do?

- Hold your data scientists accountable using Science.
- At a minimum, any AI/ML project should include an Exploratory Data Analysis, whose results directly support the design choices for feature engineering and model selection.
- Data scientists should be encouraged to think outside of the box of ML, which is a very small box.
- Data scientists should be trained to use eXplainable AI methods to provide context about the algorithm's performance beyond traditional performance metrics like accuracy, FPR, or FNR.
- Data scientists should be held to the same standards as other software engineering specialties, with code review, code documentation, and architectural designs.

The article concludes, "Until such practices are established as the norm, I'll remain skeptical of Data Science."
AI

OpenAI Employees Want Protections To Speak Out on 'Serious Risks' of AI (bloomberg.com) 36

A group of current and former employees from OpenAI and Google DeepMind are calling for protection from retaliation for sharing concerns about the "serious risks" of the technologies these and other companies are building. From a report: "So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public," according to a public letter, which was signed by 13 people who've worked at the companies, seven of whom included their names. "Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues."

In recent weeks, OpenAI has faced controversy about its approach to safeguarding artificial intelligence after dissolving one of its most high-profile safety teams and being hit by a series of staff departures. OpenAI employees have also raised concerns that staffers were asked to sign nondisparagement agreements tied to their shares in the company, potentially causing them to lose out on lucrative equity deals if they speak out against the AI startup. After some pushback, OpenAI said it would release past employees from the agreements.

The Courts

Arizona Accuses Amazon of Unfair, Deceptive Business Practices (courthousenews.com) 12

Arizona Attorney General Kris Mayes filed two lawsuits Wednesday against the international online retail giant Amazon.com, accusing it of deceptive and unfair business practices. Courthouse News Service: The two lawsuits, filed in state court, say Amazon's Prime cancellation process and the algorithm that decides whether a product is offered through a "buy now" or "add to cart" option violate the Arizona Consumer Fraud Act and the Arizona Uniform State Antitrust Act. Mayes, a Democrat, accuses Amazon of artificially inflating prices and boxing out third-party retailers that rely on the site for business. "Amazon must be held accountable for these violations of our state laws," Mayes said in a statement. "No matter how big and powerful, all businesses must play by the same rules and follow the same laws as everyone else."
Government

Has Section 230 'Outlived Its Usefulness'? (thehill.com) 278

In an op-ed for The Wall Street Journal, Representatives Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr (D-N.J.) made their case for why Section 230 of the 1996 Communications Decency Act has "outlived its usefulness." Section 230 of the Communications Decency Act protects online platforms from liability for user-generated content, allowing them to moderate content without being treated as publishers.

"Unfortunately, Section 230 is now poisoning the healthy online ecosystem it once fostered. Big Tech companies are exploiting the law to shield them from any responsibility or accountability as their platforms inflict immense harm on Americans, especially children. Congress's failure to revisit this law is irresponsible and untenable," the lawmakers wrote. The Hill reports: Rodgers and Pallone argued that rolling back the protections on Big Tech companies would hold them accountable for the material posted on their platforms. "These blanket protections have resulted in tech firms operating without transparency or accountability for how they manage their platforms. This means that a social-media company, for example, can't easily be held responsible if it promotes, amplifies or makes money from posts selling drugs, illegal weapons or other illicit content," they wrote.

The lawmakers said they were unveiling legislation (PDF) to sunset Section 230. It would require Big Tech companies to work with Congress for 18 months to "evaluate and enact a new legal framework that will allow for free speech and innovation while also encouraging these companies to be good stewards of their platforms." "Our bill gives Big Tech a choice: Work with Congress to ensure the internet is a safe, healthy place for good, or lose Section 230 protections entirely," the lawmakers wrote.

Communications

FCC Fines Wireless Carriers $200 Million For Sharing Customer Data (lightreading.com) 20

The Federal Communications Commission has fined the nation's largest wireless carriers for illegally sharing access to customers' location information without consent and without taking reasonable measures to protect that information against unauthorized disclosure. From a report: Sprint and T-Mobile -- which have merged since the investigation began -- face fines of more than $12 million and $80 million, respectively. AT&T is fined more than $57 million, and Verizon is fined almost $47 million. "Our communications providers have access to some of the most sensitive information about us. These carriers failed to protect the information entrusted to them. Here, we are talking about some of the most sensitive data in their possession: customers' real-time location information, revealing where they go and who they are," said FCC Chairwoman Jessica Rosenworcel. "As we resolve these cases -- which were first proposed by the last Administration -- the Commission remains committed to holding all carriers accountable and making sure they fulfill their obligations to their customers as stewards of this most private data."
Earth

Only 57 Companies Produced 80% of Global Carbon Dioxide (carbonmajors.org) 167

Last year was the hottest on record and the Earth is headed towards a global warming of 2.7 degrees, yet top fossil fuel and cement producers show a disregard for climate change and actively make things worse. From a report: A new Carbon Majors Database report found that just 57 companies were responsible for 80 percent of the global carbon dioxide emissions between 2016 and 2022. Thirty-eight percent of total emissions during this period came from nation-states, 37 percent from state-owned entities and 25 percent from investor-owned companies.

Nearly 200 parties adopted the 2015 Paris Agreement, committing to reduce greenhouse gas emissions. However, 58 of the 100 state- and investor-owned companies in the Carbon Majors Database have increased their production in the years since (The Climate Accountability Institute launched Carbon Majors in 2013 to hold fossil fuel producers accountable and is hosted by InfluenceMap). This number represents producers worldwide, including 87 percent of those assessed in Asia, 57 percent in Europe and 43 percent in North America.

It's not a clear case of things slowly turning around, either. The International Energy Agency found coal consumption increased by eight percent over the seven years to 8.3 billion tons -- a record high. The report names state-owned Coal India as one of the top three carbon dioxide producers. Russia's state-owned energy company Gazprom and state-owned oil firm Saudi Aramco rounded out the trio of worst offenders.

China

EFF Opposes America's Proposed TikTok Ban (eff.org) 67

A new EFF web page is urging U.S. readers to "Tell Congress: Stop the TikTok Ban," arguing the bill will "do little for its alleged goal of protecting our private information and the collection of our data by foreign governments." Tell Congress: Instead of giving the President the power to ban entire social media platforms based on their country of origin, our representatives should focus on what matters — protecting our data no matter who is collecting it... It's a massive problem that current U.S. law allows for all the big social media platforms to harvest and monetize our personal data, including TikTok. Without comprehensive data privacy legislation, this will continue, and this ban won't solve any real or perceived problems. User data will still be collected by numerous platforms and sold to data brokers who sell it to the highest bidder — including governments of countries such as China — just as it is now.

TikTok raises special concerns, given the surveillance and censorship practices of the country that its parent company is based in, China. But it's also used by hundreds of millions of people to express themselves online, and is an instrumental tool for community building and holding those in power accountable. The U.S. government has not justified silencing the speech of Americans who use TikTok, nor has it justified the indirect speech punishment of a forced sale (which may prove difficult if not impossible to accomplish in the required timeframe). It can't meet the high constitutional bar for a restriction on the platform, which would undermine the free speech and association rights of millions of people. This bill must be stopped.

Apple

Apple Reinstates Epic Developer Account After Public Backlash for Retaliation (epicgames.com) 41

Epic Games, in a blog post: Apple has told us and committed to the European Commission that they will reinstate our developer account. This sends a strong signal to developers that the European Commission will act swiftly to enforce the Digital Markets Act and hold gatekeepers accountable. We are moving forward as planned to launch the Epic Games Store and bring Fortnite back to iOS in Europe. Epic CEO Tim Sweeney adds: The DMA went through its first major challenge with Apple banning Epic Games Sweden from competing with the App Store, and the DMA just had its first major victory. Following a swift inquiry by the European Commission, Apple notified the Commission and Epic that it would relent and restore our access to bring back Fortnite and launch Epic Games Store in Europe under the DMA law.
Crime

Man Charged With Smuggling Greenhouse Gases Into US (cnn.com) 94

In a first-of-its-kind prosecution, a California man was arrested and charged Monday with allegedly smuggling potent, greenhouse gases from Mexico. From a report: Michael Hart, a 58-year-old man from San Diego, pleaded not guilty to smuggling hydrofluorocarbons, or HFCs -- commonly used in air conditioning and refrigeration -- and selling them for profit, in a federal court hearing Monday. According to the indictment, Hart allegedly purchased the HFCs in Mexico and smuggled them into the US in the back of his truck, concealed under a tarp and tools. He is then alleged to have sold them for a profit on sites including Facebook Marketplace and OfferUp. [...] Hart has pleaded not guilty to 13 charges including conspiracy, importation contrary to law and sale of merchandise imported contrary to law. The charges carry potential prison sentences ranging from five to 20 years.

HFCs, which are also used in building insulation, fire extinguishing systems and aerosols, are banned from import into the US without permission from the Environmental Protection Agency. These greenhouse gases are "short-lived in the atmosphere," but powerful -- some are thousands of times more potent than carbon dioxide in the near-term. "The illegal smuggling of hydrofluorocarbons, a highly potent greenhouse gas, undermines international efforts to combat climate change," said David M. Uhlmann, the assistant administrator for the EPA's Office of Enforcement and Compliance Assurance. "Anyone who seeks to profit from illegal actions that worsen climate change must be held accountable," he added.
"Today is a significant milestone for our country," said US Attorney Tara McGrath in a statement. "This is the first time the Department of Justice is prosecuting someone for illegally importing greenhouse gases, and it will not be the last."
AI

Gartner Predicts Search Engine Volume Will Drop 25% by 2026, Due To AI Chatbots and Other Virtual Agents 93

Gartner: By 2026, traditional search engine volume will drop 25%, with search marketing losing market share to AI chatbots and other virtual agents, according to Gartner. "Organic and paid search are vital channels for tech marketers seeking to reach awareness and demand generation goals," said Alan Antin, Vice President Analyst at Gartner. "Generative AI (GenAI) solutions are becoming substitute answer engines, replacing user queries that previously may have been executed in traditional search engines. This will force companies to rethink their marketing channels strategy as GenAI becomes more embedded across all aspects of the enterprise."

With GenAI driving down the cost of producing content, there is an impact on activities including keyword strategy and website domain authority scoring. Search engine algorithms will further value the quality of content to offset the sheer amount of AI-generated content, as content utility and quality still reign supreme for success in organic search results. There will also be a greater emphasis placed on watermarking and other means to authenticate high-value content. Government regulations across the globe are already holding companies accountable as they begin to require the identification of marketing content assets that AI creates. This will likely play a role in how search engines display such digital content.
Canada

Canada To Compel Digital Platforms To Remove Harmful Content (marketscreener.com) 81

According to the Wall Street Journal (paywalled), Canada has proposed new rules that would compel digital platforms to remove online content that features the sexual exploitation of children or intimate images without consent of the individuals involved. From a report: The rules were years in the making, and represent the third and possibly final installment of measures aimed at regulating digital platforms. Measures introduced since 2022 aim to increase the amount of domestic, Canadian-made content on streaming services, such as Netflix, and require digital platforms to help Canadian news-media outlets finance their newsroom operations. The legislation needs to be approved by Canada's Parliament before it takes effect.

Canada said its rules are based on concepts introduced by the European Union, the U.K. and Australia. Canadian officials say the proposed measures would apply to social-media platforms, adult-entertainment sites where users can upload content, and live-streaming services. These services, officials said, are expected to expeditiously remove two categories of content: That which sexually exploits a child or an abuse survivor, and intimate content broadcast without an individual's consent. The latter incorporates so-called revenge porn, or the nonconsensual posting or dissemination of intimate images, often after the end of a romantic relationship. Officials said private and encrypted messaging services are excluded from the proposed regulations.

Canadian officials said platforms will have a duty to either ensure the material is not published, or take it down once notified. Canada also intends to set up a new agency, the Digital Safety Commission, to enforce the rules, order harmful content taken down, and hold digital services accountable. Platforms that violate the rules could face a maximum penalty of up to 25 million Canadian dollars, or the equivalent of $18.5 million, officials said.

Transportation

Boeing Removes Head of Its 737 Max Program After January's 'Door Bolts' Incident (cnn.com) 52

On Wednesday Boeing "removed executive Ed Clark, the head of its 737 Max passenger jet program," reports CNN, "after a dramatic — and terrifying — midair blowout in January underscored ongoing problems with the jet." A preliminary report by the National Transportation Safety Board found that the four bolts that should have held the door plug in place were missing when the plane left Boeing's factory. The NTSB report did not assess blame for the missing bolts and the accident but in a statement to investors before the findings were released, Boeing CEO Dave Calhoun assumed responsibility for the incident. "We caused the problem, and we understand that," he told investors during a call after reporting the latest quarterly loss at the company. "Whatever conclusions are reached, Boeing is accountable for what happened."

Clark, who had been at Boeing for 18 years, had only been in charge of the Max program since March of 2021, assuming that title after the jets had been returned to service following the crashes. But he had previously held roles related to the 737 Max, including as chief engineer and chief 737 mechanic.

With the news of Clark's departure, Boeing also announced a shuffling of a number of executives in its Boeing Commercial Airplanes unit. It created a new executive position, Senior Vice President for BCA Quality, and named Elizabeth Lund to that position.

AI

Will 'Precision Agriculture' Be Harmful to Farmers? (substack.com) 61

Modern U.S. farming is being transformed by precision agriculture, writes Paul Roberts, the founder of securepairs.org and Editor in Chief at Security Ledger.

There are autonomous tractors and "smart spraying" systems that use AI-powered cameras to identify weeds, just for starters. "Among the critical components of precision agriculture: Internet- and GPS-connected agricultural equipment, highly accurate remote sensors, 'big data' analytics and cloud computing..." As with any technological revolution, however, there are both "winners" and "losers" in the emerging age of precision agriculture... Precision agriculture, once broadly adopted, promises to further reduce the need for human labor to run farms. (Autonomous equipment means you no longer even need drivers!) However, the risks it poses go well beyond a reduction in the agricultural work force. First, as the USDA notes on its website: the scale and high capital costs of precision agriculture technology tend to favor large, corporate producers over smaller farms. Then there are the systemic risks to U.S. agriculture of an increasingly connected and consolidated agriculture sector, with a few major OEMs having the ability to remotely control and manage vital equipment on millions of U.S. farms... (Listen to my podcast interview with the hacker Sick Codes, who reverse engineered a John Deere display to run the Doom video game for insights into the company's internal struggles with cybersecurity.)

Finally, there are the reams of valuable and proprietary environmental and operational data that farmers collect, store and leverage to squeeze the maximum productivity out of their land. For centuries, such information resided in farmers' heads, or on written or (more recently) digital records that they owned and controlled exclusively, typically passing that knowledge and data down to succeeding generations of farm owners. Precision agriculture technology greatly expands the scope, and granularity, of that data. But in doing so, it also wrests it from the farmer's control and shares it with equipment manufacturers and service providers — often without the explicit understanding of the farmers themselves, and almost always without monetary compensation to the farmer for the data itself. In fact, the Federal Government is so concerned about farm data that it included a section (1619) on "information gathering" in the latest farm bill.

Over time, this massive transfer of knowledge from individual farmers or collectives to multinational corporations risks beggaring farmers by robbing them of one of their most vital assets: data, and turning them into little more than passive caretakers of automated equipment managed, controlled and accountable to distant corporate masters.

Weighing in is Kevin Kenney, a vocal advocate for the "right to repair" agricultural equipment (and also an alternative fuel systems engineer at Grassroots Energy LLC). In the interview, he warns about the dangers of tying repairs to factory-installed firmware, and argues that it's the long-time farmer's "trade secrets" that are really being harvested today. The ultimate beneficiary could end up being the current "cabal" of tractor manufacturers.

"While we can all agree that it's coming...the question is who will own these robots?" First, we need to acknowledge that there are existing laws on the books which for whatever reason, are not being enforced. The FTC should immediately start an investigation into John Deere and the rest of the 'Tractor Cabal' to see to what extent farmers' farm data security and privacy are being compromised. This directly affects national food security because if thousands- or tens of thousands of tractors' are hacked and disabled or their data is lost, crops left to rot in the fields would lead to bare shelves at the grocery store... I think our universities have also been delinquent in grasping and warning farmers about the data-theft being perpetrated on farmers' operations throughout the United States and other countries by makers of precision agricultural equipment.
Thanks to long-time Slashdot reader chicksdaddy for sharing the article.
The Courts

NYC Sues Social Media Companies Over Youth Mental Health Crisis (abc7ny.com) 63

New York City Mayor Eric Adams announced a lawsuit against four of the nation's largest social media companies, accusing them of fueling a "national youth mental health crisis." From a report: The lawsuit was filed to hold TikTok, Instagram, Facebook, Snapchat, and YouTube accountable for their damaging influence on the mental health of children, Adams said. The lawsuit, filed in California Superior Court, alleged the companies intentionally designed their platforms to purposefully manipulate and addict children and teens to social media applications. The lawsuit pointed to the use of algorithms to generate feeds that keep users on the platforms longer and encourage compulsive use.

"Over the past decade, we have seen just how addictive and overwhelming the online world can be, exposing our children to a non-stop stream of harmful content and fueling our national youth mental health crisis," Adams said. "Our city is built on innovation and technology, but many social media platforms end up endangering our children's mental health, promoting addiction, and encouraging unsafe behavior." The lawsuit accused the social media companies of manipulating users by making them feel compelled to respond to one positive action with another positive action.

"These platforms take advantage of reciprocity by, for example, automatically telling the sender when their message was seen or sending notifications when a message was delivered, encouraging teens to return to the platform again and again and perpetuating online engagement and immediate responses," the lawsuit said. The city is joining hundreds of school districts across the nation in filing litigation to force the tech companies to change their behavior and recover the costs of addressing the public health threat.

AI

Recycling Plants Start Installing Trash-Spotting AI Systems (yahoo.com) 60

The world's biggest builder of recycling plants has teamed with a startup to install AI-powered systems for sorting recycling, reports the Washington Post. And now over the next few years, "The companies plan to retrofit thousands of recycling facilities around the world with computers that can analyze and identify every item that passes through a waste plant, they said Wednesday." "[S]orted" recyclables, particularly plastic, wind up contaminated with other forms of trash, according to Lokendra Pal, a professor of sustainable materials engineering at North Carolina State University... [W]aste plants don't catch everything. [AI startup] Greyparrot has already installed over 100 of its AI trash spotters in about 50 sorting facilities around the world, and [co-founder Ambarish] Mitra said as much as 30 percent of potentially recyclable material winds up getting lumped in with the trash that's headed for the landfill. Failing to recycle means companies have to make more things from scratch, including a lot of plastic from fossil fuels. Also, more waste ends up in landfills and incinerators, which belch greenhouse gases into the atmosphere and pollute their surroundings.

Mitra said putting Greyparrot's AI tools in thousands of waste plants around the world can raise the percentage of glass, plastic, metal and paper that makes it to recycling facilities. "If we can move the needle by even 5 to 10 percent, that would be a phenomenal outcome on a planetary basis for greenhouse gas emissions and environmental impact," he said. Cutting contamination would make recycled materials more valuable and raise the chances that companies would use them to make new products, according to Reck. "If the AI and the robots potentially helped to increase the quality of the recycling stream, that's huge," she said...

Greyparrot's device is, basically, a set of visual and infrared cameras hooked up to a computer, which monitors trash as it passes by on a conveyor belt and labels it under 70 categories, from loose bottle caps (not recyclable!) to books (sometimes recyclable!) to aluminum cans (recyclable!). Waste plants could connect these AI systems to sorting robots to help them separate trash from recyclables more accurately. They could also use the AI as a quality control system to measure how well they're sorting trash from recyclables. That could help plant managers tinker with their assembly lines to recover more recyclables, or verify that a bundle of recyclables is free of contaminants, which would allow them to sell for a higher price.
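The quality-control use described above boils down to counting per-item labels and measuring how much non-recyclable material is mixed into a sorted stream. A minimal sketch of that idea, assuming invented category names and a made-up `contamination_rate` helper (nothing here reflects Greyparrot's actual software or API):

```python
# Hypothetical quality-control pass: given per-item labels from a
# camera-based detector, estimate how contaminated a sorted
# "recyclables" stream is. Category names are illustrative only.
from collections import Counter

RECYCLABLE = {"aluminum_can", "pet_bottle", "cardboard", "glass_jar"}

def contamination_rate(detected_labels):
    """Fraction of detected items that don't belong in a recycling bale."""
    counts = Counter(detected_labels)
    total = sum(counts.values())
    contaminants = sum(n for label, n in counts.items()
                       if label not in RECYCLABLE)
    return contaminants / total if total else 0.0

belt = ["aluminum_can", "pet_bottle", "loose_bottle_cap",
        "cardboard", "food_waste", "pet_bottle"]
print(f"contamination: {contamination_rate(belt):.1%}")  # 2 of 6 items -> 33.3%
```

A plant could track a metric like this per conveyor line to tune its sorting equipment, or to certify a low-contamination bale that sells at a higher price.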

Greyparrot's co-founder said their trash-spotting computers "could one day help regulators crack down on companies that produce tsunamis of non-recyclable packaging," according to the article.

"The AI systems are so accurate, he said, that they can identify the brands on individual items. 'There could be insights that make them more accountable for ... the commitments they made to the public or to shareholders,' he said."
AI

Microsoft AI Engineer Says Company Thwarted Attempt To Expose DALL-E 3 Safety Problems (geekwire.com) 78

Todd Bishop reports via GeekWire: A Microsoft AI engineering leader says he discovered vulnerabilities in OpenAI's DALL-E 3 image generator in early December allowing users to bypass safety guardrails to create violent and explicit images, and that the company impeded his previous attempt to bring public attention to the issue. The emergence of explicit deepfake images of Taylor Swift last week "is an example of the type of abuse I was concerned about and the reason why I urged OpenAI to remove DALL-E 3 from public use and reported my concerns to Microsoft," writes Shane Jones, a Microsoft principal software engineering lead, in a letter Tuesday to Washington state's attorney general and Congressional representatives.

404 Media reported last week that the fake explicit images of Swift originated in a "specific Telegram group dedicated to abusive images of women," noting that at least one of the AI tools commonly used by the group is Microsoft Designer, which is based in part on technology from OpenAI's DALL-E 3. "The vulnerabilities in DALL-E 3, and products like Microsoft Designer that use DALL-E 3, makes it easier for people to abuse AI in generating harmful images," Jones writes in the letter to U.S. Sens. Patty Murray and Maria Cantwell, Rep. Adam Smith, and Attorney General Bob Ferguson, which was obtained by GeekWire. He adds, "Microsoft was aware of these vulnerabilities and the potential for abuse."

Jones writes that he discovered the vulnerability independently in early December. He reported the vulnerability to Microsoft, according to the letter, and was instructed to report the issue to OpenAI, the Redmond company's close partner, whose technology powers products including Microsoft Designer. He writes that he did report it to OpenAI. "As I continued to research the risks associated with this specific vulnerability, I became aware of the capacity DALL-E 3 has to generate violent and disturbing harmful images," he writes. "Based on my understanding of how the model was trained, and the security vulnerabilities I discovered, I reached the conclusion that DALL-E 3 posed a public safety risk and should be removed from public use until OpenAI could address the risks associated with this model."

On Dec. 14, he writes, he posted publicly on LinkedIn urging OpenAI's non-profit board to withdraw DALL-E 3 from the market. He informed his Microsoft leadership team of the post, according to the letter, and was quickly contacted by his manager, saying that Microsoft's legal department was demanding that he delete the post immediately, and would follow up with an explanation or justification. He agreed to delete the post on that basis but never heard from Microsoft legal, he writes. "Over the following month, I repeatedly requested an explanation for why I was told to delete my letter," he writes. "I also offered to share information that could assist with fixing the specific vulnerability I had discovered and provide ideas for making AI image generation technology safer. Microsoft's legal department has still not responded or communicated directly with me."

"Artificial intelligence is advancing at an unprecedented pace. I understand it will take time for legislation to be enacted to ensure AI public safety," he adds. "At the same time, we need to hold companies accountable for the safety of their products and their responsibility to disclose known risks to the public. Concerned employees, like myself, should not be intimidated into staying silent."
The full text of Jones' letter can be read here (PDF).
The Courts

eBay To Pay $3 Million Penalty For Employees Sending Live Cockroaches, Fetal Pig To Bloggers (cbsnews.com) 43

E-commerce giant eBay agreed to pay a $3 million penalty for the harassment and stalking of a Massachusetts couple by several of its employees. "The couple, Ina and David Steiner, had been subjected to threats and bizarre deliveries, including live spiders, cockroaches, a funeral wreath and a bloody pig mask in August 2019," reports CBS News. From the report: Thursday's fine comes after several eBay employees ran a harassment and intimidation campaign against the Steiners, who publish a news website focusing on players in the e-commerce industry. "eBay engaged in absolutely horrific, criminal conduct. The company's employees and contractors involved in this campaign put the victims through pure hell, in a petrifying campaign aimed at silencing their reporting and protecting the eBay brand," Levy said. "We left no stone unturned in our mission to hold accountable every individual who turned the victims' world upside-down through a never-ending nightmare of menacing and criminal acts."

The Justice Department criminally charged eBay with two counts of stalking through interstate travel, two counts of stalking through electronic communications services, one count of witness tampering and one count of obstruction of justice. The company agreed to pay $3 million as part of a deferred prosecution agreement. Under the agreement, eBay will be required to retain an independent corporate compliance monitor for three years, officials said, to "ensure that eBay's senior leadership sets a tone that makes compliance with the law paramount, implements safeguards to prevent future criminal activity, and makes clear to every eBay employee that the idea of terrorizing innocent people and obstructing investigations will not be tolerated," Levy said.

Former U.S. Attorney Andrew Lelling said the plan to target the Steiners, which he described as a "campaign of terror," was hatched in April 2019 at eBay. Devin Wenig, eBay's CEO at the time, shared a link to a post Ina Steiner had written about his annual pay. The company's chief communications officer, Steve Wymer, responded: "We are going to crush this lady." About a month later, Wenig texted: "Take her down." Prosecutors said Wymer later texted eBay security director Jim Baugh. "I want to see ashes. As long as it takes. Whatever it takes," Wymer wrote. Investigators said Baugh set up a meeting with security staff and dispatched a team to Boston, about 20 miles from where the Steiners live. "Senior executives at eBay were frustrated with the newsletter's tone and content, and with the comments posted beneath the newsletter's articles," the Department of Justice wrote in its Thursday announcement.
Two former eBay security executives were sentenced to prison over the incident.
