Graphics

Artist Appeals Copyright Denial For Prize-Winning AI-Generated Work (arstechnica.com) 75

An anonymous reader quotes a report from Ars Technica: Jason Allen, a synthetic media artist whose Midjourney-generated work "Theatre D'opera Spatial" went viral and incited backlash after winning a state fair art competition, is not giving up his fight with the US Copyright Office. Last fall, the Copyright Office refused to register Allen's work, claiming that almost the entire work was AI-generated and insisting that copyright registration requires more human authorship than simply plugging a prompt into Midjourney. Allen is now appealing (PDF) that decision, asking for judicial review and alleging that "the negative media attention surrounding the Work may have influenced the Copyright Office Examiner's perception and judgment." He claims that the Examiner was biased and considered "improper factors" such as the public backlash when concluding that he had "no control over how the artificial intelligence tool analyzed, interpreted, or responded to these prompts."

As Allen sees it, a rule establishing a review process requiring an Examiner to determine which parts of the work are human-authored seems "entirely arbitrary" since some Copyright Examiners "may not even be able to distinguish an artwork that used AI tools to assist in the creation from one which does not use any computerized tools." Further, Allen claims that the denial of copyright for his work has inspired confusion about who owns rights to not just Midjourney-generated art but all AI art, and as AI technology rapidly improves, it will only become harder for the Copyright Office to make those authorship judgment calls. That becomes an even bigger problem if the Copyright Office gets it wrong too often, Allen warned, running the risk of turning every artist registering works into a "suspect" and potentially bogging courts down with copyright disputes. Ultimately, Allen is hoping that a jury reviewing his appeal will reverse the denial, arguing that there is more human authorship in his AI-generated work than the Copyright Office considered when twice rejecting his registration.

Crime

Google Wins Lawsuit Against Scammers Who 'Weaponized' DMCA Takedowns (torrentfreak.com) 63

Google has obtained (PDF) a default judgment against two men who abused its DMCA takedown system to falsely target 117,000 URLs of competitors' online stores. With none of the defendants showing up in court, a California federal court sided with the search engine. Through an injunction, the men are now prohibited from sending false takedown notices and creating new Google accounts. TorrentFreak reports: Last November, Google decided to take action against the rampant DMCA abuse. In a lawsuit filed at a federal court in California, it accused Nguyen Van Duc and Pham Van Thien of sending over 100,000 fraudulent takedown requests. Many of these notices were allegedly filed against third-party T-shirt shops. [...] Following the complaint, the defendants, who are believed to reside in Vietnam, were summoned via their Gmail accounts and SMS. However, the pair remained quiet and didn't respond in court. Without the defendants representing themselves, Google requested a default judgment. According to the tech giant, it's clear that the duo violated the DMCA with their false takedown notices. In addition, they breached their contracts under California law.

Google said that, absent a default judgment, the defendants would continue to harm consumers and third-party businesses. These actions would, in turn, damage Google's reputation as a search engine. In July, U.S. Magistrate Judge Sallie Kim recommended granting Google's motion for default judgment. The recommendation included an injunction that prevents the two men from abusing Google's services going forward. However, the District Judge had the final say. Last Friday, U.S. District Court Judge Edward Davila adopted the recommendations, issuing a default judgment in favor of Google. The order confirms that defendants Nguyen Van Duc and Pham Van Thien violated the DMCA with their false takedown notices. In addition, they breached their contracts under California law.

In typical copyrights-related verdicts, most attention is paid to the monetary damages, but not here. While Google could have requested millions of dollars in compensation, it didn't request a penny. Google's primary goal was to put an end to the abusive behavior, not to seek financial compensation. Therefore, the company asked for an injunction to prohibit the defendants from sending false takedowns going forward. This includes a ban on registering any new Google accounts. The request ticked all the boxes and, without a word from the defendants, Judge Davila granted the default judgment as well as the associated injunction.

Medicine

America's FDA Approves First New Drug for Schizophrenia in Over 30 Years (go.com) 65

Thursday America's Food and Drug Administration approved Cobenfy, "the first new drug to treat people with schizophrenia in more than 30 years," reports ABC News: Most schizophrenia medications, broadly known as antipsychotics, work by changing levels of dopamine, a brain chemical that affects mood, motivation, and thinking [according to Jelena Kunovac, MD, a board-certified psychiatrist and adjunct assistant professor at the University of Nevada, Las Vegas, in the Department of Psychiatry]. Cobenfy takes a different approach by adjusting acetylcholine, another brain chemical that aids memory, learning and attention, she said. By focusing on acetylcholine instead of dopamine, Cobenfy may reduce schizophrenia symptoms while avoiding common side effects like weight gain, drowsiness and movement disorders, clinical trials suggest. These side effects often become so severe and unpleasant that, in some studies mirroring real-world challenges, many patients stopped treatment within 18 months of starting it.

In clinical trials, only 6% of patients stopped taking Cobenfy due to side effects, noted Dr. Samit Hirawat, chief medical officer at Bristol Myers Squibb. "That's a significant improvement over the 20-30% seen with older antipsychotic drugs," he added...

Schizophrenia is a mental health disorder that affects about 24 million people worldwide, or roughly one in 300 people, according to the World Health Organization.

"Studies for additional therapeutic uses, including the treatment of Alzheimer's disease and bipolar disorder, are also underway."

AI

Can AI Developers Be Held Liable for Negligence? (lawfaremedia.org) 123

Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems: To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems...

I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?

The article suggests two possibilities. Classifying AI developers as ordinary employees would leave employers sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims.") But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies." AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin.
Thanks to long-time Slashdot reader david.emery for sharing the article.

Power

Paralyzed Jockey Loses Ability To Walk After Manufacturer Refuses To Fix Battery For His $100,000 Exoskeleton 147

An anonymous reader quotes a report from 404 Media: After a horseback riding accident left him paralyzed from the waist down in 2009, former jockey Michael Straight learned to walk again with the help of a $100,000 ReWalk Personal exoskeleton. Earlier this month, that exoskeleton broke because of a malfunctioning piece of wiring in an accompanying watch that makes the exoskeleton work. The manufacturer refused to fix it, saying the machine was now too old to be serviced, and Straight once again couldn't walk anymore. "After 371,091 steps my exoskeleton is being retired after 10 years of unbelievable physical therapy," Straight posted on Facebook on September 16. "The reasons [sic] why it has stopped is a pathetic excuse for a bad company to try and make more money. The reason it stopped is because of a battery in the watch I wear to operate the machine. I called thinking it was no big deal, yet I was told they stopped working on any machine that was 5 years or older. I find it very hard to believe after paying nearly $100,000 for the machine and training that a $20 battery for the watch is the reason I can't walk anymore?"

Straight's experience is a nightmare scenario that highlights what happens when companies decide to stop supporting their products and do not actively support independent repair. It's also what happens without the protection of right to repair legislation that requires manufacturers to make repair parts, guides, and tools available to the general public. Specifically, a connection wire became desoldered from the battery in a watch that connects to the exoskeleton: "It's not the actual battery, but it's the little green connection piece we need to be the right fit and that's been our problem," Straight posted on Facebook. Straight's personal exoskeleton was broken for two months, he said in a video on Facebook. He was eventually able to get the device fixed after attention from an article in the Paulick Report, a website about the horse industry, and a spot on local TV. "It took me two months, and I got no results," he said in the video. With social media and news attention, "it only took you all four days, and look at the results," he said earlier this week while standing in the exoskeleton.
"This is the dystopian nightmare that we've kind of entered in, where the manufacturer perspective on products is that their responsibility completely ends when it hands it over to a customer. That's not good enough for a device like this, but it's also the same thing we see up and down with every single product," Nathan Proctor, head of citizen rights group US PIRG's right to repair project told 404 Media. "People need to be able to fix things, there needs to be a plan in place. A $100,000 product you can only use as long as the battery lasts, that's enraging. We should not have to tolerate a society where this happens."

"We have all this technology we release into the wild and it changes people's lives, but there's no long-term thinking. Manufacturers currently have no legal obligation to support the equipment indefinitely and there's no requirements that they publish sufficient documentation to allow others to do it," Proctor said. "We need to set minimum standards for documentation so that, even if a company goes bankrupt or falls off the face of the earth, a technician with sufficient knowledge can fix it."

NASA

Underfunded, Aging NASA May Be On Unsustainable Path, Report Warns (msn.com) 119

More details on that report about NASA from the Washington Post: NASA is 66 years old and feeling its age. Brilliant engineers are retiring. Others have fled to higher-paying jobs in the private space industry. The buildings are old, their maintenance deferred. The Apollo era, with its huge taxpayer investment, is a distant memory. The agency now pursues complex missions on inadequate budgets. This may be an unsustainable path for NASA, one that imperils long-term success. That is the conclusion of a sweeping report, titled "NASA at a Crossroads," written by a committee of aerospace experts and published Tuesday by the National Academies of Sciences, Engineering and Medicine. The report suggests that NASA prioritizes near-term missions and fails to think strategically. In other words, the space agency isn't sufficiently focused on the future.

NASA's intense focus on current missions is understandable, considering the unforgiving nature of space operations, but "one tends to neglect the probably less glamorous thing that will determine the success in the future," the report's lead author, Norman Augustine, a retired Lockheed Martin chief executive, said Tuesday. He said one solution for NASA's problems is more funding from Congress. But that may be hard to come by, in which case, he said, the agency needs to consider canceling or delaying costly missions to invest in more mundane but strategically important institutional needs, such as technology development and workforce training. Augustine said he is concerned that NASA could lose in-house expertise if it relies too heavily on private industry for newly emerging technologies. "It will have trouble hiring innovative, creative engineers. Innovative, creative engineers don't want to have a job that consists of overseeing other people's work," he said...

The report is hardly a blistering screed. The tone is parental. It praises the agency — with a budget of about $25 billion — for its triumphs while urging more prudent decision-making and long-term strategizing.

NASA pursues spectacular missions. It has sent swarms of robotic probes across the solar system and even into interstellar space. Astronauts have continuously been in orbit for more than two decades. The most ambitious program, Artemis, aims to put astronauts back on the moon in a few short years. And long-term, NASA hopes to put astronauts on Mars. But a truism in the industry is that space is hard. The new report contends that NASA has a mismatch between its ambitions and its budget, and needs to pay attention to fundamentals such as fixing its aging infrastructure and retaining in-house talent. "NASA's overall physical infrastructure is already well beyond its design life, and this fraction continues to grow," the report states.

NASA Administrator Bill Nelson said the report "aligns with our current efforts to ensure we have the infrastructure, workforce, and technology that NASA needs for the decades ahead," according to the article.

Nelson added that the agency "will continue to work diligently to address the committee's recommendations."

AI

Mistral Releases Pixtral 12B, Its First-Ever Multimodal AI Model 8

Mistral AI has launched Pixtral 12B, its first multimodal model with language and vision processing capabilities, positioning it to compete with AI leaders like OpenAI and Anthropic. You can download its source code from Hugging Face, GitHub, or via a torrent link. VentureBeat reports: While the official details of the new model, including the data it was trained upon, remain under wraps, the core idea appears that Pixtral 12B will allow users to analyze images while combining text prompts with them. So, ideally, one would be able to upload an image or provide a link to one and ask questions about the subjects in the file. The move is a first for Mistral, but it is important to note that multiple other models, including those from competitors like OpenAI and Anthropic, already have image-processing capabilities.

When an X user asked [Sophia Yang, the head of developer relations at the company] what makes the Pixtral 12-billion parameter model unique, she said it will natively support an arbitrary number of images of arbitrary sizes. As shared by initial testers on X, the 24GB model's architecture appears to have 40 layers, a hidden dimension size of 14,336, and 32 attention heads for extensive computational processing. On the vision front, it has a dedicated vision encoder with 1024x1024 image resolution support and 24 hidden layers for advanced image processing. This, however, can change when the company makes it available via API.
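The numbers reported by early testers are easier to read collected in one place. A minimal sketch, with the caveat that Mistral has not published an official spec: the field names are assumptions, and the 14,336 "hidden dimension" likely refers to the feed-forward width rather than the attention width.

```python
# Illustrative summary of the hyperparameters reported by early testers on X.
# Field names are assumptions; these values may change once the API version ships.
pixtral_reported = {
    "parameters": 12_000_000_000,        # "12B" per the model name
    "download_size_gb": 24,              # size of the released weights
    "num_layers": 40,
    "hidden_dim": 14_336,                # as reported; likely the feed-forward width
    "num_attention_heads": 32,
    "vision_encoder": {
        "max_image_resolution": (1024, 1024),
        "num_hidden_layers": 24,
    },
}

# Basic consistency check on the reported figures: the head count divides
# the reported hidden dimension evenly (14336 / 32 = 448).
assert pixtral_reported["hidden_dim"] % pixtral_reported["num_attention_heads"] == 0
```

Nothing here is confirmed by Mistral; it is simply the testers' figures arranged as a config-style snapshot for comparison once official documentation lands.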

AI

Senate Leaders Ask FTC To Investigate AI Content Summaries As Anti-Competitive (techcrunch.com) 54

An anonymous reader quotes a report from TechCrunch: A group of Democratic senators is urging the FTC and Justice Department to investigate whether AI tools that summarize and regurgitate online content like news and recipes may amount to anticompetitive practices. In a letter to the agencies, the senators, led by Amy Klobuchar (D-MN), explained their position that the latest AI features are hitting creators and publishers while they're down. As journalistic outlets experience unprecedented consolidation and layoffs, "dominant online platforms, such as Google and Meta, generate billions of dollars per year in advertising revenue from news and other original content created by others. New generative AI features threaten to exacerbate these problems."

The letter continues: "While a traditional search result or news feed links may lead users to the publisher's website, an AI-generated summary keeps the users on the original search platform, where that platform alone can profit from the user's attention through advertising and data collection. [...] Moreover, some generative AI features misappropriate third-party content and pass it off as novel content generated by the platform's AI. Publishers who wish to avoid having their content summarized in the form of AI-generated search results can only do so if they opt out of being indexed for search completely, which would result in a materially significant drop in referral traffic. In short, these tools may pit content creators against themselves without any recourse to profit from AI-generated content that was composed using their original content. This raises significant competitive concerns in the online marketplace for content and advertising revenues."
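The all-or-nothing opt-out the letter describes can be sketched concretely: since the same crawler feeds both the search index and (per the senators' framing) the AI summaries, the only lever a publisher has is disallowing that crawler entirely. A minimal illustration using Python's standard-library robots.txt parser; the rules and URL are hypothetical, and "Googlebot" is used only as an example of a search crawler.

```python
from urllib import robotparser

# Hypothetical robots.txt a publisher might serve to keep content out of
# AI-generated summaries. There is no finer-grained rule for "index me,
# but don't summarize me" -- the only option shown here is a full disallow.
robots_txt = """\
User-agent: Googlebot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# The opt-out is all-or-nothing: the page gets no AI summary, but it also
# disappears from ordinary search, costing the publisher referral traffic.
page = "https://example.com/recipes/apple-pie"
print(parser.can_fetch("Googlebot", page))  # False
```

Other crawlers without a matching rule group remain allowed by default, which is why the letter frames the choice as opting out "of being indexed for search completely" rather than selectively.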

Essentially, the senators are saying that a handful of major companies control the market for monetizing original content via advertising, and that those companies are rigging that market in their favor. Either you consent to having your articles, recipes, stories, and podcast transcripts indexed and used as raw material for an AI, or you're cut out of the loop. The letter goes on to ask the FTC and DOJ to investigate whether these new methods are "a form of exclusionary conduct or an unfair method of competition in violation of the antitrust laws." [...] The letter was co-signed by Senators Richard Blumenthal (D-CT), Mazie Hirono (D-HI), Dick Durbin (D-IL), Sheldon Whitehouse (D-RI), Tammy Duckworth (D-IL), Elizabeth Warren (D-MA), and Tina Smith (D-MN).

The Almighty Buck

Alibaba Now Sells a $200,000 Diamond-Making Machine (arstechnica.com) 78

Ars Technica's Benj Edwards writes: In an age when you can get just about anything online, it's probably no surprise that you can buy a diamond-making machine for $200,000 on Chinese eCommerce site Alibaba. If, like me, you haven't been paying attention to the diamond industry, it turns out that the availability of these machines reflects an ongoing trend toward democratizing diamond production -- a process that began decades ago and continues to evolve. [...] Today, there are two primary methods for creating lab-grown diamonds: the HPHT process and chemical vapor deposition (CVD). Both types of machines are now listed on Alibaba, with prices starting at around $200,000, as pointed out in a Hacker News comment by engineer John Nagle (who goes by "Animats" on Hacker News). A CVD machine we found is more pricey, at around $450,000.

While the idea of purchasing a diamond-making machine on Alibaba might be intriguing, it's important to note that operating one isn't as simple as plugging it in and watching diamonds form. According to Lakha's article, these machines require significant expertise and additional resources to operate effectively. For an HPHT press, you'd need a reliable source of high-quality graphite, metal catalysts like iron or cobalt, and precise temperature and pressure control systems. CVD machines require a steady supply of methane and hydrogen gases, as well as the ability to generate and control microwaves or hot filaments. Both methods need diamond seed crystals to start the growth process. Moreover, you'd need specialized knowledge to manage the growth parameters, handle potentially hazardous materials and high-pressure equipment safely, and process the resulting raw diamonds into usable gems or industrial components. The machines also use considerable amounts of energy and require regular maintenance. Those factors may make the process subject to some regulations that are far beyond the scope of this piece. In short, while these machines are more accessible than ever, turning one into a productive diamond-making operation would still require significant investment in equipment, materials, expertise, and safety measures. But hey, a guy can dream, right?

Privacy

The NSA Has a Podcast (wired.com) 14

Steven Levy, writing for Wired: My first story for WIRED -- yep, 31 years ago -- looked at a group of "crypto rebels" who were trying to pry strong encryption technology from the government-classified world and send it into the mainstream. Naturally I attempted to speak to someone at the National Security Agency for comment and ideally get a window into its thinking. Unsurprisingly, that was a no-go, because the NSA was famous for its reticence. Eventually we agreed that I could fax (!) a list of questions. In return I got an unsigned response in unhelpful bureaucratese that didn't address my queries. Even that represented a loosening of what was once a total blackout on anything having to do with this ultra-secretive intelligence agency. For decades after its post-World War II founding, the government revealed nothing, not even the name, of this agency and its activities. Those in the know referred to it as "No Such Agency."

In recent years, the widespread adoption of encryption technology and the vital need for cybersecurity has led to more openness. Its directors began to speak in public; in 2012, NSA director Keith Alexander actually keynoted Defcon. I'd spent the entire 1990s lobbying to visit the agency for my book Crypto; in 2013, I finally crossed the threshold of its iconic Fort Meade Headquarters for an on-the-record conversation with officials, including Alexander. NSA now has social media accounts on Twitter, Instagram, Facebook. And there is a form on the agency website for podcasters to request guest appearances by an actual NSA-ite.

So it shouldn't be a total shock that NSA is now doing its own podcast. You don't need to be an intelligence agency to know that pods are a unique way to tell stories and hold people's attention. The first two episodes of the seven-part season dropped this week. It's called No Such Podcast, earning some self-irony points from the get-go. In keeping with the openness vibe, the NSA granted me an interview with an official in charge of the project -- one of the de facto podcast producers, a title that apparently is still not an official NSA job posting. Since NSA still gotta NSA, I can't use this person's name. But my source did point out that in the podcast itself, both the hosts and the guests -- who are past and present agency officials -- speak under their actual identities.

Medicine

The Rise of DIY, Pirated Medicine (404media.co) 295

An anonymous reader quotes a report from 404 Media, written by Jason Koebler: I've been videochatting with Mixael Swan Laufer for about 30 minutes about an exciting discovery when he points out that to date, the best way he's been able to bring attention to his organization is "the old school method of me performing a bunch of federal felonies on stage in front of a bunch of people." I stop him and ask: "In this case, what are the felonies?" "Well, the list is pretty long," he said. Laufer is the chief spokesperson of Four Thieves Vinegar Collective, an anarchist collective that has spent the last few years teaching people how to make DIY versions of expensive pharmaceuticals at a tiny fraction of the cost.

Four Thieves Vinegar Collective call what they do "right to repair for your body." Laufer has become well known for handing out DIY pills and medicines at hacking conferences, which include, for example, courses of the abortion drug misoprostol that can be manufactured for 89 cents (normal cost: $160) and which has become increasingly difficult to obtain in some states following the Supreme Court decision in Dobbs. In our call, Laufer had just explained that Four Thieves had made some miscalculations as part of its latest project, to create instructions for replicating sofosbuvir (Sovaldi), a miracle drug that cures hepatitis C, which he planned to explain and reveal at the DEF CON hacking conference. Unlike many other drugs that treat viruses, Sovaldi does not suppress hepatitis C, a virus that kills roughly 250,000 people around the world each year. It cures it. [...]

Crucially, unlike other medical freedom organizations, Four Thieves isn't suggesting people treat COVID with Ivermectin, isn't shilling random supplements, and doesn't have any sort of commercial arm at all. Instead, they are helping people to make their own, identical pirated versions of proven and tested pharmaceuticals by taking the precursor ingredients and performing the chemical reactions to make the medication themselves. "We don't invent anything, really," Laufer said. "We take things that are on the shelf and hijack them. We like to take something established, and be like 'This works, but you can't get it.' Well, here's a way to get it." A slide at his talk reads "Isn't this illegal? Yeah. Grow up."

Four Thieves has developed a suite of open-source tools to help achieve its goal. The core tool, Chemhacktica, is a software platform that uses machine learning to map chemical pathways for synthesizing desired molecules. It suggests potential chemical reactions, identifies precursor materials, and checks their availability for purchase.

Another is Microlab, an open-source controlled lab reactor built from affordable, off-the-shelf components costing between $300 and $500. It uses Chemhacktica's suggested pathways to create medications, and detailed instructions for building and operating the Microlab are provided. Additionally, the collective developed a drag-and-drop recipe system called Apothecarium that generates executable files for the Microlab, offering step-by-step guidance on producing specific medications.

Laufer told 404 Media: "I am of the firm belief that we are hitting a watershed where economics and morality are coming to a head, like, 'Look: intellectual property law is based off some ideas that came out of 1400s Venice. They're not applicable and they're being abused and people are dying every day because of it, and it's not OK.'"

Further reading: Meet the Anarchists Making Their Own Medicine (Motherboard; 2018)

United States

US Job Openings Decline To Lowest Level Since January 2021 (yahoo.com) 53

US job openings fell in July to the lowest since the start of 2021 and layoffs rose, consistent with other signs of slowing demand for workers. From a report: Available positions decreased to 7.67 million from a downwardly revised 7.91 million reading in the prior month, the Bureau of Labor Statistics Job Openings and Labor Turnover Survey, known as JOLTS, showed Wednesday. The figure was lower than all estimates in a Bloomberg survey of economists. The decline in openings coincides with recent data that show the labor market is softening, which has raised concern among Federal Reserve officials. Job growth has been slowing, unemployment is rising and jobseekers are having greater difficulty finding work, fueling fears about a potential recession.
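For scale, the two figures quoted above imply a drop of roughly a quarter-million openings, or about 3% month over month. A quick check using only the numbers in the report, in millions:

```python
# July openings vs. the downwardly revised June reading (BLS JOLTS), in millions.
prior, latest = 7.91, 7.67

decline = prior - latest                      # ~0.24 million fewer openings
pct_change = (latest - prior) / prior * 100   # ~ -3.0% month over month

print(f"Openings fell by {decline:.2f}M ({pct_change:.1f}%)")
# -> Openings fell by 0.24M (-3.0%)
```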

Policymakers have made it clear they don't want to see further cooling in the labor market and are widely expected to start lowering interest rates at their next meeting in two weeks. After July's disappointing jobs figures and a large downward revision to payrolls in the past year, Fed officials and market participants are paying close attention to the August employment data due Friday -- especially if another weak report could prompt an outsize rate cut.

Star Wars Prequels

Star Wars Outlaws Is A Crappy Masterpiece (kotaku.com) 99

Kotaku reviews Star Wars Outlaws, Ubisoft's latest AAA title: I was staring at a wall. It was an early mission in Ubisoft's latest behemothic RPG, Star Wars Outlaws, in which I was charged with infiltrating an Empire base to recover some information from a computer, and this wall really caught my attention.

It was a perfect wall. It absolutely captured that late-70s sci-fi aesthetic of dark gray cladding broken up by utilitarian-gray panels covered in dull blinking lights, and I stopped to think about how much work must have gone into that wall. Looking elsewhere on the screen, I was then overwhelmed. This wall was the most bland thing in a vast hangar, where TIE Fighters hung from the ceiling, Stormtroopers wandered in groups below, and even the little white sign with the yellow arrow looked like it was a decade old, meticulously crafted to fit into this universe. I felt sheer astonishment at the achievement of this. Ubisoft, via multiple studios across the whole world, and the work of thousands of deeply talented people, had built this impossibly perfect area for one momentary scene that I was intended to run straight past.

Except I ran past it three times, because the AI kept fucking up and I was restarted at a checkpoint right before that gray wall over and over. I'm struggling to capture the dissonance of this moment. This sense of absolute awe, almost unbelieving admiration that it's even possible to build games at this scale and at this detail, slapped hard around the face by the bewilderingly bad decisions that take place within it all.

Brokerage firm UBS said in a note to clients: Based on the 621 ratings thus far, the game has received a score of 4.8 (out of 10). This tracks behind previous blockbuster releases by Ubisoft in Assassin's Creed and Far Cry, behind competing open world games released in 2024, and behind other major recent Star Wars games released by EA in 2019 and 2023. The user ratings, which are generally unfavourable, lag its generally favourable critic reviews (the game received a score of 76 from critics).

Early user ratings suggest downside risk to our 10m units forecast for the game: While we previously felt the largely positive critic reviews made our 10m units sold look achievable (a component upon which we forecast +4% FY25 net bookings growth), the user ratings now suggest downside risk to our estimates. Previous Ubisoft games in Assassin's Creed and Far Cry which sold 10m+ units in their first fiscal year all received higher user ratings and were instalments of well entrenched franchises.

Twitter

Brazil Blocks X (apnews.com) 161

A longtime Slashdot reader writes: Regular Slashdot users will certainly be aware of the saga unfolding between the country of Brazil and X. Reuters has already reported that what I have to relay here will come as no surprise to Elon Musk, but reporting on CNN confirms that Brazilian Justice Alexandre de Moraes has ordered X to suspend operations in Brazil until X names a representative to appear on X's behalf in Brazilian Courts.

Is this the end of X or some brilliant Machiavellian ploy on the part of Elon Musk? Only time and the informed and spirited debate of the users here at /. can be sure.
Here's a recap of the saga, as told by X's Grok-2 chatbot: The Beginning: Alexandre de Moraes, a Brazilian Supreme Court Justice with a reputation for tackling misinformation, especially around elections, found himself at odds with Elon Musk, the space-faring, electric-car magnate turned social media mogul. The conflict kicked off when Moraes ordered X to block certain accounts in Brazil, part of his broader crackdown on what he deemed as misinformation.

The Escalation: Musk, never one to shy away from a fight, especially when it involves what he perceives as free speech issues, declared on X that he would not comply with Moraes' orders. This defiance wasn't just a tweet; it was a digital declaration of war. Musk accused Moraes of overstepping his bounds, betraying the constitution, and even likened him to Darth Vader in a less than flattering comparison. Moraes, not amused, opened an investigation into Musk for obstruction of justice, accusing him of inciting disobedience and disrespecting Brazil's sovereignty. The stakes were raised with fines of around $20,000 per day for each reactivated account, and threats of arresting X employees in Brazil.

The Drama Unfolds: The internet, as it does, had a field day. Posts on X ranged from Musk supporters calling Moraes a dictator to others backing Moraes, arguing he was defending democracy against foreign billionaires. The conflict became a global spectacle, with Musk's posts drawing international attention, comparing the situation to a battle for free speech versus censorship. Musk, in true Musk fashion, didn't just stop at defiance. He shared all of Moraes' demands publicly, suggesting users use VPNs, and even hinted at closing X's operations in Brazil, which eventually happened, citing the need to protect staff safety.

The Latest Chapter: Recently, X announced the closure of its operations in Brazil, a move seen as the culmination of this legal and ideological battle. Musk framed it as a stand against what he saw as an assault on free speech, while critics viewed it as an overreaction or a strategic retreat.

Government

California Passes Bill Requiring Easier Data Sharing Opt Outs (therecord.media) 22

Most of the attention today has been focused on California's controversial "kill switch" AI safety bill, which passed the California State Assembly by a 45-11 vote. However, California legislators passed another tech bill this week which requires internet browsers and mobile operating systems to offer a simple tool for consumers to easily opt out of data sharing and selling for targeted advertising. Slashdot reader awwshit shares a report from The Record: The state's Senate passed the landmark legislation after the Assembly approved it late Wednesday. The Senate then added amendments to the bill, which now goes back to the Assembly for final sign-off before it is sent to the governor's desk, a process Matt Schwartz, a policy analyst at Consumer Reports, called a "formality." California, long a bellwether for privacy regulation, now sets an example for other states, which could offer the same protections and in doing so dramatically disrupt the online advertising ecosystem, according to Schwartz.

"If folks use it, [the new tool] could severely impact businesses that make their revenue from monetizing consumers' data," Schwartz said in an interview with Recorded Future News. "You could go from relatively small numbers of individuals taking advantage of this right now to potentially millions and that's going to have a big impact." As it stands, many Californians don't know they have the right to opt out because the option is invisible on their browsers, a fact which Schwartz said has "artificially suppressed" the existing regulation's intended effects. "It shouldn't be that hard to send the universal opt out signal," Schwartz added. "This will require [browsers and mobile operating systems] to make that setting easy to use and find."

Businesses

Internal AWS Sales Guidelines Spread Doubt About OpenAI's Capabilities (businessinsider.com) 14

An anonymous reader shares a report: OpenAI lacks advanced security and customer support. It's just a research company, not an established cloud provider. The ChatGPT-maker is not focused enough on corporate customers. These are just some of the talking points Amazon Web Services' salespeople are told to follow when dealing with customers using, or close to buying, OpenAI's products, according to internal sales guidelines obtained by Business Insider. Other talking points from the documents include OpenAI's lack of access to third-party AI models and weak enterprise-level contracts. AWS salespeople should dispel the hype around AI chatbots like ChatGPT, and steer the conversation toward AWS's strength of running the cloud infrastructure behind popular AI services, the guidelines added.

[...] The effort to criticize OpenAI is also unusual for Amazon, which often says it's so customer-obsessed that it pays little attention to competitors. This is the latest sign that suggests Amazon knows it has work to do to catch up in the AI race. OpenAI, Microsoft, and Google have taken an early lead and could become the main platforms where developers build new AI products and tools. Though Amazon created a new AGI team last year, the company's existing AI models are considered less powerful than those made by its biggest competitors. Instead, Amazon has prioritized selling AI tools like Bedrock, which gives customers access to third-party AI models. AWS also offers cloud access to in-house AI chips that compete with Nvidia GPUs, with mixed results so far.

Entertainment

Disney Gives Up On Trying To Use Disney+ Excuse To Settle a Wrongful Death Lawsuit (theverge.com) 110

An anonymous reader shares a report: Disney has now agreed that a wrongful death lawsuit should be decided in court following backlash for initially arguing the case belonged in arbitration because the grieving widower had once signed up for a Disney Plus trial. "With such unique circumstances as the ones in this case, we believe this situation warrants a sensitive approach to expedite a resolution for the family who have experienced such a painful loss," chairman of Disney experiences Josh D'Amaro said in a statement to The Verge. "As such, we've decided to waive our right to arbitration and have the matter proceed in court."

The lawsuit was filed in February by Jeffrey Piccolo, the husband of a 42-year-old woman who died last year due to an allergic reaction that occurred after eating at a restaurant in the Disney Springs shopping complex in Orlando. The case gained widespread media attention after Piccolo's legal team challenged Disney's motion to dismiss the case, arguing that a forced arbitration agreement Piccolo signed was effectively invisible.

Power

Data Centers Are Consuming Electricity Supplies - and Possibly Hurting the Environment (yahoo.com) 77

Data center construction "could delay California's transition away from fossil fuels and raise electric bills for everyone else," warns the Los Angeles Times — and also increase the risk of blackouts: Even now, California is on the verge of not having enough power. An analysis of public data by the nonprofit GridClue ranks California 49th of the 50 states in resilience — or the ability to avoid blackouts by having more electricity available than homes and businesses need at peak hours... The state has already extended the lives of Pacific Gas & Electric Co.'s Diablo Canyon nuclear plant as well as some natural gas-fueled plants in an attempt to avoid blackouts on sweltering days when power use surges... "I'm just surprised that the state isn't tracking this, with so much attention on power and water use here in California," said Shaolei Ren, associate professor of electrical and computer engineering at UC Riverside. Ren and his colleagues calculated that the global use of AI could require as much fresh water in 2027 as that now used by four to six countries the size of Denmark.

Driving the data center construction is money. Today's stock market rewards companies that say they are investing in AI. Electric utilities profit as power use rises. And local governments benefit from the property taxes paid by data centers.

The article notes a Goldman Sachs estimate that by 2030, data centers could consume up to 11% of all U.S. power demand — up from 3% now. And it shows how the sprawling build-out of data centers across America is impacting surrounding communities:
  • The article notes that California's biggest concentration of data centers — more than 50 near the Silicon Valley city of Santa Clara — is powered by a utility emitting "more greenhouse gas than the average California electric utility because 23% of its power for commercial customers comes from gas-fired plants. Another 35% is purchased on the open market where the electricity's origin can't be traced." Consumer electric rates are rising "as the municipal utility spends heavily on transmission lines and other infrastructure," while the data centers now consume 60% of the city's electricity.
  • Energy officials in northern Virginia "have proposed a transmission line to shore up the grid that would depend on coal plants that had been expected to be shuttered."
  • "Earlier this year, Pacific Gas & Electric told investors that its customers have proposed more than two dozen data centers, requiring 3.5 gigawatts of power — the output of three new nuclear reactors."

Social Networks

41 Science Professionals Decry Harms and Mistrust Caused By COVID Lab Leak Claim (yahoo.com) 303

In 1999 Los Angeles Times reporter Michael Hiltzik co-authored a Pulitzer Prize-winning story. Now a business columnist for the Times, this week he covers new pushback on the COVID lab leak claim: Here's an indisputable fact about the theory that COVID originated in a laboratory: Most Americans believe it to be true. That's important for several reasons. One is that evidence to support the theory is nonexistent.

Another is that the claim itself has fomented a surge of attacks on science and scientists that threatens to drive promising researchers out of the crucial field of pandemic epidemiology. That concern was aired in a commentary by 41 biologists, immunologists, virologists and physicians published Aug. 1 in the Journal of Virology. The journal probably isn't in the libraries of ordinary readers, but the article's prose is commendably clear and its conclusions eye-opening. "The lab leak narrative fuels mistrust in science and public health infrastructures," the authors observe. "Scientists and public health professionals stand between us and pandemic pathogens; these individuals are essential for anticipating, discovering, and mitigating future pandemic threats. Yet, scientists and public health professionals have been harmed and their institutions have been damaged by the skewed public and political opinions stirred by continued promotion of the lab leak hypothesis in the absence of evidence...."

[O]ne can't advance the lab leak theory without positing a vast conspiracy encompassing scientists in China and the U.S., and Chinese and U.S. government officials. How else could all the evidence of a laboratory event that resulted in more than 7 million deaths worldwide be kept entirely suppressed for nearly five years... "Validating the lab leak hypothesis requires intelligence evidence that the WIV possessed or carried out work on a SARS-CoV-2 precursor virus prior to the pandemic," the Virology paper asserts. "Neither the scientific community nor multiple western intelligence agencies have found such evidence." Despite that, "the lab leak hypothesis receives persistent attention in the media, often without acknowledgment of the more solid evidence supporting zoonotic emergence," the paper says...

I've written before about the smears, physical harassment and baseless accusations of fraud and other wrongdoing that lab leak propagandists have visited upon scientists whose work has challenged their claims; similar attacks have targeted experts who have worked to debunk other anti-science narratives, including those about global warming and vaccines... What's notable about the Virology paper is that it represents a comprehensive and long-overdue pushback by the scientific community against such behavior. More to the point, it focuses on the consequences for public health and the scientific mission from the rise of anti-science propaganda... "Scientists have withdrawn from social media platforms, rejected opportunities to speak in public, and taken increased safety measures to protect themselves and their families," the authors report. "Some have even diverted their work to less controversial and less timely topics. We now see a long-term risk of having fewer experts engaged in work that may help thwart future pandemics...."

Thanks in part to social media, anti-science has become more virulent and widespread, the Virology authors write.

Politics

OpenAI Says Iranian Group Used ChatGPT To Try To Influence US Election (axios.com) 27

An anonymous reader quotes a report from the Washington Post: Artificial intelligence company OpenAI said Friday that an Iranian group had used its ChatGPT chatbot to generate content to be posted on websites and social media (Warning: source is paywalled; alternative source) seemingly aimed at stirring up polarization among American voters in the presidential election. The sites and social media accounts that OpenAI discovered posted articles and opinions made with help from ChatGPT on topics including the conflict in Gaza and the Olympic Games. They also posted material about the U.S. presidential election, spreading misinformation and writing critically about both candidates, a company report said. Some appeared on sites that Microsoft last week said were used by Iran to post fake news articles intended to amp up political division in the United States, OpenAI said.

The AI company banned the ChatGPT accounts associated with the Iranian efforts and said their posts had not gained widespread attention from social media users. OpenAI found "a dozen" accounts on X and one on Instagram that it linked to the Iranian operation and said all appeared to have been taken down after it notified those social media companies. Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said the activity was the first case of the company detecting an operation that had the U.S. election as a primary target. "Even though it doesn't seem to have reached people, it's an important reminder, we all need to stay alert but stay calm," he said.
