AI

From Sci-Fi To State Law: California's Plan To Prevent AI Catastrophe (arstechnica.com) 39

An anonymous reader quotes a report from Ars Technica: California's "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall "safety" of large artificial intelligence models. But critics are concerned that the bill's overblown focus on existential threats by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today. SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to "safety incidents."

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of "critical harms" that an AI system might enable. That includes harms leading to "mass casualties or at least $500 million of damage," such as "the creation or use of a chemical, biological, radiological, or nuclear weapon" (hello, Skynet?) or "precise instructions for conducting a cyberattack... on critical infrastructure." The bill also alludes to "other grave harms to public safety and security that are of comparable severity" to those laid out explicitly. An AI model's creator can't be held liable for harm caused through the sharing of "publicly accessible" information from outside the model -- simply asking an LLM to summarize The Anarchist Cookbook probably wouldn't put it in violation of the law, for instance. Instead, the bill seems most concerned with future AIs that could come up with "novel threats to public safety and security." More than a human using an AI to brainstorm harmful ideas, SB-1047 focuses on the idea of an AI "autonomously engaging in behavior other than at the request of a user" while acting "with limited human oversight, intervention, or supervision."

To prevent this straight-out-of-science-fiction eventuality, anyone training a sufficiently large model must "implement the capability to promptly enact a full shutdown" and have policies in place for when such a shutdown would be enacted, among other precautions and tests. The bill also focuses at points on AI actions that would require "intent, recklessness, or gross negligence" if performed by a human, suggesting a degree of agency that does not exist in today's large language models.
The bill's supporters include AI experts Geoffrey Hinton and Yoshua Bengio, who believe the bill is a necessary precaution against potential catastrophic AI risks.

Bill critics include tech policy expert Nirit Weiss-Blatt and AI community voice Daniel Jeffries. They argue that the bill is based on science fiction fears and could harm technological advancement. Ars Technica contributor Timothy Lee and Meta's Yann LeCun say that the bill's regulations could hinder "open weight" AI models and innovation in AI research.

Instead, some experts suggest a better approach would be to focus on regulating harmful AI applications rather than the technology itself -- for example, outlawing nonconsensual deepfake pornography and improving AI safety research.
Social Networks

Tumblr Is Never Going Back To Porn (theverge.com) 99

An anonymous reader quotes a report from The Verge: Automattic CEO Matt Mullenweg would like you to please stop asking Tumblr to bring back porn because it isn't going to happen. After widespread and inaccurate speculation that Tumblr would lift its ban on adult content, Mullenweg posted a long explanation yesterday of why Tumblr will never go back to the old days. Or, in his words: "the casually porn-friendly era of the early internet is currently impossible." That doesn't mean Tumblr's policies will stay the same. Mullenweg has said before that Automattic (which bought Tumblr in 2019) wants to loosen the rules its old owner Verizon implemented in 2018, and he reiterated that here, echoing comments he made earlier this week. Verizon's ban "took out not only porn but also a ton of art and artists," Mullenweg wrote in his post. "This policy is currently still in place, though the Tumblr and Automattic teams are working to make it more open and common-sense." Tumblr is supposed to implement those policies soon, putting the site more in line with Automattic's WordPress.com blogging platform.

"That said, no modern internet service in 2022 can have the rules that Tumblr did in 2007," Mullenweg wrote, quoting Tumblr's old liberal policy slogan. (If you're wondering, it was "go nuts, show nuts.") "I agree with 'go nuts, show nuts' in principle, but the casually porn-friendly era of the early internet is currently impossible." On Tumblr, that era helped produce a lot of unique, often queer, blogs with sexual content. The 2018 ban changed the tenor of the site for good -- and this week, many users were enthusiastically but prematurely celebrating its end. Why is returning to that era impossible? For now, it's largely because of intermediaries that play a massive role in how people access the web. Payment processors have long been leery of adult content, and they've stepped up enforcement in recent years, in part because of concerns about child abuse and nonconsensual pornography. Apple's iOS App Store has been staunchly opposed to it since launch. And without those two pieces of infrastructure, running a for-profit site is incredibly difficult. "If Apple permanently banned Tumblr from the App Store, we'd probably have to shut the service down," Mullenweg noted. Some nonprofit sites that do allow things like explicit artwork -- primarily the Archive of Our Own fanworks site -- have remained persistently web-only despite years of requests for apps. [...]

If you reached this article through Twitter or Reddit, you might have a fairly obvious question right now, and Mullenweg raises it: why can both those platforms, fairly unusually for modern social networks, allow a lot of porn? "Ask Apple, because I don't know," says Mullenweg. He speculates that Tumblr and Reddit are both too big to ban -- although Apple has forced moderation changes even for giant services like Facebook. The overall upshot, to Mullenweg, is this: "If you wanted to start an adult social network in 2022, you'd need to be web-only on iOS and side-load on Android, take payment in crypto, have a way to convert crypto to fiat for business operations without being blocked, do a ton of work in age and identity verification and compliance so you don't go to jail, protect all of that identity information so you don't dox your users, and make a ton of money. I do hope that a dedicated service or company is started that will replace what people used to get from porn on Tumblr. It may already exist and I don't know about it. They'll have an uphill battle under current regimes, and if you think that's a bad thing please try to change the regimes. Don't attack companies following legal and business realities as they exist."

The Internet

Should Some Sites Be Liable For The Content They Host? (nytimes.com) 265

America's lawmakers are scrutinizing the blanket protections in Section 230 of the Communications Decency Act, which lets online companies moderate their own sites without incurring legal liability for everything they host.

schwit1 shared this article from the New York Times: Last month, Senator Ted Cruz, Republican of Texas, said in a hearing about Google and censorship that the law was "a subsidy, a perk" for big tech that may need to be reconsidered. In an April interview, Speaker Nancy Pelosi of California called Section 230 a "gift" to tech companies "that could be removed."

"There is definitely more attention being paid to Section 230 than at any time in its history," said Jeff Kosseff, a cybersecurity law professor at the United States Naval Academy and the author of a book about the law, The Twenty-Six Words That Created the Internet .... Mr. Wyden, now a senator [and a co-author of the original bill], said the law had been written to provide "a sword and a shield" for internet companies. The shield is the liability protection for user content, but the sword was meant to allow companies to keep out "offensive materials." However, he said firms had not done enough to keep "slime" off their sites. In an interview with The New York Times, Mr. Wyden said he had recently told tech workers at a conference on content moderation that if "you don't use the sword, there are going to be people coming for your shield."

There is also a concern that the law's immunity is too sweeping. Websites trading in revenge pornography, hate speech or personal information to harass people online receive the same immunity as sites like Wikipedia. "It gives immunity to people who do not earn it and are not worthy of it," said Danielle Keats Citron, a law professor at Boston University who has written extensively about the statute. The first blow came last year with the signing of a law that creates an exception in Section 230 for websites that knowingly assist, facilitate or support sex trafficking. Critics of the new law said it opened the door to create other exceptions and would ultimately render Section 230 meaningless.

The article notes that while lawmakers from both parties are challenging the protections, "they disagree on why," with Republicans complaining that the law has only protected some free speech while still leaving conservative voices open to censorship on major platforms.

The Times also notes that when Wyden co-authored the original bill in 1996, Google didn't exist yet, and Mark Zuckerberg was 11 years old.
Communications

The Consequences of Indecency (techcrunch.com) 502

Ron Wyden, a senior U.S. Senator from Oregon, argues there should be consequences for internet companies that refuse to remove hate speech from their platforms. An anonymous reader shares an excerpt from an op-ed Wyden wrote, via TechCrunch: I wrote the law that allows sites to be unfettered free speech marketplaces. I wrote that same law, Section 230 of the Communications Decency Act, to provide vital protections to sites that didn't want to host the most unsavory forms of expression. The goal was to protect the unique ability of the internet to be the proverbial marketplace of ideas while ensuring that mainstream sites could reflect the ethics of society as a whole. In general, this has been a success -- with one glaring exception. I never expected that internet CEOs would fail to understand one simple principle: that an individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children, is far more indecent than an individual posting pornography.

Social media cannot exist without the legal protections of Section 230. That protection is not constitutional, it's statutory. Failure by the companies to properly understand the premise of the law is the beginning of the end of the protections it provides. I say this because their failures are making it increasingly difficult for me to protect Section 230 in Congress. Members across the spectrum, including far-right House and Senate leaders, are agitating for government regulation of internet platforms. Even if government doesn't take the dangerous step of regulating speech, just eliminating the 230 protections is enough to have a dramatic, chilling effect on expression across the internet. Were Twitter to lose the protections I wrote into law, within 24 hours its potential liabilities would be many multiples of its assets and its stock would be worthless. The same for Facebook and any other social media site. Boards of directors should have taken action long before now against CEOs who refuse to recognize this threat to their business.
In an interview with Recode, Wyden said that platforms should be punished for hosting content that goes against "common decency." "I think what the Alex Jones case shows, we're gonna really be looking at what the consequences are for just leaving common decency in the dust," Wyden told Recode's Kara Swisher. "...What I'm gonna be trying to do in my legislation is to really lay out what the consequences are when somebody who is a bad actor, somebody who really doesn't meet the decency principles that reflect our values, if that bad actor blows by the bounds of common decency, I think you gotta have a way to make sure that stuff is taken down."
Bitcoin

Child Abuse Imagery Found Within Bitcoin's Blockchain (theguardian.com) 321

German researchers have discovered unknown persons are using bitcoin's blockchain to store and link to child abuse imagery, potentially putting the cryptocurrency in jeopardy. From a report: The blockchain is the open-source, distributed ledger that records every bitcoin transaction, but it can also store small bits of non-financial data. This data is typically notes about the trade of bitcoin, recording what it was for or other metadata. But it can also be used to store links and files. Researchers from RWTH Aachen University in Germany found that around 1,600 files were currently stored in bitcoin's blockchain. Of those files, at least eight were of sexual content, including one thought to be an image of child abuse and two that contain 274 links to child abuse content, 142 of which link to dark web services. "Our analysis shows that certain content, e.g., illegal pornography, can render the mere possession of a blockchain illegal," the researchers wrote. "Although court rulings do not yet exist, legislative texts from countries such as Germany, the UK, or the USA suggest that illegal content such as [child abuse imagery] can make the blockchain illegal to possess for all users. This especially endangers the multi-billion dollar markets powering cryptocurrencies such as bitcoin."
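The report doesn't detail the researchers' exact insertion methods, but one well-known channel for attaching non-financial data to a transaction is Bitcoin's OP_RETURN output script. Below is a minimal, purely illustrative sketch of how a short payload can be packed into and read back out of such a script; the helper names and the payload are our own, not taken from the study.

```python
# Minimal sketch of embedding arbitrary bytes in a Bitcoin OP_RETURN output
# script. OP_RETURN (0x6a) marks an output as a provably unspendable data
# carrier; a direct push opcode (the byte value equals the push length, for
# pushes of 1-75 bytes) then carries the payload.

OP_RETURN = 0x6a

def embed_payload(data: bytes) -> bytes:
    """Build an OP_RETURN script carrying up to 75 bytes of arbitrary data."""
    if len(data) > 75:
        raise ValueError("a single direct push is limited to 75 bytes")
    return bytes([OP_RETURN, len(data)]) + data

def extract_payload(script: bytes) -> bytes:
    """Recover the pushed bytes from a script built by embed_payload."""
    if not script or script[0] != OP_RETURN:
        raise ValueError("not an OP_RETURN script")
    length = script[1]
    return script[2:2 + length]

link = b"http://example.onion/page"  # hypothetical link payload
script = embed_payload(link)
assert extract_payload(script) == link
```

Once such an output is included in a mined transaction, the payload is replicated to every full node, which is exactly why the researchers argue illegal content can taint possession of the whole chain.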
Science

The Environmental Cost of Internet Porn (theatlantic.com) 302

An anonymous reader shares a report (condensed for space): Online streaming is a win for the environment. Streaming music eliminates all that physical material -- CDs, jewel cases, cellophane, shipping boxes, fuel -- and can reduce carbon-dioxide emissions by 40 percent or more. Scientists who analyze the environmental impact of the internet tout the benefits of this "dematerialization," observing that energy use and carbon-dioxide emissions will drop as media can increasingly be delivered over the internet. But this theory might have a major exception: porn. Since the turn of the century, the pornography industry has experienced two intense hikes in popularity. In the early 2000s, broadband enabled higher download speeds. Then, in 2008, the advent of so-called tube sites allowed users to watch clips for free, like people watch videos on YouTube. Adam Grayson, the chief financial officer of the adult company Evil Angel, calls the latter hike "the great mushroom-cloud porn explosion of 2008." Precise numbers don't exist to quantify specifics, but the impression across the industry is that viewership is way, way up. Pornhub, the world's most popular porn site, provides some of the only accessible data in its yearly web-traffic report. The first Year In Review post, in 2013, tabulated 14.7 billion visits to the site. By 2016, the number of visits had almost doubled, to 23 billion, and those visitors watched more than 4.59 billion hours of porn. And Pornhub is just one site. Using a formula that Netflix published on its blog in 2015, Nathan Ensmenger, a professor at Indiana University who is writing a book about the environmental history of the computer, calculates that if Pornhub streams video as efficiently as Netflix (0.0013 kWh per streaming hour), it used 5.967 million kWh in 2016. For comparison, that's about the same amount of energy 11,000 light bulbs would use if left on for a year. And operating with Netflix's efficiency would be a best-case scenario for the porn site, Ensmenger believes.
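Ensmenger's back-of-the-envelope estimate is easy to reproduce from the figures quoted above. A quick sketch (the 60 W bulb wattage is our own assumption for the light-bulb comparison; the article doesn't state one):

```python
# Reproduce the estimate: streaming hours times Netflix's published
# per-streaming-hour energy figure.
hours_streamed = 4.59e9   # Pornhub streaming hours reported for 2016
kwh_per_hour = 0.0013     # Netflix's per-streaming-hour figure (2015 blog post)

total_kwh = hours_streamed * kwh_per_hour
print(f"{total_kwh / 1e6:.3f} million kWh")  # prints 5.967 million kWh

# Sanity-check the light-bulb comparison, assuming 60 W bulbs left on all year.
bulb_kwh_per_year = 0.060 * 24 * 365         # about 526 kWh per bulb per year
print(round(total_kwh / bulb_kwh_per_year))  # on the order of 11,000 bulbs
```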
Censorship

Starbucks and McDonald's Announce Porn Blocks On Their Wi-Fi Networks (cnn.com) 284

An anonymous reader quotes a report from CNN Money: Anti-pornography groups have succeeded in their efforts to get Starbucks and McDonald's to block porn on the chains' Wi-Fi networks..."We had not heard from our customers that this was an issue, but we saw an opportunity that is consistent with our goal of providing an enjoyable experience for families," McDonald's said in a statement... Starbucks said Friday it will do the same thing at its company-owned stores around the globe. "Once we determine that our customers can access our free Wi-Fi in a way that also doesn't involuntarily block unintended content, we will implement this in our stores," said a Starbucks spokesperson. "In the meantime, we reserve the right to stop any behavior that interferes with our customer experience, including what is accessed on our free Wi-Fi..."
Meanwhile, this week, the Republican Party officially added the "public health crisis" of porn to its platform.
Government

Mozilla Fights FBI In Court For Details On Tor Browser Hack (helpnetsecurity.com) 58

An anonymous reader writes from a report on Help Net Security: Mozilla has asked a Washington State District Court to compel FBI investigators to share details about the vulnerability used in the Tor Browser hack with Mozilla before they share it with the defendant in a lawsuit, so that Mozilla can fix it before the knowledge becomes public. The lawsuit in question is against Jay Michaud, a Vancouver (Wa.) teacher who stands accused of accessing and downloading child pornography from a website on the Dark Web. The FBI used a "network investigative technique" (NIT) to discover the IP address and identity of the defendant, which was only possible thanks to a vulnerability in the Tor Browser. Why does Mozilla care to learn about the vulnerability? "The Tor Browser is partially based on our Firefox browser code. Some have speculated, including members of the defense team, that the vulnerability might exist in the portion of the Firefox browser code relied on by the Tor Browser," Denelle Dixon-Thayer, Chief Legal and Business Officer at Mozilla Corporation, explained.
The Internet

Crowdsourcing Confirms: Websites Inaccessible on Comcast 349

Bennett Haselton writes with a bit of online detective work done with a little help from some (internet-distributed) friends: "A website that was temporarily inaccessible on my Comcast Internet connection (but accessible to my friends on other providers) led me to investigate further. Using a perl script, I found a sampling of websites that were inaccessible on Comcast (hostnames not resolving on DNS) but were working on other networks. Then I used Amazon Mechanical Turk to pay volunteers 25 cents apiece to check if they could access the website, and confirmed that (most) Comcast users were blocked from accessing it while users on other providers were not. The number of individual websites similarly inaccessible on Comcast could potentially be in the millions." Read on for the details.
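Haselton's original check used a perl script and tested from multiple ISPs via Mechanical Turk; a single-vantage sketch of the same idea is simple enough: ask the local resolver whether each hostname resolves at all. The hostnames below are placeholders, not the sites from his investigation.

```python
# Check whether hostnames resolve through this machine's configured DNS
# resolver, the failure mode described above ("hostnames not resolving").
import socket

def resolves(hostname: str) -> bool:
    """Return True if the local resolver maps hostname to an address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

suspect_hosts = ["example.com", "definitely-not-real.invalid"]
for host in suspect_hosts:
    print(host, "resolves" if resolves(host) else "DOES NOT resolve")
```

Running the same script from vantage points on different providers, as Haselton did with paid volunteers, is what turns this from a local curiosity into evidence that one ISP's resolvers are the outlier.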
Censorship

Sites Blocked By Smartfilter, Censored in Saudi Arabia 112

Slashdot contributor Bennett Haselton writes: "Internet users in Saudi Arabia, along with most users in the United Arab Emirates, are blocked by their respective government censors from accessing the websites of the Trinity Davison Lutheran Church, Deliverance Tabernacle Ministries in Pittsburgh, the Amitayu Buddhist Society of Taiwan, and GayFaith.org. An attempt to access any of those websites yields an error page like this one. However, the sites are not blocked because they conflict with the religious beliefs of those countries' governments. Rather, they are blocked because Smartfilter -- the American-made blocking program sold by McAfee, and used for state-mandated Internet censorship in those countries -- classifies those sites as "pornography". You can see the screen shots here, here, here and here." Read on for the rest of Bennett's thoughts.
Encryption

Seeking Fifth Amendment Defenders 768

Bennett Haselton writes with his take on a case going back and forth in U.S. courts right now about whether a defendant can be ordered to decrypt his own hard drives when they may incriminate him. "A Wisconsin defendant in a criminal child-pornography case recently invoked his Fifth Amendment right to avoid giving the FBI the password to decrypt his hard drive. At the risk of alienating fellow civil-libertarians, I admit I've never seen the particular value of the Fifth Amendment right against self-incrimination. So I pose this logical puzzle: come up with a specific, precisely defined scenario, where the Fifth Amendment makes a positive difference." Read on for the rest of Bennett's thoughts.
Privacy

Ontario Court Wrong About IP Addresses, Too 258

Frequent Slashdot contributor Bennett Haselton comments on a breaking news story out of the Canadian courts: "An Ontario Superior Court Justice has ruled that Canadian police can obtain the identities of Internet users without a warrant, writing that there is 'no reasonable expectation of privacy' for a user's online identity, and drawing the analogy that 'One's name and address or the name and address of your spouse are not biographical information one expects would be kept private from the state.' But why in the world is it valid to compare an IP address with a street address in the phone book?" Read on for Bennett's analysis.
Censorship

Verizon Cutting Access To Entire Alt.* Usenet Hierarchy 579

modemac writes "Verizon has declared it will no longer offer access to the entire alt.* hierarchy of Usenet newsgroups to its customers. This stems from last week's agreement for major ISPs to cut off access to 'newsgroups and Web sites' that make child pornography available. The story notes, 'No law requires Verizon to do this. Instead, the company (and, to varying extents, Time Warner Cable and Sprint) agreed to restrictions on Usenet in response to political strong-arming by New York State Attorney General Andrew Cuomo, a Democrat. Cuomo claimed that his office found child porn on 88 newsgroups — out of roughly 100,000 newsgroups that exist.' In response, Verizon will cut its customers off from a large portion of Usenet, as it will only carry newsgroups in the Big 8."
The Courts

Court Rules Burning Porn = Making Porn 887

An anonymous reader writes "An appeals court has upheld the prosecution of a Michigan man who was charged with production of child pornography after downloading and burning pornographic pictures from the Internet. The pictures were created by a Russian website that the man was not affiliated with in any form. From the court decision (PDF): 'After reviewing the dictionary definition of the word make, the circuit court stated that the bottom line was that, following the mechanical and technical act of burning images onto the CD-Rs, something new was created or made that did not previously exist.' Is this simply a court's overreaction to a scumbag pedophile? And how does this affect the lawsuits by the BSA, RIAA, and MPAA?"

Bush, Kerry, and Nader Respond to Youth Voter Questions 1312

Slashdot readers both contributed and helped moderate questions for the New Voters Project Presidential Youth Debate. You can read the answers below, but if you'd like to see an expanded introduction, thumbnails of the candidates, and different formatting, go to the Youth Debate page. And that's not all: We're supposed to get candidates' rebuttals on or about October 17, so don't touch that dial!
The Courts

ACLU and ALA Victorious in CIPA Challenge 352

Several people have submitted this news blurb about a victory in the CIPA case. If CIPA doesn't ring a bell, my earlier summary should help, or see this article from last month when the suit was heard in court. The ALA's CIPA page has more information, or read the lengthy decision. This is a rather surprising bit of good news; while the government often has great discretion in deciding how funds are spent (read my summary above for how the law worked), the judges in this case accepted the argument that requiring censoring software automatically led to censoring things that weren't obscene, or child pornography, or "harmful to minors," and that that wasn't acceptable. I've reproduced the first part of the decision below. The government may choose to (and probably will) appeal to the Supreme Court.

Preserve Your Rights Online - Act Now 583

Imagine Slashdot closing its Your Rights Online section because you no longer have any rights online, and find many of your other rights severely curtailed, too. Saturday a small group of people, including U.S. Representative Lynn Rivers, from Michigan's 13th Congressional District, met in the University of Maryland Baltimore County [UMBC] library to discuss ways to maintain Americans' civil liberties despite major pressure to curtail them in the name of "fighting terrorism." The government does listen, you know, if you speak to the right people in the right way. So here's a guide, a HOWTO, if you will, that will teach you how to lobby effectively for your Constitutional rights.
Censorship

Censorware Flaws Shown To COPA Commission 147

At 11:30 AM PDT today, Bennett Haselton of Peacefire is scheduled to begin speaking to the COPA Commission. The occasion is their third and final hearing on the subject of blocking software, aka censorware. Our highly hilarious report on the second hearing may still be fresh in your memory; this time around, Bennett takes on the products FamilyClick, CyberSentinel, and SurfWatch.
News

Answers From Sealand: CTO Ryan Lackey Responds 151

A few weeks ago, you asked questions of Ryan Lackey, CTO for HavenCo, a company dedicated to providing secure off-shore data hosting from Sealand, a principality off the coast of England. Ryan has lately survived dental emergencies, the loss of a laptop (it dropped into the North Sea -- how many people can say that?) and other stresses, but he's followed through with some interesting answers. He even has some ideas for how you can make a lot of money, and lists the tools you need to start your own data haven. Kudos to Ryan for taking the time to answer so thoroughly.
The Internet

FreeNet's Ian Clarke Answers Privacy Questions 218

On April 5th you asked Ian Clarke of FreeNet many questions about this new project, which is designed to permit almost totally anonymous Internet posting of almost any kind of material. Here are his answers.
