Patents

Amazon Says German Customers Won't Lose Amazon Prime As a Result of Nokia Patent Win 12

A German court has ruled that Amazon's Prime Video service violates a Nokia-owned patent, ordering Amazon to stop streaming in its current form or face fines of 250,000 euros per violation. However, Amazon assured customers in a statement on Friday that there is no risk of losing access to Prime Video because the decision affects only a limited functionality related to casting videos between devices.

"Prime Video will comply with this local judgement and is currently considering next steps. However, there is absolutely no risk at all for customers losing access to Prime Video," Amazon's Prime Video spokesperson told Reuters. Meanwhile, Nokia's chief licensing officer, Arvin Patel, said: "...the innovation ecosystem breaks down if patent holders are not fairly compensated for the use of their technologies, as it becomes much harder for innovators to fund the development of next generation technologies."
Youtube

YouTube's Ad Blocker Crackdown Now Includes Third-Party Apps (theverge.com) 205

YouTube has updated its policies to no longer allow "third-party apps to turn off ads." The Verge reports: This appears to target mobile ad blockers like AdGuard, which lets you open YouTube within the ad blocking app, where you'll get to view videos interruption-free. "We only allow third-party apps to use our API when they follow our API Services Terms of Service," YouTube says. "When we find an app that violates these terms, we will take appropriate action to protect our platform, creators, and viewers." To get around this, YouTube once again suggests signing up for the ad-free YouTube Premium.
Media

YouTube Is Getting Serious About Blocking Ad Blockers (theverge.com) 286

Emma Roth reports via The Verge: YouTube is broadening its efforts to crack down on ad blockers. The platform has "launched a global effort" to encourage users to allow ads or try YouTube Premium, YouTube communications manager Christopher Lawton says in a statement provided to The Verge. If you run into YouTube's block, you may see a notice that says "video playback is blocked unless YouTube is allowlisted or the ad blocker is disabled." It also includes a prompt to allow ads or try YouTube Premium. You may get prompts about YouTube's stance on ad blockers but still be able to watch a video, though, for one Verge staffer, YouTube now fully blocks them nearly every time.

YouTube confirmed that it was disabling videos for users with ad blockers in June, but Lawton described it as only a "small experiment globally" at the time. Now, YouTube has expanded this effort. Over the past several weeks, more users with ad blockers installed have found themselves unable to watch YouTube videos, with a post from Android Authority highlighting the increase in reports. Lawton maintains that the "use of ad blockers" violates the platform's terms of service, adding that "ads support a diverse ecosystem of creators globally and allow billions to access their favorite content on YouTube."

Google

Google Violated Its Standards in Ad Deals, Research Finds (wsj.com) 19

Google violated its promised standards when placing video ads on other websites, according to new research that raises questions about the transparency of the tech giant's online-ad business. From a report: Google's YouTube runs ads on its own site and app. But the company also brokers the placement of video ads on other sites across the web through a program called Google Video Partners. Google charges a premium, promising that the ads it places will run on high-quality sites, before the page's main video content, with the audio on, and that brands will only pay for ads that aren't skipped.

Google violates those standards about 80% of the time, according to research from Adalytics, a company that helps brands analyze where their ads appear online. The firm accused the company of placing ads in small, muted, automatically played videos off to the side of a page's main content, on sites that don't meet Google's standards for monetization, among other violations. Adalytics compiled its data by observing campaigns from more than 1,100 brands that got billions of ad impressions between 2020 and 2023. The company shared its findings with The Wall Street Journal. In a statement, Google said the report "makes many claims that are inaccurate and doesn't reflect how we keep advertisers safe."

Youtube

YouTube Tells Open-Source Privacy Software 'Invidious' to Shut Down (vice.com) 42

YouTube has sent a cease-and-desist letter to Invidious, an open-source "alternative front-end" to the website which allows users to watch videos without having their data tracked, claiming it violates YouTube's API policy and demanding that it be shut down within seven days. From a report: "We recently became aware of your product or service, Invidious," reads the letter, which was posted on the Invidious GitHub last week. "Your Client appears to be in violation of the YouTube API Services Terms of Service and Developer Policies." The letter then delineates the policies which Invidious is accused of having violated, such as not displaying a link to YouTube's Terms of Service or "clearly" explaining what it does with user information. Invidious is open-source software licensed under AGPL-3.0, and it markets itself as a way for users to interact with YouTube without allowing the site to collect their data, or having to make an account. "Invidious protects you from the prying eyes of Google," its homepage reads. "It won't track you either!" Invidious also allows users to watch videos without being interrupted by "annoying ads," which is how YouTube makes most of its money.
Facebook

Has Online Disinformation Splintered and Become More Intractable? (yahoo.com) 455

Disinformation has "metastasized" since experts began raising alarms about the threat, reports the New York Times.

"Despite years of efforts by the media, by academics and even by social media companies themselves to address the problem, it is arguably more pervasive and widespread today." Not long ago, the fight against disinformation focused on the major social media platforms, like Facebook and Twitter. When pressed, they often removed troubling content, including misinformation and intentional disinformation about the Covid-19 pandemic. Today, however, there are dozens of new platforms, including some that pride themselves on not moderating — censoring, as they put it — untrue statements in the name of free speech....

The purveyors of disinformation have also become increasingly sophisticated at sidestepping the major platforms' rules, while the use of video to spread false claims on YouTube, TikTok and Instagram has made them harder for automated systems to track than text.... A report last month by NewsGuard, an organization that tracks the problem online, showed that nearly 20 percent of videos presented as search results on TikTok contained false or misleading information on topics such as school shootings and Russia's war in Ukraine. "People who do this know how to exploit the loopholes," said Katie Harbath, a former director of public policy at Facebook who now leads Anchor Change, a strategic consultancy.

With the [U.S.] midterm elections only weeks away, the major platforms have all pledged to block, label or marginalize anything that violates company policies, including disinformation, hate speech or calls to violence. Still, the cottage industry of experts dedicated to countering disinformation — think tanks, universities and nongovernment organizations — say the industry is not doing enough. The Stern Center for Business and Human Rights at New York University warned last month, for example, that the major platforms continued to amplify "election denialism" in ways that undermined trust in the democratic system.

AI

New Internal Documents Contradict Facebook's Claims that AI Can Enforce Its Rules (livemint.com) 71

Today in the Wall Street Journal, Facebook's head of integrity, Guy Rosen, admitted that from April to June of this year, one in every 2,000 content views on Facebook still contained hate speech.

Head of integrity Rosen was calling that figure an improvement over mid-2020, when one in every 1,000 content views on Facebook were hate speech. But at that same moment in time Mark Zuckerberg was telling the U.S. Congress that "In terms of fighting hate, we've built really sophisticated systems!" "Facebook Inc. executives have long said that artificial intelligence would address the company's chronic problems keeping what it deems hate speech and excessive violence as well as underage users off its platforms," reports the Wall Street Journal.

"That future is farther away than those executives suggest, according to internal documents reviewed by The Wall Street Journal. Facebook's AI can't consistently identify first-person shooting videos, racist rants and even, in one notable episode that puzzled internal researchers for weeks, the difference between cockfighting and car crashes." On hate speech, the documents show, Facebook employees have estimated the company removes only a sliver of the posts that violate its rules — a low-single-digit percent, they say. When Facebook's algorithms aren't certain enough that content violates the rules to delete it, the platform shows that material to users less often — but the accounts that posted the material go unpunished.

The employees were analyzing Facebook's success at enforcing its own rules on content that it spells out in detail internally and in public documents like its community standards. The documents reviewed by the Journal also show that Facebook two years ago cut the time human reviewers focused on hate-speech complaints from users and made other tweaks that reduced the overall number of complaints. That made the company more dependent on AI enforcement of its rules and inflated the apparent success of the technology in its public statistics.

According to the documents, those responsible for keeping the platform free from content Facebook deems offensive or dangerous acknowledge that the company is nowhere close to being able to reliably screen it. "The problem is that we do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas," wrote a senior engineer and research scientist in a mid-2019 note. He estimated the company's automated systems removed posts that generated just 2% of the views of hate speech on the platform that violated its rules. "Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term," he wrote.

This March, another team of Facebook employees drew a similar conclusion, estimating that those systems were removing posts that generated 3% to 5% of the views of hate speech on the platform, and 0.6% of all content that violated Facebook's policies against violence and incitement.

Facebook also takes additional steps beyond AI screening to reduce views of hate speech, representatives told the Journal, while arguing that the internal Facebook documents the Journal had reviewed were outdated. But one of those documents showed that in 2019 Facebook was spending $104 million a year to review suspected hate speech, with a Facebook manager noting that "adds up to real money" and proposing "hate speech cost controls."

Facebook told the Journal the saved money went toward improving its algorithms. But the Journal reports that Facebook "also introduced 'friction' to the content reporting process, adding hoops for aggrieved users to jump through that sharply reduced how many complaints about content were made, according to the documents."

Facebook told the Journal that "some" of that friction has since been rolled back.
EU

What Happened When Germany Tried to Fight Online Hate Speech? (msn.com) 236

"Harassment and abuse are all too common on the modern internet," writes the New York Times. "Yet it was supposed to be different in Germany." In 2017, the country enacted one of the world's toughest laws against online hate speech. It requires Facebook, Twitter and YouTube to remove illegal comments, pictures or videos within 24 hours of being notified about them or risk fines of up to 50 million euros, or $59 million. Supporters hailed it as a watershed moment for internet regulation and a model for other countries. But an influx of hate speech and harassment in the run-up to the German election, in which the country will choose a new leader to replace Angela Merkel, its longtime chancellor, has exposed some of the law's weaknesses...

Some critics of the law say it is too weak, with limited enforcement and oversight. They also maintain that many forms of abuse are deemed legal by the platforms, such as certain kinds of harassment of women and public officials. And when companies do remove illegal material, critics say, they often do not alert the authorities or share information about the posts, making prosecutions of the people publishing the material far more difficult. Another loophole, they say, is that smaller platforms like the messaging app Telegram, popular among far-right groups, are not subject to the law. Free-expression groups criticize the law on other grounds. They argue that the law should be abolished not only because it fails to protect victims of online abuse and harassment, but also because it sets a dangerous precedent for government censorship of the internet.

To address concerns that companies were not alerting the authorities to illegal posts, German policymakers this year passed amendments to the law. They require Facebook, Twitter and YouTube to turn over data to the police about accounts that post material that German law would consider illegal speech. The Justice Ministry was also given more powers to enforce the law... Facebook and Google have filed a legal challenge to block the new rules, arguing that providing the police with personal information about users violates their privacy.

An activist for the Electronic Frontier Foundation in Berlin tells the Times the law could encourage companies to remove offensive-but-legal speech. And Twitter shared a statement with additional concerns. "Threats, abusive content and harassment all have the potential to silence individuals. However, regulation and legislation such as this also has the potential to chill free speech by emboldening regimes around the world to legislate as a way to stifle dissent and legitimate speech."

Yet Germany's experience may ultimately influence policy across Europe, the Times points out, since German officials "are playing a key role in drafting one of the world's most anticipated new internet regulations, a European Union law called the Digital Services Act, which will require Facebook and other online platforms to do more to address the vitriol, misinformation and illicit content on their sites."
Social Networks

Facebook: Some High-Profile Users 'Allowed To Break Platform's Rules' (theguardian.com) 74

An anonymous reader quotes a report from The Guardian: Facebook gives high-profile users special treatment, which includes immunity from its rules for some, and allowed Brazilian footballer Neymar to post nude pictures of a woman who had accused him of rape, according to a report. The XCheck or "CrossCheck" system steers reviews of posts by well-known users such as celebrities, politicians and journalists into a separate system, according to an investigation by the Wall Street Journal. Under the program, some users are "whitelisted" -- not subject to enforcement action -- while others are allowed to post material that violates Facebook rules, pending content reviews that often do not take place.

People are placed on the XCheck list -- where they are given special scrutiny -- if they meet criteria such as being "newsworthy," "influential or popular" or "PR risky." Names on the XCheck program included Donald Trump, US senator Elizabeth Warren and Facebook founder Mark Zuckerberg, although the report does not state whether those names were whitelisted at any point. By 2020 there were 5.8 million users on the XCheck list, the Wall Street Journal said. In one example cited by the WSJ, Brazilian football star Neymar responded to a rape accusation in 2019 by posting Facebook and Instagram videos defending himself, which included showing viewers his WhatsApp correspondence with his accuser. The WhatsApp clips included the accuser's name and nude photos of her. Instagram and WhatsApp are owned by Facebook. Instead of immediately deleting the material, which is Facebook's procedure for "nonconsensual intimate imagery," moderators were blocked for more than a day from removing the video, according to the WSJ.

The WSJ investigation details the process known as "whitelisting," where some high-profile accounts are not subject to enforcement at all. An internal review in 2019 stated that whitelists "pose numerous legal, compliance, and legitimacy risks for the company and harm to our community." The review found favoritism to those users to be both widespread and "not publicly defensible." "We are not actually doing what we say we do publicly," said the confidential review. It called the company's actions "a breach of trust" and added: "Unlike the rest of our community, these people can violate our standards without any consequences." According to another internal document, enforcement procedures and rule-drafting were subject to interventions from members of Facebook's public-policy team and senior executives. One 2020 memo from a Facebook data scientist added: "Facebook routinely makes exceptions for powerful actors." The WSJ also reported that the system suffered from enforcement delays that allowed posts to stay up before they were eventually prohibited. In 2020, posts being reviewed by XCheck were viewed at least 16.4 billion times before being removed.
A Facebook spokesperson said in a statement: "A lot of this internal material is outdated information stitched together to create a narrative that glosses over the most important point: Facebook itself identified the issues with cross check and has been working to address them. We've made investments, built a dedicated team, and have been redesigning cross check to improve how the system operates."
Medicine

Calls Grow to Discipline Doctors Spreading Virus Misinformation Online (nytimes.com) 450

The New York Times tells the story of an Indiana physician spreading misinformation about the pandemic. Public health officials say statements like his have contributed to America's vaccine hesitancy and resistance to mask-wearing, exacerbating the pandemic. His videos "have amassed nearly 100 million likes and shares on Facebook, 6.2 million views on Twitter, at least 2.8 million views on YouTube and over 940,000 video views on Instagram." His talk's popularity points to one of the more striking paradoxes of the pandemic. Even as many doctors fight to save the lives of people sick with Covid-19, a tiny number of their medical peers have had an outsize influence at propelling false and misleading information about the virus and vaccines.

Now there is a growing call among medical groups to discipline physicians spreading incorrect information. The Federation of State Medical Boards, which represents the groups that license and discipline doctors, recommended last month that states consider action against doctors who share false medical claims, including suspending or revoking medical licenses. The American Medical Association says spreading misinformation violates the code of ethics that licensed doctors agree to follow.

"When a doctor speaks, people pay attention," said Dr. Humayun Chaudhry, president of the Federation of State Medical Boards. "The title of being a physician lends credibility to what people say to the general public. That's why it is so important that these doctors don't spread misinformation."

China

New Chinese Browser Offers a Glimpse Beyond the Great Firewall -- With Caveats (techcrunch.com) 23

An anonymous reader quotes a report from TechCrunch: China now has a tool that lets users access YouTube, Facebook, Twitter, Instagram, Google, and other internet services that have otherwise long been banned in the country. Called Tuber, the mobile browser recently debuted on China's third-party Android stores, with an iOS launch in the pipeline. The landing page of the app features a scrolling feed of YouTube videos, with tabs at the bottom that allow users to visit other mainstream Western internet services.

While some celebrate the app as an unprecedented "opening up" of the Chinese internet, others quickly noticed the browser comes with a veil of censorship. YouTube queries for politically sensitive keywords such as "Tiananmen" and "Xi Jinping" returned no results on the app, according to tests done by TechCrunch. Using the app also comes with liabilities. Registration requires a Chinese phone number, which is tied to a person's real identity. The platform could suspend users' accounts and share their data "with the relevant authorities" if they "actively watch or share" content that breaches the constitution, endangers national security and sovereignty, spreads rumors, disrupts social orders, or violates other local laws, according to the app's terms of service.

Privacy

Homeland Security Details New Tools For Extracting Device Data at US Borders (cnet.com) 113

Travelers heading to the US have many reasons to be cautious about their devices when it comes to privacy. A report released Thursday from the Department of Homeland Security provides even more cause for concern about how much data border patrol agents can pull from your phones and computers. From a report: In a Privacy Impact Assessment dated July 30, the DHS detailed its US Border Patrol Digital Forensics program, specifically for its development of tools to collect data from electronic devices. For years, DHS and border agents were allowed to search devices without a warrant, until a court found the practice unconstitutional in November 2019. In 2018, the agency searched more than 33,000 devices, compared to 30,200 searches in 2017 and just 4,764 searches in 2015. Civil rights advocates have argued against this kind of surveillance, saying it violates people's privacy rights.

The report highlights the DHS' capabilities, and shows that agents can create an exact copy of data on devices when travelers cross the border. According to the DHS, extracted data from devices can include: Contacts, call logs/details, IP addresses used by the device, calendar events, GPS locations used by the device, emails, social media information, cell site information, phone numbers, videos and pictures, account information (user names and aliases), text/chat messages, financial accounts and transactions, location history, browser bookmarks, notes, network information, and tasks list. The policy to retain this data for 75 years still remains, according to the report.

The Courts

LGBT Video-Makers Sue YouTube Claiming Discrimination (bbc.com) 176

AmiMoJo shares a report from the BBC: A group of YouTube video-makers is suing it and parent company Google, claiming both discriminate against LGBT-themed videos and their creators. The group claims YouTube restricts advertising on LGBT videos and limits their reach and discoverability. But YouTube said sexual orientation and gender identity played no role in deciding whether videos could earn ad revenue or appear in search results. The group is hoping a jury will hear its case in California.

The legal action makes a wide range of claims, including that YouTube:
- Removes advertising from videos featuring "trigger words" such as "gay" or "lesbian"
- Often labels LGBT-themed videos as "sensitive" or "mature" and restricts them from appearing in search results or recommendations
- Does not do enough to filter harassment and hate speech in the comments section
"Our policies have no notion of sexual orientation or gender identity and our systems do not restrict or demonetize videos based on these factors or the inclusion of terms like 'gay' or 'transgender,'" spokesman Alex Joseph said. "In addition, we have strong policies prohibiting hate speech and we quickly remove content that violates our policies and terminate accounts that do so repeatedly."
Youtube

YouTube Under Federal Investigation Over Allegations it Violates Children's Privacy (washingtonpost.com) 48

The U.S. government is in the late stages of an investigation into YouTube for its handling of children's videos, The Washington Post reported on Wednesday, citing four people familiar with the matter, a probe that threatens the company with a potential fine and already has prompted the tech giant to reevaluate some of its business practices. From the report: The Federal Trade Commission launched the investigation after numerous complaints from consumer groups and privacy advocates, according to the four people, who requested anonymity because such probes are supposed to be confidential. The complaints contended that YouTube, which is owned by Google, failed to protect kids who used the streaming-video service and improperly collected their data. As the investigation has progressed, YouTube executives in recent months have accelerated internal discussions about broad changes to how the platform handles children's videos, according to a person familiar with the company's plans. That includes potential changes to its algorithm for recommending and queuing up videos for users, including kids, part of an ongoing effort at YouTube over the past year and a half to overhaul its software and policies to prevent abuse.
The Internet

Pornhub Hasn't Been Actively Enforcing Its Deepfake Ban (engadget.com) 97

Pornhub said in February that it was banning AI-generated deepfake videos, but BuzzFeed News found that it's not doing a very good job at enforcing that policy. The media company found more than 70 deepfake videos -- depicting graphic fake sex scenes with Emma Watson, Scarlett Johansson, and other celebrities -- were easily searchable from the site's homepage using the search term "deepfake." From the report: Shortly after the ban in February, Mashable reported that there were dozens of deepfake videos still on the site. Pornhub removed those videos after the report, but a few months later, BuzzFeed News easily found more than 70 deepfake videos using the search term "deepfake" on the site's homepage. Nearly all the videos -- which included graphic and fake depictions of celebrities like Katy Perry, Scarlett Johansson, Daisy Ridley, and Jennifer Lawrence -- had the word "deepfake" prominently mentioned in the title of the video, and many of the names of the videos' uploaders contained the word "deepfake." Similarly, a search for "fake deep" returned over 30 of the nonconsensual celebrity videos. Most of the videos surfaced by BuzzFeed News had view counts in the hundreds of thousands -- one video featuring the face of actor Emma Watson garnered over 1 million views. Some accounts posting deepfake videos appeared to have been active for as long as two months and have racked up over 3 million video views. "Content that is flagged on Pornhub that directly violates our Terms of Service is removed as soon as we are made aware of it; this includes non-consensual content," Pornhub said in a statement. "To further ensure the safety of all our fans, we officially took a hard stance against revenge porn, which we believe is a form of sexual assault, and introduced a submission form for the easy removal of non-consensual content." The company also provided a link where users can report any "material that is distributed without the consent of the individuals involved."
Youtube

YouTube Is Illegally Collecting Data From Children, Say Advocacy Groups (gizmodo.com) 69

Nearly two dozen privacy and children's advocacy groups have filed a Federal Trade Commission complaint against YouTube, accusing the platform of illegally collecting data from children. From a report: The groups, led by the Campaign for a Commercial-Free Childhood (CCFC), allege YouTube is violating the Children's Online Privacy Protection Act (COPPA) by collecting data from children under 13 without parents' permission.

"It's just fundamentally unfair," Josh Golin, executive director of the CCFC, told Gizmodo, "to use Google's powerful behavioral targeting on a child that doesn't yet understand what's going on." COPPA requires platforms "give parents notice of its data collection practices, and obtain verifiable parental consent before collecting the data." But, as Golin argues, YouTube violates COPPA because it doesn't differentiate between videos marketed to children and the rest of the site.

AI

Pornhub Is Banning AI-Generated 'Deepfakes' Porn Videos (vice.com) 124

On Tuesday, Pornhub told Motherboard that it considers deepfakes to be nonconsensual porn and that it will ban these videos. "Deepfakes" is a community originally named after a Redditor who enjoys face-swapping celebrity faces onto porn performers' bodies using a machine learning algorithm. Motherboard reports: "We do not tolerate any nonconsensual content on the site and we remove all said content as soon as we are made aware of it," a spokesperson told me in an email. "Nonconsensual content directly violates our TOS [terms of service] and consists of content such as revenge porn, deepfakes or anything published without a person's consent or permission." Pornhub previously told Mashable that it has removed deepfakes that are flagged by users. Pornhub's position on deepfakes is similar to statements made by Discord and Gfycat, and in line with its existing terms of service, which prohibit content that "impersonates another person or falsely state or otherwise misrepresent your affiliation with a person."
Music

Stock Music Artists Aren't Always Happy About How Their Music Is Used (wired.com) 147

mirandakatz writes: If you're a stock music composer, you sign over the rights to whatever music you put up on a variety of hosting sites. That can get complicated -- especially when your music winds up being used to soundtrack hate speech. At Backchannel, Pippa Biddle dives into the knotty world of stock music, writing that stock music is 'a quick way for a talented musician to make a small buck. But there's a hidden cost: You lose control over where your work ends up. In hundreds, if not thousands, of cases, a tune becomes the backing track to hate speech or violent videos. Often such use violates the license the buyer agrees to when purchasing the track. But nobody reads the licenses -- and, more importantly, no one enforces them.'
Youtube

Google Announces New Measures To Fight Extremist YouTube Videos (cnet.com) 286

An anonymous reader quotes CNET: YouTube will take new steps to combat extremist- and terrorist-related videos, parent company Google said Sunday. "While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now," Kent Walker, Google's general counsel, said in an op-ed column in the London-based Financial Times.
Here's CNET's summary of the four new measures Google is implementing:
  • Use "more engineering resources to apply our most advanced machine learning research to train new 'content classifiers' to help us more quickly identify and remove such content."
  • Expand YouTube's Trusted Flagger program by adding 50 independent, "expert" non-governmental organizations to the 63 groups already part of it. Google will offer grants to fund the groups.
  • Take a "tougher stance on videos that do not clearly violate our policies -- for example, videos that contain inflammatory religious or supremacist content." Such videos will "appear behind a warning" and will not be "monetized, recommended or eligible for comments or user endorsements."
  • Expand YouTube's efforts in counter-radicalization. "We are working with Jigsaw to implement the 'redirect method' more broadly. ... This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining."

Communications

PewDiePie Calls Out the 'Old-School Media' For Spiteful Dishonesty 920

New submitter Shane_Optima writes: After losing his Youtube Red show and his contract with Disney, the owner of the most subscribed channel on Youtube, Felix Arvid Ulf Kjellberg (aka "PewDiePie"), has released a video response to the Wall Street Journal and other mainstream news outlets, who have labeled his comedy videos variously as racist, fascist or anti-semitic. In it, he accuses the mainstream media of deliberately fabricating and misrepresenting the evidence used against him because they are afraid of independent content producers such as himself. In the video, PewDiePie discusses the recent actions of the Wall Street Journal, whose reporters sent nine cherry-picked and edited videos to Disney, which led directly to Disney's decision to terminate their relationship with him. These video clips and others used to "prove" PewDiePie's guilt have been edited (he claims) to remove all context, to the extent of using a pose of him pointing at something as a Nazi salute and using a clip where other players are creating swastikas in a game and editing out the part where he is asking them to stop. The most-cited video in the controversy involves seeing if he can use the site Fiverr to hire someone to create a video containing an over-the-top message for a mere $5. After a couple of laughing males unfurl a sign saying "Death to All Jews," he recoils with widened eyes and sits, apparently dumbfounded, for another thirty seconds before the video ends, without him uttering another word.

PewDiePie's video comes several days after a Tumblr post where he attempted to clarify that the videos were intended to be comedy showing "how crazy the modern world is." He has not yet used the phrase "fake news" in his response to the controversy, but given the current trends surrounding that phrase, it isn't surprising that his supporters are resorting to it frequently. Is this all just another unfortunate instance of collateral damage in the war against far-right political movements, is it a campaign of malicious retaliation by old media that is terrified of new media (as Felix claims), or was J.K. Rowling correct when she called out PewDiePie as a Death Eater? Err, I mean, ...as a fascist?

Update: Apparently, canceling his Youtube Red series was deemed an insufficient response. Youtube has now removed the mirror of PewDiePie's "Death to All Jews" video because it "violates Youtube's policy on hate speech." The original posting of the video had already been marked private by PewDiePie shortly after the controversy erupted. A quick check of Vimeo and Daily Motion came up empty, so you're on your own if you wish to find out for yourself what the controversy was all about.
