Advertising

Facebook Enabled Advertisers To Reach 'Jew Haters' (propublica.org) 253

ProPublica is reporting that Facebook "enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of 'Jew hater,' 'How to burn jews,' or, 'History of why jews ruin the world.'" The organization even went so far as to test these ad categories by paying $30 to target those groups with three "promoted posts" -- in which a ProPublica article or post was displayed in their news feeds. Facebook reportedly approved all three ads within 15 minutes. From the report: After we contacted Facebook, it removed the anti-Semitic categories -- which were created by an algorithm rather than by people -- and said it would explore ways to fix the problem, such as limiting the number of categories available or scrutinizing them before they are displayed to buyers. In all likelihood, the ad categories that we spotted were automatically generated because people had listed those anti-Semitic themes on their Facebook profiles as an interest, an employer or a "field of study." Facebook's algorithm automatically transforms people's declared interests into advertising categories. [ProPublica provides a screenshot of their ad buying process on the company's advertising portal.]

"There are times where content is surfaced on our platform that violates our standards," said Rob Leathern, product management director at Facebook. "In this case, we've removed the associated targeting fields in question. We know we have more work to do, so we're also building new guardrails in our product and review processes to prevent other issues like this from happening in the future."

Databases

Google and ProPublica Team Up To Build a National Hate Crime Database (techcrunch.com) 310

In partnership with ProPublica, Google News Lab is launching a new tool to track hate crimes across America. The "Documenting Hate News Index" is powered by machine learning to track reported hate crimes across all 50 states, collecting data from February 2017 onward. TechCrunch reports: Data visualization studio Pitch Interactive helped craft the index, which collects Google News results and filters them through Google's natural language analysis to extract geographic and contextual information. Because hate crimes are not catalogued in any kind of formal national database (a fact that inspired the creation of the index to begin with), Google calls the project a "starting point" for the documentation and study of hate crimes. While the FBI is legally required to document hate crimes at the federal level, state and local authorities often fail to report their own incidents, making the data incomplete at best.

The initiative is a data-rich new arm of the Documenting Hate project, which collects and verifies hate incidents reported by both individual contributors and by news organizations. The Hate News Index will keep an eye out for false positives (casual uses of the word "hate," for example), striking a responsible balance between machine learning and human curation on a very sensitive subject. Hate events will be mapped onto a calendar in the user interface, though users can also run a keyword search or browse through algorithmic suggestions. For anyone who'd like to take the data in a new direction, Google will open source its data set, making it available through GitHub.
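The false-positive problem described above can be made concrete with a small sketch. The following is a hypothetical heuristic (not Google's actual natural-language pipeline, and the pattern lists are invented for illustration): flag headlines where "hate" appears only in a casual, idiomatic context so a human curator can weed them out.

```python
# Hypothetical sketch of the kind of false-positive filtering described
# above: flag headlines that use "hate" only casually, with no signals
# of an actual hate incident. Pattern lists are illustrative assumptions.

CASUAL_PATTERNS = [
    "hate to", "hate it when", "love-hate", "hate mail",  # common idioms
    "haters",                                             # slang
]
CRIME_SIGNALS = [
    "hate crime", "hate speech", "vandalism", "assault",
    "slur", "graffiti", "charged", "arrested",
]

def likely_false_positive(headline: str) -> bool:
    """Return True if 'hate' appears only in a casual context."""
    text = headline.lower()
    if "hate" not in text:
        return False
    has_casual = any(p in text for p in CASUAL_PATTERNS)
    has_signal = any(s in text for s in CRIME_SIGNALS)
    return has_casual and not has_signal

print(likely_false_positive("I hate it when my train is late"))      # True
print(likely_false_positive("Man charged with hate crime in Ohio"))  # False
```

A real system would replace the keyword lists with the contextual output of a natural-language model, but the curation step is the same: machine-flagged candidates reviewed by a human.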

Medicine

The Myth of Drug Expiration Dates (propublica.org) 316

schwit1 shares a report from ProPublica: Hospitals and pharmacies are required to toss expired drugs, no matter how expensive or vital. Meanwhile the FDA has long known that many remain safe and potent for years longer. The box of prescription drugs had been forgotten in a back closet of a retail pharmacy for so long that some of the pills predated the 1969 moon landing. Most were 30 to 40 years past their expiration dates -- possibly toxic, probably worthless. But to Lee Cantrell, who helps run the California Poison Control System, the cache was an opportunity to answer an enduring question about the actual shelf life of drugs: Could these drugs from the bell-bottom era still be potent?

Gerona, a toxicologist, and Cantrell, a pharmacist, knew that the term "expiration date" was a misnomer. The dates on drug labels are simply the point up to which the Food and Drug Administration and pharmaceutical companies guarantee their effectiveness, typically two or three years out. But the dates don't necessarily mean drugs are ineffective immediately after they "expire" -- just that there's no incentive for drugmakers to study whether they could still be usable.

They ran tests on the decades-old drugs, including antihistamines, pain relievers and stimulants. All the drugs tested were in their original sealed containers. The findings surprised both researchers: A dozen of the 14 compounds were still as potent as they were when they were manufactured, some at almost 100 percent of their labeled concentrations. Experts say the United States might be squandering a quarter of the money spent on health care -- an estimated $765 billion a year.

AI

Artificial Intelligence Has Race, Gender Biases (axios.com) 465

An anonymous reader shares a report: The ACLU has begun to worry that artificial intelligence is discriminatory based on race, gender and age. So it teamed up with computer science researchers to launch a program to promote applications of AI that protect rights and lead to equitable outcomes. MIT Technology Review reports that the initiative is the latest to illustrate general concern that the increasing reliance on algorithms to make decisions in the areas of hiring, criminal justice, and financial services will reinforce racial and gender biases. One example is a computer program used by some jurisdictions to help with paroling prisoners, which ProPublica found would go easy on white offenders while being unduly harsh to black ones.
Facebook

Facebook's Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children (propublica.org) 355

Sidney Fussell from Gizmodo summarizes a report from ProPublica, which brings to light dozens of training documents used by Facebook to train moderators on hate speech: As the trove of slides and quizzes reveals, Facebook uses a warped, one-sided reasoning to balance policing hate speech against users' freedom of expression on the platform. This is perhaps best summarized by the above image from one of its training slideshows, wherein Facebook instructs moderators to protect "White Men," but not "Female Drivers" or "Black Children." Facebook only blocks inflammatory remarks if they're used against members of a "protected class." But Facebook itself decides who makes up a protected class, with lots of clear opportunities for moderation to be applied arbitrarily at best and against minoritized people critiquing those in power (particularly white men) at worst -- as Facebook has been routinely accused of. According to the leaked documents, here are the group identifiers Facebook protects: Sex, Religious affiliation, National origin, Gender identity, Race, Ethnicity, Sexual Orientation, Serious disability or disease. And here are those Facebook won't protect: Social class, continental origin, appearance, age, occupation, political ideology, religions, countries. Subsets of groups -- female drivers, Jewish professors, gay liberals -- aren't protected either, as ProPublica explains: White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected.
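The subset rule ProPublica describes -- a group identifier is protected only if every trait it combines is itself a protected category -- is simple enough to state as a boolean check. The sketch below is hypothetical, inferred from the leaked training documents as reported; it is not Facebook's actual code, and the trait sets are abbreviated.

```python
# Hypothetical sketch of the subset rule described in the leaked
# training documents: a group is "protected" only if *all* of its
# component traits are protected categories. Not Facebook's code.

PROTECTED_TRAITS = {
    "sex", "religious affiliation", "national origin", "gender identity",
    "race", "ethnicity", "sexual orientation", "serious disability",
}
UNPROTECTED_TRAITS = {
    "social class", "continental origin", "appearance", "age",
    "occupation", "political ideology",
}

def group_is_protected(traits):
    """A group is protected only if every one of its traits is protected."""
    return all(t in PROTECTED_TRAITS for t in traits)

# "White men": race + sex, both protected -> the group is protected.
print(group_is_protected({"race", "sex"}))        # True
# "Female drivers": sex + occupation; occupation is not protected,
# so the whole subset loses protection.
print(group_is_protected({"sex", "occupation"}))  # False
```

The asymmetry the headline highlights falls directly out of this conjunction: adding any unprotected trait to a protected group strips the protection from the whole subset.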
Security

Any Half-Decent Hacker Could Break Into Mar-a-Lago (alternet.org) 327

MrCreosote writes: Properties owned and run by the Trump Organization, including places where Trump spends much of his time and has hosted foreign leaders, are a network security nightmare. From a report via ProPublica (co-published with Gizmodo): "We parked a 17-foot motor boat in a lagoon about 800 feet from the back lawn of The Mar-a-Lago Club in Palm Beach and pointed a 2-foot wireless antenna that resembled a potato gun toward the club. Within a minute, we spotted three weakly encrypted Wi-Fi networks. We could have hacked them in less than five minutes, but we refrained. A few days later, we drove through the grounds of the Trump National Golf Club in Bedminster, New Jersey, with the same antenna and aimed it at the clubhouse. We identified two open Wi-Fi networks that anyone could join without a password. We resisted the temptation. We have also visited two of President Donald Trump's other family-run retreats, the Trump International Hotel in Washington, D.C., and a golf club in Sterling, Virginia. Our inspections found weak and open Wi-Fi networks, wireless printers without passwords, servers with outdated and vulnerable software, and unencrypted login pages to back-end databases containing sensitive information. The risks posed by the lax security, experts say, go well beyond simple digital snooping. Sophisticated attackers could take advantage of vulnerabilities in the Wi-Fi networks to take over devices like computers or smart phones and use them to record conversations involving anyone on the premises."
The Internet

Cloudflare Helps Serve Up Hate Online: Report (cnet.com) 210

An anonymous reader writes: If you've been wondering how hate has proliferated online, especially since the 2016 election, ProPublica has some answers. According to ProPublica, Cloudflare -- a major San Francisco-based internet company -- enables extremist web sites to stay in business by providing them with internet data delivery services. Cloudflare reportedly also keeps to a policy of turning over contact information of anyone who complains to operators of the offending sites, thus exposing the complainants to personal harassment.
Facebook

Facebook Buys Data From Third-Party Brokers To Fill In User Profiles (ibtimes.com) 116

An anonymous reader quotes a report from International Business Times: According to a report from ProPublica, the world's largest social network knows far more about its users than just what they do online. What Facebook can't glean from a user's activity, it's getting from third-party data brokers. ProPublica found the social network is purchasing additional information including personal income, where a person eats out and how many credit cards they keep. That data all comes separate from the unique identifiers that Facebook generates for its users based on interests and online behavior. A separate investigation by ProPublica, in which the publication asked users to report categories of interest Facebook assigned to them, generated more than 52,000 attributes. The data Facebook pays for from other brokers to round out user profiles isn't disclosed by the company beyond a note that it gets information "from a few different sources." Those sources, according to ProPublica, are commercial data brokers who have access to information about people that isn't linked directly to online behavior. The social network doesn't disclose those sources because the information isn't collected by Facebook and is publicly available. Facebook does provide a page in its help center that details how to get removed from the lists held by third-party data brokers. However, the process isn't particularly easy. In the case of the Oracle-owned Datalogix, users who want off the list have to send a written request and a copy of a government-issued identification in the mail to Oracle's chief privacy officer. Another data-collecting service, Acxiom, requires users to provide the last four digits of their social security number to see the information the company has gathered about them.
Facebook

Facebook Users Sue Over Alleged Racial Discrimination In Housing, Job Ads (arstechnica.com) 177

In response to a report from ProPublica alleging that Facebook gives advertisers the ability to exclude specific groups it calls "Ethnic Affinities," three Facebook users have filed a lawsuit against the company. They are accusing the social networking giant of violating the Fair Housing Act of 1968 over its alleged discriminatory policies. Ars Technica reports: ProPublica managed to post an ad placed in Facebook's housing categories that excluded anyone with an "affinity" for African-American, Asian-American, or Hispanic people. When the ProPublica reporters showed the ad to prominent civil rights lawyer John Relman, he described it as "horrifying" and "as blatant a violation of the federal Fair Housing Act as one can find." According to the proposed class-action lawsuit, by allowing such ads on its site, Facebook is in violation of the landmark civil rights legislation, which specifically prohibits housing advertisements that discriminate based on race, gender, color, religion, and other factors. "This lawsuit does not seek to end Facebook's Ad Platform, nor even to get rid of the "Exclude People" mechanism. There are legal, desirable uses for such functionalities. Plaintiffs seek to end only the illegal proscribed uses of these functions," the lawyers wrote in the civil complaint, which was filed last Friday. The proposed class, if approved by a federal judge in San Francisco, would include any Facebook user in the United States who has "not seen an employment- or housing-related advertisement on Facebook within the last two years because the ad's buyer used the Ad Platform's 'Exclude People' functionality to exclude the class member based on race, color, religion, sex, familial status, or national origin."
Advertising

Facebook Lets Advertisers Exclude Users By Race (propublica.org) 197

schwit1 quotes a report from ProPublica: Imagine if, during the Jim Crow era, a newspaper offered advertisers the option of placing ads only in copies that went to white readers. That's basically what Facebook is doing nowadays. The ubiquitous social network not only allows advertisers to target users by their interests or background, it also gives advertisers the ability to exclude specific groups it calls "Ethnic Affinities." Ads that exclude people based on race, gender and other sensitive factors are prohibited by federal law in housing and employment. You can view a screenshot of a housing advertisement that ProPublica's Julia Angwin and Terry Parris Jr. purchased from Facebook's self-service advertising portal here. The report adds: "The ad we purchased was targeted to Facebook members who were house hunting and excluded anyone with an 'affinity' for African-American, Asian-American or Hispanic people. (Here's the ad itself.) The Fair Housing Act of 1968 makes it illegal 'to make, print, or publish, or cause to be made, printed, or published any notice, statement, or advertisement, with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin.' Violators can face tens of thousands of dollars in fines. The Civil Rights Act of 1964 also prohibits the 'printing or publication of notices or advertisements indicating prohibited preference, limitation, specification or discrimination' in employment recruitment. Facebook's business model is based on allowing advertisers to target specific groups -- or, apparently, to exclude specific groups -- using huge reams of personal data the company has collected about its users. Facebook's micro-targeting is particularly helpful for advertisers looking to reach niche audiences, such as swing-state voters concerned about climate change."
Facebook says its policies prohibit advertisers from using the targeting options for discrimination, harassment, disparagement or predatory advertising practices.
Advertising

Google Has Quietly Dropped Ban On Personally Identifiable Web Tracking (propublica.org) 155

Fudge Factor 3000 writes: Google has quietly changed its privacy policy to allow it to associate web tracking, which is supposed to remain anonymous, with personally identifiable user data. This completely reneges on Google's promise to keep a wall between ad tracking and personally identifiable user data, further eroding anonymity on the internet. Google's priorities are clear. All they care about is monetizing user information to rake in the big dollars from ad revenue. Think twice before you purchase the premium-priced Google Pixel. Google is getting added value from you as its product without passing any of that tracking revenue back to you through lower prices. The crossed-out section in its privacy policy, which discusses the separation of information as mentioned above, has been followed with this statement: "Depending on your account settings, your activity on other sites and apps may be associated with your personal information in order to improve Google's services and the ads delivered by Google." ProPublica reports: "The change is enabled by default for new Google accounts. Existing users were prompted to opt-in to the change this summer. The practical result of the change is that the DoubleClick ads that follow people around on the web may now be customized to them based on your name and other information Google knows about you. It also means that Google could now, if it wished to, build a complete portrait of a user by name, based on everything they write in email, every website they visit and the searches they conduct. The move is a sea change for Google and a further blow to the online ad industry's longstanding contention that web tracking is mostly anonymous. In recent years, Facebook, offline data brokers and others have increasingly sought to combine their troves of web tracking data with people's real names. But until this summer, Google held the line." You can choose to opt in or out of the personalized ads here.
Businesses

Amazon Says It Puts Customers First - But Its Pricing Algorithm Doesn't (propublica.org) 110

ProPublica has a report today in which it warns Amazon shoppers about the results that they see on the shopping portal. It notes that people often hope that the results that come up first after a search are the best deals, and that's what Amazon would have you believe, but its algorithm doesn't work that way. In what may surprise many, in more than 80 percent of cases, Amazon ranks its own products, or those of its affiliate partners, higher. From the report: Amazon does give customers a chance to comparison shop, with a listing that ranks all vendors of the same item by "price + shipping." It appears to be the epitome of Amazon's customer-centric approach. But there, too, the company gives itself an oft-decisive advantage. Its rankings omit shipping costs only for its own products and those sold by companies that pay Amazon for its services. Erik Fairleigh, a spokesman for Amazon, said the algorithm that selects which product goes into the "buy box" accounts for a range of factors beyond price. "Customers trust Amazon to have great prices, but that's not all -- vast selection, world-class customer service and fast, free delivery are critically important," he said in an e-mailed statement. "These components, and more, determine our product listings."
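The effect of omitting shipping costs from a "price + shipping" ranking can be seen in a few lines. The sketch below uses invented numbers and sellers; it illustrates the ranking behavior ProPublica describes, not Amazon's actual algorithm.

```python
# Illustrative sketch (hypothetical sellers and prices) of the ranking
# behavior described above: sorting offers by "price + shipping" while
# omitting shipping for the platform's own listings pushes them above
# third parties whose full cost is counted.

offers = [
    {"seller": "Amazon",     "price": 24.99, "shipping": 3.99, "shipping_omitted": True},
    {"seller": "ThirdParty", "price": 22.99, "shipping": 4.00, "shipping_omitted": False},
]

def listed_cost(offer):
    # Shipping is left out of the ranking for favored listings,
    # even though the buyer may still effectively pay for it.
    if offer["shipping_omitted"]:
        return offer["price"]
    return offer["price"] + offer["shipping"]

ranked = sorted(offers, key=listed_cost)
for offer in ranked:
    print(offer["seller"], round(listed_cost(offer), 2))
```

Here the favored listing ranks first at a listed 24.99, ahead of the third party's 26.99, even though the third party's item price is lower; counting both sellers' shipping equally would reverse the order.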
Earth

Five Solomon Islands Disappear Into The Pacific Ocean As A Result Of Climate Change (go.com) 287

An anonymous reader writes: Climate change strikes again. A paper published in the journal Environmental Research Letters says five of the Solomon Islands have completely submerged underwater due to man-made climate change, and six more have experienced a dramatic reduction in shoreline. The Solomon Islands has a population of a little more than 500,000 people, many of whom have been adversely affected by rising sea levels in recent years. NASA scientist James Hansen estimated that seas could rise by seven meters within the next century. In 2014, the report "Losing Ground" showed how large areas of the Louisiana coastline are being lost to rising sea levels. A 2011 study conducted by the U.S. Geological Survey determined that the state's wetlands were being lost at a rate of "a football field per hour." Michael Edison Hayden writes for ABC News, "The Solomon Islands provides a preview of how sea-level rise could affect other coastal communities in the coming years, according to the study, largely because the speed at which erosion is taking place has been accelerated by a 'synergistic interaction' with the waves that surround it."
Government

TSA Body Scanner Opt-out No Longer Guaranteed (slashgear.com) 278

codguy writes: Up to now, airline passengers have been able to opt out of the TSA's Advanced Imaging Technologies (AIT) whole body scanners and request a physical pat-down for their security check. But ProPublica journalist Julia Angwin points out that a rule change on December 18, 2015 now allows the TSA to compel some passengers to use these scanners instead of giving them a pat-down. The updated rule says, "While passengers may generally decline AIT screening in favor of physical screening, TSA may direct mandatory AIT screening for some passengers," (PDF source). Of course, the criteria for when this can happen are completely unspecified, and one can easily imagine the TSA abusing this by compelling anyone who requests a pat-down to go through the scanners for whatever it deems reasonable cause. Guilty until proven innocent?
Advertising

Viewing Data Harvested From Smart TVs Used To Push Ads To Other Screens? (securityledger.com) 148

chicksdaddy writes: In the latest episode of EULA overreach, electronics maker Vizio has been called out by the nonprofit investigative reporting outfit ProPublica for an on-by-default feature on its smart TVs called "Smart Interactivity" that analyzes both broadcast and streamed content viewed using the device. ProPublica noted that the company's privacy policy failed to clearly describe the tracking behavior, which included the collection of information such as the date, time, channel and whether the program was viewed live or recorded.

According to ProPublica, the monitoring of viewing information through IP addresses, while it does not identify individuals, can be combined with other data available in commercial databases from brokers such as Experian, creating a detailed picture of an individual or household. Vizio has since updated its privacy policy with a supplement that explains how "Smart Interactivity" works.

The bigger issue may be what that updated privacy policy reveals. As The Security Ledger notes, the updated Vizio privacy policy makes clear that the company will combine "your IP address and other Non-Personal Information in order to inform third party selection and delivery of targeted and re-targeted advertisements." Those advertisements "may be delivered to smartphones, tablets, PCs or other internet-connected devices that share an IP address or other identifier with your Smart TV."

In other words, TV viewing patterns will be used to serve ads to any device user who happens to be connected to the same network as the Vizio Smart TV — an obvious problem for households with a mix of say... adults and children?! Vizio does provide instructions for disabling the Smart Interactivity features and says that "connected" features of the device aren't contingent on monitoring. That's better than some other vendors. In 2014, for example, LG used a firmware update for its smart televisions to link the "smart" features of the device to viewer tracking and monitoring. Viewers who applied the update, but refused to consent to monitoring were not able to use services like Netflix and YouTube.

Verizon

Verizon Is Merging Its Cellphone Tracking Supercookie with AOL's Ad Tracking Network 100

schwit1 writes: ProPublica reports that Verizon is giving a new mission to its controversial hidden identifier that tracks users of mobile devices. Verizon said in a little-noticed announcement that it will soon begin sharing the profiles with AOL's ad network, which in turn monitors users across a large swath of the Internet. That means AOL's ad network will be able to match millions of Internet users to their real-world details gathered by Verizon, including "your gender, age range and interests." AOL's network is on 40 percent of websites, including on ProPublica.
Privacy

First Library To Support Anonymous Internet Browsing Halts Project After DHS Email 130

An anonymous reader writes with an update to the news we discussed in July that a small library in New Hampshire would be used as a Tor exit relay. Shortly after the project went live, the local police department received an email from the Department of Homeland Security. The police then met with city officials and discussed all the ways criminals could make use of the relay. They ultimately decided to suspend the project, pending a vote of the library board of trustees on Sept. 15. DHS spokesman Shawn Neudauer said the agent was simply providing "visibility/situational awareness," and did not have any direct contact with the Lebanon police or library. "The use of a Tor browser is not, in [or] of itself, illegal and there are legitimate purposes for its use," Neudauer said, "However, the protections that Tor offers can be attractive to criminal enterprises or actors and HSI [Homeland Security Investigations] will continue to pursue those individuals who seek to use the anonymizing technology to further their illicit activity." ...Deputy City Manager Paula Maville said that when she learned about Tor at the meeting with the police and the librarians, she was concerned about the service’s association with criminal activities such as pornography and drug trafficking. "That is a concern from a public relations perspective and we wanted to get those concerns on the table," she said.
The Almighty Buck

How the Red Cross Raised Half a Billion Dollars For Haiti and Built 6 Homes 235

An anonymous reader points out an investigation from NPR and ProPublica into how the Red Cross spent the $500 million in relief funds it gathered to help Haiti after the country was devastated by an earthquake in 2010. They found "a string of poorly managed projects, questionable spending and dubious claims of success." While the organization claims to have built homes for 130,000 people, investigators only found six permanent homes they could attribute to the charity. The Red Cross admitted afterward that the 130,000 number included people who had attended a seminar on how to fix their own homes.

"Lacking the expertise to mount its own projects, the Red Cross ended up giving much of the money to other groups to do the work. Those groups took out a piece of every dollar to cover overhead and management. Even on the projects done by others, the Red Cross had its own significant expenses – in one case, adding up to a third of the project’s budget." The Red Cross raised far more money for Haiti than any other charity, but is unwilling to provide details on where the money went. In one case, a brochure that extolled the virtues of one project claimed $24 million had been spent on a particular area — but residents of that area haven't seen any improvement in living conditions, and are unable to get information from the Red Cross. The former director of the Red Cross's shelter program said charity officials had no idea how to spend the money they'd accumulated.
Security

GPG Programmer Werner Koch Is Running Out of Money 222

New submitter jasonridesabike writes: "ProPublica reports that Werner Koch, the man behind GPG, is in financial straits: 'The man who built the free email encryption software used by whistleblower Edward Snowden, as well as hundreds of thousands of journalists, dissidents and security-minded people around the world, is running out of money to keep his project alive. Werner Koch wrote the software, known as Gnu Privacy Guard, in 1997, and since then has been almost single-handedly keeping it alive with patches and updates from his home in Erkrath, Germany. Now 53, he is running out of money and patience with being underfunded.'" (You can donate to the project here.)
Privacy

Stanford Promises Not To Use Google Money For Privacy Research 54

An anonymous reader writes Stanford University has pledged not to use money from Google to fund privacy research at its Center for Internet and Society — a move that critics claim poses a threat to academic freedom. The center has long been generously funded by Google but its privacy research has proved damaging to the search giant as of late. Just two years ago, a researcher at the center helped uncover Google privacy violations that led to the company paying a record $22.5 million fine. In 2011-2012, the center's privacy director helped lead a project to create a "Do Not Track" standard. The effort, not supported by Google, would have made it harder for advertisers to track what people do online, and likely would have cut into Google's ad revenue. Both Stanford and Google say the change in funding was unrelated to the previous research.
