Google

Who Has More of Your Personal Data Than Facebook? Try Google (wsj.com) 149

Facebook may be in the hot seat right now for its collection of personal data without our knowledge or explicit consent, but as The Wall Street Journal points out, "Google is a far bigger threat by many measures: the volume of information it gathers, the reach of its tracking and the time people spend on its sites and apps." From the report (alternative source): It's likely that Google has shadow profiles (data the company gathers on people without accounts) on at least as many people as Facebook does, says Chandler Givens, CEO of TrackOff, which develops software to fight identity theft. Google allows everyone, whether they have a Google account or not, to opt out of its ad targeting, though, like Facebook, it continues to gather your data. Google Analytics is far and away the web's most dominant analytics platform. Used on the sites of about half of the biggest companies in the U.S., it has a total reach of 30 million to 50 million sites. Google Analytics tracks you whether or not you are logged in. Meanwhile, the billion-plus people who have Google accounts are tracked in even more ways. In 2016, Google changed its terms of service, allowing it to merge its massive trove of tracking and advertising data with the personally identifiable information from our Google accounts.

Google uses, among other things, our browsing and search history, apps we've installed, demographics like age and gender and, from its own analytics and other sources, where we've shopped in the real world. Google says it doesn't use information from "sensitive categories" such as race, religion, sexual orientation or health. Because it relies on cross-device tracking, it can spot logged-in users no matter which device they're on. Google fuels even more data harvesting through its dominant ad marketplaces. There are up to 4,000 data brokers in the U.S., and collectively they know everything about us we might otherwise prefer they didn't -- whether we're pregnant, divorced or trying to lose weight. Google works with some of these brokers directly, but the company says it vets them to prevent targeting based on sensitive information. Google is also the biggest enabler of data harvesting, through the world's two billion active Android mobile devices.
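
Neither the Journal excerpt nor Google spells out the mechanics of that 2016 merge, but the underlying idea -- joining pseudonymous tracking records to signed-in account identities -- can be sketched in a few lines. The Python below is a purely hypothetical illustration; the field names, join key, and data are invented and do not describe Google's systems.

```python
# Hypothetical illustration of merging pseudonymous web-tracking records
# with account-level identity data. Field names and structures are invented.

# Pseudonymous tracking log: events keyed by a browser/device identifier.
tracking_events = [
    {"device_id": "abc123", "site": "shoes-example.com", "action": "viewed product"},
    {"device_id": "abc123", "site": "news-example.com", "action": "read article"},
    {"device_id": "xyz789", "site": "travel-example.com", "action": "searched flights"},
]

# Account data: the same identifier linked to a signed-in identity -- the kind
# of association enabled once tracking and account data may be merged.
accounts = {
    "abc123": {"name": "Jane Doe", "email": "jane@example.com", "age_range": "25-34"},
}

def merge_profiles(events, accounts):
    """Attach account identity to each tracking event when the device
    identifier can be resolved to a signed-in user."""
    merged = []
    for event in events:
        identity = accounts.get(event["device_id"])  # None if never signed in
        merged.append({**event, "identity": identity})
    return merged

for row in merge_profiles(tracking_events, accounts):
    print(row)
```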

Facebook

Facebook is Being Sued Over Housing Discrimination (fastcompany.com) 125

The National Fair Housing Alliance, along with three other nonprofit housing advocacy organizations around the country, has filed a lawsuit against Facebook over allegedly discriminatory advertising enabled by its ad platform. From a report: The nonprofits, over the last few months, created a fake real estate company and used the Facebook ad platform to place housing ads. According to the lawsuit, the NFHA was able to place advertisements that "[excluded] families with children and women from receiving advertisements, as well as users with interests based on disability and national origin." In the NFHA's press release, the organization writes that "Facebook's advertising platform enables landlords and real estate brokers to exclude families with children, women, and other protected classes of people from receiving housing ads."

The lawsuit follows extensive reporting from ProPublica that investigated these potentially discriminatory practices. For over a year, the journalism outlet tested various ways that landlords could place ads for housing, and found that the targeting options allowed entire groups of people to be excluded from ever seeing the ads. Given Facebook's massive base of more than 2 billion users, the group believes that the social network is in violation of the Fair Housing Act.

Businesses

Cutting 'Old Heads' at IBM (propublica.org) 216

An anonymous reader shares a report: As the world's dominant technology firm in the 1980s, International Business Machines had a U.S. payroll that swelled to nearly a quarter-million white-collar workers. Its profits helped underwrite a broad agenda of racial equality, equal pay for women and an unbeatable offer of great wages and something close to lifetime employment, all in return for unswerving loyalty. But when high tech suddenly started shifting and companies went global, IBM faced the changing landscape with a distinction most of its fiercest competitors didn't have: a large number of experienced and aging U.S. employees.

The company reacted with a strategy that, in the words of one confidential planning document, would "correct seniority mix." It slashed IBM's U.S. workforce by as much as three-quarters from its 1980s peak, replacing a substantial share with younger, less-experienced and lower-paid workers and sending many positions overseas. ProPublica estimates that in the past five years alone, IBM has eliminated more than 20,000 American employees ages 40 and over, about 60 percent of its estimated total U.S. job cuts during those years. In making these cuts, IBM has flouted or outflanked U.S. laws and regulations intended to protect later-career workers from age discrimination, according to a ProPublica review of internal company documents, legal filings and public records, as well as information provided via interviews and questionnaires filled out by more than 1,000 former IBM employees.

Facebook

Facebook's Uneven Enforcement of Hate Speech Rules Allows Vile Posts To Stay Up (propublica.org) 171

ProPublica has found inconsistent rulings on hate speech after analyzing more than 900 Facebook posts submitted to them as part of a crowd-sourced investigation into how the world's largest social network implements its hate-speech rules. "Based on this small fraction of Facebook posts, its content reviewers often make different calls on items with similar content, and don't always abide by the company's complex guidelines," reports ProPublica. "Even when they do follow the rules, racist or sexist language may survive scrutiny because it is not sufficiently derogatory or violent to meet Facebook's definition of hate speech." From the report: We asked Facebook to explain its decisions on a sample of 49 items, sent in by people who maintained that content reviewers had erred, mostly by leaving hate speech up, or in a few instances by deleting legitimate expression. In 22 cases, Facebook said its reviewers had made a mistake. In 19, it defended the rulings. In six cases, Facebook said the content did violate its rules but its reviewers had not actually judged it one way or the other because users had not flagged it correctly, or the author had deleted it. In the other two cases, it said it didn't have enough information to respond.

"We're sorry for the mistakes we have made -- they do not reflect the community we want to help build," Facebook Vice President Justin Osofsky said in a statement. "We must do better." He said Facebook will double the size of its safety and security team, which includes content reviewers and other employees, to 20,000 people in 2018, in an effort to enforce its rules better. He added that Facebook deletes about 66,000 posts reported as hate speech each week, but that not everything offensive qualifies as hate speech. "Our policies allow content that may be controversial and at times even distasteful, but it does not cross the line into hate speech," he said. "This may include criticism of public figures, religions, professions, and political ideologies."

Facebook

Dozens of Companies Are Using Facebook To Exclude Older Workers From Job Ads (propublica.org) 340

An anonymous reader quotes a report from ProPublica: Verizon is among dozens of the nation's leading employers -- including Amazon, Goldman Sachs, Target and Facebook itself -- that placed recruitment ads limited to particular age groups, an investigation by ProPublica and The New York Times has found. The ability of advertisers to deliver their message to the precise audience most likely to respond is the cornerstone of Facebook's business model. But using the system to expose job opportunities only to certain age groups has raised concerns about fairness to older workers. Several experts questioned whether the practice is in keeping with the federal Age Discrimination in Employment Act of 1967, which prohibits bias against people 40 or older in hiring or employment. Many jurisdictions make it a crime to "aid" or "abet" age discrimination, a provision that could apply to companies like Facebook that distribute job ads.

Facebook defended the practice. "Used responsibly, age-based targeting for employment purposes is an accepted industry practice and for good reason: it helps employers recruit and people of all ages find work," said Rob Goldman, a Facebook vice president. The revelations come at a time when the unregulated power of the tech companies is under increased scrutiny, and Congress is weighing whether to limit the immunity that it granted to tech companies in 1996 for third-party content on their platforms.

Government

New York City Moves To Create Accountability For Algorithms (propublica.org) 183

The algorithms that play increasingly central roles in our lives often emanate from Silicon Valley, but the effort to hold them accountable may have another epicenter: New York City. From a report: Last week, the New York City Council unanimously passed a bill to tackle algorithmic discrimination -- the first measure of its kind in the country. The algorithmic accountability bill, waiting to be signed into law by Mayor Bill de Blasio, establishes a task force that will study how city agencies use algorithms to make decisions that affect New Yorkers' lives, and whether any of the systems appear to discriminate against people based on age, race, religion, gender, sexual orientation or citizenship status. The task force's report will also explore how to make these decision-making processes understandable to the public. The bill's sponsor, Council Member James Vacca, said he was inspired by ProPublica's investigation into racially biased algorithms used to assess the criminal risk of defendants. "My ambition here is transparency, as well as accountability," Vacca said.
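
The bill doesn't prescribe how the task force should test a system for bias, but a common first pass is a disparate-impact check: compare favorable-outcome rates across demographic groups and flag large gaps. The Python sketch below illustrates that check on invented data; the 80 percent threshold (the so-called four-fifths rule) and the group labels are assumptions for illustration, not anything the bill specifies.

```python
from collections import defaultdict

# Hypothetical decision records from an automated system: (group, favorable_outcome).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def favorable_rates(records):
    """Rate of favorable outcomes per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

rates = favorable_rates(decisions)
best = max(rates.values())
for group, rate in rates.items():
    # Four-fifths rule: flag groups whose favorable rate falls below 80% of the
    # most favored group's rate -- a screening heuristic, not a legal finding.
    flag = "REVIEW" if rate < 0.8 * best else "ok"
    print(f"{group}: favorable rate {rate:.2f} ({flag})")
```
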
Education

Should Teachers Get $100 For Steering Kids To Google's 'Hour of Code' Lesson? 89

Tomorrow's "Hour of Code" kick-off event features Melinda Gates, Facebook COO Sheryl Sandberg, YouTube CEO Susan Wojcicki, and "multiple state governors," reports theodp -- who has some concerns. With Microsoft boasting that nearly 70 million of its Minecraft Hour of Code sessions have been launched, and tech companies pushing coding and their products into classrooms, it's probably no surprise that the 2017 Hour of Code -- organized by tech-bankrolled Code.org -- seems to have presented a too-hard-to-resist branding opportunity for Google, Microsoft, Apple and Amazon.

And, in what might evoke memories of Dollars for Doctors, some teachers will even be rewarded for steering their kids to Google's Hour of Code lesson. "Thanks to our friends at Google," explains crowdfunding website DonorsChoose.org, "4th-8th grade public school teachers who engage their students in a 'Create your own Google logo' Hour of Code activity can earn a $100 DonorsChoose.org gift code -- and have the opportunity to receive one of five other grand prizes (including $5,000 in DonorsChoose.org credits for your school!)."
Social Networks

Facebook Still Lets Housing Advertisers Exclude Users By Race (arstechnica.com) 197

AmiMoJo writes: In February, Facebook said it would step up enforcement of its prohibition against discrimination in advertising for housing, employment, or credit. Last week, ProPublica bought dozens of rental housing ads on Facebook but asked that they not be shown to certain categories of users, such as African-Americans, mothers of high school kids, people interested in wheelchair ramps, Jews, expats from Argentina, and Spanish speakers. All of these groups are protected under the federal Fair Housing Act. Violators can face tens of thousands of dollars in fines. Every single ad was approved within minutes. The only ad that took longer than three minutes to be approved by Facebook sought to exclude potential renters 'interested in Islam, Sunni Islam, and Shia Islam.' It was approved after 22 minutes.

Advertising

Facebook Enabled Advertisers To Reach 'Jew Haters' (propublica.org) 253

ProPublica is reporting that Facebook "enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of 'Jew hater,' 'How to burn jews,' or, 'History of why jews ruin the world.'" The organization even went so far as to test these ad categories by paying $30 to target those groups with three "promoted posts" -- in which a ProPublica article or post was displayed in their news feeds. Facebook reportedly approved all three ads within 15 minutes. From the report: After we contacted Facebook, it removed the anti-Semitic categories -- which were created by an algorithm rather than by people -- and said it would explore ways to fix the problem, such as limiting the number of categories available or scrutinizing them before they are displayed to buyers. In all likelihood, the ad categories that we spotted were automatically generated because people had listed those anti-Semitic themes on their Facebook profiles as an interest, an employer or a "field of study." Facebook's algorithm automatically transforms people's declared interests into advertising categories. [ProPublica provides a screenshot of their ad buying process on the company's advertising portal.]
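
ProPublica's description implies a simple pipeline: free-text profile fields are turned into targeting categories with little screening. The Python below is a hypothetical reconstruction of that kind of pipeline plus the sort of keyword "guardrail" Facebook says it is adding; the profile fields and blocklist are invented, and this is not Facebook's code.

```python
# Hypothetical reconstruction of turning self-declared profile fields into
# ad-targeting categories, with a naive screening step. Illustrative only.

profiles = [
    {"field_of_study": "History of why jews ruin the world"},  # abusive entry
    {"field_of_study": "Computer Science"},
    {"employer": "Jew hater"},                                  # abusive entry
]

BLOCKED_TERMS = {"hater", "burn", "ruin the world"}  # illustrative blocklist only

def derive_categories(profiles):
    """Collect declared employers/fields of study as raw targeting categories,
    dropping any that trip the (very crude) screening step."""
    categories = set()
    for profile in profiles:
        for value in profile.values():
            text = value.lower()
            if any(term in text for term in BLOCKED_TERMS):
                continue  # a guardrail like this would hide the category from ad buyers
            categories.add(value)
    return categories

print(derive_categories(profiles))  # {'Computer Science'}
```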

"There are times where content is surfaced on our platform that violates our standards," said Rob Leathern, product management director at Facebook. "In this case, we've removed the associated targeting fields in question. We know we have more work to do, so we're also building new guardrails in our product and review processes to prevent other issues like this from happening in the future."

Databases

Google and ProPublica Team Up To Build a National Hate Crime Database (techcrunch.com) 310

In partnership with ProPublica, Google News Lab is launching a new tool to track hate crimes across America. The "Documenting Hate News Index" is being powered by machine learning to track reported hate crimes across all 50 states, collecting data from February 2017 onward. TechCrunch reports: Data visualization studio Pitch Interactive helped craft the index, which collects Google News results and filters them through Google's natural language analysis to extract geographic and contextual information. Because they are not catalogued in any kind of formal national database, a fact that inspired the creation of the index to begin with, Google calls the project a "starting point" for the documentation and study of hate crimes. While the FBI is legally required to document hate crimes at the federal level, state and local authorities often fail to report their own incidents, making the data incomplete at best.

The initiative is a data-rich new arm of the Documenting Hate project, which collects and verifies hate incidents reported by both individual contributors and by news organizations. The Hate News Index will keep an eye out for false positives (casual uses of the word "hate," for example), striking a responsible balance between machine learning and human curation on a very sensitive subject. Hate events will be mapped onto a calendar in the user interface, though users can also use a keyword search or browse through algorithmic suggestions. For anyone who'd like to take the data in a new direction, Google will open source its data set, making it available through GitHub.
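
Google hasn't published the pipeline behind the index, but the two steps described here -- discarding casual uses of "hate" and extracting a location from each article -- can be illustrated with a toy stand-in. The Python sketch below uses keyword heuristics and a tiny gazetteer in place of Google's natural-language analysis; the headlines and city list are invented.

```python
# Toy stand-in for the index's filtering/geocoding steps. A real system would
# use an NLP service for entity extraction; here a tiny gazetteer does the job.

HEADLINES = [
    "Police investigate hate crime after synagogue vandalized in Portland",
    "I hate Mondays: readers share their worst commutes in Chicago",
    "Man charged with hate-motivated assault in Austin",
]

CITIES = {"Portland", "Chicago", "Austin"}                   # illustrative gazetteer
FALSE_POSITIVE_PATTERNS = ("hate mondays", "love to hate")   # casual uses to discard
INCIDENT_TERMS = ("hate crime", "hate-motivated")

def classify(headline):
    text = headline.lower()
    if any(p in text for p in FALSE_POSITIVE_PATTERNS):
        return None                                  # casual use of "hate"
    if not any(t in text for t in INCIDENT_TERMS):
        return None                                  # no incident language
    location = next((c for c in CITIES if c in headline), "unknown")
    return {"headline": headline, "location": location}

for item in filter(None, map(classify, HEADLINES)):
    print(item)
```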

Medicine

The Myth of Drug Expiration Dates (propublica.org) 316

schwit1 shares a report from ProPublica: Hospitals and pharmacies are required to toss expired drugs, no matter how expensive or vital. Meanwhile the FDA has long known that many remain safe and potent for years longer. The box of prescription drugs had been forgotten in a back closet of a retail pharmacy for so long that some of the pills predated the 1969 moon landing. Most were 30 to 40 years past their expiration dates -- possibly toxic, probably worthless. But to Lee Cantrell, who helps run the California Poison Control System, the cache was an opportunity to answer an enduring question about the actual shelf life of drugs: Could these drugs from the bell-bottom era still be potent?

Gerona, a University of California, San Francisco, researcher, and Cantrell, a pharmacist and toxicologist, knew that the term "expiration date" was a misnomer. The dates on drug labels are simply the point up to which the Food and Drug Administration and pharmaceutical companies guarantee their effectiveness, typically at two or three years. But the dates don't necessarily mean they're ineffective immediately after they "expire" -- just that there's no incentive for drugmakers to study whether they could still be usable.

They tested the decades-old drugs, including antihistamines, pain relievers and stimulants. All the drugs tested were in their original sealed containers. The findings surprised both researchers: A dozen of the 14 compounds were still as potent as they were when they were manufactured, some at almost 100 percent of their labeled concentrations. Experts say the United States might be squandering a quarter of the money spent on health care. That's an estimated $765 billion a year.

AI

Artificial Intelligence Has Race, Gender Biases (axios.com) 465

An anonymous reader shares a report: The ACLU has begun to worry that artificial intelligence is discriminatory based on race, gender and age. So it teamed up with computer science researchers to launch a program to promote applications of AI that protect rights and lead to equitable outcomes. MIT Technology Review reports that the initiative is the latest to illustrate general concern that the increasing reliance on algorithms to make decisions in the areas of hiring, criminal justice, and financial services will reinforce racial and gender biases. One example is a computer program used by jurisdictions to help with paroling prisoners, which ProPublica found would go easy on white offenders while being unduly harsh to black ones.
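
ProPublica's finding about that risk-scoring program boiled down to a gap in error rates: defendants who did not go on to reoffend were far more likely to have been labeled high risk if they were black. A minimal sketch of that kind of false-positive-rate comparison, on invented data rather than ProPublica's dataset, looks like this.

```python
# Compute false positive rates by group: among people who did NOT reoffend,
# how often were they labeled high risk? The records below are invented.

records = [
    # (group, labeled_high_risk, reoffended)
    ("black", True, False), ("black", True, False), ("black", False, False),
    ("black", True, True),
    ("white", False, False), ("white", False, False), ("white", True, False),
    ("white", True, True),
]

def false_positive_rate(rows, group):
    non_reoffenders = [r for r in rows if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(records, group), 2))
# A large gap between the two rates is the kind of disparity ProPublica reported.
```
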
Facebook

Facebook's Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children (propublica.org) 355

Sidney Fussell from Gizmodo summarizes a report from ProPublica, which brings to light dozens of training documents used by Facebook to train moderators on hate speech: As the trove of slides and quizzes reveals, Facebook uses a warped, one-sided reasoning to balance policing hate speech against users' freedom of expression on the platform. This is perhaps best summarized by a slide from one of its training slideshows, in which Facebook instructs moderators to protect "White Men," but not "Female Drivers" or "Black Children." Facebook only blocks inflammatory remarks if they're used against members of a "protected class." But Facebook itself decides who makes up a protected class, with lots of clear opportunities for moderation to be applied arbitrarily at best and against minoritized people critiquing those in power (particularly white men) at worst -- as Facebook has routinely been accused of doing. According to the leaked documents, here are the group identifiers Facebook protects: sex, religious affiliation, national origin, gender identity, race, ethnicity, sexual orientation, and serious disability or disease. And here are those Facebook won't protect: social class, continental origin, appearance, age, occupation, political ideology, religions, and countries. Subsets of groups -- female drivers, Jewish professors, gay liberals -- aren't protected either, as ProPublica explains: White men are considered a group because both traits are protected, while female drivers and black children, like radicalized Muslims, are subsets, because one of their characteristics is not protected.
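
The rule ProPublica describes is mechanical: a group is shielded only if every trait used to define it is itself a protected category. A short Python sketch of that logic makes the "white men versus black children" outcome easy to see; the category names come from the leaked lists above, while the function is only an assumption about how the rule might be expressed.

```python
# The subset rule as described: a group is protected only if ALL of its
# defining traits fall into protected categories.

PROTECTED_TRAIT_TYPES = {
    "sex", "religious affiliation", "national origin", "gender identity",
    "race", "ethnicity", "sexual orientation", "serious disability or disease",
}

def group_is_protected(trait_types):
    """trait_types: the category of each trait defining the group,
    e.g. ['race', 'sex'] for 'white men'."""
    return all(t in PROTECTED_TRAIT_TYPES for t in trait_types)

print(group_is_protected(["race", "sex"]))        # white men -> True
print(group_is_protected(["sex", "occupation"]))  # female drivers -> False
print(group_is_protected(["race", "age"]))        # black children -> False
```
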
Security

Any Half-Decent Hacker Could Break Into Mar-a-Lago (alternet.org) 327

MrCreosote writes: Properties owned and run by the Trump Organization, including places where Trump spends much of his time and has hosted foreign leaders, are a network security nightmare. From a report via ProPublica (co-published with Gizmodo): "We parked a 17-foot motor boat in a lagoon about 800 feet from the back lawn of The Mar-a-Lago Club in Palm Beach and pointed a 2-foot wireless antenna that resembled a potato gun toward the club. Within a minute, we spotted three weakly encrypted Wi-Fi networks. We could have hacked them in less than five minutes, but we refrained. A few days later, we drove through the grounds of the Trump National Golf Club in Bedminster, New Jersey, with the same antenna and aimed it at the clubhouse. We identified two open Wi-Fi networks that anyone could join without a password. We resisted the temptation. We have also visited two of President Donald Trump's other family-run retreats, the Trump International Hotel in Washington, D.C., and a golf club in Sterling, Virginia. Our inspections found weak and open Wi-Fi networks, wireless printers without passwords, servers with outdated and vulnerable software, and unencrypted login pages to back-end databases containing sensitive information. The risks posed by the lax security, experts say, go well beyond simple digital snooping. Sophisticated attackers could take advantage of vulnerabilities in the Wi-Fi networks to take over devices like computers or smart phones and use them to record conversations involving anyone on the premises."
The Internet

Cloudflare Helps Serve Up Hate Online: Report (cnet.com) 210

An anonymous reader writes: If you've been wondering how hate has proliferated online, especially since the 2016 election, ProPublica has some answers. According to ProPublica, Cloudflare -- a major San Francisco-based internet company -- enables extremist websites to stay in business by providing them with internet data delivery services. Cloudflare also reportedly has a policy of turning over the contact information of people who complain about those sites to the sites' operators, exposing the complainants to personal harassment.
Facebook

Facebook Buys Data From Third-Party Brokers To Fill In User Profiles (ibtimes.com) 116

An anonymous reader quotes a report from International Business Times: According to a report from ProPublica, the world's largest social network knows far more about its users than just what they do online. What Facebook can't glean from a user's activity, it's getting from third-party data brokers. ProPublica found the social network is purchasing additional information including personal income, where a person eats out and how many credit cards they keep. That data all comes separate from the unique identifiers that Facebook generates for its users based on interests and online behavior. A separate ProPublica investigation, in which the publication asked users to report the categories of interest Facebook had assigned to them, turned up more than 52,000 attributes. The data Facebook pays for from other brokers to round out user profiles isn't disclosed by the company beyond a note that it gets information "from a few different sources." Those sources, according to ProPublica, are commercial data brokers who have access to information about people that isn't linked directly to online behavior. The social network doesn't disclose those sources because the information isn't collected by Facebook and is publicly available. Facebook does provide a page in its help center that details how to get removed from the lists held by third-party data brokers. However, the process isn't particularly easy. In the case of the Oracle-owned Datalogix, users who want off the list have to send a written request and a copy of a government-issued identification in the mail to Oracle's chief privacy officer. Another data collecting service, Acxiom, requires users to provide the last four digits of their social security number to see the information the company has gathered about them.
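
The report describes what is purchased (income, dining habits, credit cards) more than how it gets attached to accounts. A common industry approach is to match broker records to users on a shared key such as a hashed email address and then append the purchased attributes; the Python below illustrates that approach on invented data and is an assumption, not a description of Facebook's systems.

```python
import hashlib

# Invented data. Matching on hashed email is a common industry technique for
# joining offline broker data to online profiles; whether Facebook does exactly
# this is an assumption made for illustration.

def hashed(email):
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

users = {hashed("jane@example.com"): {"interests": ["hiking", "coffee"]}}

broker_records = [
    {"email_hash": hashed("jane@example.com"),
     "household_income": "75k-100k", "dines_out": "weekly", "credit_cards": 3},
]

for record in broker_records:
    profile = users.get(record["email_hash"])
    if profile:
        # Append purchased attributes alongside on-platform interests.
        profile.update({k: v for k, v in record.items() if k != "email_hash"})

print(users)
```
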
Facebook

Facebook Users Sue Over Alleged Racial Discrimination In Housing, Job Ads (arstechnica.com) 177

In response to a report from ProPublica alleging that Facebook gives advertisers the ability to exclude specific groups it calls "Ethnic Affinities," three Facebook users have filed a lawsuit against the company. They are accusing the social networking giant of violating the federal Fair Housing Act over its alleged discriminatory policies. Ars Technica reports: ProPublica managed to post an ad placed in Facebook's housing categories that excluded anyone with an "affinity" for African-American, Asian-American, or Hispanic people. When the ProPublica reporters showed the ad to prominent civil rights lawyer John Relman, he described it as "horrifying" and "as blatant a violation of the federal Fair Housing Act as one can find." According to the proposed class-action lawsuit, by allowing such ads on its site, Facebook is in violation of the landmark civil rights legislation, which specifically prohibits housing advertisements that discriminate based on race, gender, color, religion, and other factors. "This lawsuit does not seek to end Facebook's Ad Platform, nor even to get rid of the 'Exclude People' mechanism. There are legal, desirable uses for such functionalities. Plaintiffs seek to end only the illegal proscribed uses of these functions," the lawyers wrote in the civil complaint, which was filed last Friday. The proposed class, if approved by a federal judge in San Francisco, would include any Facebook user in the United States who has "not seen an employment- or housing-related advertisement on Facebook within the last two years because the ad's buyer used the Ad Platform's 'Exclude People' functionality to exclude the class member based on race, color, religion, sex, familial status, or national origin."

Advertising

Facebook Lets Advertisers Exclude Users By Race (propublica.org) 197

schwit1 quotes a report from ProPublica: Imagine if, during the Jim Crow era, a newspaper offered advertisers the option of placing ads only in copies that went to white readers. That's basically what Facebook is doing nowadays. The ubiquitous social network not only allows advertisers to target users by their interests or background, it also gives advertisers the ability to exclude specific groups it calls "Ethnic Affinities." Ads that exclude people based on race, gender and other sensitive factors are prohibited by federal law in housing and employment. To test this, ProPublica's Julia Angwin and Terry Parris Jr. purchased a housing advertisement from Facebook's self-service advertising portal. The report adds: "The ad we purchased was targeted to Facebook members who were house hunting and excluded anyone with an 'affinity' for African-American, Asian-American or Hispanic people." The Fair Housing Act of 1968 makes it illegal "to make, print, or publish, or cause to be made, printed, or published any notice, statement, or advertisement, with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin." Violators can face tens of thousands of dollars in fines. The Civil Rights Act of 1964 also prohibits the "printing or publication of notices or advertisements indicating prohibited preference, limitation, specification or discrimination" in employment recruitment. Facebook's business model is based on allowing advertisers to target specific groups -- or, apparently, to exclude specific groups -- using huge reams of personal data the company has collected about its users. Facebook's micro-targeting is particularly helpful for advertisers looking to reach niche audiences, such as swing-state voters concerned about climate change. Facebook says its policies prohibit advertisers from using the targeting options for discrimination, harassment, disparagement or predatory advertising practices.
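
The mechanism at issue is ordinary audience filtering: the advertiser supplies attributes to include and attributes to exclude, and the ad is shown only to users who match. The Python sketch below illustrates that filtering along with the kind of review check that would reject protected-class exclusions on housing or employment ads; it is a hypothetical safeguard, not Facebook's implementation.

```python
# Generic audience filtering with include/exclude sets, plus a compliance
# check for housing and employment ads. Illustrative only.

PROTECTED_ATTRIBUTES = {"ethnic_affinity", "race", "religion", "sex",
                        "familial_status", "national_origin", "disability"}

def review_ad(ad):
    """Reject housing/employment ads that exclude people on protected attributes."""
    if ad["category"] in {"housing", "employment"}:
        bad = {attr for attr, _ in ad["exclude"]} & PROTECTED_ATTRIBUTES
        if bad:
            return f"rejected: excludes protected attributes {sorted(bad)}"
    return "approved"

def audience(users, ad):
    """Users who match every include pair and no exclude pair."""
    return [u for u in users
            if all(u.get(a) == v for a, v in ad["include"])
            and not any(u.get(a) == v for a, v in ad["exclude"])]

ad = {"category": "housing",
      "include": [("behavior", "house_hunting")],
      "exclude": [("ethnic_affinity", "African-American")]}

users = [{"behavior": "house_hunting", "ethnic_affinity": "African-American"},
         {"behavior": "house_hunting", "ethnic_affinity": None}]

print(review_ad(ad))             # rejected: excludes protected attributes ['ethnic_affinity']
print(len(audience(users, ad)))  # 1 -- the exclusion the report describes
```
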
Advertising

Google Has Quietly Dropped Ban On Personally Identifiable Web Tracking (propublica.org) 155

Fudge Factor 3000 writes: Google has quietly changed its privacy policy to allow it to associate web tracking, which is supposed to remain anonymous, with personally identifiable user data. This completely reneges on its promise to keep a wall between ad tracking and personally identifiable user data, further eroding one's anonymity on the internet. Google's priorities are clear. All it cares about is monetizing user information to rake in the big dollars from ad revenue. Think twice before you purchase the premium-priced Google Pixel: Google is extracting added value from you as its product without passing any of the revenue it generates through tracking back to you in the form of lower prices. The crossed-out section in its privacy policy, which discussed the separation of information mentioned above, has been followed by this statement: "Depending on your account settings, your activity on other sites and apps may be associated with your personal information in order to improve Google's services and the ads delivered by Google." ProPublica reports: "The change is enabled by default for new Google accounts. Existing users were prompted to opt-in to the change this summer. The practical result of the change is that the DoubleClick ads that follow people around on the web may now be customized to them based on your name and other information Google knows about you. It also means that Google could now, if it wished to, build a complete portrait of a user by name, based on everything they write in email, every website they visit and the searches they conduct. The move is a sea change for Google and a further blow to the online ad industry's longstanding contention that web tracking is mostly anonymous. In recent years, Facebook, offline data brokers and others have increasingly sought to combine their troves of web tracking data with people's real names. But until this summer, Google held the line." You can choose to opt in or out of personalized ads in your Google account settings.
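
As ProPublica notes, the merge is gated by an account setting that is on by default for new accounts and opt-in for existing ones. A minimal Python sketch of that gating logic, with invented field names, looks like this.

```python
# Illustrative only: how an account setting could gate whether browsing
# activity is associated with a user's identity. Field names are invented.

def default_settings(account_created_after_policy_change):
    # Per the report: enabled by default for new accounts;
    # existing users had to opt in.
    return {"associate_web_activity": account_created_after_policy_change}

def associated_activity(settings, identity, activity):
    if settings["associate_web_activity"]:
        return [{"user": identity, **event} for event in activity]
    return [{"user": None, **event} for event in activity]  # kept pseudonymous

activity = [{"site": "example-news.com", "ad_shown": "running shoes"}]
print(associated_activity(default_settings(True), "jane@example.com", activity))
print(associated_activity(default_settings(False), "jane@example.com", activity))
```
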
Businesses

Amazon Says It Puts Customers First - But Its Pricing Algorithm Doesn't (propublica.org) 110

ProPublica has a report today in which it warns Amazon shoppers about the results they see on the shopping portal. It notes that people often hope that the results that come up first after a search are the best deals, and that's what Amazon would have you believe, but its algorithm doesn't work that way. In what may surprise many, in more than 80 percent of cases Amazon ranks its own products, or those of its affiliate partners, higher. From the report: Amazon does give customers a chance to comparison shop, with a listing that ranks all vendors of the same item by "price + shipping." It appears to be the epitome of Amazon's customer-centric approach. But there, too, the company gives itself an oft-decisive advantage. Its rankings omit shipping costs only for its own products and those sold by companies that pay Amazon for its services. Erik Fairleigh, a spokesman for Amazon, said the algorithm that selects which product goes into the "buy box" accounts for a range of factors beyond price. "Customers trust Amazon to have great prices, but that's not all -- vast selection, world-class customer service and fast, free delivery are critically important," he said in an e-mailed statement. "These components, and more, determine our product listings."
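
ProPublica's core finding can be restated as code: the comparison listing sorts by "price + shipping," but shipping is treated as zero for Amazon's own offers and for sellers who pay Amazon for services such as fulfillment. The Python below contrasts that ranking with a neutral one on invented offers; it is a reconstruction of the behavior as described, not Amazon's actual algorithm.

```python
# Invented offers for one product. "pays_amazon" marks Amazon itself or a
# seller paying for Amazon's services (e.g., Fulfillment by Amazon).
offers = [
    {"seller": "Amazon",         "price": 26.49, "shipping": 0.00, "pays_amazon": True},
    {"seller": "FBA Seller",     "price": 22.00, "shipping": 5.99, "pays_amazon": True},
    {"seller": "Outside Seller", "price": 23.00, "shipping": 2.99, "pays_amazon": False},
]

def neutral_rank(offer):
    # Every seller's shipping counted: the ranking the label implies.
    return offer["price"] + offer["shipping"]

def reported_rank(offer):
    # Shipping omitted for Amazon and paying sellers, per ProPublica's description.
    shipping = 0.0 if offer["pays_amazon"] else offer["shipping"]
    return offer["price"] + shipping

print("neutral :", [o["seller"] for o in sorted(offers, key=neutral_rank)])
# ['Outside Seller', 'Amazon', 'FBA Seller']
print("reported:", [o["seller"] for o in sorted(offers, key=reported_rank)])
# ['FBA Seller', 'Outside Seller', 'Amazon'] -- the paying seller jumps to the top
```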
