Lauren Weinstein writes: It’s not Google’s fault that these criminals exist. However, given Google’s excellent record at detection and blocking of malware, it is beyond puzzling why Google’s Chrome browser is so ineffective at blocking or even warning about these horrific tech support scams when they hit a user’s browser.
These scam pages should not require massive AI power for Google to target.
And critically, it’s difficult to understand why Chrome still permits most of these crooked pages to completely lock up the user’s browser — often making it impossible for the user to close the related tab or browser through any means that most users could reasonably be expected to know about.
Lauren Weinstein writes: In answer to a question regarding the timing of this proposed transition, Seville noted that the IETF planned to follow the GOP’s healthcare leadership style. “We feel that IPv4 and IPv6 should be immediately repealed, and then we can come up with the IPv7 replacement later.” When asked if this might be disruptive to the communications of Internet users around the world, Mr. Seville chuckled “You’re catching on.”
Lauren Weinstein writes: Here is my mock-up of one way to label fake news on Google Search Results Pages, in the style of the Google malware site warnings. The warning label link would go to a help page explaining the methodology of the labeling...
Lauren Weinstein writes: This is why I am now convinced that at least the major Web firms must begin moving gradually toward the mandatory use of 2-factor methods for users accessing these sites.
Just as responsible websites won’t permit a user to create an account without a password, and many attempt to prevent users from selecting incredibly weak passwords, we must start the process of requiring 2-factor use on a routine basis, both for the protection of users and of the companies that are serving them — and for the protection of society in a broader sense as well. We can no longer permit this to be simply an optional offering that vast numbers of users ignore.
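To make the second factor concrete: the most widely deployed method is the time-based one-time password (TOTP) defined in RFC 6238. As an illustration only, and not a description of any particular site’s implementation, here is a minimal TOTP generator using just the Python standard library:

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Generate an RFC 6238 time-based one-time password (SHA-1 variant).

    secret:   the shared key provisioned when the user enrolls
    for_time: Unix timestamp; codes change every `step` seconds
    """
    # The moving factor is the number of time steps since the Unix epoch,
    # packed as a big-endian 64-bit counter (per RFC 4226 / RFC 6238).
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte selects a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


# RFC 6238 test vector: with the ASCII key "12345678901234567890" and
# time 59, the 8-digit SHA-1 TOTP is 94287082.
print(totp(b"12345678901234567890", 59, digits=8))
```

Because the server and the user’s device share only the secret and a clock, codes can be verified offline, which is part of why this scheme scales to routine, mandatory use.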
is a forum to discuss and strategize practical methods to leverage privacy, security, and other technologies in all possible legal ways to slow and/or stop abuses by Donald Trump, his administration, and his supporters. All submissions to this community will be moderated before being published. Let’s get to work saving the USA and the rest of the world from evil.
Lauren Weinstein writes: Controversy continues to rage over how Holocaust denial sites and related YouTube videos have achieved multiple top and highly ranked search positions on Google for various permutations of the question “Did the Holocaust really happen?” — and over what, if anything, Google ultimately intends to do about these outright racist lies achieving such prominence in search results.
If you’re like most Internet users, you’ve been searching on Google and viewing the resulting pages of blue links for many years now.
But here’s something to ponder that you may not have ever really stopped to think about in depth: What does a top or otherwise high search result on Google really mean?
Lauren Weinstein writes: Action Items: What Google, Facebook, and Others Should Be Doing RIGHT NOW About Fake News
Today is action items day, and there isn’t a moment to lose before someone gets killed as a result of the fake news scourge. It nearly happened a couple of days ago, when some wacko invaded a pizza restaurant and shot it up looking for the youthful “sex slaves” that the fake “Pizzagate” story claims exist — a total fabrication created out of whole cloth, and part of a complex of fake anti-Hillary sex stories promoted even by highly placed wackos in Trump’s White House circle. In fact, new fake stories are already circulating regarding the shooting itself.
There are some ongoing efforts at the big firms to begin dealing with fake and false news. Facebook appears to be running an experiment asking some users to rate how “misleading” certain link titles might be. This will no doubt collect some interesting data and may be a small part of the solution, but of course cannot alone solve the underlying problems...
to report fake or false news found on traditional websites and/or in social media postings.
Any information submitted via this form may be made public after verification, with the exception of your name and/or email address if provided (which will be kept private and will not be used for any purposes other than this study)...
Lauren Weinstein writes: Two days ago, I uploaded the YouTube video linked below, which recorded the insightful response I received from Google Home to the highly relevant question: “Is Donald Trump Insane?” I noted Google’s accurate appraisal on Google+ and in my various public mailing lists. The next day (yesterday) the response was (and currently is) gone for the same query to Home — replaced by the generic: “I can do a search for that.”...
Lauren Weinstein writes: Labeling, tagging, and downranking of clearly false or fake posts is an approach that can help to reduce the tendency for outright lies to be treated equivalently with truth in social media and search engines. These techniques also avoid invoking the actual removal of lying items themselves and the “censorship” issues that then may come into play (though private firms quite appropriately are indeed free to determine what materials they wish to permit and host — the First Amendment only applies to governmental restraints on speech in the USA).
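To illustrate the downranking half of this approach, here is a hypothetical sketch (all labels, weights, and data invented for illustration; no actual platform’s ranking system is described) in which each item keeps its relevance score but is multiplied by a label-derived credibility factor, so flagged items sink in the results rather than being removed:

```python
# Hypothetical credibility weights attached to moderation labels.
# A flagged item is never deleted -- it simply ranks far lower.
CREDIBILITY = {
    "verified": 1.0,   # independently fact-checked
    "unlabeled": 0.9,  # no determination made
    "disputed": 0.5,   # conflicting fact-checks
    "fake": 0.1,       # determined to be false
}


def rerank(results):
    """Sort (title, relevance, label) tuples by credibility-weighted score."""
    return sorted(
        results,
        key=lambda r: r[1] * CREDIBILITY.get(r[2], 0.9),
        reverse=True,
    )


results = [
    ("Shocking hoax story", 0.95, "fake"),      # high relevance, labeled fake
    ("Fact-check article", 0.70, "verified"),
]
print(rerank(results)[0][0])  # the verified item now outranks the hoax
```

The design point is that the lie remains visible (avoiding the removal and “censorship” disputes noted above) while losing the prominence that made it damaging.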
Lauren Weinstein writes: Lately, Twitter has been taking the brunt of public criticism regarding harassment and hate speech — and its newly announced measures to supposedly combat these problems seem mostly to be counterproductive “ostrich head in the sand” tools that would permit offending tweets to continue largely unabated.
But all social media suffers from these problems to one degree or another, and I feel it is fair to say that no major social media firm really takes hate speech and harassment seriously — or at least as seriously as ethical firms must.
Lauren Weinstein writes: There are times when Google is in the right. There are times when Google is in the wrong. More often than not, they’re on the angels’ side of most issues. But there’s one area where they’ve had consistent problems dating back years: cutting off users from those users’ own data when there’s a dispute regarding Google Account status.
Lauren Weinstein writes: Facebook, Twitter, and other social media posts are continually promulgating outright lies about individuals or situations. Via social media personalization and associated posting “surfacing” systems, these lies can reach enormous audiences in a matter of minutes, and even push such completely false materials to the top of otherwise legitimate search engine results.
And once that damage is done, it’s almost impossible to repair. You can virtually never get as many people to see the follow-ups that expose the lying posts as saw the original lies themselves.
Lauren Weinstein writes: Much has recently been written about Google Home, the little vase-like cylinder that started landing in consumers’ hands only a week or so ago. Home’s mandate sounds simple enough in theory — listen to a room for commands or queries, then respond by voice and/or with appropriate actions.
What hasn’t been much discussed, however, is how the Home ecosystem is going to change the lives of millions, and eventually billions, of people for the better over time, in ways that most of us couldn’t even imagine today. It will drastically improve the lives of vast numbers of persons with visual and/or motor impairments, but ultimately will dramatically and positively affect the lives of everyone else as well.