Deep Neural Networks for Bot Detection (arxiv.org) 39

From a research paper on Arxiv: The problem of detecting bots, automated social media accounts governed by software but disguising as human users, has strong implications. For example, bots have been used to sway political elections by distorting online discourse, to manipulate the stock market, or to push anti-vaccine conspiracy theories that caused health epidemics. Most techniques proposed to date detect bots at the account level, by processing large amounts of social media posts, and leveraging information from network structure, temporal dynamics, sentiment analysis, etc. In this paper [PDF], we propose a deep neural network based on a contextual long short-term memory (LSTM) architecture that exploits both content and metadata to detect bots at the tweet level: contextual features are extracted from user metadata and fed as auxiliary input to LSTM deep nets processing the tweet text.
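The architecture the abstract describes, an LSTM over the tweet text with account metadata concatenated in as an auxiliary input before the classification head, can be sketched in a few lines of PyTorch. This is an illustrative toy, not the paper's actual model: all layer sizes, the vocabulary size, and the number of metadata features are placeholder assumptions.

```python
import torch
import torch.nn as nn

class ContextualLSTM(nn.Module):
    """Toy contextual LSTM: tweet tokens go through an embedding + LSTM,
    and a metadata vector is concatenated with the final hidden state
    before a small classification head. Sizes are illustrative only."""
    def __init__(self, vocab_size=1000, embed_dim=32, hidden=64, n_meta=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + n_meta, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, tokens, meta):
        _, (h_n, _) = self.lstm(self.embed(tokens))  # final hidden state
        joint = torch.cat([h_n[-1], meta], dim=1)    # content + metadata
        return torch.sigmoid(self.head(joint))       # bot probability per tweet

model = ContextualLSTM()
tokens = torch.randint(0, 1000, (4, 20))  # batch of 4 tweets, 20 token ids each
meta = torch.randn(4, 10)                 # 4 metadata feature vectors
probs = model(tokens, meta)               # shape (4, 1), each value in [0, 1]
```

The key design point is the late fusion: the text and metadata are processed separately and only merged just before the classifier, which is what lets the model score a single tweet without needing the account's full posting history.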
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • That should be a pretty high ranking flag in the algorithm seed data.

  • Wew (Score:5, Insightful)

    by negRo_slim ( 636783 ) <mils_orgen@hotmail.com> on Saturday February 17, 2018 @08:07PM (#56145002) Homepage
Putting way too much confidence in bots' ability to do any of those things listed in the summary.
• If you read the actual paper you'd know exactly how much confidence one can place (hint: it's extremely high): 96% on a single tweet's text alone, up to over 99% once network, metadata and other factors are taken into account.

      6 CONCLUSIONS
      Given the prevalence of sophisticated bots on social media platforms such as Twitter, the need for improved, inexpensive bot detection methods is apparent. We proposed a novel contextual LSTM architecture allowing us to use both tweet content and metadata to
detect bots at the tweet level.

      • We should distrust it completely, as the paper gives no examples of any of the tweets or accounts they classified as being "bots". None whatsoever. Lots and lots of stats about their model and many implausible claims of it being perfect, but nothing that could be used to actually verify their claims.

Indeed, their claims are completely implausible. Extraordinary claims require extraordinary evidence, and they provide none.

        • We should distrust it completely, as the paper gives no examples of any of the tweets or accounts they classified as being "bots". None whatsoever. Lots and lots of stats about their model and many implausible claims of it being perfect, but nothing that could be used to actually verify their claims.

Or alternatively, you could have read the paper and seen that it used the Cresci/De Pietro/Petrocchi et al. dataset, which is publicly available and has been for a while now.

  • by TFlan91 ( 2615727 ) on Saturday February 17, 2018 @08:37PM (#56145098)

How does this resolve the case of my political uncle posting extreme ideas every week or two?

    Anyone outside the family would rightfully think he's a bot. He isn't, he's just that uncle.

The first amendment protections required for a system like this would make it far too cumbersome for practical use. Yeah, Twitter is proving the opposite case with their manual interventions, but there must be a middle ground.

    • Re: (Score:1, Troll)

      by LifesABeach ( 234436 )
Let me see if I understand this correctly. A bunch of H1B lying dumbasses created Facebook. Now some really bad dudes, who would have no second thoughts about deleting Cadet Bone Spurs and anyone else that poses a problem, show up. When is enough, enough?
• Then he gets wrongfully accused and his rantings stop. No great loss to civilization. Better to let 100 guilty men go free than accuse a single innocent? Those are Enlightenment values - the same ones that created racism and justified slavery. They're as much yesterday's news as your uncle.
• How does this resolve the case of my political uncle posting extreme ideas every week or two?

      Anyone outside the family would rightfully think he's a bot. He isn't, he's just that uncle.

The first amendment protections required for a system like this would make it far too cumbersome for practical use. Yeah, Twitter is proving the opposite case with their manual interventions, but there must be a middle ground.

      This is all about squelching unapproved opinions. Can't have people (or bots) "disparaging" Hillary Clinton, for example. We indict people for that now.

• Training neural networks antagonistically (adversarially) improves the quality of both networks.

    The detector will get better and the fake will get better. Quickly.
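The dynamic this comment describes is the standard adversarial-training loop: a generator learns to produce fakes that fool a detector, while the detector learns to tell real from fake, each improving in response to the other. A minimal toy sketch in PyTorch (this is a generic GAN-style loop, not anything from the paper; all shapes and hyperparameters are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Toy setup: the generator maps noise to fake "tweet feature" vectors,
# the detector scores vectors as real (1) or fake (0). Sizes are arbitrary.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
D = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 4)  # stand-in for features of genuine tweets

for step in range(100):
    # Detector step: push real toward 1, generated fakes toward 0.
    fake = G(torch.randn(32, 8)).detach()  # detach: don't update G here
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: update G so the detector scores its fakes as real.
    fake = G(torch.randn(32, 8))
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each side's loss is the other side's training signal, which is exactly why publishing a strong detector also hands bot authors a better objective to optimize against.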

  • I always suspected that about CowboyNeal. Now we will know the truth.
  • detecting bots, automated social media accounts governed by software but disguising as human users

The expression "bot" is used to describe a wide variety of software applications, not just those emulating people on social media. In fact, the most common bots are the ones used by a large number of sites to retrieve information from the internet for different purposes (e.g., search engines retrieving what they show to their users); they are also called crawlers or spiders. Here [udger.com] you can find a detailed list of active ones (I am the proud father of one of them :)).

    So, a better version of the summary would
