
Deep Neural Networks for Bot Detection (arxiv.org)
From a research paper on Arxiv: The problem of detecting bots, automated social media accounts governed by software but disguised as human users, has strong implications. For example, bots have been used to sway political elections by distorting online discourse, to manipulate the stock market, and to push anti-vaccine conspiracy theories that caused health epidemics. Most techniques proposed to date detect bots at the account level, by processing large amounts of social media posts and leveraging information from network structure, temporal dynamics, sentiment analysis, etc. In this paper [PDF], we propose a deep neural network based on a contextual long short-term memory (LSTM) architecture that exploits both content and metadata to detect bots at the tweet level: contextual features are extracted from user metadata and fed as auxiliary input to LSTM deep nets processing the tweet text.
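The architecture the abstract describes — an LSTM over the tweet's tokens, with user metadata concatenated in as auxiliary input before the final classifier — can be sketched in a few lines of numpy. Everything here (dimensions, weights, the single-cell LSTM, the logistic head) is illustrative, not the paper's actual model or hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative dimensions -- not the paper's actual hyperparameters.
EMB, HID, META = 8, 16, 4   # token embedding, LSTM hidden, metadata size

# Randomly initialized weights: one LSTM cell (input, forget, cell,
# output gates stacked) plus a logistic head over [hidden; metadata].
W = rng.normal(0, 0.1, (4 * HID, EMB + HID))
b = np.zeros(4 * HID)
W_out = rng.normal(0, 0.1, (HID + META,))
b_out = 0.0

def lstm_step(x, h, c):
    """One LSTM cell step over a single token embedding x."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def bot_score(token_embs, metadata):
    """Run the tweet's tokens through the LSTM, then append user
    metadata as auxiliary input to the final classifier."""
    h, c = np.zeros(HID), np.zeros(HID)
    for x in token_embs:
        h, c = lstm_step(x, h, c)
    features = np.concatenate([h, metadata])   # contextual fusion
    return sigmoid(features @ W_out + b_out)   # P(bot) in (0, 1)

tweet = rng.normal(size=(12, EMB))   # 12 fake "token embeddings"
meta = rng.normal(size=META)         # e.g. account age, follower ratio
print(float(bot_score(tweet, meta)))
```

With untrained random weights the score is meaningless, of course; the point is only the data flow: text through the recurrent net, metadata joined at the classification layer.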
NO COLLUSION! more than 1 time in a thought.. (Score:1)
That should be a pretty high ranking flag in the algorithm seed data.
Overkill (Score:2)
You don't need deep neural networks when this will do:
egrep 'MAGA|NO COLLUSION|FAKE NEWS|LIBTARD' > /russian_bots.txt
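One pitfall with alternation, for anyone actually trying this: a stray empty branch (`||`) matches the empty string, and therefore every line. A quick check with Python's `re` module:

```python
import re

buggy = r'MAGA|NO COLLUSION||FAKE NEWS|LIBTARD'   # empty branch between ||
fixed = r'MAGA|NO COLLUSION|FAKE NEWS|LIBTARD'

line = "just talking about the weather"
print(bool(re.search(buggy, line)))   # True -- empty alternative matches anything
print(bool(re.search(fixed, line)))   # False
```

So the double-pipe version would file every tweet, weather chat included, under `/russian_bots.txt`.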
Wew (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
Well...not exactly back to normal. The faker bots will improve their ability to fool people into thinking they're real.
Also, the intent is probably impossible for even a superhuman AI to determine (except by judging something like volume of posts, which ordinary recipients don't have access to). A Twitter post often doesn't contain enough information to decide whether it was posted by a human or by a bot. As the faker bots improve, they'll be able to handle longer segments of connected text, and possibly more.
Re: (Score:3)
If you read the actual paper you'd know exactly how much confidence one can place in it. (Hint: it's extremely high.) 96% from a single tweet's text alone, rising to over 99% once network, metadata, and other factors are taken into account.
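A back-of-envelope way to see how per-tweet accuracy could climb once more evidence is pooled: if each read were an independent 96%-accurate coin flip (an unrealistic assumption, since errors on one account's tweets are surely correlated), a majority vote over a handful of tweets already pushes past 99%:

```python
from math import comb

def majority_vote_acc(p, k):
    """P(majority of k independent reads is correct), per-read accuracy p."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

print(round(majority_vote_acc(0.96, 1), 4))   # → 0.96
print(round(majority_vote_acc(0.96, 5), 4))   # → 0.9994
```

This is only an illustration of evidence aggregation, not the paper's method; the paper's 99%+ figure comes from adding metadata and network features, not from voting over tweets.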
Re: (Score:2)
We should distrust it completely, as the paper gives no examples of any of the tweets or accounts they classified as being "bots". None whatsoever. Lots and lots of stats about their model and many implausible claims of it being perfect, but nothing that could be used to actually verify their claims.
Indeed their claims are completely implausible. Extraordinary claims require extraordinary evidence and they provide none.
Re: (Score:2)
Or alternatively you could have read the paper and seen that it used the Cresci/De Pietro/Petrocchi et al. dataset, which is publicly available and has been for a while now.
Re: (Score:1)
Interesting...
Hmm (Score:3)
How does this resolve the case of my political uncle posting extreme ideas every week or two?
Anyone outside the family would rightfully think he's a bot. He isn't, he's just that uncle.
The first amendment protections required for a system like this would make it far too cumbersome for practical use. Yeah, Twitter is proving the opposite case with their manual interventions, but there must be a middle ground.
Re: (Score:1, Troll)
Re: (Score:2)
Re: (Score:2)
How does this resolve the case of my political uncle posting extreme ideas every week or two?
Anyone outside the family would rightfully think he's a bot. He isn't, he's just that uncle.
The first amendment protections required for a system like this would make it far too cumbersome for practical use. Yeah, Twitter is proving the opposite case with their manual interventions, but there must be a middle ground.
This is all about squelching unapproved opinions. Can't have people (or bots) "disparaging" Hillary Clinton, for example. We indict people for that now.
End result- really good bots. (Score:1)
Adversarial neural networks improve the quality of both networks.
The detector will get better and the fake will get better. Quickly.
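The arms-race dynamic can be caricatured in a few lines: a detector repeatedly picks the best threshold between "real" and "fake" score distributions, and the faker nudges its output toward the real distribution each round. A toy sketch, nothing like an actual GAN training loop:

```python
real_mean = 3.0      # stand-in for how genuine users "sound"
fake_mean = 0.0      # the bot starts out easy to spot
step = 0.2           # how fast the faker adapts each round

for _ in range(50):
    # Detector: with two unit-variance clusters, the best single
    # threshold sits midway between the two means.
    threshold = (real_mean + fake_mean) / 2
    # Faker: move toward the real distribution, shrinking the gap
    # the detector has to work with.
    fake_mean += step * (real_mean - fake_mean)

print(round(threshold, 3), abs(real_mean - fake_mean))
```

After a few dozen rounds the gap is essentially zero: the detector's threshold still exists, but it no longer separates anything. Quickly, as the parent says.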
cowboyneal exposed (Score:2)
Re: (Score:2)
Re: (Score:2)
Deep Neural Networks for Bot Detection Evasion (Score:2)
Easy, isn't it?