Two Hat Description
Two Hat trains a custom neural network to triage reported content. For years, social networks have relied on users to report abuse, hate speech, and other online harms. Moderation teams receive every report and review each one individually; many platforms handle thousands of reports every day, most of which can be closed without any action. Meanwhile, time-sensitive reports, such as suicide threats, violence, and child abuse, risk being overlooked or not reviewed until it's too late. There are also legal implications: NetzDG, a German law, requires platforms to remove hate speech and illegal content within 24 hours or face fines of up to €50 million, and similar laws on reported content are being introduced in France, Australia, and the UK. Two Hat's Predictive Moderation product lets platforms train an AI model on the consistent decisions of their own moderation team.
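The core idea, training a classifier on past moderator decisions and using its scores to surface likely-actionable reports first, can be sketched in a few lines. This is an illustrative toy only, not Two Hat's actual model: the `HISTORY` data, the two labels (`action` / `no_action`), and the naive Bayes scoring are all assumptions chosen to keep the example self-contained; a production system would use a neural network trained on far larger data, as the description states.

```python
from collections import Counter
import math

# Hypothetical training data: past moderator decisions on reported content.
# The labels mirror the two outcomes described above: most reports are
# closed with no action, while some require intervention.
HISTORY = [
    ("you are so stupid get out", "action"),
    ("i will hurt you tomorrow", "action"),
    ("nice play everyone gg", "no_action"),
    ("this match was fun thanks", "no_action"),
    ("go hurt yourself nobody likes you", "action"),
    ("thanks for the trade", "no_action"),
]

def tokenize(text):
    return text.lower().split()

def train(history):
    """Count words per label (naive Bayes with add-one smoothing later)."""
    counts = {"action": Counter(), "no_action": Counter()}
    label_totals = Counter()
    for text, label in history:
        label_totals[label] += 1
        counts[label].update(tokenize(text))
    return counts, label_totals

def score_action(text, counts, label_totals):
    """Return P(action | text) under a naive Bayes model."""
    vocab = set(counts["action"]) | set(counts["no_action"])
    total_docs = sum(label_totals.values())
    log_probs = {}
    for label in counts:
        lp = math.log(label_totals[label] / total_docs)
        denom = sum(counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            lp += math.log((counts[label][tok] + 1) / denom)
        log_probs[label] = lp
    # Convert log scores to a probability for the "action" label.
    m = max(log_probs.values())
    exp = {k: math.exp(v - m) for k, v in log_probs.items()}
    return exp["action"] / sum(exp.values())

def triage(reports, counts, label_totals):
    """Sort the report queue so likely-actionable reports surface first."""
    scored = [(score_action(r, counts, label_totals), r) for r in reports]
    return sorted(scored, reverse=True)

counts, label_totals = train(HISTORY)
queue = ["gg thanks all", "i will hurt you after school"]
for score, report in triage(queue, counts, label_totals):
    print(f"{score:.2f}  {report}")
```

In this sketch the threat report rises to the top of the queue while the benign report sinks, which is the triage behavior the product description is getting at: moderators see the highest-risk reports first instead of wading through the bulk that will be closed without action.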