Comment Re:AI moderation... what are the alternatives? (Score 1)
Meta can use AI to filter and look for patterns, but it must have humans review the output before reports go to law enforcement; otherwise it is firehosing the cops' review process with garbage, effectively DoSing them.
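A minimal sketch of that filter-then-review pipeline, in Python (everything here is hypothetical, names and thresholds included; the point is just that nothing reaches law enforcement without a human sign-off):

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Flag:
        content_id: str
        ai_score: float  # hypothetical classifier confidence, 0.0-1.0

    def triage(flags: list[Flag],
               human_review: Callable[[Flag], bool],
               review_threshold: float = 0.5) -> list[Flag]:
        """Forward only items that clear BOTH the AI threshold and a
        human reviewer; everything else stays out of the cops' inbox."""
        queued = (f for f in flags if f.ai_score >= review_threshold)
        return [f for f in queued if human_review(f)]

    # Toy usage: the human reviewer rejects most of what the AI flags.
    flags = [Flag("a", 0.9), Flag("b", 0.6), Flag("c", 0.1)]
    print(triage(flags, human_review=lambda f: f.ai_score > 0.7))
    # -> only Flag("a", 0.9) survives both gates

The cheap version is to skip the second gate and dump the raw AI output on the police, which is exactly the firehose problem.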
And I wouldn't put it past Meta senior management to exercise a degree of malicious compliance in this behaviour.
Look at it from the cops' side: each of the reports sent by Meta has to be reviewed by at least one person.
And the cops' web review teams don't have $22 billion net income per quarter.
What proportion of the 60 billion artifacts are shared within a small circle of family, friends & work colleagues?
If the exponential social graph makes filtering intractable, then Meta have the technical capability to make it harder for people to create links when the recipient doesn't have a good idea of what they're opening themselves up to.
E.g. recipients could opt in to have incoming communications from strangers vetted by a group; or choose to accept only text or audio from an unknown person, and only web links that point to a public group.
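A rough sketch of what such an opt-in policy could look like as a data structure (entirely made up; the field names are mine, not anything Meta ships):

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Channel(Enum):
        TEXT = auto()
        AUDIO = auto()
        IMAGE = auto()
        WEB_LINK = auto()

    @dataclass
    class StrangerPolicy:
        """What a recipient accepts from senders outside their contacts."""
        require_group_vetting: bool = True  # first contact screened by a trusted group
        allowed_channels: set = field(
            default_factory=lambda: {Channel.TEXT, Channel.AUDIO, Channel.WEB_LINK})
        links_only_to_public_groups: bool = True

    def permits(policy: StrangerPolicy, channel: Channel,
                link_is_public_group: bool = False) -> bool:
        if channel not in policy.allowed_channels:
            return False  # e.g. no images from strangers
        if channel is Channel.WEB_LINK and policy.links_only_to_public_groups:
            return link_is_public_group  # raw links only if they go to a public group
        return True

The defaults do the work: a stranger can send you text, but a link to some random site bounces unless you've loosened the policy yourself.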
There's plenty of history showing that big tech could reduce malicious scams, malware and criminal content from advertisers, yet they have under-resourced their human reporting tools and reporting lines, and act on the reports they do get with low effectiveness.
The techbro AIs are simultaneously "approaching superintelligence" and incapable of spotting scam ads matching the text pattern "<X> don't want you to know about <Y> that gives <Z> gains". It's in the corporate interest to wear the dunce cap and profit from inaction.
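For the sake of argument, a template that loose is one regex away. A toy sketch (obviously nothing like a production classifier, and the wildcards are my stand-ins for the variable parts):

    import re

    # Toy match for the placeholder-style scam template quoted above.
    SCAM_TEMPLATE = re.compile(
        r"don'?t want you to know about .{1,80}? that gives .{1,20}? gains",
        re.IGNORECASE,
    )

    ad = "Bankers DON'T want you to know about this one trick that gives 900% gains!"
    print(bool(SCAM_TEMPLATE.search(ad)))  # True

If a dozen lines of stdlib can catch the template, "our AI can't spot it" isn't a capability gap, it's a choice.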
Even legit advertisers lose out, as sane people have a sensible distrust of the unvetted garbage.
"Internally, Meta estimates that users across its apps in total encounter 15 billion “high risk” scam ads a day. That’s on top of 22 billion organic scam attempts that Meta users are exposed to daily, a 2024 document showed. Last year, the company projected that about $16 billion, which represents about 10 percent of its revenue, would come from scam ads. "
"Meanwhile, Meta acknowledged in documents that its systems helped scammers target users most likely to click on their ads."
https://arstechnica.com/tech-p...
"When it comes to Meta, neither Facebook nor Instagram appear to provide a user-friendly and easily accessible ‘Notice and Action' mechanism for users to flag illegal content, such as child sexual abuse material and terrorist content. The mechanisms that Meta currently applies seems to impose several unnecessary steps and additional demands on users. In addition, both Facebook and Instagram appear to use so-called ‘dark patterns', or deceptive interface designs, when it comes to the 'Notice and Action' mechanisms.
Such practices can be confusing and dissuading. Meta's mechanisms to flag and remove illegal content may therefore be ineffective."
https://ec.europa.eu/commissio...