British newspaper The Guardian, citing a document leaked from Facebook headquarters, offers new details about how the social site is monitoring and dealing with news and posts related to violence, racism, and other topics.
“Saying ‘#stab and become the fear of the Zionist,’ for example, would be considered a credible threat — and Facebook moderators would be able to remove that particular content,” reports eMarketer. “But saying ‘kick a person with red hair’ or ‘let’s beat up fat kids’ is not considered a realistic threat of violence.”
It’s an issue that’s been much in the news, as content behemoths including Facebook (which pledged to hire 3,000 more content reviewers) and Google take a stronger stance on material that advertisers find antithetical to their marketing aims.
On the other hand, controlling too much of the content can turn off users and create dissatisfaction. That’s one reason why, for instance, “videos featuring violent deaths will be marked as disturbing, but will not always be deleted.” Mitigating factors can include recognition that the videos could raise awareness about mental illness.
“Advertisers are demanding more than what these platforms can currently provide,” said Ari Applbaum, vice president of marketing at video advertising platform AnyClip. “Until artificial intelligence solutions are robust enough to provide 100 percent assurance, manual screening of content is replacing AI, and it’s not sustainable in the long run.”
Frankly, it’s hard to guarantee user engagement while providing 100 percent brand safety — after all, users are creating much of the content themselves.
“Every brand has their specific set of criteria in terms of their own limits and thresholds,” said Marc Goldberg, the CEO of Trust Metrics, a publisher verification firm. “I don’t think this leak will impact Facebook’s business, but it will introduce new conversations around specific concerns and whether the company is doing enough for brands.”