Facebook's Moderation Guidelines Have Leaked and People Aren't Happy

Facebook's moderation and censorship policies have been the subject of much debate in recent months. Whether it's their failure to curtail the fake news behemoth, the spate of violent and disturbing content circulated via Facebook Live, or the numerous instances in which they've taken down perfectly innocent, even educational, content, none of it has done their credibility any favours.

Past controversy pales in comparison to this latest development. Over the weekend, The Guardian obtained and published a set of guidelines from Facebook's moderation handbook, as part of a wider investigation into the platform's ethics. You can read the full list here, but here are a few of the most worrying highlights:

"Remarks such as “Someone shoot Trump” should be deleted, because as a head of state he is in a protected category. But it can be permissible to say: “To snap a b***’s neck, make sure to apply all your pressure to the middle of her throat”, or “f*** off and die” because they are not regarded as credible threats."

"Some photos of non-sexual physical abuse and bullying of children do not have to be deleted or “actioned” unless there is a sadistic or celebratory element."

"Facebook will allow people to livestream attempts to self-harm because it “doesn’t want to censor or punish people in distress”."

Combined with revelations about Facebook's leniency towards animal cruelty and revenge porn, it paints a rather upsetting picture. Seemingly the platform has an almost zero-tolerance attitude towards nudity (up to and including 'digital art') but is more than happy to let violent content circulate freely, with their only concession being to mark the worst of it as 'disturbing'. As you might expect, these revelations are making people angry.

There were already calls for the platform to submit to independent regulation, and they're even louder now. Facebook's only real response has been a reiteration that they're bringing on 3,000 more moderators over the coming months, and that they're still looking at ways to refine their machine learning technology to improve moderation. That response is a far cry from encouraging, given how many prior moderation failures have been down to the AI watchdogs either missing content or mistakenly flagging it.

Now we know that Facebook have drawn grey areas in moderation categories that are pretty much black and white. It's hard to imagine any circumstance in which a video showing child abuse would be permissible, but Facebook don't want to take that risk, it would seem, for fear of their global sharing figures taking a hit.

The tepid response suggests that Facebook have no plans to alter their policies, which makes the whole thing even more disturbing: it demonstrates that Facebook are more concerned with protecting their own interests than with revising rules which not only acknowledge that cruel and disturbing content is being shared on their platform, but actively allow it to continue.
