Facebook's AI Monitoring Expands to Suicide Prevention

We've already reported on how Facebook are using AI-fuelled pattern recognition to spot signs of terrorism, and in the statement made by Zuckerberg at the time, suicide and bullying prevention were also mentioned. As far as suicide prevention goes, it looks like inroads are already being made, as more details have been released about how the proposed system will work.

In this case, the algorithms are being designed to home in on particular choices of words and phrasing, as well as the kinds of comments the posts are receiving. Once a particular post has been flagged as a suicide risk, it's placed in the hands of a human team for further evaluation and action. With that in mind, you would almost hope that the algorithms will be pretty overzealous; better to falsely flag a bunch of posts than to miss a real one.

It goes beyond this, though. Facebook are also developing ways to identify when streamers on Facebook Live might be expressing suicidal thoughts. The aim is to be able to reach out to such people while the broadcast is going on. This doesn't mean cutting the stream off, but rather trying to engage with the streamer as they're still broadcasting, in the hope of talking them back around.

Facebook's algorithms have come under extensive criticism in the past, mostly for mistakenly blacklisting content that should have stayed up, but also occasionally for leaving up content which certainly should have been taken down. In both cases, the content never got anywhere near a human moderator, but here it will. Banning a photograph of a naked statue is a very different affair to potentially preventing someone from killing themselves; it simply couldn't be done without actual people being involved.

To some extent, Facebook already laid the groundwork for this on Instagram, where last year they introduced a system allowing users to report posts which they believe imply self-harm or suicidal thoughts. The reported user then receives a generic message asking if they would like to receive help. In that instance, the Facebook team merely get the ball rolling; there's no direct human intervention, which could be the key difference between an offer of help being ignored or accepted.

Evidence of an impending suicide appearing on Facebook is, sadly, far from unheard of. In the past few months, there have been two cases of young girls hanging themselves during a livestream: 12-year-old Katelyn Nicole Davis on December 30th, and then 14-year-old Naika Venant on January 23rd. In both cases, the victims had discussed life difficulties and even suicide online prior to the livestream.
