Instagram Have Introduced Machine Learning Filters to Moderate Comments

Digital Trends
Instagram are taking steps to keep comments clean and, wouldn't you know it, they're relying on AI to get the job done. On Thursday, they announced in a blog post that two new machine learning filters were being introduced to better manage the content of comments. The first is designed to seek out and remove offensive comments before they go public, whilst the second targets spam.

Previously, Instagram's comment filters could only recognise offensive language that appeared on a predetermined list, and although users could add their own words and phrases to the list (applicable only to comments on their own posts), it was a pretty basic system. With this upgrade, the filters will continue to learn new offensive words and phrases, and will identify what's offensive from context as well as from the language itself.
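The older, list-based approach can be pictured as a simple word match. The sketch below is purely illustrative (the function and list names are hypothetical, not Instagram's code): a global blocklist plus per-user custom additions, with no notion of context.

```python
# Hypothetical sketch of list-based comment filtering: a predetermined
# blocklist plus per-user custom words. Names and words are illustrative.
DEFAULT_BLOCKLIST = {"badword", "slur"}

def is_blocked(comment: str, custom_words=frozenset()) -> bool:
    """Return True if the comment contains any blocked word."""
    words = set(comment.lower().split())
    return bool(words & (DEFAULT_BLOCKLIST | set(custom_words)))

print(is_blocked("this is a badword example"))        # True
print(is_blocked("perfectly fine comment"))           # False
print(is_blocked("spam ahead", custom_words={"spam"}))  # True
```

The obvious weakness, and the reason for the upgrade described above, is that a pure word match cannot tell an offensive use of a term from an innocent one.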

The moderation also acts as a kind of closed circuit for the perpetrators: if someone's comment is removed, they'll still be able to see it, but nobody else will. Commenters will therefore have no idea that their comment has been removed, and will simply think that nobody is acknowledging it. Twitter have been experimenting with a similar method for a while now.
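That visibility rule is easy to express as a filter over a comment thread. The snippet below is a toy sketch under assumed data shapes (not Instagram's actual implementation): a removed comment stays visible to its author and is hidden from everyone else.

```python
# Illustrative sketch of "removed but still visible to the author".
# The comment structure here is an assumption for the example.
def visible_comments(comments, viewer):
    """comments: dicts with 'author', 'text', 'removed' keys."""
    return [c for c in comments
            if not c["removed"] or c["author"] == viewer]

thread = [
    {"author": "alice", "text": "nice photo", "removed": False},
    {"author": "bob",   "text": "offensive",  "removed": True},
]
print([c["author"] for c in visible_comments(thread, "bob")])    # ['alice', 'bob']
print([c["author"] for c in visible_comments(thread, "carol")])  # ['alice']
```

From bob's point of view nothing has changed, which is exactly the closed-circuit effect the article describes.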

Additionally, the filter is being changed from an opt-in feature to a default one, something which may spark a certain amount of controversy. It can still be switched off from within settings, and the custom blocking service is still available, but for obvious reasons it can't operate using the new, more sophisticated system. Currently the system only works in English, but Instagram are in the process of translating it into other languages.

The spam filter has actually been up and running in a more simplified guise for about eight months now with little fanfare. To improve it, Instagram tasked a human team with sifting through reams of spam comments to create a comprehensive database of key words, phrases and other tells. Instagram have yet to find an effective way of dealing with spam accounts, but this is certainly a step in the right direction.

With offensive comments, one of the main aims is to reduce false positives. Both Instagram and Facebook have been called out in the past for flagging innocent content as inappropriate, and in many cases that's been because the algorithms weren't clever enough to differentiate between something genuinely offensive and something which merely contains a term that would be offensive in another context. The same applies to images. For this reason, the new AI watchdog is built to have a 1% margin of error, but we won't know exactly how aggressive or lax the system really is until it launches properly.
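One common way to keep false positives down, sketched below, is to act only when a classifier's confidence clears a high threshold. The scores and the 0.99 cutoff here are hypothetical, chosen to mirror the stated 1% margin of error, and this is not a claim about how Instagram's system is tuned.

```python
# Toy sketch of confidence-gated moderation. The threshold is an
# assumption for illustration, not Instagram's actual setting.
def moderate(comment_score: float, threshold: float = 0.99) -> str:
    """Hide a comment only when the model is very confident it's offensive."""
    return "hide" if comment_score >= threshold else "keep"

print(moderate(0.999))  # hide: the model is near-certain
print(moderate(0.60))   # keep: ambiguous, so avoid a false positive
```

Raising the threshold trades missed offensive comments for fewer innocent ones being flagged, which is exactly the balance the article says won't be visible until launch.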
