catchnews.com
Facebook has taken a lot of flak for its algorithmic approach to content flagging in the past, so it makes sense that the company would want to develop a system that can do the same job in a more sophisticated way. At the moment, the AI is 'in training': it is being shown news reports on terrorism alongside real terrorist propaganda so that it can learn to differentiate between the two.
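The training described above is, in essence, binary text classification: learn from labelled examples, then label new text. A minimal sketch of that idea is below, using a Naive Bayes word-count model; the labels and example texts are invented placeholders, and nothing here reflects Facebook's actual system or data.

```python
from collections import Counter
import math

# Invented toy examples standing in for labelled training text.
TRAIN = [
    ("report", "police arrested a suspect after the attack investigators said"),
    ("report", "officials condemned the attack and promised an investigation"),
    ("propaganda", "join our fight brothers and strike the enemy"),
    ("propaganda", "we call on followers to strike and spread our message"),
]

def train(examples):
    """Count word frequencies per label for a Naive Bayes model."""
    counts = {"report": Counter(), "propaganda": Counter()}
    for label, text in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Return the label with the higher smoothed log-likelihood."""
    vocab = set().union(*counts.values())
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(
            math.log((c[w] + 1) / (total + len(vocab)))  # add-one smoothing
            for w in text.split()
        )
    return max(scores, key=scores.get)

model = train(TRAIN)
print(classify(model, "investigators said police arrested a suspect"))  # report
```

A real system would use far richer features than raw word counts, which is exactly why the months-long training period mentioned later matters.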
As well as looking for terrorist threats, these AI watchdogs will allegedly help bolster the 'tailored content' approach that Facebook is still pushing, and vigilantly block fake news stories before they build any traction. In the letter Zuckerberg wrote outlining his AI plans, he also said that users will soon be asked questions from time to time to help personalise their feeds.
As far as the counter-terrorism side goes, it's a good idea, but a lot depends on how well the AI system really works. Telling the difference between a news report and propaganda is one thing, but if keywords really play such a big part in it, innocent people could easily be mistakenly put on a terrorist watch list, which wouldn't be much fun.
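The false-positive worry is easy to illustrate: a flag based on a bare word list cannot tell a report *about* an attack from a call *for* one. The watch-list words below are invented for illustration only.

```python
# Hypothetical watch-list; context-free matching treats all uses alike.
KEYWORDS = {"attack", "bomb", "strike"}

def keyword_flag(text):
    """Flag any text containing a watch-list word, regardless of context."""
    return any(word in KEYWORDS for word in text.lower().split())

news = "reporters covered the bomb attack and interviewed survivors"
propaganda = "we urge followers to attack"
print(keyword_flag(news), keyword_flag(propaganda))  # True True - both flagged
```

Both texts trip the filter, which is the scenario that could put an innocent journalist or commenter on a watch list.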
With that in mind, it's encouraging to read that Facebook is playing the long game on this one: the development process is liable to take months, or even over a year, before the system is ready to implement; the AI obviously has a big study schedule mapped out. Facebook is the largest global community on the internet, and while terrorist groups aren't stupid enough to openly discuss activity on it, they certainly still leave trails. Used right, this technology could save many lives.