Can AI Filter Hate Speech on Social Media?


Published December 18, 2018 · Posted by Justinas Danis

Reading Time: 3 minutes

If you spend any time on social media platforms, you know there are plenty of people spewing hate speech. Wouldn't it be a good idea to have an AI that could filter the patterns of hate speech on social media? It would spare web users real trauma and emotional suffering.

A recent announcement by Mark Zuckerberg has brought several discussions back into the debate: artificial intelligence that can filter toxic speech in the near future. So is it possible for AI to filter toxic conversation?

So far, AI does not have a good record of recognizing and restraining hateful speech, but is that changing? Let's take a look at AI's current capabilities and its chances of improvement.

Current AI Capabilities

Facebook already applies AI-powered filters against hate speech, such as pro-terrorism content, but is that the only way the company is using artificial intelligence?

Facebook also applies AI to facial recognition, advertising, and further AI developments.

But how will Facebook use AI in the future? If Zuckerberg's timeline is to be believed, machine learning will make further strides in the coming years. The people who write AI programs are subject to prejudice, and that prejudice gets manifested in the AI applications they build. So before machine learning can defeat human prejudice, Facebook must develop innovative programs that can recognize their own internal biases, and that is a real struggle.

David Attard, founder of Collectively, feels that bias in training data is an issue, but that it is not impossible to develop programs that do away with hate speech. He says that with a serious effort to build a good training set, it could take less than five years.
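What would such a program look like in practice? The usual starting point is a supervised text classifier trained on a labeled corpus. The sketch below is purely illustrative, not Facebook's system or Attard's proposal: the tiny training set is made up, and it assumes scikit-learn is installed.

```python
# Minimal sketch of a hate-speech classifier trained on a labeled
# corpus. Illustrative only; a real system would use a far larger
# dataset and a stronger model. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = hateful, 0 = benign.
train_texts = [
    "you people are worthless and should disappear",
    "go back to where you came from",
    "had a lovely walk in the park today",
    "congrats on the new job, well deserved",
]
train_labels = [1, 1, 0, 0]

# TF-IDF turns each post into a weighted bag-of-words vector;
# logistic regression learns which word patterns predict hate.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(),
)
classifier.fit(train_texts, train_labels)

# Score an unseen post; a production system would tune this threshold.
post = "you are worthless"
probability = classifier.predict_proba([post])[0][1]
if probability > 0.5:
    print(f"flagged ({probability:.2f}): {post}")
```

The quality of such a classifier depends almost entirely on the training set, which is exactly why Attard's point about investing in good training data matters.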

Overtones of Hate Speech

One main hurdle is that much of conversation depends on context and implication, and both are specific to the group using the language.

A sentence that does not bother one segment of the population can be spiteful to another. To overcome the real problem of hate speech, machine learning must identify the non-verbal cues and interpret the meaning behind them. Sifting through international moral and linguistic codes is no small task.
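The toy example below shows why context matters and why naive keyword blocklists fall short. The blocklist and sample posts are hypothetical, but the failure modes they expose, a benign post flagged and a hateful post missed, are exactly what a context-aware model has to fix.

```python
# Toy demonstration of why keyword blocklists miss context.
# The blocklist and example posts are hypothetical.
BLOCKLIST = {"trash"}

def keyword_filter(post: str) -> bool:
    """Flag a post if it contains any blocklisted word."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

posts = [
    "Don't forget to take out the trash tonight.",  # benign, yet flagged
    "People like you are trash.",                   # hateful, flagged
    "You lot don't belong here.",                   # hateful, yet missed
]
for post in posts:
    print(keyword_filter(post), post)
```

The same word is harmless in one post and abusive in another, and the most hateful post contains no blocklisted word at all, which is the core difficulty this section describes.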

Practice of Ethics in Artificial Intelligence

Digital Catapult, an Innovate UK-backed centre for digital innovation, has revealed plans to raise ethical standards in AI.

Advancements in AI filtering systems, and the eventual eradication of hate speech and hurtful media, would make for a better web experience for all. But there is still a long way to go.

Even if written words can eventually be filtered, AI must keep evolving to overcome other content such as photos and videos.

The best practice may simply be to remain ethical while developing AI programs.

Videos and images could continue to be hurdles, not only for Facebook but for other social media platforms as well. To completely overcome hate speech, there is an immense need for an evolving AI that can adapt to changing modes of communication.

If tech companies can attain an effective level of adaptability, several major problems can be resolved. Studies show that cyberbullying and its lasting effects on human emotion can lead to mental ailments such as depression and anxiety. Being able to filter out patterns of hateful communication on social media could therefore prevent emotional damage and long-term trauma.

Conclusion

The question now is whether AI will be the only filter against hateful communication in the coming years, and how programs against hateful content should be built. Much is still unknown, and Facebook has not given any solid plan or timeline for these advancements. But with more constructive discussion among programming groups and communities of Android app developers in Malmo and beyond, a solution to Zuckerberg's promise can be found.