New AI algorithm designed to spot online trolls

Researchers have designed new artificial intelligence (AI) algorithms that monitor online social media conversations as they evolve, which they predict could lead to an effective, automated way to spot online trolling. Preventing online harassment requires the rapid detection of offensive, harassing, and negative social media posts, which in turn requires monitoring online interactions.

Currently available methods for obtaining such social media data are either fully automated and not interpretable, or rely on a static set of keywords that can quickly become outdated; experts consider neither approach very effective. According to Maya Srikanth of the California Institute of Technology (Caltech) in the US: "It isn't scalable to have humans try to do this work by hand, and those humans are potentially biased. On the other hand, keyword searching suffers from the speed at which online conversations evolve. New terms crop up and old terms change meaning, so a keyword that was used sincerely one day might be meant sarcastically the next."

The team, which included Anima Anandkumar of Caltech, used the GloVe (Global Vectors for Word Representation) model to discover new and relevant keywords. GloVe is a word-embedding model: it represents words in a vector space, where the "distance" between two words is a measure of their linguistic or semantic similarity. This gives researchers a dynamic, ever-evolving keyword set to search. However, it is not enough to know that a conversation is related to the topic of interest; context matters.
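The article does not include the team's code, but the idea of an evolving keyword set can be sketched with off-the-shelf tools. The snippet below is a minimal illustration, assuming the gensim library and its pretrained "glove-wiki-gigaword-50" vectors as a stand-in for embeddings the researchers would train on current social media text: it expands a hypothetical seed keyword list with its nearest neighbors in the embedding space, so the search vocabulary can track how language drifts.

```python
# Sketch: keeping a keyword set current by expanding seed terms with
# their nearest neighbors in a GloVe embedding space.
# Assumptions: the gensim library is installed, and the pretrained
# "glove-wiki-gigaword-50" vectors stand in for embeddings trained
# on fresh social media text.
import gensim.downloader as api

# Load 50-dimensional GloVe vectors (downloaded on first use).
glove = api.load("glove-wiki-gigaword-50")

def expand_keywords(seeds, topn=5):
    """Return the seed terms plus their nearest embedding-space neighbors."""
    expanded = set(seeds)
    for word in seeds:
        if word in glove:  # skip out-of-vocabulary seeds
            # most_similar ranks vocabulary words by cosine similarity.
            expanded.update(w for w, _ in glove.most_similar(word, topn=topn))
    return sorted(expanded)

# Hypothetical seed list; rerunning on freshly trained vectors is what
# would keep the keyword set current as conversations evolve.
print(expand_keywords(["troll", "harassment"]))
```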

For that, GloVe shows the extent to which certain keywords are related, providing insight into how they are being used. For example, in a Reddit forum dedicated to misogyny, the word "female" was used in close association with the words "sexual," "negative," and "intercourse." "The field of AI research is becoming more inclusive, but there are always people who resist change," said Anandkumar. "Hopefully, the tools we're developing now will help fight all kinds of harassment in the future," she said.
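The associations above come from embeddings fitted to the forum's own posts, so they reflect local usage rather than general English. As a minimal sketch of that step, the snippet below trains word vectors on an invented toy corpus and reads off pairwise similarities; it uses gensim's Word2Vec trainer as a stand-in, since gensim does not ship a GloVe trainer, and the scores it prints are illustrative only.

```python
# Sketch: training word embeddings on a community's own text so that
# word associations mirror local usage. Word2Vec stands in for GloVe
# here (both are word-embedding methods); the tiny corpus is invented.
from gensim.models import Word2Vec

corpus = [
    ["female", "users", "receive", "negative", "comments"],
    ["female", "members", "described", "in", "sexual", "terms"],
    ["threads", "link", "female", "posters", "to", "intercourse"],
] * 50  # repeat so the toy model sees enough co-occurrences

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=1)

# Cosine similarity between "female" and the words the article highlights;
# on real forum data, high scores would flag the usage pattern described.
for other in ["sexual", "negative", "intercourse"]:
    print(other, round(float(model.wv.similarity("female", other)), 3))
```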

The research was presented on December 14 last year at the AI for Social Good workshop at the Conference on Neural Information Processing Systems in Vancouver, Canada.

