Two Is Better Than One: How Automated & Human Monitoring Can Tackle Hate Speech
Note written by PeaceTech Lab fellow Musa Alhamdu
Online hate speech and incitement are on the rise around the world, and it has become increasingly clear to me that we need new approaches to tackle this massive problem. Monitoring hate speech shared online is not new to me personally -- given the scale and prevalence of the issue in Nigeria, I have spent years working to develop innovative solutions to this new-age problem. Unfortunately, the methods and means I was using in the past were neither quick nor efficient enough to keep up with the wildfire of hate speech. I knew there had to be something better.
In the past, while working at the Nigeria-based Women and Youths for Justice and Peace Initiative, I monitored online hate speech by randomly combing the web -- a process that was not only slow and time consuming but also laborious, as my team and I would spend hours at our monitors, searching the web for words and phrases we thought might incite violence or worsen conflict on the ground. This process produced inaccurate data and wasted time, as my team and I spent hours upon hours deliberating amongst ourselves over whether certain terms were indeed hate speech or not.
PeaceTech Lab’s Nigeria Lexicon changed all of that for me. Using the Lexicon, I was able to target my search for hate speech across Nigeria to specific terms the Lab had already done the muscle work of identifying, such as Aboki, Pdpigs, and Foolani. This saved a great deal of time and effort and led to distinctly more accurate data. During my time as a PeaceTech Fellow, I also learned to use new tools that made my searches quicker and more efficient, such as Data Miner, which allowed me to gather five years’ worth of data on hate speech trends. That made it possible for me to analyze the context in which the hate speech was used, how often it was used, and more. Using all of this, I understood much more clearly what the dynamics surrounding the use of these hate speech terms were, giving me a better idea of how to educate community members on limiting the use of these words across social media platforms.
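To give a sense of how a lexicon narrows the search, here is a minimal sketch of the underlying idea: instead of combing the web at random, you count how often known lexicon terms appear in collected posts. The term list below uses only the examples named in this note, and the sample posts are purely hypothetical -- this is not the Lab's actual tooling or data.

```python
import re
from collections import Counter

# Illustrative subset of lexicon terms (the examples mentioned above)
LEXICON = ["aboki", "pdpigs", "foolani"]

def count_lexicon_terms(posts, lexicon=LEXICON):
    """Count how often each lexicon term appears across a list of posts."""
    counts = Counter()
    for post in posts:
        text = post.lower()  # case-insensitive matching
        for term in lexicon:
            # Whole-word match so a term isn't counted inside another word
            n = len(re.findall(r"\b" + re.escape(term) + r"\b", text))
            if n:
                counts[term] += n
    return counts

# Hypothetical sample posts, for demonstration only
posts = [
    "Those pdpigs are at it again",
    "aboki should go home. ABOKI!",
]
print(count_lexicon_terms(posts))
```

A frequency count like this, run over posts gathered for each month or election period, is one simple way to surface the trends in context and volume described above.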
There are, of course, limits to what technology can do -- for example, I found that during times of heightened tension, such as the elections Nigeria just held, new words and phrases develop and spread faster than I could have imagined -- so quickly that an automated search might not pick them up. Active human monitoring of forums and social media platforms is still needed, especially across different dialects. And to really tackle this issue head on, we need to draw on all of these means -- both the traditional and the new -- so we can limit the use of these terms and craft counter-messages as we sensitize the public to the dangers of a simple word.
My time as a PeaceTech Lab Fellow has shown me that pairing human monitors with automated monitoring tools like Crimson Hexagon can provide really useful data without duplicating results. I also learned, as I write this from Nigeria, that we can use effective channels of communication to work with teammates in different parts of the world and achieve the results the world needs. Those results can then be shared, as they recently were at a workshop in South Sudan, or as I plan to do in Kano, where I’ll train a group of youth volunteers who monitor hate speech online to use Data Miner and the Lexicon for improved accuracy and efficiency.
The bottom line is that we need to fight hate speech with more speech -- better, more positive speech that will push us all in the right direction. And we need to learn and use all of the tools at our disposal to win the fight.