
The role of AI in content moderation

Posted February 20, 2020 - Updated November 19, 2021

Content moderation is no longer solely the concern of social media giants. The proliferation of user-generated content means that every company with an online presence needs content moderators at work keeping its customers safe and its brand’s reputation intact.

Content moderation is now a key component of great customer experience, but employing thousands of people around the world to review user-generated content can be challenging. That is why many brands are considering the role that technology, and particularly artificial intelligence (AI), can play.

Next-gen tech brings with it many advantages, not the least of which is cost savings. But it’s also limited in some important ways that companies should consider before implementing an AI-based content moderation solution.

AI content moderation advantages

According to the World Economic Forum, by 2025 an estimated 463 exabytes of data will be created each day – the equivalent of more than 212 million DVDs. No matter the size or skill of your content moderation team, the sheer quantity of user-generated content makes it difficult for humans to keep pace. There are simply not enough hours in the day!

AI, on the other hand, can identify large quantities of inappropriate content across multiple channels in near real time. The sheer size and scale of the data AI can interpret is its biggest benefit for content moderation, and there are certain categories of material it truly excels at detecting. For instance, PC Mag notes that content moderation algorithms have been built to successfully detect 99.9% of spam and 99.3% of terrorist propaganda, saving moderators significant time and effort.

Of course, the technology is not yet perfect, and some types of content remain stubbornly difficult to detect. This is why using AI as an aid for human agents, rather than a replacement, is ideal. For example, an AI algorithm can compute a confidence score for material it suspects violates policy but cannot classify with certainty. Content falling within a given confidence range can then be flagged for human review to ensure it adheres to company standards.
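
To make that concrete, here is a minimal Python sketch of confidence-based routing. The classify function, threshold values and labels are illustrative assumptions, not a reference to any particular product:

    # Minimal sketch of confidence-based routing for moderation decisions.
    # Assumes a hypothetical classify() that returns the model's estimated
    # probability that content violates policy; thresholds are illustrative.

    AUTO_REMOVE = 0.95   # confident enough to act automatically
    HUMAN_REVIEW = 0.60  # uncertain: route to a human moderator

    def route(content: str, classify) -> str:
        score = classify(content)  # probability the content violates policy
        if score >= AUTO_REMOVE:
            return "remove"        # high confidence: remove automatically
        if score >= HUMAN_REVIEW:
            return "review"        # mid confidence: flag for human review
        return "approve"           # low confidence of violation: leave up

    # Example with a stand-in classifier:
    # route("free crypto, click here!!!", classify=lambda text: 0.97) -> "remove"

In practice, teams tune the two thresholds to balance moderator workload against the risk of harmful content slipping through.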

Areas for AI improvement

Speech analytics is a persistent challenge for AI. Algorithms are very good at recognizing the words in speech, converting audio to text and detecting spam-related messages, but as PC Mag writes, “they fall apart when they’re tasked with detecting hate speech and harassment.”

Indeed, AI can miss content that should be flagged (a false negative), and it can also incorrectly flag content that is harmless (a false positive). These errors are especially common in speech analytics, given the continued evolution of natural human language. Not only is context difficult for an algorithm to understand, but there are also cultural and regional nuances that must be considered.
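
As a rough illustration of how teams can quantify these two kinds of error, the Python sketch below computes false positive and false negative rates against human-reviewed ground truth; the data format is an assumption made for the example:

    # Minimal sketch of measuring a moderation model's error rates against
    # human-reviewed ground truth. The inputs and example data are illustrative.

    def error_rates(predictions, ground_truth):
        """predictions/ground_truth: lists of booleans (True = violating)."""
        fp = fn = tp = tn = 0
        for pred, actual in zip(predictions, ground_truth):
            if pred and not actual:
                fp += 1  # false positive: harmless content incorrectly flagged
            elif not pred and actual:
                fn += 1  # false negative: violating content the model missed
            elif pred and actual:
                tp += 1
            else:
                tn += 1
        return {
            "false_positive_rate": fp / max(fp + tn, 1),
            "false_negative_rate": fn / max(fn + tp, 1),
        }

    # e.g. error_rates([True, False, True], [False, False, True])
    # -> {'false_positive_rate': 0.5, 'false_negative_rate': 0.0}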

AI also has a less-than-stellar record with images of adult nudity and sexual activity, drugs and firearms. That’s because creating an artificial neural network that detects a given kind of content can require years of training, with millions of labeled examples fed into the program before it becomes effective. There is also a level of subjectivity that AI cannot yet ‘understand’: it can identify nudity, but it cannot distinguish lewd imagery from Renaissance art.
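
For a sense of what that training involves, here is a minimal, hypothetical fine-tuning step using PyTorch and a pretrained image model. The label scheme and hyperparameters are placeholders, not a description of any production system:

    # Minimal sketch of fine-tuning a pretrained image classifier for a
    # binary moderation label. Assumes PyTorch/torchvision; the labels
    # and hyperparameters are hypothetical placeholders.

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)  # allowed vs. violating

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(images, labels):
        # images: (N, 3, 224, 224) tensor; labels: (N,) tensor of 0/1
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # In a real system this step runs over millions of labeled examples,
    # which is where the years of effort the article describes go.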

A high-tech, high-touch approach is needed

For most brands launching a content moderation operation, it pays to err on the side of caution. While AI has advanced significantly in recent years, it is not going to take away human jobs anytime soon. Human oversight remains a critical component of accurately monitoring online content.

“At best, what we have and what we’ll continue to have is a hybrid. But over the past few years, all I’ve seen is an increase in hiring not a decrease,” UCLA professor Sarah T. Roberts recently told The Verge. A successful content moderation strategy pairs the best of next-gen technology with the judgment of human agents.

