
How self-agreement can improve your training data

Posted January 1, 2021 - Updated August 21, 2023

Finding, creating and annotating training data is one of the most intricate and painstaking tasks in machine learning (ML) model development. Many crowdsourced data annotation solutions employ inter-annotator agreement checks to make sure their labeling team understands the tasks well and is performing up to the client’s standards. However, some studies have shown that self-agreement checks are as important as, or even more important than, inter-annotator agreement when evaluating your annotation team for quality.

In this article, we will explain what self-agreement is and introduce a machine learning study where self-agreement checks were crucial to the quality of the team’s training data and the accuracy of their model.

What is self-agreement in machine learning?

Simply put, self-agreement is a quality assurance (QA) protocol you can use in data annotation to evaluate the abilities of individual annotators. Whereas inter-annotator agreement protocols check to see if two or more annotators agree with each other, self-agreement checks whether or not a single annotator is consistent in their own annotations.

For example, a simple inter-annotator agreement workflow would be to send two separate annotators the same piece of data. Then, you would check to see if their annotations are the same. If they are not the same, you could then bring that piece of data to a supervisor to make a ruling on which annotation is correct.
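As a rough illustration, a minimal version of that workflow might look like the Python sketch below. The data structures and the `escalate_to_supervisor` callback are hypothetical placeholders for illustration, not part of any particular annotation platform.

```python
def inter_annotator_check(labels_by_annotator, escalate_to_supervisor):
    """Compare two annotators' labels item by item and escalate disagreements.

    `labels_by_annotator` maps annotator id -> {item id -> label} and is
    assumed to contain exactly two annotators. Returns the fraction of
    shared items on which the two annotators agree.
    """
    (a_id, a_labels), (b_id, b_labels) = labels_by_annotator.items()
    shared_items = set(a_labels) & set(b_labels)
    agreements = 0
    for item in shared_items:
        if a_labels[item] == b_labels[item]:
            agreements += 1
        else:
            # A supervisor rules on which of the conflicting labels is correct.
            escalate_to_supervisor(item, {a_id: a_labels[item], b_id: b_labels[item]})
    return agreements / len(shared_items) if shared_items else 0.0


# Example usage: collect the disputed items for supervisor review.
disputed = []
rate = inter_annotator_check(
    {"annotator_a": {"img_001": "cat", "img_002": "dog"},
     "annotator_b": {"img_001": "cat", "img_002": "cat"}},
    escalate_to_supervisor=lambda item, labels: disputed.append((item, labels)))
print(rate, disputed)  # 0.5 [('img_002', {'annotator_a': 'dog', 'annotator_b': 'cat'})]
```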

On the other hand, with self-agreement protocols, you send the same annotator the same piece of data twice to see if they provide the same label both times. For example, if they are tasked with annotating one hundred images, you could set image 1 and image 35 to be the same image, evaluate the result and repeat this process throughout the project. Theoretically, you could send an annotator the same data more than twice, but the effect diminishes as the annotator starts to recognize that they’ve seen the data point before.
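A minimal sketch of this duplicate-injection approach could look like the following; the function names and queue structure are assumptions for illustration only.

```python
import random

def build_queue_with_duplicates(items, n_duplicates=5, seed=42):
    """Return an annotation queue (list of item ids) with a few items repeated."""
    rng = random.Random(seed)
    queue = list(items)
    for item in rng.sample(items, min(n_duplicates, len(items))):
        # Re-insert the item somewhere in the later half of the queue, so the
        # repeat is less likely to sit right next to its first occurrence.
        queue.insert(rng.randrange(len(queue) // 2, len(queue) + 1), item)
    return queue


def self_agreement(labels):
    """Fraction of repeated items that received the same label both times.

    `labels` is a list of (item_id, label) pairs in annotation order.
    """
    first_label = {}
    consistent = repeats = 0
    for item_id, label in labels:
        if item_id in first_label:
            repeats += 1
            if label == first_label[item_id]:
                consistent += 1
        else:
            first_label[item_id] = label
    return consistent / repeats if repeats else 1.0


# Example: the repeated image was labeled consistently, so the score is 1.0.
print(self_agreement([("img_001", "cat"), ("img_035", "dog"), ("img_001", "cat")]))
```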

Why use self-agreement in your data annotation workflow?

The point of self-agreement is to evaluate the abilities of the annotator and make sure that they are annotating each piece of data correctly, and not simply rushing through the project to get it done as quickly as possible. Furthermore, a 2016 study provides concrete evidence that self-agreement checks can help weed out low-quality annotators and improve the quality of your dataset.

Self-agreement tests can improve data quality

In a 2016 sentiment analysis study by researchers from the Jozef Stefan Institute, the team found that the quality of human annotators could play a larger role in the accuracy of the model than the type of model itself.

The team’s goal was to create a sentiment classifier for tweets in multiple languages, so they analyzed 1.6 million tweets in 13 different languages. These tweets were all labeled for sentiment by human annotators. Ultimately, the researchers said: “Our main conclusion is that the choice of a particular classifier type is not so important, but that the training data has a major impact on the results.”

Many of the team’s insights concerned the relationship between inter-annotator agreement and self-agreement, and how those values relate to the quality of the data. First, they found that self-agreement will almost always be higher than inter-annotator agreement.

If you have low self-agreement, you will have even lower inter-annotator agreement in most, if not all, cases. As a result, self-agreement tests may be an easier and quicker way to track the overall quality of your dataset by analyzing the performance of your annotators.

For example, if you are aiming for an inter-annotator agreement Alpha of 0.6, but the self-agreement levels of most of your annotators are at 0.4, chances are you aren’t going to hit the inter-annotator agreement Alpha you were hoping for. Therefore, you may want to focus on raising self-agreement above your desired levels before proceeding with inter-annotator agreement checks.
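The Alpha referenced here is Krippendorff’s Alpha, the agreement measure the study reports. As a rough reference point, a minimal hand-rolled version for nominal labels could look like the sketch below. This is for illustration only and is not the study’s exact computation (which may use a different distance for the ordered sentiment classes); for real projects, a maintained statistics library is a safer choice.

```python
import itertools
from collections import Counter, defaultdict

def nominal_alpha(labels_by_annotator):
    """Krippendorff's Alpha for nominal labels.

    `labels_by_annotator` maps annotator id -> {item id -> label}.
    Items carrying fewer than two labels are ignored, as usual for Alpha.
    """
    # Collect all labels given to each item.
    values_by_item = defaultdict(list)
    for labels in labels_by_annotator.values():
        for item, label in labels.items():
            values_by_item[item].append(label)

    # Build the coincidence matrix o[(c, k)] over items with >= 2 labels.
    coincidence = Counter()
    for values in values_by_item.values():
        if len(values) < 2:
            continue
        for c, k in itertools.permutations(values, 2):
            coincidence[(c, k)] += 1.0 / (len(values) - 1)

    # Marginal totals n_c and grand total n.
    n_c = Counter()
    for (c, _k), weight in coincidence.items():
        n_c[c] += weight
    n = sum(n_c.values())
    if n <= 1:
        return 1.0

    observed = sum(w for (c, k), w in coincidence.items() if c != k)
    expected = sum(n_c[c] * n_c[k] for c, k in itertools.permutations(n_c, 2)) / (n - 1)
    return 1.0 if expected == 0 else 1.0 - observed / expected


# Perfect agreement on two items gives Alpha = 1.0; heavy disagreement goes toward or below 0.
print(nominal_alpha({"a": {"t1": "pos", "t2": "neg"}, "b": {"t1": "pos", "t2": "neg"}}))  # 1.0
```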

In the study, one example of this was the low-quality Spanish tweet sentiment dataset, where self-agreement was just 0.244 and inter-annotator agreement was even lower at 0.120.

Low quality Spanish annotations degraded the emojis dataset

As part of the overall project, the team created an emoji dataset made up of tweets containing emojis, drawn from across the various languages. They collected around 70,000 of these tweets in total, and roughly 20,000 of them came from the poorly annotated Spanish dataset mentioned in the previous section. As a result, the total self-agreement of the emoji dataset was only Alpha 0.544.

In fact, the emoji dataset was the only dataset in the study where the self-agreement was lower than the inter-annotator agreement. After removing all of the Spanish tweets from the emoji dataset, however, the Alpha jumped to 0.720.

This insight reconfirmed the team’s conclusion: “Low quality annotators have to be excluded and their annotations removed from the datasets.” This is especially true when you have a large project using multiple annotators or even multiple annotation teams.
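Operationally, the exclusion step can be as simple as filtering the flagged annotators’ labels out of the dataset and recomputing Alpha, reusing the `nominal_alpha` sketch from earlier; all names below are hypothetical.

```python
def drop_annotators(labels_by_annotator, flagged):
    """Remove all annotations produced by the flagged annotator ids."""
    return {annotator: labels
            for annotator, labels in labels_by_annotator.items()
            if annotator not in flagged}


# Hypothetical usage: compare Alpha before and after excluding a flagged annotator.
# alpha_before = nominal_alpha(all_labels)
# alpha_after = nominal_alpha(drop_annotators(all_labels, flagged={"annotator_x"}))
```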

Should you use self-agreement or inter-annotator agreement to improve training data quality?

The safe answer is: both. In most cases, you should not abandon inter-annotator agreement testing entirely. In fact, the research team stated that the two measures can provide you with different insights: “It turns out that the self-agreement is a good measure to identify low-quality annotators, and that the inter-annotator agreement provides a good estimate of the objective difficulty of the task.”

So if the self-agreement of an annotator is extremely low, then they either aren’t prepared for the labeling task or they’re simply the wrong person for your project. If inter-annotator agreement is low, but self-agreement is at acceptable levels, then the task is either too difficult or calls for subjective reasoning, as is often the case with sentiment classification projects.
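One way to make that decision rule explicit is a small diagnostic like the sketch below; the 0.6 threshold is an assumed example value, not one taken from the study, and real projects should set thresholds that fit their task.

```python
def diagnose(self_alpha, inter_alpha, threshold=0.6):
    """Coarse reading of the two agreement measures, following the logic above."""
    if self_alpha < threshold:
        return "annotator issue: the annotator may need retraining or replacing"
    if inter_alpha < threshold:
        return "task issue: the task may be too difficult, ambiguous or subjective"
    return "both measures clear the chosen threshold"


print(diagnose(self_alpha=0.72, inter_alpha=0.45))
# -> task issue: the task may be too difficult, ambiguous or subjective
```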

The key takeaway

There is no one-size-fits-all method when it comes to testing for inter-annotator agreement and self-agreement. It depends on the task and what your acceptable Alpha levels are. Throughout their paper, the researchers consistently emphasized that both measures should be tested continuously throughout the training data creation process.

Annotators should be informed when levels drop too low, and action should be taken to maintain the quality of the training data. Sometimes that will unfortunately mean removing low-quality annotators from your project and having a stronger annotator re-label their data.

Hopefully, this guide helped you understand the power of self-agreement checks and how they can improve the quality of your data. If you’re looking to learn more about how to improve your data, contact us today.

If you still have questions about self-agreement protocols, get in touch with our team to learn more about how we run our community-based data collection and annotation projects. At TELUS Digital, we help connect ML research teams with the right groups of people located around the world to build custom datasets for unique use cases.

