
Reduction Of Annotation While Evaluating AI Systems


Commercial machine learning applications are built to model the realities of the external world as closely as possible: AI models are trained on examples drawn from that world. The world, however, changes almost every day, and what a model learned yesterday may no longer hold today. To keep up, AI systems have to be re-evaluated and updated regularly to check whether their performance is still relevant and to catch any decline early.

Such evaluations and re-evaluations of AI systems imply one thing: manual annotation of data the system has classified, to verify that those classifications are accurate. Annotating the huge volume of files an AI system may have processed is labor-intensive and monotonous, which is why developers everywhere dread it. To make the task manageable, we need a way to reduce the number of annotations required when evaluating AI systems.

Reducing Annotations

The most effective way to reduce annotations during the evaluation of AI systems is to reduce the number of sample files that must be reviewed to measure the system's performance. The number of random samples required can be cut considerably by keeping in mind that many AI systems are ensembles of binary classifiers. That is, the system is made up of several binary classifiers, each of which 'votes' on whether an input belongs to a particular class, and the system pools these votes to decide whether the input is assigned to that class.
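As a minimal sketch of that idea, assuming each component classifier is simply a function returning a boolean vote (the threshold rules and the strict-majority pooling here are illustrative, not prescribed by any particular system):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class VotingEnsemble:
    """An ensemble of binary classifiers pooled by majority vote."""
    classifiers: List[Callable[[float], bool]]

    def predict(self, x: float) -> bool:
        # Each component classifier casts a yes/no vote on the input.
        votes = sum(1 for clf in self.classifiers if clf(x))
        # Pool the votes: assign the class when a strict majority agrees.
        return votes > len(self.classifiers) / 2

# Illustrative components: three threshold rules on a single feature.
ensemble = VotingEnsemble([
    lambda x: x > 0.3,
    lambda x: x > 0.5,
    lambda x: x > 0.7,
])
print(ensemble.predict(0.6))  # True  (two of three vote yes)
print(ensemble.predict(0.2))  # False (no classifier votes yes)
```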


Annotate Models & Annotation Tips

While annotating models, you can exploit duplicates that are common across the binary classifiers' sample sets to evaluate their performance. The reuse must not introduce any bias into the resulting sample sets, which is why each set must still be drawn randomly. Even when drawn independently at random, the sample sets for the separate components of the ensemble will inevitably share some items, so most of the samples annotated for one classifier remain useful for the other classifiers too. The goal then becomes adding just enough additional samples to cover the evaluation of every model, as the sketch below illustrates.
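A quick sketch of this effect with made-up numbers: draw an independent random sample for each classifier from the same pool of classified files, then count the distinct items. Each item appearing in more than one set needs only one annotation, so the saving equals the size of the overlap. The pool size and sample sizes below are assumptions for illustration only:

```python
import random

random.seed(0)

pool = range(2_000)        # files the system has classified (illustrative size)
per_classifier_n = 400     # samples needed to evaluate one classifier
num_classifiers = 5

# Draw each classifier's evaluation sample independently at random,
# so the reuse below introduces no bias into any individual set.
sample_sets = [set(random.sample(pool, per_classifier_n))
               for _ in range(num_classifiers)]

unique_items = set.union(*sample_sets)
print("annotations without reuse:", per_classifier_n * num_classifiers)
print("annotations with reuse:   ", len(unique_items))
```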

To begin, form a sample set for the whole ensemble, which you can call the 'parent' set. Once you have a set suitable for evaluating the entire ensemble, extend it to the individual classifiers one by one, topping each one up only where the parent set falls short. The savings depend on the degree of overlap between the judgments of the ensemble and those of its classifiers: the greater the overlap, the fewer extra samples you need. For the metric, use the precision approach, where precision is the percentage of the samples a classifier marked positive that the annotators confirm as true positives. Finally, reshuffle all the samples in the combined set so that their ordering carries no shortcuts and the evaluation stays transparent. This sampling procedure is sure to reduce the sample size needed to evaluate your AI models. A sketch of the whole procedure, under assumed data structures, is given below.
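Putting the pieces together, here is a hedged sketch of the parent-set procedure and the precision computation. The inputs (`ensemble_positives`, `classifier_positives`, `annotations`) are assumed data structures chosen for illustration, not part of any specific toolkit, and the exact sampling source for the parent set is one plausible reading of the procedure:

```python
import random

def build_eval_sets(ensemble_positives, classifier_positives, n, seed=0):
    """Sample a 'parent' set for the ensemble, then extend it per classifier."""
    rng = random.Random(seed)
    # 1. Parent set: a random sample of the ensemble's positive outputs.
    parent = set(rng.sample(sorted(ensemble_positives), n))
    eval_sets = {}
    for name, positives in classifier_positives.items():
        # 2. Reuse every parent item this classifier also marked positive.
        reused = parent & positives
        # 3. Top up with fresh samples only for the shortfall; the more the
        #    classifier overlaps with the ensemble, the fewer extras we need.
        candidates = sorted(positives - parent)
        shortfall = max(0, n - len(reused))
        extra = rng.sample(candidates, min(shortfall, len(candidates)))
        eval_sets[name] = reused | set(extra)
    # 4. Shuffle the combined set so annotation order reveals nothing about
    #    which classifier each sample is meant to evaluate.
    combined = sorted(set.union(parent, *eval_sets.values()))
    rng.shuffle(combined)
    return parent, eval_sets, combined

def precision(annotations, predicted_positives):
    """Share of a classifier's positive predictions the annotators confirmed."""
    hits = sum(annotations[item] for item in predicted_positives)
    return hits / len(predicted_positives)
```

The `sorted` calls before sampling merely make the sketch deterministic across runs; any stable ordering of the candidate pools would do.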