Monitoring

On our monitoring page, you can analyze the quality and distribution of your labels:

  • label distribution: a grouped bar chart showing how many times each label occurs, grouped by whether the label was set manually or via weak supervision.
  • confidence distribution: a new chart that helps you understand the quality of your weakly supervised labels. Use it especially in combination with the data browser to identify and share slices that still cause you headaches.
  • confusion matrix: the go-to analysis for prediction quality; instead of evaluating an actual classification model, we compare the manually labeled data with the weakly supervised labels.
  • inter-annotator agreement (only on the managed version): see how annotators agree and disagree, and how this impacts your label quality.

All of these graphs are calculated per labeling task, i.e., you can switch between different tasks.

The label distribution shows how many times each label occurs, grouped by the label source.
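Under the hood, this is just a grouped count over label and label source. If you export your records, you can reproduce the numbers yourself; the following is only a minimal sketch, in which the DataFrame and its column names are assumptions for illustration:

```python
import pandas as pd

# Hypothetical export: one row per record with its label and the source
# of that label ("manual" or "weak supervision").
records = pd.DataFrame({
    "label": ["clickbait", "regular", "clickbait", "regular", "clickbait"],
    "source": ["manual", "manual", "weak supervision", "weak supervision", "manual"],
})

# Count how often each label occurs per source -- the same numbers the
# grouped bar chart visualizes.
distribution = records.groupby(["label", "source"]).size().unstack(fill_value=0)
print(distribution)
```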

Our confidence chart helps you estimate the data quality in the dataset and shows you where you need to pay more attention.
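Conceptually, the chart is a histogram over the confidence scores attached to the weakly supervised labels. Here is a minimal sketch with synthetic scores; in practice you would plug in the confidences from your own export:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical confidence scores of weakly supervised labels.
rng = np.random.default_rng(42)
confidences = rng.beta(a=5, b=2, size=1_000)  # skewed towards high confidence

# A long tail of low-confidence records hints at slices worth inspecting
# in the data browser.
plt.hist(confidences, bins=20, range=(0.0, 1.0))
plt.xlabel("confidence of the weakly supervised label")
plt.ylabel("number of records")
plt.title("Confidence distribution")
plt.show()
```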

Our confusion matrix compares the weakly supervised labels with the manually set labels.
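If you want to recompute this comparison on exported data, it is a standard confusion matrix in which the manual labels play the role of the reference and the weakly supervised labels the role of the predictions. A sketch using scikit-learn, with made-up label values:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical labels for the same five records.
manual_labels = ["clickbait", "regular", "clickbait", "regular", "regular"]
weak_labels   = ["clickbait", "regular", "regular",   "regular", "clickbait"]

labels = ["clickbait", "regular"]
matrix = confusion_matrix(manual_labels, weak_labels, labels=labels)

# Rows correspond to the manual labels, columns to the weakly supervised ones.
print(labels)
print(matrix)
```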

In the inter-annotator agreement matrix, you can see how well annotators agree and whether certain users have a different understanding of your labeled data.
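One common way to quantify such agreement is Cohen's kappa, which corrects the raw agreement rate for agreement expected by chance. The snippet below is only a sketch of that idea, not necessarily how the managed version computes its matrix, and the annotations are made up:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical annotations from two annotators on the same records.
annotator_a = ["clickbait", "regular", "clickbait", "regular", "regular"]
annotator_b = ["clickbait", "regular", "regular",   "regular", "regular"]

# Values close to 1.0 indicate a shared understanding of the labeling task.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"inter-annotator agreement (Cohen's kappa): {kappa:.2f}")
```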

Analyzing metrics on static slices

You can also reduce the record set that is being analyzed on the overview page by selecting a static data slice in the top right dropdown. This way, all graphs will be filtered, giving you deeper insights into your potential weak spots.
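In terms of the underlying data, selecting a slice simply restricts the record set before any metric is computed. A minimal sketch on an exported DataFrame, where the column and slice names are assumptions:

```python
import pandas as pd

# Hypothetical export: each record carries its labels and the static slice it belongs to.
records = pd.DataFrame({
    "label_manual": ["clickbait", "regular", "clickbait", "regular"],
    "label_weak":   ["clickbait", "clickbait", "clickbait", "regular"],
    "slice":        ["short headlines", "short headlines", "long headlines", "long headlines"],
})

# Filter to one static slice, then compute any metric on the remaining records.
short = records[records["slice"] == "short headlines"]
agreement = (short["label_manual"] == short["label_weak"]).mean()
print(f"manual vs. weak supervision agreement on 'short headlines': {agreement:.0%}")
```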

To learn more about data slices, read the next page about data management.