Calibrated confidence estimator
What is it?
The confidence estimator estimates the uncertainty in the predictions of the Basware distillation pipeline, which extracts information from commercial invoices. The uncertainty is expressed as a calibrated confidence score for each processed invoice. A calibrated confidence score can be read as a reliable probabilistic assessment of whether the information extracted from an invoice conforms to preset quality criteria.
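The following small sketch illustrates what "calibrated" means in practice: among invoices that receive a confidence of roughly 0.9, about 90% should actually meet the quality criteria. The data is synthetic and purely illustrative; it is not drawn from the actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate confidence scores and, for a perfectly calibrated estimator,
# outcomes whose success rate follows the stated probability.
confidence = rng.uniform(0.5, 1.0, size=10_000)
meets_criteria = rng.uniform(size=10_000) < confidence

# Group the scores into bins and compare the stated confidence with the
# observed rate of meeting the quality criteria (a reliability check).
bins = np.linspace(0.5, 1.0, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (confidence >= lo) & (confidence < hi)
    print(f"confidence in [{lo:.1f}, {hi:.1f}): observed quality rate = {meets_criteria[mask].mean():.2f}")
```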
Why is it necessary?
In theory, the calibrated confidence scores could be used as a failure prediction mechanism: only predictions with sufficiently high confidence are trusted automatically, while predictions with insufficient confidence are sent to manual inspection. Furthermore, the calibrated confidence scores can be used for model monitoring in cases where there is no access to ground truth labels after deployment, or the labels become available only after an unacceptable lag. In these cases, the calibrated confidence scores can be used to estimate the predictive performance of the deployed machine learning model and to alert the user if that performance deteriorates.
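The sketch below illustrates both uses, assuming calibrated confidence scores are already available as a NumPy array. The threshold value and function names are illustrative placeholders, not part of the actual product.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.95  # assumed business-defined cut-off, not a product default

def route_predictions(confidences: np.ndarray) -> np.ndarray:
    """Return a boolean mask: True = trust automatically, False = send to manual inspection."""
    return confidences >= CONFIDENCE_THRESHOLD

def estimated_quality_rate(confidences: np.ndarray) -> float:
    """Because the scores are calibrated, their mean estimates the fraction of
    predictions meeting the quality criteria, without ground truth labels."""
    return float(confidences.mean())

confidences = np.array([0.99, 0.97, 0.64, 0.88, 0.96])
auto_accept = route_predictions(confidences)  # [True, True, False, False, True]
print(f"Auto-accepted: {auto_accept.sum()} of {len(confidences)}")
print(f"Estimated share meeting quality criteria: {estimated_quality_rate(confidences):.2f}")
```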
How does it work?
The confidence estimator is a complex hybrid machine learning pipeline consisting of convolutional neural networks, an XGBoost classifier and a Beta calibration mapping. It gathers the latent representations used by the base model during inference and applies convolutional neural networks to extract informative features from them. These features are optionally augmented with statistics from the inference process. An XGBoost model then uses the augmented features to assign a confidence score to each prediction of the base model. Finally, a Beta calibration mapping calibrates these confidence scores.
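The sketch below illustrates the last two stages, the XGBoost scorer and the Beta calibration mapping, assuming the convolutional feature extractor has already produced a fixed-length feature vector per invoice prediction. All data, names and hyperparameters are placeholders rather than the actual pipeline configuration, and Beta calibration is written out here as logistic regression on the transformed score features log(s) and -log(1 - s); a production implementation would typically also enforce the monotonicity constraints of the original Beta calibration method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Stand-in data: one feature vector per base-model prediction, plus a binary
# label indicating whether the extraction met the quality criteria.
features = rng.normal(size=(1000, 32))
meets_quality_criteria = (features[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Split into a training set for the XGBoost scorer and a held-out
# calibration set for the Beta calibration mapping.
X_train, X_cal = features[:800], features[800:]
y_train, y_cal = meets_quality_criteria[:800], meets_quality_criteria[800:]

# Stage 1: the XGBoost classifier assigns a raw confidence score to each prediction.
scorer = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
scorer.fit(X_train, y_train)
raw_scores = scorer.predict_proba(X_cal)[:, 1]

# Stage 2: Beta calibration fits sigma(a*log(s) - b*log(1 - s) + c) on held-out
# data, expressed here as logistic regression on the transformed score.
eps = 1e-6
s = np.clip(raw_scores, eps, 1 - eps)
beta_features = np.column_stack([np.log(s), -np.log(1 - s)])
beta_map = LogisticRegression()
beta_map.fit(beta_features, y_cal)

def calibrate(scores: np.ndarray) -> np.ndarray:
    """Map raw XGBoost scores to calibrated confidence scores."""
    s = np.clip(scores, eps, 1 - eps)
    return beta_map.predict_proba(np.column_stack([np.log(s), -np.log(1 - s)]))[:, 1]

calibrated_scores = calibrate(raw_scores)
print(f"Mean raw score: {raw_scores.mean():.3f}, mean calibrated score: {calibrated_scores.mean():.3f}")
```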