Why is it necessary?
Most organizations currently consider AI ethics abstract and difficult to put into practice. By examining AI ethics through the lens of metrics, we aim to make ethics more tangible. In the process, we also identify what remains to be done to operationalize AI ethics through metrics.
How does it work?
We are reviewing existing literature on monitoring in Software Engineering (SE) in general, as well as in ML development specifically. Based on this review, we propose a typology for categorizing metrics and provide examples of existing metrics for each category. We then examine which existing metrics relate to AI ethics principles, using the ECCOLA method for AI ethics as a frame of reference.
The results will be (1) a list of existing metrics that can be used to measure ethical aspects of ML systems, and (2) an overview of what is currently missing when it comes to measuring ethics in AI/ML.
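To make the idea of "metrics that measure ethical aspects" concrete, below is a minimal sketch of one well-known fairness metric of the kind such a list would contain: demographic parity difference, the gap in positive-prediction rates between two groups. This is an illustrative example, not a metric taken from the study itself; the function and variable names are ours.

```python
def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1) of an ML model.
    group:  protected-attribute value (e.g., 0/1) for each prediction.
    """
    rates = {}
    for g in set(group):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Illustrative data: group 0 receives a positive prediction 2/3 of the
# time, group 1 only 1/3 of the time.
preds  = [1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 1, 1, 1]
print(round(demographic_parity_difference(preds, groups), 3))  # → 0.333
```

A value of 0 would indicate equal positive-prediction rates across groups; larger values indicate a larger disparity, which maps naturally onto fairness-related AI ethics principles.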