- Improving the modularity and reuse of development and data artefacts throughout the development process by providing
- datasets and metadata that can be used to train models in different application contexts,
- pretrained models that are reused as a basis for further training in different application contexts, and
- test patterns and test procedures that allow for standardized test suites to ensure dedicated ML-specific quality attributes such as security, robustness and transparency.
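Such a standardized test pattern can be sketched as a reusable robustness check: the prediction of a model must not flip under small perturbations of its input. The toy classifier, threshold and function names below are illustrative assumptions, not an existing test suite.

```python
# Illustrative robustness test pattern (all names are hypothetical):
# a reusable check that a classifier's decision stays stable under
# small input perturbations -- one example of an ML-specific test.
import random

def toy_classifier(features):
    """Stand-in model: classifies by the sign of a weighted sum."""
    weights = [0.8, -0.5, 0.3]
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score >= 0 else 0

def robustness_test(model, inputs, epsilon=0.01, trials=20, seed=0):
    """Test pattern: predictions must not flip under perturbations of
    magnitude <= epsilon per feature; returns the fraction of stable inputs."""
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        baseline = model(x)
        flipped = any(
            model([v + rng.uniform(-epsilon, epsilon) for v in x]) != baseline
            for _ in range(trials)
        )
        stable += 0 if flipped else 1
    return stable / len(inputs)

suite = [[1.0, 0.2, -0.3], [0.1, 0.9, 0.4], [-0.6, -0.1, 0.8]]
print(f"robustness score: {robustness_test(toy_classifier, suite):.2f}")
```

The same pattern could be instantiated for other quality attributes (e.g. fairness or security checks) by swapping the perturbation and the pass criterion, which is what makes it a candidate for a standardized suite.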
- Boosting the automation, interoperability and tool support throughout the whole ML lifecycle. In particular, there is currently a lack of tools that allow for
- automated processing with integrated quality assurance of data in the data preparation pipeline,
- continuous testing and verification of ML artefacts during development, reuse and deployment,
- versioning and traceability of development and data artefacts (data sets, models, parameters, test results) in the course of data preparation, training and operations, and
- systematic surveillance and monitoring of models in the field (monitoring corner cases, model evolution, functional fitness, security etc.), including the ability to intervene in case severe deviations are reported.
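One building block of such versioning and traceability support can be sketched as content-addressed fingerprints: each artefact is identified by a hash of its content, and every lifecycle stage records which fingerprints it consumed and produced, so later stages can verify their exact lineage. The manifest layout and function names below are assumptions for illustration only.

```python
# Illustrative traceability sketch (manifest layout and names are
# assumptions): fingerprint each artefact (dataset, model, parameters,
# test results) by content hash and record the lineage per stage.
import hashlib
import json

def fingerprint(artefact_bytes):
    """Content-addressed ID: SHA-256 over the serialized artefact."""
    return hashlib.sha256(artefact_bytes).hexdigest()

def record_stage(manifest, stage, inputs, outputs):
    """Append one lifecycle stage (e.g. 'training') with the
    fingerprints of the artefacts it consumed and produced."""
    manifest["stages"].append({
        "stage": stage,
        "inputs": {name: fingerprint(data) for name, data in inputs.items()},
        "outputs": {name: fingerprint(data) for name, data in outputs.items()},
    })
    return manifest

manifest = {"stages": []}
dataset = b"feature_a,feature_b,label\n0.1,0.2,1\n"
params = json.dumps({"learning_rate": 0.01}).encode()
model = b"<serialized model bytes>"

record_stage(manifest, "data_preparation", {}, {"dataset": dataset})
record_stage(manifest, "training",
             {"dataset": dataset, "parameters": params},
             {"model": model})

# Traceability check: training consumed exactly the prepared dataset.
assert (manifest["stages"][1]["inputs"]["dataset"]
        == manifest["stages"][0]["outputs"]["dataset"])
print(json.dumps(manifest, indent=2))
```

A monitoring component could extend the same manifest with deployment-time entries (observed corner cases, drift alarms), tying field surveillance back to the exact data and model versions involved.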