OSS MLOps platform

What is it?

A portable, cloud-agnostic ML/AI platform that enables the creation and execution of ML/AI services. At its core, the platform integrates some of the most popular MLOps technologies and provides an interface for ML practitioners to easily develop, test, deploy and monitor their work.

Why is it necessary?

The introduced MLOps platform addresses the basic requirements that ML practitioners and stakeholders set for their enterprises. The OSS platform enables:

1. Automation when developing, deploying and updating (re-training) ML models.

2. Versioning of the different artefacts that need to be tracked when working with ML (a brief sketch follows this list).

3. Monitoring of the operation of ML/AI services in production.

4. Scalability regarding the orchestration and execution of multiple ML workflows on top of computational clusters.

5. ML/AI development environment isolation, meaning that different ML practitioners can use the ML frameworks of their choice without creating library conflicts or endangering the development environments of other users.
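
As a minimal sketch of the versioning requirement from a practitioner's point of view, the snippet below logs parameters, metrics and an artefact for a single run, assuming MLflow is one of the integrated tracking services. The tracking URI, experiment name and file name are placeholders, not part of the platform's documented configuration:

```python
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")  # placeholder; deployment-specific
mlflow.set_experiment("demo-experiment")          # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)   # versioned hyperparameter
    mlflow.log_metric("accuracy", 0.93)       # versioned evaluation result
    with open("model.txt", "w") as f:         # stand-in for a serialized model
        f.write("dummy model")
    mlflow.log_artifact("model.txt")          # versioned artefact tied to this run
```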

How does it work?

The basic principle of the introduced OSS MLOps platform is to offer a service-based approach to developing and operating ML pipelines. The platform provides different services that address the aforementioned requirements, and all of these services expose well-defined APIs and programmatic interfaces that users can build on.
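
As an illustrative sketch (not the platform's official interface), the snippet below shows how a practitioner might define and submit a simple pipeline programmatically, assuming Kubeflow Pipelines (the kfp v2 SDK) is one of the integrated workflow services. The endpoint URL, pipeline name and component logic are placeholders:

```python
from kfp import dsl
from kfp.client import Client


@dsl.component(base_image="python:3.10")
def train(learning_rate: float) -> float:
    """Toy training step that returns a dummy accuracy score."""
    accuracy = 1.0 - learning_rate  # placeholder logic
    return accuracy


@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(learning_rate: float = 0.01):
    train(learning_rate=learning_rate)


if __name__ == "__main__":
    # The host below is a placeholder; it depends on how the platform is deployed.
    client = Client(host="http://localhost:8080")
    client.create_run_from_pipeline_func(
        training_pipeline,
        arguments={"learning_rate": 0.01},
    )
```

Because each service exposes such a programmatic interface, the same pipeline can also be triggered from CI/CD jobs or scheduled for automated re-training.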