2021-06-07

Enterprise MLOps in hybrid-cloud scenarios: best practices

Learn about architectural best practices that have recently emerged

Efficient allocation of compute resources and capacity planning are fast becoming necessities in many enterprise machine learning operations (MLOps) initiatives. Optimizing resource allocation, from both a cost and a technical perspective, is driving many organizations to seriously consider a hybrid-cloud infrastructure.

Architectural best practices have recently emerged around ML workflow pipelines, cloud-agnostic model deployment and serving, feature stores, data versioning, and more, making it easier for companies moving in this direction to bootstrap their stack and get up and running.

In this webinar, we will cover:

  1. How to effectively bring your models to production across clouds
  2. How to make the best use of feature stores
  3. How to use Kubeflow Pipelines with a feature store (sketched below)
  4. How to use Apache Hudi to unify historical and new data (sketched below)
  5. How to use Kubeflow with Apache Spark operator
  6. How to leverage model-driven operators to deploy and manage your MLOps stack
  7. How to apply storage-agnostic best practices in the public cloud (s3, gs, az storage) and on-prem (Ceph), as sketched below
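
On the feature-store point (item 3), a common pattern is to wrap feature retrieval as a pipeline component. The following is a minimal sketch assuming Feast as the feature store and the Kubeflow Pipelines v1 Python SDK; the feature references, repository path, entity source, and base image are illustrative placeholders.

```python
# Minimal sketch: a Kubeflow Pipelines lightweight component that builds a
# training set from a Feast feature store. Names and paths are placeholders.
import kfp
from kfp.components import create_component_from_func, OutputPath


def build_training_set(feature_repo: str, entity_csv: str,
                       output_csv: OutputPath(str)):
    """Join an entity dataframe against the feature store's offline store."""
    import pandas as pd
    from feast import FeatureStore

    store = FeatureStore(repo_path=feature_repo)
    # Feast expects an entity dataframe with an event_timestamp column.
    entity_df = pd.read_csv(entity_csv, parse_dates=["event_timestamp"])

    # Point-in-time correct join of historical feature values.
    training_df = store.get_historical_features(
        entity_df=entity_df,
        features=[
            "driver_stats:avg_daily_trips",  # placeholder feature references
            "driver_stats:conv_rate",
        ],
    ).to_df()
    training_df.to_csv(output_csv, index=False)


# Wrap the function as a reusable component; in practice you would bake a
# purpose-built image instead of installing packages at runtime.
build_training_set_op = create_component_from_func(
    build_training_set,
    base_image="python:3.8",
    packages_to_install=["feast", "pandas"],
)


@kfp.dsl.pipeline(name="feature-store-training")
def pipeline(feature_repo: str, entity_csv: str):
    build_training_set_op(feature_repo=feature_repo, entity_csv=entity_csv)
```

For unifying historical and newly arriving data (item 4), Apache Hudi's upsert write operation merges incremental records into an existing table. Below is a minimal PySpark sketch, assuming the Hudi Spark bundle is on the classpath; the table name, key fields, and paths are placeholders.

```python
# Minimal sketch: upserting new records into an Apache Hudi table so batch
# history and incremental updates are served from one unified table.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hudi-upsert-example")
    # Kryo serialization is recommended for Hudi workloads.
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

new_events = spark.read.parquet("s3a://example-bucket/incoming/events/")

hudi_options = {
    "hoodie.table.name": "events",
    "hoodie.datasource.write.recordkey.field": "event_id",
    "hoodie.datasource.write.precombine.field": "event_ts",
    "hoodie.datasource.write.partitionpath.field": "event_date",
    "hoodie.datasource.write.operation": "upsert",
}

# Upsert merges late-arriving and updated records into the existing history.
(
    new_events.write.format("hudi")
    .options(**hudi_options)
    .mode("append")
    .save("s3a://example-bucket/lake/events/")
)
```

For storage-agnostic access (item 7), one option is to resolve the backend from the artifact URI scheme, so the same code runs against public-cloud object stores and an on-prem Ceph cluster exposed through its S3-compatible RADOS Gateway. The sketch below uses fsspec (with the s3fs, gcsfs, and adlfs backends installed); bucket names and the gateway endpoint are placeholders.

```python
# Minimal sketch: scheme-based, storage-agnostic artifact access with fsspec.
import fsspec


def save_model(data: bytes, uri: str, **storage_options) -> None:
    """Write a model artifact to whichever backend the URI scheme names."""
    # fsspec dispatches on the scheme: s3://, gs://, abfs:// (Azure), ...
    with fsspec.open(uri, "wb", **storage_options) as f:
        f.write(data)


# Public cloud object stores: credentials come from the environment.
save_model(b"...", "s3://example-bucket/models/model.pkl")
save_model(b"...", "gs://example-bucket/models/model.pkl")
save_model(b"...", "abfs://models@exampleaccount.dfs.core.windows.net/model.pkl")

# On-prem Ceph via the RADOS Gateway's S3 API: same code path, custom endpoint.
save_model(
    b"...",
    "s3://on-prem-bucket/models/model.pkl",
    client_kwargs={"endpoint_url": "http://ceph-rgw.internal:8080"},
)
```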
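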
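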