Artificial intelligence and machine learning are hot topics these days, with enterprises announcing huge investments in powerful computing hardware and new large language models all over the news. This creates a clear need to find and optimise machine learning tooling that delivers the desired return on investment for all these projects.
Machine learning operations (MLOps) is a set of practices that automate machine learning workflows, helping professionals develop and deploy models in a reproducible way. Which MLOps tools are most suitable for shipping production-grade machine learning models? That’s often a matter of debate.
Originally started by Google, Kubeflow is an end-to-end MLOps platform for AI at scale. Canonical has its own distribution, Charmed Kubeflow, which addresses the entire machine learning lifecycle. Charmed Kubeflow is a suite of tools, including Notebooks for training, Pipelines for automation, Katib for hyperparameter tuning and KServe for model serving. Charmed Kubeflow also benefits from a wide range of integrations with other tools such as MLflow, Spark, Grafana and Prometheus.
MLflow, on the other hand, celebrated 10 million downloads last year. It’s a very popular solution when it comes to machine learning. Although it started with a single core function, the tool now has four components, including experiment tracking and a model registry.
So, which one should you choose for machine learning operations?
Join us for a Kubeflow vs MLflow panel discussion with Maciej Mazur, AI/ML Principal Engineer at Canonical, and Kimonas Sotirchos, Kubeflow Community Working Group Lead and Engineering Manager at Canonical.
The discussion will cover:
- Production-grade MLOps
- Open-source MLOps
- Community-driven ML tooling
- Kubeflow vs MLflow: pros and cons
Register today using the link below.