AI and ML adoption in the enterprise is exploding from Silicon Valley to Wall Street. Ubuntu is the premier platform for these ambitions — from developer workstations to racks, clouds and the edge with smart connected IoT. One of the joys that come with any new developer trend is the plethora of new technologies and terminologies to understand.
In this webinar, join Canonical’s Kubernetes Product Manager Carmine Rimi for:
- An introduction to some of the key concepts in Machine Learning
- A look into some examples of how AI applications and their development are reshaping enterprise IT
- A deep dive into how enterprises are applying devops practices to their ML infrastructure and workflows
- An introduction to Canonical's AI/ML portfolio, from Ubuntu to the Canonical Distribution of Kubernetes, and how to get started quickly with your project
In addition, we’ll be answering some of these questions:
- What do Kubeflow, TensorFlow, Jupyter, and GPGPUs do?
- What’s the difference between AI, ML and DL?
- What is an AI model? How do you train it? How do you develop and improve it? How do you execute it?
And finally, we’ll take the time to answer your questions in a Q&A session.
With Charmed Kubeflow, deployment and operations of Kubeflow are easy for any scenario.
Charmed Kubeflow is a collection of Python operators (charms) that define the deployment and integration of the applications inside Kubeflow, like katib or pipelines-ui.
Use Kubeflow on-prem, on the desktop, at the edge, in the public cloud, or across multiple clouds.
Kubeflow makes deployments of Machine Learning workflows on Kubernetes simple, portable and scalable.
Kubeflow is the machine learning toolkit for Kubernetes. It extends Kubernetes’ ability to run independent, configurable steps with machine-learning-specific frameworks and libraries.
You can install Kubeflow on your workstation, a local server or a public cloud VM. It is easy to install with MicroK8s on any of these environments and can be scaled to high availability.
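As a rough sketch, a workstation install of Charmed Kubeflow on top of MicroK8s might look like the following. The snap channels, add-on list and MetalLB address range below are illustrative assumptions, not fixed values — check the current Charmed Kubeflow documentation for the exact steps for your release:

```shell
# Install MicroK8s and Juju from snaps (channels are illustrative)
sudo snap install microk8s --classic
sudo snap install juju --classic

# Enable the MicroK8s add-ons Kubeflow typically needs
# (the add-on names and the load-balancer IP range are example values)
sudo microk8s enable dns hostpath-storage metallb:10.64.140.43-10.64.140.49

# Bootstrap a Juju controller into MicroK8s, then deploy the Kubeflow bundle
juju bootstrap microk8s
juju add-model kubeflow
juju deploy kubeflow --trust
```

Once the deployment settles, `juju status` shows the Kubeflow applications coming up, and the same bundle can be deployed against a larger Kubernetes cluster when you outgrow the workstation.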