Kubernetes for Data Science: meet Kubeflow

Deep Learning is set to thrive

Data science has exploded as a practice in the past decade and has become an undisputed driver of innovation.

The forces behind the rising interest in Machine Learning, a concept that is far from new, have now consolidated, creating unparalleled conditions for Deep Learning (the use of artificial neural networks with many hidden layers) to thrive in the years to come.
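To make "many hidden layers" concrete, here is a toy forward pass through a tiny two-hidden-layer network in plain Python. This is purely illustrative: real models use libraries such as TensorFlow or PyTorch, and the weights below are hand-picked for the example rather than learned from data.

```python
import math

def relu(x):
    # Rectified linear unit: the most common hidden-layer activation.
    return max(0.0, x)

def sigmoid(x):
    # Squashes the output into (0, 1), e.g. for binary classification.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases, activation):
    # One dense layer: each neuron applies an activation to a
    # weighted sum of all inputs plus a bias.
    return [activation(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical hand-picked weights; in practice these are learned.
x = [0.5, -1.0]                                              # input features
h1 = layer(x, [[1.0, -0.5], [0.3, 0.8]], [0.1, 0.0], relu)   # hidden layer 1
h2 = layer(h1, [[0.6, -1.2], [0.5, 0.5]], [0.0, 0.2], relu)  # hidden layer 2
y = layer(h2, [[1.0, 1.0]], [0.0], sigmoid)                  # output layer
print(y)
```

"Deep" simply means stacking more such hidden layers; the libraries mentioned below automate both this wiring and the training of the weights.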

Deep Learning enabling factors:

  • Computational capacity: exponential growth, driven by GPGPUs and TPUs
  • Hardware availability: low-cost access via public and hybrid clouds and efficient data centres
  • Data availability: publicly accessible datasets and low-cost, widespread IoT devices
  • Open source community: libraries such as TensorFlow and PyTorch; competitions such as Kaggle

But it still faces many challenges

The common pathway for a data scientist is to start by writing a model in a Jupyter notebook using Python and amazing open source libraries such as TensorFlow, Keras or PyTorch. When starting out, we tend to focus on the end result of the model, but there is a lot more to it.

When trying to bring the model into the hands of users or onto edge devices, things get more complicated. In fact, developing the model itself is only a fairly small portion of the effort required to train, deploy and manage an AI project.

There is just a lot of background work to be done:

[Diagram: the components surrounding ML code in a production system, where area represents relative effort]

The typical Machine Learning workflow runs from data ingestion and preparation, through training and evaluation, to model serving, monitoring and logging.


With these different stages having diverse requirements, the challenges that arise are threefold:

  1. Composability – The workflow from data ingestion to model serving, monitoring and logging includes many components spread across multiple systems, making it hard to manage, secure and maintain.
  2. Portability – At different stages of the ML process, computation requirements change, and so does the hardware on which your software runs – laptop, on-prem training rig, public cloud.
  3. Scalability – Computation requirements for AI projects are very dynamic: training is resource-intensive, while inference is lightweight and speedy. Elasticity at the infrastructure level is therefore compulsory.

The word that best defines these needs is MLOps.


Kubernetes can help

Kubernetes (a.k.a. K8s) is an open source system for automating the deployment, scaling and management of containerized applications, and is widely used in the world of DevOps.

For data scientists facing the above-mentioned challenges, this means each step of the process can be packaged as a container, making it system-agnostic (portable) and composable (i.e. modular building blocks), with Kubernetes handling deployment and management at scale.
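The composability idea can be sketched in plain Python: each stage is an independent unit with an explicit input and output, which is exactly the property that containerizing each step gives you at the infrastructure level. This is a conceptual sketch only; the stage bodies are stand-ins, and real pipelines use tools such as Kubeflow Pipelines, discussed below.

```python
# Conceptual sketch: each stage is a self-contained step with explicit
# inputs and outputs, mirroring how containerized pipeline stages compose.

def ingest():
    # Stand-in for pulling raw data from a source system.
    return [1.0, 2.0, 3.0, 4.0]

def preprocess(raw):
    # Stand-in for cleaning/normalising: scale values into [0, 1].
    hi = max(raw)
    return [x / hi for x in raw]

def train(features):
    # Stand-in for model training: here, just the mean as a "model".
    return sum(features) / len(features)

def evaluate(model, features):
    # Stand-in for validation: mean absolute error against the "model".
    return sum(abs(x - model) for x in features) / len(features)

# Because each step depends only on the previous step's output, any stage
# can be swapped, rerun, or scaled independently -- the essence of
# composability that containers provide across machines and clouds.
raw = ingest()
features = preprocess(raw)
model = train(features)
score = evaluate(model, features)
print(model, score)
```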

So why not simply harness the powers of Kubernetes directly?

The problem is that we would first need to become experts in:
– Kubernetes service endpoints
– Immutable deployments
– Persistent volumes
– GPGPU passthrough
– Drivers & the GPL
– Containers
– Cloud APIs
– Packaging
– DevOps
– Scaling
– (…)


Meet Kubeflow

Kubeflow makes deployments of Machine Learning workflows on Kubernetes simple, portable and scalable.

Kubeflow is the machine learning toolkit for Kubernetes. It extends Kubernetes' ability to run independent, configurable steps with machine-learning-specific frameworks and libraries.
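As a concrete example of what those extensions look like, Kubeflow's training operator adds custom resources such as TFJob, so a distributed TensorFlow training run is declared like any other Kubernetes object. This is a minimal sketch: the job name, image, and replica count are hypothetical placeholders.

```yaml
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train            # hypothetical job name
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2              # scale workers like any Kubernetes workload
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow
              image: registry.example.com/mnist-train:latest  # hypothetical image
              resources:
                limits:
                  nvidia.com/gpu: 1   # GPU scheduling handled by Kubernetes
```

Kubernetes then takes care of scheduling, restarts, and GPU allocation, while the data scientist only describes the desired training job.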

And it is all open source!

Run it on your workstation, on an on-premises training rig, or in any hybrid or public cloud, in a new or already running Kubernetes deployment. Within Kubeflow you will find the open source tools and frameworks you need.

To learn more, visit ubuntu.com/kubeflow, or install Kubeflow by following the tutorial Deploy Kubeflow on Ubuntu, Windows and MacOS.

In upcoming posts, we will dive deeper into the technologies that make up Kubeflow, and how you can leverage them to enhance your data science capabilities. Subscribe to our Cloud and Server newsletter to stay up to date.

