What's inside Kubeflow?
JupyterLab and VSCode
With Kubeflow, users can spin up Jupyter notebook servers, as well as VSCode, directly from the dashboard, allocating the right storage, CPUs and GPUs.
ML libraries & frameworks
Automate your ML workflow into pipelines by containerizing steps as pipeline components and defining inputs, outputs, parameters and generated artifacts.
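The pipeline idea described above can be sketched in plain Python. This is a conceptual illustration only, not the Kubeflow Pipelines SDK: the step names, paths and return values are all made up, and real pipeline components would each run in their own container.

```python
# Conceptual sketch of a pipeline: each step declares inputs and outputs,
# and generated artifacts flow to downstream steps. In Kubeflow Pipelines,
# each step would be a containerized component; here steps are plain
# functions. All names below are hypothetical.

def preprocess(raw_path: str) -> str:
    # The output of this step becomes an artifact consumed by `train`.
    return raw_path + "/cleaned"

def train(data_path: str, learning_rate: float) -> dict:
    # Returns a generated artifact describing the trained model.
    return {"model": f"model@{data_path}", "lr": learning_rate}

def run_pipeline(raw_path: str, learning_rate: float = 0.01) -> dict:
    # Wiring: preprocess's output is train's input, like edges in a DAG.
    cleaned = preprocess(raw_path)
    return train(cleaned, learning_rate)

artifact = run_pipeline("s3://bucket/data")
print(artifact)  # {'model': 'model@s3://bucket/data/cleaned', 'lr': 0.01}
```

In a real pipeline, the function parameters play the role of pipeline parameters and the return values the role of tracked, shareable artifacts.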
Experiments, Runs and Recurring Runs
Hyperparameter tuning / AutoML
Kubeflow includes Katib for hyperparameter tuning. Katib runs trials with different hyperparameters (e.g. learning rate, number of hidden layers), optimizing for the best ML model.
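What Katib automates at scale can be illustrated with a toy random search over hyperparameters. This is a conceptual, stdlib-only sketch: the objective function and search ranges are invented for illustration, and real Katib experiments run trials as jobs on Kubernetes rather than in-process function calls.

```python
import random

# Made-up stand-in for a trial's objective (e.g. validation accuracy).
# In Katib, each trial would be a real training job; this function simply
# peaks near learning_rate=0.01 and num_hidden_layers=3.
def objective(learning_rate: float, num_hidden_layers: int) -> float:
    return 1.0 - 10 * abs(learning_rate - 0.01) - 0.05 * abs(num_hidden_layers - 3)

def random_search(num_trials: int, seed: int = 0):
    # Sample hyperparameters from fixed ranges and keep the best trial.
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(num_trials):
        params = {
            "learning_rate": rng.uniform(0.001, 0.1),
            "num_hidden_layers": rng.randint(1, 6),
        }
        score = objective(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_score, best_params

score, params = random_search(num_trials=50)
print(score, params)
```

Random search is only one of several search algorithms; the point is that the tuner, not the data scientist, iterates over the hyperparameter space.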
KServe for inference serving
The integrations you need
Save, compare and share generated artifacts: models, images and plots.
Bringing AI solutions to market can involve many steps: data pre-processing, training, model deployment, inference serving at scale, and more. The list of tasks is long, and keeping them in a loose set of notebooks or scripts makes them hard to maintain, share and collaborate on, leading to inefficient processes.
In the study Hidden Technical Debt in Machine Learning Systems, Google reports that only about 20% of the effort and code required to bring AI systems to production goes into developing ML code, while the rest is operations. Standardizing operations in your ML workflows can therefore greatly reduce time-to-market and costs for your AI solutions.
Who uses Kubeflow?
Thousands of companies have chosen Kubeflow for their AI/ML stack.
From research institutions like CERN, to transport and logistics companies such as Uber, Lyft and GoJek, to financial and media companies including Spotify, Bloomberg, Shopify and PayPal.
Forward-looking enterprises are using Kubeflow to empower their data scientists.