
Canonical accelerates AI Application Development with NVIDIA AI Enterprise

Tags: AI , GPU , k8s , kubernetes , nvidia , partners

Charmed Kubernetes support comes to NVIDIA AI Enterprise

Canonical’s Charmed Kubernetes is now supported on NVIDIA AI Enterprise 5.0. Organisations running Kubernetes deployments on Ubuntu can look forward to a seamless licensing migration to the latest release of the NVIDIA AI Enterprise software platform, which provides developers with the latest AI models and optimised runtimes.

NVIDIA AI Enterprise 5.0

NVIDIA AI Enterprise 5.0 is supported across workstations, data centres, and cloud deployments. New updates include:

  • NVIDIA NIM microservices: a set of cloud-native microservices that developers can use as building blocks for custom AI application development and to speed production AI. NIM will be supported on Charmed Kubernetes (a minimal example of calling a deployed NIM endpoint follows this list).
  • NVIDIA API catalog: quick access for enterprise developers to experiment with, prototype and test NVIDIA-optimised foundation models powered by NIM. When ready to deploy, enterprise developers can export the enterprise-ready API and run it on a self-hosted system.
  • Infrastructure management enhancements: support for vGPU heterogeneous profiles, Charmed Kubernetes, and new GPU platforms.
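
To give a sense of what this looks like in practice, here is a minimal, illustrative sketch of querying a NIM microservice once it is running on a Charmed Kubernetes cluster. NIM services expose an OpenAI-compatible HTTP API; the service URL, port and model name below are placeholders for whatever you actually deploy.

```python
# Minimal sketch: call a self-hosted NIM microservice over its
# OpenAI-compatible API. URL, port and model name are placeholders.
import requests

NIM_URL = "http://nim-llm.default.svc.cluster.local:8000/v1/chat/completions"  # placeholder

response = requests.post(
    NIM_URL,
    json={
        "model": "meta/llama3-8b-instruct",  # placeholder model name
        "messages": [{"role": "user", "content": "Summarise what MIG does."}],
        "max_tokens": 128,
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```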

Charmed Kubernetes and NVIDIA AI Enterprise 5.0

Data scientists and developers using NVIDIA frameworks and workflows on Ubuntu now have a single platform to rapidly develop AI applications on the latest-generation NVIDIA Tensor Core GPUs. For data scientists and AI/ML developers deploying their latest AI workloads on Kubernetes, it is vital to extract the best possible performance from Tensor Core GPUs through NVIDIA drivers and integrations.

Fig. NVIDIA AI Enterprise 5.0

Charmed Kubernetes from Canonical provides several features unique to this distribution, including NVIDIA operators and GPU optimisation features, as well as composability and extensibility through customised integrations with the Ubuntu operating system.

Best-In-Class Kubernetes from Canonical 

Charmed Kubernetes can automatically detect GPU-enabled hardware and install required drivers from NVIDIA repositories. With the release of Charmed Kubernetes 1.29, the NVIDIA GPU Operator charm is available for specific GPU configuration and tuning. With support for GPU operators in Charmed K8s, organisations can rapidly and repeatedly deploy the same models utilising existing on-prem or cloud infrastructure to power AI workloads. 
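
As a quick illustration (not part of the charm itself), the sketch below uses the standard Kubernetes Python client to confirm that cluster nodes advertise the nvidia.com/gpu resource once the GPU Operator is in place. It assumes a kubeconfig for your Charmed Kubernetes cluster is available locally.

```python
# Minimal sketch: verify that GPUs are schedulable by checking each node's
# allocatable resources after the NVIDIA GPU Operator has been deployed.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig for the cluster
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    allocatable = node.status.allocatable or {}
    gpus = allocatable.get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: nvidia.com/gpu = {gpus}")
```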

The NVIDIA GPU Operator automatically detects the GPUs on a system and installs the required NVIDIA software components. It also enables optimal configurations through features such as NVIDIA Multi-Instance GPU (MIG) technology, which partitions a Tensor Core GPU into isolated instances to get the most efficiency out of it. GPU-optimised instances for AI/ML applications reduce latency and allow for more data processing, freeing capacity for larger-scale applications and more complex model deployments.
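
For illustration, the sketch below uses the Kubernetes Python client to request a single MIG slice for a pod. It assumes the GPU Operator has MIG enabled; the resource name (nvidia.com/mig-1g.5gb here) and the container image tag depend on your GPU model, MIG profile and workload, and are placeholders only.

```python
# Minimal sketch: schedule a pod onto one MIG slice, assuming the GPU Operator
# exposes per-profile resources. Resource name and image tag are placeholders.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="mig-inference-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="inference",
                image="nvcr.io/nvidia/cuda:12.3.2-base-ubuntu22.04",  # placeholder tag
                command=["nvidia-smi", "-L"],  # list the GPU instance visible to the pod
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/mig-1g.5gb": "1"}  # placeholder MIG profile
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```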

Paired with the GPU Operator, the Network Operator enables GPUDirect RDMA (GDR), a key technology that accelerates cloud-native AI workloads by letting network adapters move data directly to and from GPU memory. GDR optimises network performance by increasing data throughput and reducing latency. Another distinctive advantage is its seamless compatibility with NVIDIA’s ecosystem, ensuring a cohesive experience for users. Its design, tailored for Kubernetes, ensures scalability and adaptability across deployment scenarios. The result is more efficient networking operations, making it an invaluable tool for businesses aiming to harness the power of GPU-accelerated networking in their Kubernetes environments.
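
As a heavily simplified sketch, the pod below requests both a GPU and an RDMA-capable network resource so that GPUDirect RDMA can be exercised. The RDMA resource name is defined by your Network Operator configuration; rdma/rdma_shared_device_a and the container image tag are placeholders only.

```python
# Minimal sketch: request a GPU plus an RDMA network resource for one pod.
# Resource names and image tag are placeholders for your own configuration.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gdr-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="training",
                image="nvcr.io/nvidia/pytorch:24.03-py3",  # placeholder tag
                command=["python", "-c", "print('GDR-capable pod scheduled')"],
                resources=client.V1ResourceRequirements(
                    limits={
                        "nvidia.com/gpu": "1",
                        "rdma/rdma_shared_device_a": "1",  # placeholder RDMA resource
                    }
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```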

Speaking about these solutions, Marcin “Perk” Stożek, Kubernetes Product Manager at Canonical says: “Charmed Kubernetes validation with NVIDIA AI Enterprise is an important step towards an enterprise-grade, end-to-end solution for AI workloads. By integrating NVIDIA Operators with Charmed Kubernetes, we make sure that customers get what matters to them most: efficient infrastructure for their generative AI workloads.” 

Getting started is easy (and free). You can rest assured that Canonical experts are available to help if required.

Get started with Canonical open source solutions and NVIDIA AI Enterprise

Try out NVIDIA AI Enterprise with Charmed Kubernetes through a free, 90-day evaluation.



