Accelerating AI with open source machine learning infrastructure
Andreea Munteanu
on 20 March 2025
Tags: Dell , Dell Technologies , Kubeflow , kubernetes , machine learning , MicroK8s , NIM Microservices , nvidia , NVIDIA AI Enterprise , PowerEdge

The landscape of artificial intelligence is rapidly evolving, demanding robust and scalable infrastructure. To meet these challenges, we’ve developed a comprehensive reference architecture (RA) that leverages the power of open-source tools and cutting-edge hardware. This architecture, built on Canonical’s MicroK8s and Charmed Kubeflow, running on Dell PowerEdge R7525 servers, and accelerated by NVIDIA NIM microservices, provides a streamlined path for deploying and managing machine learning workloads.
Empowering data scientists and engineers
This solution is designed to empower data scientists and machine learning engineers, enabling them to iterate faster, scale seamlessly, and maintain robust security. For infrastructure builders, solution architects, DevOps engineers, and CTOs, this RA offers a clear path to advance AI initiatives while addressing the complexities of large-scale deployments.
At the heart of this architecture lies the synergy between Canonical and NVIDIA. Our collaboration ensures that the software stack, from Ubuntu Server and Ubuntu Pro to Charmed Kubeflow, is optimized for NVIDIA-Certified Systems. This integration delivers exceptional performance and reliability, allowing organizations to maximize their AI efficiency.
Dell PowerEdge R7525: the foundation for high-performance AI
The Dell PowerEdge R7525 server plays a crucial role in this architecture, providing the robust hardware foundation needed for demanding AI workloads. As a 2U rack server, it’s engineered for high-performance computing, virtualization, and data-intensive tasks.
Featuring dual-socket AMD EPYC processors, the R7525 delivers exceptional scalability, advanced memory capabilities, and flexible storage options. This makes it ideal for AI and machine learning environments, where processing large datasets and complex models is essential. The R7525’s design ensures that organizations can virtualize traditional IT applications alongside transformative AI systems, providing a unified platform for diverse workloads.
Leveraging NVIDIA NIM and A100 GPUs
The architecture leverages NVIDIA NIM microservices, included with the NVIDIA AI Enterprise software platform, for secure and reliable AI model inference. Combined with the computational power of NVIDIA A100 GPUs, this provides the muscle needed for demanding AI workloads. By deploying an LLM with NVIDIA NIM on Charmed Kubeflow, organizations can seamlessly transition from model development to production.
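To give a concrete feel for the inference path, below is a minimal Python sketch that queries a deployed NIM container through its OpenAI-compatible API. The endpoint URL and model name are placeholders rather than values from the RA; adapt both to your own deployment.

```python
# Minimal inference request against a NIM endpoint's OpenAI-compatible API.
# Assumes a NIM LLM container is reachable at NIM_URL (placeholder) and is
# serving the model named below; adjust both for your own deployment.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint

payload = {
    "model": "meta/llama3-8b-instruct",  # example model; check your NIM catalog
    "messages": [{"role": "user", "content": "Summarize what Kubeflow does."}],
    "max_tokens": 128,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```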
Canonical’s open-source components
Canonical’s MicroK8s, a CNCF-certified Kubernetes distribution, provides a lightweight and efficient container orchestration platform. Charmed Kubeflow simplifies the deployment and management of AI workflows, offering an extensive ecosystem of tools and frameworks. This combination ensures a smooth and efficient machine learning lifecycle.
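To illustrate the kind of workflow Charmed Kubeflow runs, here is a minimal sketch using the Kubeflow Pipelines (kfp) v2 SDK. The two component bodies are purely illustrative placeholders; a real pipeline would wrap data preparation, training, and evaluation logic.

```python
# A minimal two-step Kubeflow pipeline using the kfp v2 SDK.
# The step bodies are illustrative placeholders only.
from kfp import dsl, compiler

@dsl.component
def prepare_data() -> str:
    # Placeholder for a real data-preparation step.
    return "dataset-v1"

@dsl.component
def train_model(dataset: str) -> str:
    # Placeholder for a real training step.
    return f"model trained on {dataset}"

@dsl.pipeline(name="demo-pipeline")
def demo_pipeline():
    data_task = prepare_data()
    train_model(dataset=data_task.output)

if __name__ == "__main__":
    # Compile to a YAML spec that can be uploaded via the Kubeflow dashboard.
    compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")
```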
Key benefits of the architecture
This architecture offers numerous benefits, including faster iteration, enhanced scalability, and robust security. The deep integrations between NVIDIA and Canonical ensure that the solution works seamlessly out of the box, with expedited bug fixes and prompt security updates. Moreover, Ubuntu provides a secure and stable operating environment as the foundation of the entire stack.
This reference architecture is more than just a blueprint: it's a practical guide. The document includes hardware specifications, software versions, and a step-by-step tutorial for deploying an LLM with NIM. It also addresses cluster monitoring and management, providing a holistic view of the system.
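As a taste of the management side, here is a small sketch using the official Kubernetes Python client to list cluster nodes and their advertised GPU capacity. It assumes a kubeconfig for the MicroK8s cluster (for example, exported with `microk8s config`) and the nvidia.com/gpu resource exposed by NVIDIA's device plugin.

```python
# Sketch: list cluster nodes and their GPU capacity via the Kubernetes API.
# Assumes a kubeconfig pointing at the MicroK8s cluster and NVIDIA's device
# plugin advertising the nvidia.com/gpu resource on GPU nodes.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config by default
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    gpus = node.status.capacity.get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name}: {gpus} GPU(s)")
```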
Unlocking new opportunities
By leveraging the combined expertise of Canonical, Dell, and NVIDIA, organizations can unlock new opportunities in their respective domains. This solution enhances data analytics, optimizes decision-making processes, and revolutionizes customer experiences.
Get started today
This RA is a solid foundation for deploying AI workloads. Organizations can confidently adopt this solution to drive innovation and accelerate AI adoption.
Ready to elevate your AI initiatives?
Further Information
Run Kubeflow anywhere, easily
With Charmed Kubeflow, deployment and operations of Kubeflow are easy for any scenario.
Charmed Kubeflow is a collection of Python operators that define how the applications inside Kubeflow, like katib or pipelines-ui, integrate.
Use Kubeflow on-prem, on the desktop, at the edge, in the public cloud, or across multiple clouds.
What is Kubeflow?
Kubeflow makes deployments of Machine Learning workflows on Kubernetes simple, portable and scalable.
Kubeflow is the machine learning toolkit for Kubernetes. It extends Kubernetes' ability to run independent and configurable steps with machine learning-specific frameworks and libraries.
Install Kubeflow
The Kubeflow project is dedicated to making deployments of machine learning workflows on Kubernetes simple, portable and scalable.
You can install Kubeflow on your workstation, local server or public cloud VM. It is easy to install with MicroK8s on any of these environments and can be scaled to high availability.