NVIDIA GTC 2018

Canonical

on 14 March 2018

Event Information

Date: March 27–29, 2018

City/State: San Jose, CA

Location: San Jose McEnery Convention Center

Booth: #1227

GTC is the premier AI and deep learning conference, providing unparalleled training, industry insights, and direct access to NVIDIA and industry experts — all in one place. It features the latest breakthroughs in everything from healthcare to virtual reality, accelerated analytics, self-driving cars, and much more.

This year the Ubuntu team will be on site at booth #1227. Join us for demos on:

  • Using Kubeflow to manage TensorFlow jobs from the desktop to servers and the cloud.
  • Building a complete, on-premises AI platform using automation tools (MAAS and Juju) to deploy and manage Kubernetes on bare metal.
  • How worker nodes with and without GPUs are automatically identified to execute TensorFlow jobs (see the sketch after this list).
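
As a taste of the GPU scheduling demo, here is a minimal sketch (not the demo code itself) that uses the official Kubernetes Python client to report which worker nodes advertise NVIDIA GPUs through the device plugin's nvidia.com/gpu resource. The kubeconfig location and cluster setup are assumptions.

```python
# Sketch: list cluster nodes and the number of NVIDIA GPUs each one advertises.
# Assumes a reachable cluster and the standard `kubernetes` Python client.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Nodes running the NVIDIA device plugin expose the "nvidia.com/gpu" resource.
    gpus = node.status.allocatable.get("nvidia.com/gpu", "0")
    kind = "GPU worker" if gpus != "0" else "CPU-only worker"
    print(f"{node.metadata.name}: {kind} ({gpus} GPU(s))")
```

A TensorFlow job can then be steered onto the GPU workers simply by requesting the nvidia.com/gpu resource in its pod spec, leaving CPU-only nodes to pick up the rest.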

Talks:

Using Containers for GPU Workloads: March 28, 10:00 AM – 10:50 AM, Room 210E

Presenters: Christian Brauner – Software Engineer, Canonical Ltd. and Serge Hallyn – Principal Engineer, Cisco

Learn how to use containers for efficient GPU utilization to achieve bare-metal performance for computationally intensive workloads. We’ll show how NVIDIA tools and libraries can be used to achieve drop-in GPU support and efficient GPU feature integration for container runtimes, and illustrate how to leverage system containers to run complex statistical models on NVIDIA GPUs.
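
As a hedged companion to the talk abstract, the sketch below shows one way to confirm, from inside a container, which GPUs a TensorFlow 1.x process can actually see once the NVIDIA devices and libraries have been passed through. It is an illustrative assumption, not the presenters' code.

```python
# Sketch: confirm GPU visibility from inside a container running TensorFlow 1.x.
# Assumes the container runtime has passed the NVIDIA devices and libraries through.
from tensorflow.python.client import device_lib

gpus = [d for d in device_lib.list_local_devices() if d.device_type == "GPU"]
if gpus:
    for d in gpus:
        print(f"Visible GPU: {d.name} ({d.physical_device_desc})")
else:
    print("No GPUs visible - check the container's GPU passthrough configuration.")
```

If the list is empty inside the container but not on the host, the container's GPU device and driver library configuration is the first place to look.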

Tell us about your project

Come and talk to our team to find out how Ubuntu can help with your AI and deep learning projects.

Book a meeting
