
Preview: Confidential AI with Ubuntu Confidential VMs and NVIDIA H100 GPUs on Microsoft Azure

Ijlal Loutfi

on 21 February 2024

With Ubuntu confidential AI on Azure, businesses can undertake various tasks including ML training, inference, confidential multi-party data analytics, and federated learning with confidence.

The effectiveness of AI models depends heavily on having access to large amounts of good quality data. While using publicly available datasets has its place, for tasks like medical diagnosis or financial risk assessment, we need access to private data during both training and inference. 

When performing machine learning tasks in the cloud, enterprises understandably have concerns about the potential compromise of their sensitive data privacy as well as their model’s intellectual property. Additionally, stringent industry regulations often prohibit the sharing of such data. This makes it difficult, or outright impossible, to utilise large amounts of valuable private data, limiting the true potential of AI across crucial domains.

Confidential AI tackles this problem head on, providing a hardware-rooted execution environment that spans both the CPU and GPU. This environment enhances the protection of AI data and code at runtime by helping to safeguard it against privileged system software (such as the hypervisor or host OS) and privileged operators in the cloud.

To address this challenge, we are happy to announce today the preview of Ubuntu confidential AI on Azure, with NVIDIA H100 Tensor Core GPUs. This solution is built on Ubuntu 22.04 confidential VMs (CVMs) running on 4th Gen AMD EPYC processors with SEV-SNP, paired with NVIDIA H100 GPUs. Ubuntu 22.04 is the only operating system to support this offering on Azure.

How confidential AI works

Confidential AI is made possible thanks to confidential computing, a game-changing technology that represents a significant departure from the traditional threat model of public clouds. In the past, vulnerabilities within the extensive codebase of the cloud’s privileged system software, including the operating system, hypervisor, and firmware, posed a constant risk to the confidentiality and integrity of running code and data. Similarly, unauthorised access by a malicious cloud administrator could compromise the security of your virtual machine (VM) and its platform.

Ubuntu CVMs are here to give you back control over the security guarantees of your VMs. They enable you to run your workload within a hardware-protected Trusted Execution Environment (TEE). Such secure and isolated environments are purpose-built to prevent unauthorised access or alterations to applications and data at run-time, thereby enhancing security for organisations managing sensitive and regulated data.
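As a quick illustration of what running inside a hardware-protected TEE looks like from the guest's perspective, a workload can probe for the SEV-SNP guest device. This is a minimal sketch, assuming the Linux SEV-SNP guest driver (which exposes /dev/sev-guest inside a CVM on recent Ubuntu kernels); it is a presence check only, not a security guarantee.

```python
from pathlib import Path


def running_in_snp_guest() -> bool:
    """Heuristic check for an AMD SEV-SNP confidential guest.

    The SEV-SNP guest driver exposes /dev/sev-guest inside the CVM;
    on a regular VM or bare-metal host the device is absent.
    """
    return Path("/dev/sev-guest").exists()


if __name__ == "__main__":
    where = "inside" if running_in_snp_guest() else "outside"
    print(f"Running {where} an SEV-SNP confidential guest")
```

A real workload would go further and request a signed attestation report through this device rather than merely checking that it exists.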

As such, CVMs’ primary goal is to safeguard your guest workloads from various potential software threats, including the virtual-machine manager and other non-CVM software on the platform. CVMs also enhance your workload’s security against specific physical access attacks on platform memory, including offline dynamic random access memory (DRAM) analysis such as cold-boot attacks and active attacks on DRAM interfaces.

From confidential computing to confidential AI

While confidential computing efforts have historically focused primarily on CPUs, the advent of NVIDIA H100 GPUs with confidentiality computing capabilities opens up new possibilities for extending this security paradigm to GPUs as well. The Azure solution, which integrates both CPU and GPU components, is what makes confidential AI achievable. At a high level, this solution relies on the following components:

  • CPU-TEE: Ubuntu confidential VMs running on 4th Gen AMD EPYC processors with SEV-SNP protect the workload’s computation while it is on the CPU:
    • Run-time confidentiality: the DRAM of your Ubuntu CVMs is kept encrypted thanks to a new AES-128 hardware encryption engine that sits within the CPU’s memory controller. This engine encrypts and decrypts memory pages on every memory read or write operation. Instead of residing in plain text in system memory, workload code and data are encrypted using a hardware-managed encryption key. This encryption and decryption process happens seamlessly within the CPU, ensuring strong memory isolation for confidential workloads.
    • Run-time integrity: Ubuntu CVMs make use of the new AMD SEV-SNP instructions and data structures that allow auditing of security-sensitive tasks typically carried out by privileged system software, such as memory management and access to platform devices. For example, when reading memory pages mapped to confidential workloads, these new instructions also provide information about the last value written into the page. This helps prevent data corruption and replay attacks by detecting unauthorised modifications to memory pages.
  • GPU-TEE: NVIDIA H100 Tensor Core GPUs, which protect the confidentiality and integrity of the workload’s computation within the GPU.
  • Encrypted PCIe communication between the CPU and the GPU.
  • Attestation: Enables a relying party, whether it’s the owner of the workload or a user of the services provided by the workload, to cryptographically verify the security claims of both the CPU and GPU TEEs.
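The attestation step above can be sketched at a high level as a set of checks a relying party performs before releasing secrets to the workload. The snippet below is purely conceptual: `AttestationReport`, `verify_tee`, and the measurement values are illustrative stand-ins, not the actual AMD SEV-SNP report format or the NVIDIA GPU attestation API.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class AttestationReport:
    # Illustrative stand-in for a hardware-signed report; real SEV-SNP
    # and H100 reports carry many more fields and a real signature.
    measurement: bytes     # hash of the launched code/firmware
    report_data: bytes     # nonce supplied by the relying party
    signature_valid: bool  # stands in for cert-chain verification


def verify_tee(report: AttestationReport,
               expected_measurement: bytes,
               nonce: bytes) -> bool:
    """Relying-party checks, applied per TEE (CPU or GPU):
    1. the signature chains to the vendor's root of trust,
    2. the measurement matches the workload we expect,
    3. the report is fresh (it binds our nonce)."""
    return (report.signature_valid
            and report.measurement == expected_measurement
            and report.report_data == nonce)


# Example: verify CPU and GPU evidence before releasing secrets.
nonce = hashlib.sha256(b"relying-party-nonce").digest()
expected = hashlib.sha256(b"known-good workload image").digest()
cpu_report = AttestationReport(expected, nonce, signature_valid=True)
gpu_report = AttestationReport(expected, nonce, signature_valid=True)
trusted = (verify_tee(cpu_report, expected, nonce)
           and verify_tee(gpu_report, expected, nonce))
print("Release secrets to workload:", trusted)
```

The key design point is that secrets (data decryption keys, model weights) are released only after both the CPU and the GPU evidence verify, so a workload running outside the combined TEE never sees them.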

By integrating these components into a cohesive solution, confidential AI becomes not only feasible but also practical, allowing organisations to harness the power of AI while maintaining the highest standards of data security and confidentiality. Confidential AI can then be further augmented with privacy-enhancing techniques, such as differential privacy, which limit what can be inferred about individual records from the workload’s outputs.
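To make the differential-privacy idea concrete, a differentially private release of a simple counting query can be sketched in a few lines. The record values and the epsilon budget below are made up for illustration; production systems would use a vetted library rather than this hand-rolled sampler.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF of a uniform draw."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def private_count(records, predicate, epsilon: float) -> float:
    """Release a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one record changes
    it by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Example (made-up data): noisy count of patients over 60.
ages = [34, 67, 72, 55, 61, 48, 80]
noisy = private_count(ages, lambda a: a > 60, epsilon=1.0)
print(f"Noisy count: {noisy:.2f}")
```

Smaller epsilon values add more noise and give stronger privacy; the noisy answer stays close to the true count of 4 on average while masking any single individual's contribution.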

Build your confidential AI workloads with Ubuntu today

Confidential AI can support numerous use cases across the entire lifecycle of building and deploying an AI application. For example, you can use Ubuntu CVMs during the training phase to protect your data, your model’s intellectual property, and its weights.

Confidential AI can also be beneficial for fine-tuning large language models, where enterprises need to use private data to optimise generic models and improve their performance for their specific industries.

We firmly believe that confidential AI represents a pivotal opportunity to unleash the full potential of AI, especially for industries that need to deal with security-sensitive data, such as healthcare and finance. We invite you to join us on this transformative journey with Ubuntu. Together, we can chart new horizons in AI innovation while steadfastly maintaining the highest standards of privacy and security for sensitive data.

Join us today and sign up for the Azure preview of confidential AI with Ubuntu. 

Share your questions, use cases, and feedback with us. We’re eager to hear from you and collaborate on shaping the future of AI security and innovation.
