Edge AI in a 5G world

Alex Cattle

on 6 February 2020

Deploying AI/ML solutions in latency-sensitive use cases requires many businesses to take a new approach to solution architecture.

Fast computational units (e.g. GPUs) and low-latency connections (e.g. 5G) allow AI/ML models to be executed outside the sensors and actuators themselves (e.g. cameras and robotic arms). This reduces costs through lower hardware complexity and through compute resource sharing across the IoT fleet.
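As a rough illustration of this pattern, the sketch below shows an edge device sending a camera frame to a GPU inference server co-located on the local network instead of running the model on-device. The endpoint URL, response format and latency budget are assumptions for illustration only, not a specific product API.

# Minimal sketch of off-device inference: an edge client posts one JPEG frame
# to a hypothetical GPU inference server reachable over the factory network.
import requests

# Hypothetical co-located GPU inference endpoint (illustrative, not a real service)
INFERENCE_URL = "http://gpu-node.factory.local:8080/v1/infer"

def classify_frame(jpeg_bytes: bytes, timeout_s: float = 0.05) -> dict:
    """Send a JPEG-encoded frame and return the model's prediction.

    The tight timeout reflects the kind of latency budget a 5G or wired
    factory-floor link would need to meet for a responsive control loop.
    """
    response = requests.post(
        INFERENCE_URL,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        timeout=timeout_s,
    )
    response.raise_for_status()
    # Assumed response shape, e.g. {"label": "defect", "confidence": 0.97}
    return response.json()

if __name__ == "__main__":
    with open("frame.jpg", "rb") as f:
        print(classify_frame(f.read()))

Because the GPU node serves many devices, each sensor on the fleet only needs enough hardware to capture data and act on the returned result.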

Strict AI responsiveness requirements that previously demanded embedding models directly in IoT devices can now be met with GPUs co-located with the sensors and actuators (e.g. in the same factory building). One example is the robot ‘dummification’ trend currently observed in factory robotics, which aims to reduce robot unit costs and simplify fleet management.

In this webinar, we will explore real-life scenarios in which GPUs and low-latency connectivity unlock solutions that were previously prohibitively expensive, enabling businesses to put them in place and lead the fourth industrial revolution.

Watch the webinar

