Onboarding edge applications on the dev environment

anaqvi

on 11 October 2019



Adoption of edge computing is taking hold as organisations realise the need for highly distributed applications, services and data at the extremes of a network. Whereas data historically travelled back to a centralised location, data processing can now occur locally, allowing for real-time analytics, improved connectivity and reduced latency, and ushering in the ability to harness newer technologies that thrive in the micro data centre environment.

In an earlier post, we discussed the importance of choosing the right primitives for edge computing services. When looking at use cases calling for ultra-low-latency compute, Kubernetes and containers running on bare metal are ideal for edge deployments because they offer direct access to the kernel, workload portability, easy upgrades and a wide selection of CNI choices.

While offering clear advantages, setting up Kubernetes for edge workload development can be a difficult task – time and effort better spent on actual development. The steps below walk you through an end-to-end deployment of a sample edge application running on top of Kubernetes and optimised for a tight latency budget. The deployed stack consists of Ubuntu 18.04 as the host operating system, Kubernetes v1.15.3 (MicroK8s) on bare metal, the MetalLB load balancer and CoreDNS to serve external requests.

Let’s roll

Summary of steps:

  1. Install MicroK8s
  2. Add MetalLB
  3. Add a simple service – CoreDNS

Step 1: Install MicroK8s

Let’s start with the Kubernetes deployment on the development workstation, using MicroK8s to pull the latest stable release of Kubernetes.

$ sudo snap install microk8s --classic
microk8s v1.15.3 from Canonical✓ installed
$ snap list microk8s
Name      Version  Rev  Tracking  Publisher   Notes
microk8s  v1.15.3  826  stable    canonical✓  classic
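
Before moving on, it's worth a quick sanity check that the cluster is up and the node has registered. A minimal check, assuming the default microk8s.* command aliases provided by the snap (--wait-ready blocks until the cluster reports ready):

# Wait for the cluster to come up, then confirm the node is Ready
$ microk8s.status --wait-ready
$ microk8s.kubectl get nodes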

Step 2: Add MetalLB

As I’m deploying Kubernetes on a bare-metal node, I chose to utilise MetalLB, since I can’t rely on a cloud to provide an LBaaS service. MetalLB is a fascinating project supporting both L2 and BGP modes of operation, and depending on your use case, it might just be the thing for your bare-metal development needs.

$ microk8s.kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
namespace/metallb-system created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
daemonset.apps/speaker created
deployment.apps/controller created
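
Before configuring anything, it's worth confirming that the MetalLB controller and speaker pods came up. A quick check (pod names will differ on your system):

$ microk8s.kubectl get pods -n metallb-system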

Once installed, you need to update the iptables configuration to allow IP forwarding, and configure MetalLB with the networking mode and the address pool you want to use for load balancing. The config file needs to be created manually; see Listing 1 below for reference.

$ sudo iptables -P FORWARD ACCEPT
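
Note that this iptables change does not survive a reboot. One way to persist it on Ubuntu – an assumption on my part, any mechanism that restores the rule at boot will do – is the iptables-persistent package:

# Save the current rules so they are restored at boot
$ sudo apt install iptables-persistent
$ sudo netfilter-persistent save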

Listing 1: MetalLB configuration (metallb-config.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.2.32/28
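
For completeness, the same pool could be advertised in BGP mode instead of L2. A minimal sketch against the MetalLB v0.7 configuration format – the peer address and AS numbers below are placeholders you would replace with your router's actual values:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.2.1   # placeholder: your upstream BGP router
      peer-asn: 64501          # placeholder: the router's AS number
      my-asn: 64500            # placeholder: the AS number MetalLB speaks as
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 10.0.2.32/28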

Step 3: Add a simple service – CoreDNS

Now that you have your config file ready, you can continue with the CoreDNS sample workload configuration. Especially for edge use cases, you usually want fine-grained control over how your application is exposed to the rest of the world – the ports as well as the actual IP address you request from your load balancer. For the purpose of this exercise, I use the .35 address from the 10.0.2.32/28 subnet and create a Kubernetes service using this IP; note that the requested loadBalancerIP must fall within the address pool defined in Listing 1.

Listing 2: CoreDNS external service definition (coredns-service.yaml)

apiVersion: v1
kind: Service
metadata:
  name: coredns
spec:
  ports:
  - name: coredns
    port: 53
    protocol: UDP
    targetPort: 53
  selector:
    app: coredns
  type: LoadBalancer
  loadBalancerIP: 10.0.2.35
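
Note that this service exposes UDP only. The CoreDNS deployment defined later (Listing 4) also listens on TCP port 53, but Kubernetes at this version refuses to mix UDP and TCP in a single LoadBalancer service, so TCP would need a second service. A sketch, where the coredns-tcp name and the .36 address (another free address from the pool) are my own choices for illustration:

apiVersion: v1
kind: Service
metadata:
  name: coredns-tcp
spec:
  ports:
  - name: coredns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    app: coredns
  type: LoadBalancer
  loadBalancerIP: 10.0.2.36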

For the workload configuration itself, I use a simple DNS cache configuration with logging and forwarding to Google’s open resolver service.

Listing 3: CoreDNS ConfigMap (coredns-configmap.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
data:
  Corefile: |
    .:53 {
        forward . 8.8.8.8
        cache
        log
    }
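
The bare cache directive relies on CoreDNS defaults; the plugin also accepts an explicit TTL cap if you want tighter control over cache retention. A minimal variant, where the 300-second value is an arbitrary choice for illustration:

    .:53 {
        forward . 8.8.8.8
        cache 300
        log
    }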

Finally, Listing 4 describes the Kubernetes Deployment itself, calling for three workload replicas, the latest CoreDNS image and the configuration defined in the ConfigMap above.

Listing 4: CoreDNS Deployment definition (coredns-deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns-deployment
  labels:
    app: coredns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: coredns
  template:
    metadata:
      labels:
        app: coredns
    spec:
      containers:
      - name: coredns
        image: coredns/coredns:latest
        imagePullPolicy: IfNotPresent
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
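
One design note on Listing 4: the latest tag keeps the walkthrough simple, but for anything beyond experimentation you would likely pin a specific image tag (for example coredns/coredns:1.6.4, a release current around the time of writing) so that new replicas and redeployments remain reproducible.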

Deploy

With all the service components defined, prepared and configured, you’re ready to start the actual deployment and verify the status of Kubernetes pods and services.

$ microk8s.kubectl apply -f metallb-config.yaml 
configmap/config created
$ microk8s.kubectl apply -f coredns-service.yaml
service/coredns created
$ microk8s.kubectl apply -f coredns-configmap.yaml
configmap/coredns created
$ microk8s.kubectl apply -f coredns-deployment.yaml
deployment.apps/coredns-deployment created
$ microk8s.kubectl get po,svc --all-namespaces
NAMESPACE        NAME                                     READY   STATUS    RESTARTS   AGE
default          pod/coredns-deployment-9f8664bfb-kgn7b   1/1     Running   0          10s
default          pod/coredns-deployment-9f8664bfb-lcrfc   1/1     Running   0          10s
default          pod/coredns-deployment-9f8664bfb-n4ht6   1/1     Running   0          10s
metallb-system   pod/controller-7cc9c87cfb-bsrwx          1/1     Running   0          4h8m
metallb-system   pod/speaker-s9zz7                        1/1     Running   0          4h8m

NAMESPACE        NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
default          service/coredns      LoadBalancer   10.152.183.89   10.0.2.35     53:31338/UDP   34m
default          service/kubernetes   ClusterIP      10.152.183.1    <none>        443/TCP        4h29m
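
At this point you can also double-check that MetalLB honoured the requested address; the events at the bottom of the service description should show the IP allocation:

$ microk8s.kubectl describe svc coredns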

Once all the containers are fully operational, you can evaluate how your new end-to-end service is performing. As you can see below, the very first request takes around 50 ms to be answered (which aligns with the usual latency between my ISP's access network and Google's DNS infrastructure); subsequent requests show a significant latency reduction, as expected from a local DNS caching instance.

$ host -a www.ubuntu.com  10.0.2.35
Trying "www.ubuntu.com"
Using domain server:
Name: 10.0.2.35
Address: 10.0.2.35#53
[...]
Received 288 bytes from 10.0.2.35#53 in 50 ms
$ host -a www.ubuntu.com  10.0.2.35
Trying "www.ubuntu.com"
Using domain server:
Name: 10.0.2.35
Address: 10.0.2.35#53
[...]
Received 288 bytes from 10.0.2.35#53 in 0 ms
$ host -a www.ubuntu.com  10.0.2.35
Trying "www.ubuntu.com"
Using domain server:
Name: 10.0.2.35
Address: 10.0.2.35#53
[...]
Received 288 bytes from 10.0.2.35#53 in 1 ms
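
If you prefer a single-command measurement, dig reports the query time directly in its statistics section (10.0.2.35 being the load-balanced service IP from earlier):

$ dig @10.0.2.35 www.ubuntu.com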

CoreDNS is an example of a simple use case for distributed edge computing, showing how network distance and latency can be optimised for a better user experience by changing service proximity. The same principles apply to more demanding services such as AR/VR, GPGPU-based AI inference and content distribution networks.

The choice of proper technological primitives, the flexibility to manage your infrastructure to meet service requirements, and the processes to manage distributed edge resources at scale will become critical factors for edge cloud adoption. This is where MicroK8s comes in: it reduces the complexity and cost of development and deployment without sacrificing quality.

End Note

So you’ve just onboarded an edge application – now what? Take MicroK8s for a spin with your use case(s), or just try to break stuff. If you’d like to contribute or request features or enhancements, please shout out on our GitHub, the #microk8s Slack channel or the Kubernetes forum.

