
Ubuntu EKS Platform Images for k8s 1.19

Joshua Powers

on 18 February 2021



This article originally appeared on Cody Shepherd’s blog.

1.19 Platform Images Now Live

Following the general availability of Kubernetes 1.19 support in Amazon EKS, EKS-optimized Ubuntu images for 1.19 node groups have been released. The AMI ID of this image for each region can be found on the official site for Ubuntu EKS images.
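If you prefer to look up the AMI programmatically rather than from the web page, an AWS CLI query along the following lines can work. The owner ID below is Canonical's publishing account, but the exact image name pattern is an assumption and may need adjusting to match the published names:

  # find the most recently published Ubuntu EKS 1.19 image in a region
  aws ec2 describe-images \
    --region <aws region, e.g. us-west-2> \
    --owners 099720109477 \
    --filters "Name=name,Values=ubuntu-eks/k8s_1.19/*" \
    --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
    --output text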

Like the 1.18 platform AMIs, these images are minimized Ubuntu 20.04 LTS (Focal Fossa) server cloud images that include a few extra customized utilities for interacting with EKS: the kubectl-eks snap, which simply pins kubectl to a channel appropriate to the platform version, as well as pinned versions of the CNI and auth tools.

These packages are all pinned because, although unattended-upgrades is enabled on the EKS-optimized Ubuntu images, the recommended upgrade path for Kubernetes compatibility is to launch the latest AMI rather than to upgrade in place.
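As a quick sanity check on a running node, you can confirm which channels those pinned snaps are tracking; the snap name below is taken from the description above, so adjust it if it differs on your image:

  # list installed snaps and the channel each one tracks
  snap list

  # show full channel and version details for the kubectl wrapper snap
  snap info kubectl-eks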

Launch Paths for Ubuntu in EKS

It is worth taking this opportunity to describe the currently best-supported methods for launching EKS clusters and nodegroups using the Ubuntu images identified above. One of my previous posts described in detail how to use CloudFormation to launch self-managed nodegroups under an EKS cluster, and although I’ve tried my best to keep those instructions up to date, there are a couple of far easier ways to use Ubuntu with EKS.

eksctl

While not an AWS product, eksctl is a tool that appears throughout the AWS EKS documentation and is well-supported, open-source, and under active development. It removes a huge portion of the manual configuration and tedium involved in launching EKS clusters and nodegroups by any other method.

Using eksctl to launch a 1.19 cluster that uses the EKS-optimized Ubuntu image and enables key-based SSH access to the node instances is as simple as the following example command:

eksctl create cluster \
  --profile <profile> \
  --name <cluster name> \
  --version 1.19 \
  --nodegroup-name <nodegroup name> \
  --node-type <instance type, e.g. m5.large> \
  --nodes <number of nodes, e.g. 10> \
  --nodes-min <number of nodes, e.g. 5> \
  --nodes-max <number of nodes, e.g. 15> \
  --node-ami <relevant Ubuntu ami id for your region, e.g. ami-0fc3ca5b2c5e1fb11> \
  --node-ami-family Ubuntu2004 \
  --region <aws region, e.g. us-west-2> \
  --ssh-access \
  --ssh-public-key <ec2 public key name>

eksctl will take care of creating both cluster and nodegroup, as well as VPCs and security groups, and will also update your kubeconfig file. The tool also produces helpful logging and error messages in the event that something goes amiss.
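Once the command finishes, a quick way to confirm the cluster is usable (assuming eksctl has updated your kubeconfig as described) is:

  # nodes should report Ready and show the expected Ubuntu OS image
  kubectl get nodes -o wide

  # the core add-ons (coredns, kube-proxy, aws-node) should be Running
  kubectl get pods -n kube-system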

Managed Nodegroups with Launch Templates

Using managed nodegroups is a nice middle ground between the fully manual self-managed nodegroup method described in my previous post and the almost totally hands-free use of eksctl described above. And, judging by the results of end-to-end tests performed with the kubernetes-test-eks snap, this method also produces a nodegroup with the best Conformance characteristics of any method I’ve described thus far.

Launching a managed nodegroup requires the following steps, which assume you’ve already created a cluster, VPC, and security group as detailed in my previous post (an equivalent AWS CLI sketch follows the list):

  1. In your AWS web console, navigate to the EC2 service, then to ‘Launch Templates’ under the Instances submenu in the sidebar. Click ‘Create Launch Template’ and use the following key configuration options:
    1. AMI: choose the ami id of the EKS-optimized Ubuntu image for your region
    2. Instance type: choose the shape you want for your nodes
    3. Key pair: this will enable SSH’ing to the node instance if necessary
    4. Under Advanced Settings, for ‘User Data’, add the following script, making sure to use the actual name of the cluster you’ve created:

       #!/usr/bin/bash
       sudo /etc/eks/bootstrap.sh <cluster name>
  2. Navigate to the IAM service, click Roles, and create a new IAM role with the following AWS-Managed policies, naming the role whatever you like:
    • AmazonEKSWorkerNodePolicy
    • AmazonEC2ContainerRegistryReadOnly
    • AmazonEKS_CNI_Policy
  3. Navigate to the EKS service, click Clusters, click the cluster you’ve created, and under the Compute tab click ‘Add Node Group’, using the following configurations:
    1. Choose the newly-created IAM role
    2. Enable the ‘Use launch template’ toggle and select the template you’ve created above; click Next
    3. Set the options for the number of nodes to whatever you like; click Next
    4. Use the default subnets for your VPC; click Next; click Create
  4. Voilà! The nodegroup will have a status of “Creating” that will likely last several minutes. Once it is finished, if you’ve updated your kubeconfig to point to your cluster, you can verify your nodes are ready via kubectl get nodes.
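For completeness, once the launch template and IAM role from steps 1 and 2 exist, the same managed nodegroup can also be created from the command line. The following AWS CLI sketch is a rough equivalent of the console steps above, with every angle-bracketed value a placeholder for your own resources:

  # create a managed nodegroup backed by the launch template created in step 1
  aws eks create-nodegroup \
    --region <aws region, e.g. us-west-2> \
    --cluster-name <cluster name> \
    --nodegroup-name <nodegroup name> \
    --node-role <ARN of the IAM role created in step 2> \
    --subnets <subnet-id-1> <subnet-id-2> \
    --scaling-config minSize=<min>,maxSize=<max>,desiredSize=<desired> \
    --launch-template name=<launch template name>,version=<version, e.g. 1>

Because the launch template already specifies the AMI and user data, no --ami-type is passed here.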

Happy piloting!
