
Getting Started with Serverless Computing using Knative



What is Knative?

Knative is a Kubernetes-based platform to build, deploy, and manage modern serverless workloads. There are three key features in Knative that help deliver its serverless mission:

  • Build – Provides easy-to-use, simple source-to-container builds. You benefit by leveraging standard build mechanisms and constructs.
  • Serving – Knative takes care of the details of networking, autoscaling (even to zero), and revision tracking. You just have to focus on your core logic.
  • Eventing – Universal subscription, delivery, and management of events. You can build modern apps by attaching containers to a data stream with declarative event connectivity and a developer-friendly object model.

What is Serverless?

Serverless computing is a style of computing that simplifies software development by separating code development from code packaging and deployment. You can think of serverless computing as synonymous with function as a service (FaaS). 

Serverless has at least three parts, so it can mean something different depending on your role and which part you look at: the infrastructure used to run your code, the framework and tools (middleware) that hide that infrastructure, and your code, which might be coupled to the middleware. In practice, serverless computing can provide a quicker, easier path to building microservices, because the platform handles the complex scaling, monitoring, and availability aspects of cloud-native computing.

Why does serverless matter?

In a few words: productivity and innovation velocity. Serverless can help your developers and operations teams become more productive, and more productive engineers are happier and innovate faster. How does serverless improve productivity?

Before I oversell it, I should mention that there can be difficulties. As with any new paradigm, some aspects of computing are made simpler, and other aspects become more difficult. What’s simpler? The act of building and deploying software. What’s more difficult? You will now have a lot more functions to monitor and manage. Make sure the benefits exceed the disadvantages before committing to a large-scale project.

How does it work?

Knative is Kubernetes-based, which means it leverages extensible features in Kubernetes. Custom resources (CRs) are a common way to extend Kubernetes – they allow you to define new resource types and register operators that listen for events related to those resources.
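
For example, Knative Serving adds a Service resource. A minimal manifest looks roughly like the following – a sketch based on the upstream helloworld-go sample; the exact apiVersion and sample image may differ between Knative releases:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"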

In addition to defining its own resources, Knative leverages Istio to front several of its features, primarily serving and eventing. We’ll explore the architecture in more detail in a future post. For a quick look, see the Knative documentation for serving and for eventing.

Getting Started with Knative on Ubuntu with MicroK8s

Regardless of the desktop or server operating system you use, my preference is to use multipass to quickly set up an Ubuntu VM. There are a few reasons for using an Ubuntu VM:

  1. These instructions have a greater chance of working 🙂
  2. All installed software is confined to the VM
  3. This means you can easily delete everything by deleting the VM, or stop the VM to stop all associated running processes
  4. These instructions will work in any VM – on your laptop or in the cloud
  5. You can constrain the resources consumed. This is important – if you turn on Knative monitoring, a lot more memory and CPU is consumed, which could overload your system if you don’t specify constraints

MicroK8s has native support for Knative and is the best way to get started with all of the components of Knative – build, serving, eventing, and monitoring. However, for more control over the Knative version and/or components, instructions are presented for that route as well. In the custom route, we don’t install the monitoring components, which saves about 6 GB of memory.

0. [optional] Create a VM

Use multipass to create an Ubuntu VM. This works on Windows, Mac, and Linux once multipass is installed. Alternatively, launch an Ubuntu VM in your environment of choice, such as AWS, GCP, Azure, VMware, OpenStack, etc.

multipass launch --mem 10G --cpus 6 --disk 20G --name knative
multipass shell knative
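
Because everything is confined to the VM, cleanup later is just a matter of stopping or deleting it with multipass's standard commands:

# Stop the VM (and everything running inside it) without deleting it
multipass stop knative

# Or remove it entirely and reclaim the disk space
multipass delete knative
multipass purge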

1. Install MicroK8s

sudo snap install --classic microk8s
microk8s.status --wait-ready
sudo snap alias microk8s.kubectl kubectl
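
Before enabling Knative, it's worth a quick check that the cluster is up and that the aliased kubectl can reach it – these are plain kubectl commands, nothing Knative-specific:

# The node should report Ready and the system pods should be Running
kubectl get nodes
kubectl get pods --all-namespaces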

2. Install Knative (everything)

This will install the MicroK8s default version and configuration of Knative, including all components of Knative. Skip this step if you want to customise the install, choosing your own version or set of components.

# This is all you need!
echo 'N;' | microk8s.enable knative

# check status of knative pods
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
kubectl get pods -n knative-eventing
kubectl get pods -n knative-monitoring

# You can review the Knative version and configuration by looking at the configuration files.
# Here’s an example for Knative serving.
cat /snap/microk8s/current/actions/knative/serving.yaml
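
Rather than polling the pod lists by hand, you can block until everything in a namespace reports Ready – a standard kubectl feature, shown here for the serving namespace:

# Wait up to 5 minutes for all pods in knative-serving to become Ready
kubectl wait --for=condition=Ready pods --all \
   -n knative-serving --timeout=300s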

3. Install Knative (custom)

Choose this path for tighter control over the Knative version and/or components.

######################
# Install Istio first
######################
echo 'N;' | microk8s.enable istio

# Ensure Istio is running before proceeding
# All pods should be either 'Running' or 'Completed'
kubectl get pods -n istio-system

######################
# Install Knative - select version and components
######################
export KNATIVE_071=releases/download/v0.7.1
export KNATIVE_070=releases/download/v0.7.0
export KNATIVE_URL=https://github.com/knative

# if you see errors, re-run next command
kubectl apply --selector knative.dev/crd-install=true \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/serving.yaml \
   --filename ${KNATIVE_URL}/build/${KNATIVE_070}/build.yaml \
   --filename ${KNATIVE_URL}/eventing/${KNATIVE_071}/release.yaml \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/monitoring.yaml
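
# (Optional check) The first pass only registers Knative's custom resource
# definitions; you can confirm they were created before the full apply below
kubectl get crds | grep knative.dev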

# After this command, total of ~5G of disk and ~2G of memory used
kubectl apply \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/serving.yaml \
   --filename ${KNATIVE_URL}/build/${KNATIVE_070}/build.yaml \
   --filename ${KNATIVE_URL}/eventing/${KNATIVE_071}/release.yaml

# ensure all pods are running
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
kubectl get pods -n knative-eventing

############
# Monitoring wasn't installed, but you can install it with this command
# Note: it will consume about 6GB more memory and additional CPU
############
kubectl apply \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/monitoring.yaml
# check that all monitoring pods are running
kubectl get pods -n knative-monitoring
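
If you did install the monitoring bundle, the Grafana dashboards can be reached from your machine with a port-forward. The selector and port below assume the upstream Knative monitoring manifests, where the Grafana pod is labelled app=grafana and listens on its default port 3000:

# Forward the Grafana pod's port 3000 to localhost:3000
kubectl port-forward --namespace knative-monitoring \
   $(kubectl get pods --namespace knative-monitoring \
     --selector=app=grafana --output=jsonpath="{.items..metadata.name}") \
   3000
# Then browse to http://localhost:3000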

4. Hello World

From here you can experiment with several of the hello world examples in Knative. Each component – build, serving, eventing – can be used independently. At the moment, the serving examples leverage local (Docker) builds rather than Knative build. Why? The local build is still somewhat simpler than a Knative build. I imagine this will change over time, showing better integration between the components. However, in the next post, we’ll show a complete solution that relies on Knative only. You’ll be led through an example that uses Knative to build and serve a simple function, and a local container repository to host images.

For now, and to test a simple example, review the serving hello world samples. If you enjoy Java and Spark, try the helloworld-java-spark example on GitHub.
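
As a rough sketch of the flow – assuming you saved the sample Service manifest from earlier as service.yaml, and that your cluster still uses Knative's default example.com domain – deploying and calling a service comes down to a few commands:

# Deploy the Knative Service
kubectl apply --filename service.yaml

# Knative assigns the service a URL; 'ksvc' is the short name for Knative Services
kubectl get ksvc helloworld-go

# Call the service through the Istio ingress gateway from inside the VM,
# using the host name Knative assigned to the service
GATEWAY=$(kubectl get svc istio-ingressgateway -n istio-system \
   -o jsonpath='{.spec.clusterIP}')
curl -H "Host: helloworld-go.default.example.com" http://${GATEWAY}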

5. Steps Summary

############
# Launch a VM capable of running all four components of Knative
############
multipass launch --mem 10G --cpus 6 --disk 20G --name knative
multipass shell knative

############
# Install MicroK8s
############
sudo snap install --classic microk8s
microk8s.status --wait-ready
sudo snap alias microk8s.kubectl kubectl

############
# Install Knative - everything
# NOTE: for a custom Knative install, skip this step
############
echo 'N;' | microk8s.enable knative

############
# --OR-- Install custom Knative
############
# Install Istio
echo 'N;' | microk8s.enable istio
# check status of istio pods
kubectl get pods -n istio-system

# Install essential components of Knative
export KNATIVE_071=releases/download/v0.7.1
export KNATIVE_070=releases/download/v0.7.0
export KNATIVE_URL=https://github.com/knative

# if you see errors, re-run next command as is
kubectl apply --selector knative.dev/crd-install=true \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/serving.yaml \
   --filename ${KNATIVE_URL}/build/${KNATIVE_070}/build.yaml \
   --filename ${KNATIVE_URL}/eventing/${KNATIVE_071}/release.yaml \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/monitoring.yaml

# After this command, total of ~5G of disk and ~2G of memory used
kubectl apply \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/serving.yaml \
   --filename ${KNATIVE_URL}/build/${KNATIVE_070}/build.yaml \
   --filename ${KNATIVE_URL}/eventing/${KNATIVE_071}/release.yaml

# ensure all pods are running
kubectl get pods -n knative-serving
kubectl get pods -n knative-build
kubectl get pods -n knative-eventing

############
# Install monitoring if needed – requires an extra 6GB of memory
############
kubectl apply \
   --filename ${KNATIVE_URL}/serving/${KNATIVE_071}/monitoring.yaml
# check that all monitoring pods are running
kubectl get pods -n knative-monitoring

Next Steps

We’ve only scratched the surface of Knative. The next set of posts will focus on each major feature of Knative. The posts will include getting started guides with MicroK8s and Knative. That combination works great for local discovery, development, and as part of your continuous integration pipeline.
