Although OpenStack’s mission statement has not fundamentally changed since the project’s inception in 2010, we have seen many different interpretations of the technology over the years. One of them was that OpenStack would be an all-inclusive anything-as-a-service, in a striking parallel to the many definitions the “cloud” assumed at the time. At the OpenStack Developer Summit in Sydney, we found a project returning to its roots: scalable Infrastructure-as-a-Service. It turns out that this resonates well with its user base.
Although application developers have not endorsed the OpenStack API ecosystem as a whole, the OpenStack Foundation still notes significant increases in deployments (a 95% increase compared to 2016), and even in public cloud use cases. It may yet turn out that containers, which initially brought some consternation and many proclamations of the end of OpenStack, will actually co-exist with it. This makes sense: virtual machines are useful constructs to work with, and API-controlled management of network and storage primitives in a multi-tenant environment provides a level of sophistication and control currently not available in the context of application containers.
Stable, reliable and secure IaaS + Kubernetes: we are onto something.
Adding Kubernetes to the mix provides the “no vendor lock-in” so highly sought after by enterprises and telcos, and we at Canonical strive to provide the best experience using Kubernetes anywhere – on premise on OpenStack, on bare metal with MAAS, and in the public clouds at Amazon, Microsoft or Google.
Together, both technologies fulfil the promise for developers and business alike: infrastructure as code, with flexible orchestration and scale-out models, in a multi-cloud setting.
Effective and efficient bare-metal management is key.
However, this journey towards software-controlled infrastructure faces challenges in the form of time and money. If it takes too long or costs too much, your project is in peril.
Hence, our recommendation is to offer IaaS and Kubernetes quickly and ensure that your cloud is consumable by your developers right away. Avoid the situation where they buy into a vendor lock-in scenario on a public cloud because your IaaS offering was delayed.
MAAS is a crucial ingredient for your success, as some applications require bare metal or containers on bare metal; a scalable provisioning system enables developers immediately and positions your IaaS as the premier choice for their workloads.
Across three sessions, Canonical Founder and CEO Mark Shuttleworth gave a blueprint for a successful OpenStack implementation, outlining the two most impactful obstacles for the success of your cloud strategy:
- If you are not providing a more cost-attractive solution than the alternatives (VMware or public clouds), your OpenStack installation will not win the business support you need to sustain the effort.
- If you are unable to provide your developers with the latest features or fix issues due to operational constraints, you end up in a “stuckstack” situation, and your crucial constituency will look elsewhere for innovation.
It follows that if you are building your cloud, there are only two measures of success:
- Control: can you exercise full control over your cloud and manage its complete lifecycle?
- Total cost of ownership: how much does it cost to run a VM in your environment?
The most significant factor in high TCO per VM is consulting costs.
As long as your OpenStack is intended to provide stable IaaS services, there is no need to spend hundreds of thousands of dollars on experts who tune your cloud. It is more important to provide a stable IaaS and Kubernetes offering to your developers as quickly as possible.
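As a rough illustration of the TCO-per-VM arithmetic, consider the sketch below. All figures and the function name are hypothetical assumptions for illustration, not Canonical pricing; the point is simply how heavily recurring consulting costs weigh on the per-VM number.

```python
# Hypothetical back-of-the-envelope TCO-per-VM estimate.
# All figures below are illustrative assumptions, not real pricing.

def tco_per_vm(hardware_cost, ops_cost_per_year, consulting_cost_per_year,
               years, vm_count):
    """Total cost of ownership per VM over the cloud's planned lifetime."""
    total = hardware_cost + years * (ops_cost_per_year + consulting_cost_per_year)
    return total / vm_count

# A small cloud running 1,000 VMs over 3 years, with and without
# a recurring expert-tuning consulting engagement:
with_consultants = tco_per_vm(500_000, 200_000, 300_000, years=3, vm_count=1_000)
managed_service = tco_per_vm(500_000, 250_000, 0, years=3, vm_count=1_000)
print(with_consultants, managed_service)
```

Even with these made-up numbers, the consulting line item dominates the difference, which is the argument the text makes against paying experts to tune a stable IaaS.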
By using the best “bang for the buck” hardware you can (and need to) get started immediately, for example with a managed service offering such as BootStack. If your intended target size is under 200 nodes, it is very likely that your TCO will be lowest with a continuously managed service on a reference architecture. Even if you plan to scale to thousands of nodes, it will likely take you a minimum of two years to get there. Do not spend six months at 25-node scale planning a cloud designed for thousands of nodes, only to find that the project has become so expensive on the books that it is killed long before it can reach its full potential.
If you are building a more substantial cloud, up to roughly 4,000 nodes, you should consider following a reference architecture, but invest in your own team to operate the cloud, as the cost of a managed service may become prohibitive at that scale.
Over 4,000 nodes, we recommend leveraging our Ubuntu OpenStack packages and investing in the capabilities you need.
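The sizing guidance above reduces to a simple decision rule. The thresholds come from the text; the function name and return strings are our own shorthand:

```python
def recommended_approach(target_nodes: int) -> str:
    """Map a target cloud size to the operating model suggested above."""
    if target_nodes < 200:
        # Small clouds: lowest TCO with a continuously managed service
        # on a reference architecture.
        return "managed service on reference architecture"
    elif target_nodes <= 4000:
        # Mid-size clouds: follow a reference architecture, but build
        # your own operations team as managed-service costs grow.
        return "reference architecture with in-house operations team"
    else:
        # Very large clouds: Ubuntu OpenStack packages plus investment
        # in the in-house capabilities you need.
        return "Ubuntu OpenStack packages with in-house capabilities"

print(recommended_approach(50))
```

The rule is deliberately coarse: as the text notes, even a cloud destined for thousands of nodes typically starts small, so the first tier applies to almost everyone on day one.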
Dondy Bappedyanto, CEO of BizNet GIO, co-presented with Mark and explained the benefits of getting started quickly with a managed service: it allows BizNet GIO, an Indonesian public cloud provider, to focus on selling services on top of OpenStack, instead of OpenStack itself. This lends itself well to the local developer market in Indonesia, which is hungry for flexible, reliable, cost-attractive and open alternatives to the existing choices in the market.
New use cases: Financial Sector and Edge Compute
City Network provides public and private cloud services for financial institutions and appreciates the pragmatic and quick on-ramp to OpenStack and Kubernetes, as Florian Haas, VP Professional Services & Education at City Network, explained. The success of City Network hinges on being able to focus immediately on the regulatory compliance requirements of its customers. Messing around with, say, Neutron settings is distracting and not conducive to providing this service.
Finally, Kandan Kathirvel, Director of Cloud Strategy & Architecture at AT&T, joined Mark in exploring OpenStack at the edge, which will require a new reference architecture that is much simpler than the existing control plane and setup. OpenStack is needed at the edge for the foreseeable future because VNF vendors still dictate virtual machines, as many of these network functions are not available in a containerised version today. To simplify the stack at the edge, the IaaS needs to be very workload-specific, OpenStack services need to be containerised, and a single toolset is needed to manage both the edge and data-centre instantiations of OpenStack. AT&T chose the community project OpenStack Helm to provide this functionality and is actively promoting the project as well as asking community members to step up and contribute. Several other service providers and telcos have already committed to joining AT&T in this effort.
OpenStack has consolidated and matured, both as a project and as a community. The code base of its core is stable, reliable and performant, and tackles production workloads for increasingly demanding use cases every day. New use cases such as edge computing will challenge OpenStack to provide answers for scenarios that, until now, were not included in the project’s reference design. The hype may be over, but that only means we can finally start focussing on what is essential: providing the best possible developer experience at the lowest reasonable cost – with OpenStack, MAAS and Kubernetes.