Container orchestration: Moving from fleet to Kubernetes
By Josh Wood, CoreOS
February 8, 2017
Over the past two years, we’ve seen a shift in the way organizations think about and manage distributed applications. It started with fleet, and today we are seeing widespread adoption of Kubernetes, which has become the de facto standard for open source container orchestration.
For numerous technical and market reasons, Kubernetes is the best tool for managing and automating container infrastructure at massive scale.
To this end, CoreOS will remove the fleet cluster scheduling software from Container Linux on February 1, 2018, and support for fleet will end at that time. As of February 2017, fleet is effectively in maintenance mode, receiving only security and bugfix updates. This move reflects our focus on Kubernetes and Tectonic for cluster orchestration and management. It also simplifies the deployment picture for users while delivering an automatically updated Container Linux operating system with the minimum possible surface area and size.
New cluster deployments should use Kubernetes, either through Tectonic or deployed directly on CoreOS Container Linux.
After February 1, 2018, a fleet container image will continue to be available from the CoreOS Quay registry, but will not be shipped as part of Container Linux. Current fleet users with Container Linux Support can get help with migration from their usual support channel until the final deprecation date. We also include documentation on this migration below.
We’ll also stand ready to help fleet users with their questions on the CoreOS-User mailing list throughout this period. To help fleet admins get a head start, we’re hosting a live webinar on the move from fleet to Kubernetes with CoreOS CTO Brandon Philips on February 14 at 10 AM PT. It’s your chance to get questions answered live.
fleet: First steps on a journey
CoreOS started working on cluster orchestration from the moment we launched our company and our operating system, now known as CoreOS Container Linux. We were among the first developers exploring the discrete packaging of software containers as a way to allow automated deployment and scheduling on the cluster resources offered by cloud providers. The result of those early efforts was fleet, an open-source cluster scheduler designed to treat a group of machines as though they shared an init system.
A little less than a year into our work on fleet, Google introduced the open source Kubernetes code. We were flattered that it leveraged the CoreOS etcd distributed key-value backing store that we created for fleet and Container Linux, but more importantly Kubernetes offered direction and solutions we’d identified but not yet implemented for fleet. Kubernetes was designed around a solid, extensible API, which fleet lacked, and had already laid down code for service discovery, container networking, and other features essential for scaling the core concepts. Beyond that, it was backed by the decades of experience in the Google Borg, Omega, and SRE groups.
Kubernetes and Tectonic: How we orchestrate containers today
For those reasons, we dedicated developer resources and began contributing to the Kubernetes code base and community right away, well before Kubernetes 1.0. CoreOS is also a charter member of the Cloud Native Computing Foundation (CNCF), the industry consortium to which Google donated the Kubernetes copyrights, making the software a truly industry-wide effort.
CoreOS developers lead Kubernetes release cycles, Special Interest Groups (SIGs), and have worked over the last two years to make Kubernetes simpler to deploy, easier to manage and update, and more capable in production. The CoreOS flannel SDN is a popular mechanism for Kubernetes networking, in part because the Kubernetes network interface model is the Container Network Interface (CNI) pioneered by CoreOS and now shared by many containerized systems. Our teams worked closely on the design and implementation of the Kubernetes Role-Based Access Control (RBAC) system, and our open-source dex OIDC provider complements it with federation to major authentication providers and enterprise solutions like LDAP. And of course, etcd, originally a data store for fleet, carries the flag of those early efforts into the Kubernetes era.
fleet explored a vision for automating many cluster chores, but as CEO Alex Polvi likes to say, Kubernetes “completed our sentence.” We’re thankful for the feedback and support fleet has had from the community over time, and beyond what you’ve done for fleet, we’ve brought your experiences and ideas forward into Kubernetes and Tectonic and the current world of container cluster orchestration.
Getting started with Kubernetes on CoreOS Tectonic
If you’re deploying a new cluster, the easiest way to get started is to check out Tectonic. Tectonic delivers simple installation and automated upgrades of the cluster orchestration software, atop pure open source Kubernetes. A free license for a cluster of up to 10 machines is available to enable you to test your applications on either of the two supported platforms: AWS or bare metal in your own datacenter.
Note on minikube: An easy first look at Kubernetes
If you are new to container orchestration, minikube, a tool that runs Kubernetes locally, offers an easy first look by deploying a single-node cluster on your laptop or any local machine.
Getting started with Kubernetes on CoreOS Container Linux
To dive into the details of Kubernetes, take a look at the guides for deploying Kubernetes on CoreOS Container Linux. These docs offer a general introduction to Kubernetes concepts, some peeks under the covers, and paths to deployment on platforms beyond the initial two supported by Tectonic.
Sustaining fleet clusters with the fleet container
After fleet is removed from the Container Linux Alpha channel in February 2018, it will be removed from the Beta and Stable channels in turn. After those releases, users who need to maintain a fleet cluster will need to adopt the wrapper-and-container method already used to ship etcd v3 and the Kubernetes kubelet on Container Linux: a small wrapper script knows where to fetch the desired application container image and how to run it.
For fleet, admins can migrate toward a containerized deployment by tuning this example fleet Ignition config. The Ignition machine provisioner can place the configured wrapper on intended fleet nodes and activate the service.
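To make the shape of that wrapper concrete, here is a minimal sketch of a Container Linux Config (the YAML form that the ct transpiler converts to Ignition JSON) that runs fleet from a container image as a systemd service. This is an illustration only, not the official example config linked above: the image tag, volume mounts, and etcd unit name are assumptions and should be checked against your own cluster before use.

```yaml
# Hypothetical Container Linux Config fragment: run fleet from the
# CoreOS Quay registry image via rkt instead of the OS-shipped binary.
# Transpile with ct to produce the Ignition JSON used at provisioning.
systemd:
  units:
    - name: fleet.service
      enabled: true
      contents: |
        [Unit]
        Description=fleet daemon (containerized)
        # fleet needs etcd; the unit name below assumes the etcd-member
        # service shipped with Container Linux.
        After=etcd-member.service
        Requires=etcd-member.service

        [Service]
        # Image tag is an illustrative assumption; pin the version your
        # cluster actually runs.
        ExecStart=/usr/bin/rkt run \
          --net=host \
          --volume etc-fleet,kind=host,source=/etc/fleet \
          --mount volume=etc-fleet,target=/etc/fleet \
          quay.io/coreos/fleet:v1.0.0
        Restart=always
        RestartSec=10s

        [Install]
        WantedBy=multi-user.target
```

The same pattern generalizes: the unit (or a small wrapper script it calls) names the image to fetch and the flags to run it with, so upgrading fleet becomes a matter of changing the pinned tag rather than waiting on an OS release.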
Next steps from fleet to Kubernetes