At times developers find that it is not sufficient to test their applications "naked" on their local operating systems or under a local container engine, such as Podman. They need orchestration among multiple containers, network connectivity, dynamic storage, and other capabilities upon which their applications depend.
Using a lightweight orchestrator such as Docker Compose may not be enough. Maintaining two configuration sets—one for a development/test environment and another for a Kubernetes-based production environment—may be too much work. It also incurs additional risk, as those two configurations drift away from each other. After all, containers are supposed to solve "it runs fine on my machine" issues.
Those developers will discover that the Red Hat build of MicroShift is a compelling alternative. This article discusses how MicroShift is easier to install and run, highly compatible with Red Hat OpenShift, and more powerful than other lightweight distributions of Kubernetes.
What is MicroShift
The first thing you must understand is that MicroShift is not a lightweight OpenShift. It is not an edition of Red Hat OpenShift. According to its open source project page, "MicroShift is a project that optimizes OpenShift Kubernetes for small form factor and edge computing."
Likewise, according to its Red Hat page: “The Red Hat build of MicroShift is a lightweight Kubernetes container orchestration solution built from the edge capabilities of Red Hat OpenShift and based on the open source community’s project by the same name.”
MicroShift comes preconfigured with OpenShift features such as Security Context Constraints (SCCs) and Routes, which may not be trivial to add to other distributions of Kubernetes.
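Because Routes are available out of the box, exposing an application works the same way it does on OpenShift. The following is a minimal sketch; the hostname, Service name, and port are hypothetical placeholders for your own application:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp                      # hypothetical application name
spec:
  host: myapp.apps.example.com     # placeholder hostname
  to:
    kind: Service
    name: myapp                    # assumes a Service named "myapp" exists
  port:
    targetPort: 8080               # assumes the Service exposes port 8080
  tls:
    termination: edge              # TLS terminated at the router
```

On a plain Kubernetes distribution, you would instead need to install and configure an Ingress controller before a comparable manifest would work.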
MicroShift also comes preconfigured with dynamic storage, based on the LVM Storage Operator, and advanced networking features from Multus and OVN-Kubernetes.
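Dynamic storage means an ordinary PersistentVolumeClaim is enough to get an LVM-backed volume. A minimal sketch, assuming the LVM Storage class name used by recent MicroShift releases (verify the class name on your instance with `oc get storageclass`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data                       # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                      # LVM volumes are node-local
  resources:
    requests:
      storage: 1Gi
  storageClassName: topolvm-provisioner  # MicroShift's LVM-backed class
```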
The Operator Lifecycle Manager (OLM) is supported on MicroShift, but it is not installed by default. If you install the OLM, you can install your own or third-party add-on operators, though their supportability and compatibility with MicroShift will vary on a case-by-case basis.
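On the Red Hat build of MicroShift, OLM ships as an optional RPM. A sketch of the install, assuming a registered RHEL host already running MicroShift (package and namespace names are as documented for recent releases; check your version's documentation):

```shell
# Install the optional OLM package and restart MicroShift to activate it
sudo dnf install -y microshift-olm
sudo systemctl restart microshift

# Verify that the OLM pods come up
oc get pods -n openshift-operator-lifecycle-manager
```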
How MicroShift compares to OpenShift
Developers targeting Red Hat OpenShift as their production Kubernetes environment usually realize that even in its simplest deployment model (single-node OpenShift), it is too resource hungry. Anecdotal evidence suggests that real development work requires an OpenShift instance with 32GB of memory and 4 CPUs, which may represent the majority of the capacity of some developers' work machines, leaving little room for integrated development environments (IDEs) and other tools required for development tasks.
Those developers may end up using a lightweight Kubernetes distribution, such as Kind or MiniKube, and hope that their application manifests work unchanged on Red Hat OpenShift. But there are significant differences in supported versions of APIs and installed add-on components, such as Ingress controllers. It would be safer to test applications on a leaner Red Hat OpenShift than to manually track and work around all settings that might differ between local Kubernetes and production OpenShift clusters.
MicroShift presents an interesting alternative to those developers. Because MicroShift is aligned with Red Hat OpenShift releases, built from the same sources, and reuses container images from the same add-on operators, there is a much lower risk of configuration drift that impacts applications. For edge scenarios, the Red Hat build of MicroShift is included in the support scope of Red Hat Device Edge, giving corporate managers and InfoSec peace of mind.
MicroShift for development inner and outer loops
The subset of Red Hat OpenShift features included in MicroShift may or may not be sufficient for your applications. When it is not, configuring any other Kubernetes distribution to close the gap would require even more effort; in that case, you should do your testing with real Red Hat OpenShift.
On the other hand, when MicroShift is sufficient for your applications, it is guaranteed to be compatible with OpenShift and much easier to install and configure than other Kubernetes distributions. Configuring a Kubernetes distribution such as MiniKube to provide the same API versions as Red Hat OpenShift and compatible behaviors for storage and networking is not a trivial task.
Small things, such as the configurations of Kubernetes admission controllers, can make a significant impact on application behavior and developer workflows and be difficult to troubleshoot. Using MicroShift shields a developer from most of these issues.
That said, beware of replacing all your OpenShift clusters that support developers with MicroShift instances. MicroShift is not designed to be a scalable and highly available application platform, but rather a single-node Kubernetes engine tuned for edge devices and integrated with image mode for Red Hat Enterprise Linux (RHEL) and RHEL for edge computing.
It is true that if you know how to deploy and configure tools such as Tekton and Argo CD on vanilla Kubernetes, you could hack your way into running any of them on MicroShift. But what’s the point in going to all that trouble when you could use Red Hat supported add-on operators for the same tasks on Red Hat OpenShift?
It is fine to mix Red Hat OpenShift and MicroShift in the same development workflow. Developers could test code in their local MicroShift instances, which they can configure and shut down as they wish. Then they can push code to a continuous integration and continuous delivery (CI/CD) pipeline running on OpenShift, to perform more involved integration, performance, and security tests.
To bring it full circle, when the end target is an edge device, their CI/CD could build edge system images, including MicroShift. Then they can test those system images by using either Red Hat OpenShift Virtualization or cloud instances. Start with MicroShift, switch to OpenShift, and optionally finish by deploying on MicroShift.
How to get started with MicroShift
MicroShift is an open source project. However, it does not provide ready-to-run binaries for any Linux distribution. The interdependencies between MicroShift and other open source components, such as Open Virtual Networking (OVN), are very involved and necessary for compatibility with Red Hat OpenShift. RPM packages from the Red Hat build of MicroShift, built for RHEL, are the only MicroShift binaries that are readily available.
Even if you could `dnf install microshift` on your Fedora box, I would advise against it. Those interdependencies are likely to affect other things on your work machine, so I recommend using a virtual machine (VM) dedicated to MicroShift.
Note:
People have had success running MicroShift on community distributions such as Fedora CoreOS, but it is not easy. There is work in progress to integrate MicroShift with OKD (the community distribution of Red Hat OpenShift, which runs on Fedora) and to provide a single-container MicroShift that would run on Podman. But we're not there yet, and it will probably change how storage and networking are set up for MicroShift.
Currently, the easiest way to install and run MicroShift is by using a RHEL VM. The good news is that everything is available for free, through the Red Hat Developer Program.
You will need:
- RHEL installation media, such as ISO or cloud images.
- Access to RHEL package repositories for updates and additional software.
- Access to Red Hat registries for Red Hat OpenShift, runtimes, and add-on operator container images.
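With a registered RHEL VM, the install itself is a few commands. A sketch assuming RHEL 9 and a 4.17-era MicroShift; the repository version numbers are illustrative, so match them to the RHEL and OpenShift releases you target:

```shell
# Register the VM with your (free) Red Hat Developer subscription
sudo subscription-manager register

# Enable the MicroShift repositories (version numbers are illustrative)
sudo subscription-manager repos \
  --enable "rhocp-4.17-for-rhel-9-$(uname -m)-rpms" \
  --enable "fast-datapath-for-rhel-9-$(uname -m)-rpms"

# Install MicroShift and start it on boot
sudo dnf install -y microshift
sudo systemctl enable --now microshift
```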
Note:
By joining the Red Hat Developer program, you are entitled to use most Red Hat software for personal and professional use, without risk of violating the End User License Agreements between your employer and Red Hat, while retaining access to minor and z-stream updates, bug fixes, and security fixes.
If your main work machine runs Windows, macOS, or Linux, you should be able to create a relatively small RHEL VM with two vCPUs, 4GB of memory, and 20GB of disk space, and install MicroShift on it. You will probably be fine with a little less disk space, CPU, or memory, depending on the requirements of your applications, or you might need more.
Important:
As you install RHEL, do not use automatic disk partitioning. If you forget, MicroShift cannot provision dynamic storage for your Kubernetes PVCs. You should install RHEL using manual LVM partitioning and make sure that there's free space (10GB recommended) in the RHEL volume group. Configure your boot, EFI, and root partitions to add up to only 10GB of the 20GB virtual disk suggested for your RHEL VM.
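If you automate the install, the same layout can be expressed in a kickstart file. A sketch for the suggested 20GB disk, assuming a volume group named "rhel" (sizes are in MB and are illustrative):

```
# Kickstart excerpt: manual LVM layout that leaves free space in the
# volume group for MicroShift's LVM-based dynamic storage
part /boot --fstype=xfs --size=1024
part /boot/efi --fstype=efi --size=200
part pv.01 --grow
volgroup rhel pv.01
# Root volume sized so roughly 10GB stays unallocated in "rhel"
logvol / --fstype=xfs --vgname=rhel --size=8192 --name=root
```

After the install, `sudo vgs` should report the remaining free space in the volume group.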
Remember that your RHEL plus MicroShift VM uses its root partition to store container images from Red Hat operators and applications, in addition to ephemeral storage of application files, temporary files, and container logs. So don’t make it too small.
Using MicroShift with OpenShift Local
If you're new to managing virtual machines, Red Hat OpenShift Local can perform all the hard work for you. OpenShift Local is a distribution of CRC that includes preconfigured VM images, called bundles, for Red Hat software such as OpenShift and the Red Hat build of MicroShift.
OpenShift Local is not entitled to enterprise support, only to community support, which may be good enough for developers and personal projects. Regardless, you must refer to the community docs from the CRC project to operate OpenShift Local.
Note:
If you want to use CRC without any Red Hat software, there is also a bundle for OKD, which configures a VM based on Fedora CoreOS.
Benefits of OpenShift Local
There are many features and benefits that come with using MicroShift with OpenShift Local, or CRC.
CRC runs natively on Windows, macOS, or Linux and configures a local VM using the native hypervisor of the operating system. Depending on the selected preset, OpenShift Local runs either Red Hat OpenShift on a larger VM or MicroShift on a smaller VM.
CRC saves you the trouble of creating and managing VMs and of installing and configuring RHEL, Red Hat OpenShift, and MicroShift. By using preconfigured VM images, it makes creating and starting these VMs quick, to the point that some development teams use OpenShift Local to provide short-lived OpenShift instances for CI/CD jobs.
CRC also takes care of post-installation tasks you would have to perform when manually installing MicroShift on RHEL and other Kubernetes distributions (i.e., Kind and MiniKube), such as creating a Kubeconfig file with remote access credentials and CA trust.
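In practice, getting a MicroShift VM from CRC is a short sequence of commands (shown here as a sketch; the preset name and subcommands are as documented for recent CRC releases):

```shell
# One-time host setup: hypervisor, networking, and DNS configuration
crc setup

# Select the MicroShift preset instead of the default OpenShift one
crc config set preset microshift

# Create and start the VM from the preconfigured bundle
crc start

# Put the bundled oc client and kubeconfig on the PATH for this shell
eval "$(crc oc-env)"
oc get nodes
```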
An additional benefit of CRC is configuring the domain name system (DNS) of your laptop to resolve hostnames assigned to OpenShift routes, so you can access your applications without resorting to tricks, such as manual edits to your local hosts file.
As an advanced use case, you could create your own custom bundles for CRC and use them to provision CRC VMs. In fact, some developer teams keep multiple CRC bundles on their machines, either from Red Hat or their own customizations, to support quickly testing software with different releases of OpenShift and MicroShift.
A caveat of VMs created and managed by CRC is that they're not supposed to retain state for a long time. Be prepared to initialize application data, such as databases, from files outside of your CRC-managed VM. CRC VMs are also not upgradeable; instead, you provision a new VM. And you are not supposed to fine-tune those VMs, but rather use the presets and bundles from OpenShift Local or from OKD.
Using Podman Desktop to manage CRC VMs
The downside of CRC is that it is a command-line tool. That makes it nice for integration into CI/CD tools, but not as comfortable for daily use. The good news is that, after initial installation and configuration of CRC, you can use Podman Desktop to create and manage CRC VMs.
To do this, just enable the OpenShift Local extension in Podman Desktop and use it to select the desired preset and create, start, stop, or destroy your CRC VM. Then use the Kubernetes extension of Podman Desktop to connect to your MicroShift instance and manage Kubernetes API resources.
Configuring access
After your MicroShift VM is ready, with or without OpenShift Local, you have only cluster-administrator access as the `system:admin` identity, exactly as you would with MiniKube and most other Kubernetes distributions. Since it is not an application platform, MicroShift does not provide the user management or integration with external lightweight directory access protocol (LDAP) or OpenID Connect (OIDC) identity providers that OpenShift would provide.
Nonetheless, you should not use the `system:admin` identity for development work, for the same reasons you shouldn't use the root account for daily tasks on Linux. Instead, configure an unprivileged identity with access to pre-provisioned projects for your routine work testing Kubernetes applications, as you would with any other Kubernetes distribution.
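One way to set up such an identity is a service account with the built-in `edit` role. A sketch using standard `oc` commands; the project, account, and context names are hypothetical, and the cluster name in your kubeconfig may differ:

```shell
# Create a project and a service account to act as an unprivileged identity
oc create namespace myapp-dev
oc -n myapp-dev create serviceaccount developer

# Grant it the built-in "edit" role, scoped to that project only
oc -n myapp-dev adm policy add-role-to-user edit -z developer

# Mint a token and add it as a kubeconfig context for daily work
TOKEN=$(oc -n myapp-dev create token developer)
oc config set-credentials microshift-dev --token="$TOKEN"
oc config set-context microshift-dev \
  --cluster=microshift --user=microshift-dev --namespace=myapp-dev
oc config use-context microshift-dev
```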
Wrap up
MicroShift is designed to support edge devices at scale, coupled with other technologies of the Red Hat Device Edge product, such as RHEL and Red Hat Ansible Automation Platform. It is also a nice alternative for developers who may not be targeting edge deployments, but need a local Kubernetes environment for inner loop testing.
MicroShift provides stronger compatibility and alignment with Red Hat OpenShift than other local Kubernetes distributions. With a free Red Hat Developer account, you can configure a small VM running RHEL and MicroShift using the native hypervisor of your Windows, macOS, or Linux machine, or let OpenShift Local (CRC) create and manage the VM for you.
If running a VM on your laptop seems like too much, consider that most developers using containers are already running VMs. Their Docker or Podman installations on Windows and macOS manage Linux VMs, because there's no way to run containers without a Linux kernel, and that requires a VM running Linux. In fact, CRC and Podman Machine share the same VM management code that abstracts the native hypervisors of different desktop operating systems.
Only Linux users are actually running containers without any VM, and these users are most likely running Fedora or other desktop-oriented Linux distributions instead of RHEL or CentOS Stream where they could run MicroShift directly. For them, running a RHEL VM with MicroShift ensures closer parity with production environments and also insulates their desktop OS settings from any requirements of MicroShift.
Learn more about MicroShift and try a free Red Hat product trial.
Thanks to Andrew Block, Daniel Froehlich, Fabrice Flore-Thébault, Gerard Braad, Johnny Westerlund, Praveen Kumar, Stephen Buck, and Vladislav Walek for their reviews on drafts of this article.