Are you looking for an easy way to set up a local OpenShift 4 cluster on your laptop? Red Hat CodeReady Containers lets you run a minimal OpenShift 4.2 or newer cluster on your local laptop or desktop computer. It should only be used for development and testing purposes; we’ll provide a separate guide for setting up a production OpenShift 4 cluster.
Red Hat CodeReady Containers is a regular OpenShift installation with the following notable differences:
- It uses a single node which behaves both as a master and as a worker node.
- The machine-config and monitoring Operators are disabled by default.
- These disabled Operators cause the corresponding parts of the web console to be non-functional.
- For the same reason, there is currently no upgrade path to newer OpenShift versions.
- Due to technical limitations, the CodeReady Containers cluster is ephemeral and needs to be recreated from scratch once a month using a newer release.
- The OpenShift instance runs in a virtual machine, which can cause some other differences, particularly in relation to external networking.
Minimum system requirements
CodeReady Containers has the following minimum hardware requirements:
- 4 virtual CPUs (vCPUs)
- 8 GB of memory
- 35 GB of storage space
CodeReady Containers can be run on Linux, Windows, and macOS, but this setup has been tested on CentOS 7/8 and Fedora 31. CodeReady Containers is delivered as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Microsoft Windows 10.
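Before proceeding, it is worth confirming that your processor supports hardware virtualization, which the CRC virtual machine depends on. A quick check on Linux (vmx is the Intel flag, svm the AMD one):
# Returns a count greater than 0 if the CPU supports hardware virtualization
grep -E -c '(vmx|svm)' /proc/cpuinfo
# lscpu also reports the virtualization type (VT-x / AMD-V)
lscpu | grep -i virtualization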
Step 1: Install required software packages
CodeReady Containers requires the libvirt and NetworkManager packages to be installed on the host system prior to setup.
### Fedora ###
sudo dnf install NetworkManager qemu-kvm libvirt virt-install
sudo systemctl enable --now libvirtd
### CentOS / Rocky Linux ###
sudo yum -y install qemu-kvm libvirt virt-install bridge-utils NetworkManager
sudo systemctl enable --now libvirtd
### Ubuntu / Debian ###
sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager
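After installing the packages, you can confirm that the libvirt daemon is running and the KVM kernel modules are loaded before continuing:
# Confirm the libvirt daemon is active
systemctl is-active libvirtd
# Confirm the KVM kernel modules are loaded (kvm_intel or kvm_amd)
lsmod | grep kvm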
Step 2: Install CodeReady Containers
Download the latest CRC release for your operating system from the URLs below.
# Linux
wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz
# macOS
wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-macos-amd64.pkg
Extract the downloaded CodeReady Containers archive.
# Linux
tar xvf crc-linux-amd64.tar.xz
Place the binary in your $PATH:
sudo mv crc*/crc /usr/local/bin
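Optionally, verify the integrity of the downloaded archive. The OpenShift mirror typically publishes a sha256sum.txt file alongside the release artifacts; the URL below assumes the latest release directory used above:
# Download the checksum file published next to the release artifacts
wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/sha256sum.txt
# Verify only the files present locally
sha256sum -c sha256sum.txt --ignore-missing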
macOS
On macOS, double-click the downloaded .pkg file to run the installer, or open it with the open command:
open crc-macos-amd64.pkg
Confirm installation by checking the software version.
$ crc version
CodeReady Containers version: 2.0.1+bf3b1a6
OpenShift version: 4.10.3
Podman version: 3.4.4
To view the crc help page, run:
$ crc --help
CodeReady Containers is a tool that manages a local OpenShift 4.x cluster optimized for testing and development purposes
Usage:
crc [flags]
crc [command]
Available Commands:
bundle Manage CRC bundles
cleanup Undo config changes
config Modify crc configuration
console Open the OpenShift Web Console in the default browser
delete Delete the OpenShift cluster
help Help about any command
ip Get IP address of the running OpenShift cluster
oc-env Add the 'oc' executable to PATH
podman-env Setup podman environment
setup Set up prerequisites for the OpenShift cluster
start Start the OpenShift cluster
status Display status of the OpenShift cluster
stop Stop the OpenShift cluster
version Print version information
Flags:
-h, --help help for crc
--log-level string log level (e.g. "debug | info | warn | error") (default "info")
Use "crc [command] --help" for more information about a command.
Step 3: Deploy CodeReady Containers virtual machine
Run the crc setup command to set up your host operating system for the CodeReady Containers virtual machine.
$ crc setup
The installer will check for setup requirements before installation.
INFO Checking if running as non-root
INFO Caching oc binary
INFO Setting up virtualization
INFO Setting up KVM
INFO Installing libvirt service and dependencies
INFO Adding user to libvirt group
INFO Enabling libvirt
INFO Starting libvirt service
INFO Will use root access: start libvirtd service
INFO Checking if a supported libvirt version is installed
INFO Installing crc-driver-libvirt
INFO Removing older system-wide crc-driver-libvirt
INFO Setting up libvirt 'crc' network
INFO Starting libvirt 'crc' network
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Writing Network Manager config for crc
INFO Will use root access: write NetworkManager config in /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf
INFO Will use root access: execute systemctl daemon-reload command
INFO Will use root access: execute systemctl stop/start command
INFO Writing dnsmasq config for crc
INFO Will use root access: write dnsmasq configuration in /etc/NetworkManager/dnsmasq.d/crc.conf
INFO Will use root access: execute systemctl daemon-reload command
INFO Will use root access: execute systemctl stop/start command
INFO Unpacking bundle from the CRC binary
Once the setup is complete, run the command below to start the OpenShift cluster on your laptop.
$ crc start
INFO Checking if running as non-root
INFO Checking if oc binary is cached
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
? Image pull secret [? for help] * <PASTE-PULL-SECRET>
Please note that a valid OpenShift user pull secret is required during installation. The pull secret can be copied or downloaded from the Pull Secret section of the Install on Laptop: Red Hat CodeReady Containers page on cloud.redhat.com.
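If you prefer a non-interactive start, crc start can also read the pull secret from a file via the --pull-secret-file flag (the file path below is an example):
# Start the cluster using a saved pull secret file (path is an example)
crc start --pull-secret-file ~/pull-secret.txt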
Otherwise, paste the pull secret when prompted and the cluster setup will continue.
INFO Extracting bundle: crc_libvirt_4.10.3_amd64...
INFO Creating CodeReady Containers VM for OpenShift 4.10.3...
INFO Verifying validity of the cluster certificates ...
INFO Check internal and public DNS query ...
INFO Copying kubeconfig file to instance dir ...
INFO Adding user's pull secret and cluster ID ...
INFO Starting OpenShift cluster ... [waiting 3m]
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, username is 'kubeadmin' and password is UMeRe-hBQAi-JJ4Bi-8ynRD
INFO
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
Started the OpenShift cluster
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
Access details and credentials are printed after a successful start.
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, username is 'kubeadmin' and password is UMeRe-hBQAi-JJ4Bi-8ynRD
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
To access your cluster, first set up your environment by running:
$ crc oc-env
export PATH="/home/jmutai/.crc/bin:$PATH"
eval $(crc oc-env)
Run the commands printed in your terminal or add them to your ~/.bashrc or ~/.zshrc file, then source it.
$ vim ~/.bashrc
export PATH="~/.crc/bin:$PATH"
eval $(crc oc-env)
### Then source ###
source ~/.bashrc
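You can then confirm that the oc client resolves from the CRC directory:
# Confirm the oc binary resolved from ~/.crc/bin
which oc
# Print the client version
oc version --client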
Log in as admin using the command printed out:
$ oc login -u kubeadmin -p UMeRe-hBQAi-JJ4Bi-8ynRD https://api.crc.testing:6443
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Login successful.
You have access to 53 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Confirm the cluster setup:
$ oc cluster-info
Kubernetes master is running at https://api.crc.testing:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ oc get nodes
NAME STATUS ROLES AGE VERSION
crc-2n9vw-master-0 Ready master,worker 5d13h v1.22.3+fdba464
$ oc config view
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: https://api.crc.testing:6443
name: api-crc-testing:6443
- cluster:
certificate-authority: /home/jmutai/.minikube/ca.crt
server: https://192.168.39.35:8443
name: minikube
contexts:
- context:
cluster: api-crc-testing:6443
user: developer/api-crc-testing:6443
name: /api-crc-testing:6443/developer
- context:
cluster: api-crc-testing:6443
namespace: default
user: kube:admin/api-crc-testing:6443
name: default/api-crc-testing:6443/kube:admin
- context:
cluster: minikube
user: minikube
name: minikube
current-context: default/api-crc-testing:6443/kube:admin
kind: Config
preferences: {}
users:
- name: developer/api-crc-testing:6443
user:
token: Pvqjq-b5HkV9UQtOYH8P9yOtm17MrOUVs-eaiSeQqXA
- name: kube:admin/api-crc-testing:6443
user:
token: LDrdGJMUpPUAxtg0IvWynedbtSBLjs8S2S6kdpvbMU8
- name: minikube
user:
client-certificate: /home/jmutai/.minikube/client.crt
client-key: /home/jmutai/.minikube/client.key
To view cluster operators:
$ oc get clusteroperators
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
authentication 4.10.3 True False False 2d
baremetal 4.10.3 True False False 26d
cloud-credential 4.10.3 True False False 26d
cluster-autoscaler 4.10.3 True False False 26d
config-operator 4.10.3 True False False 26d
console 4.10.3 True False False 45h
csi-snapshot-controller 4.10.3 True False False 26d
dns 4.10.3 True False False 26d
etcd 4.10.3 True False False 26d
image-registry 4.10.3 True False False 26d
ingress 4.10.3 True False False 26d
insights 4.10.3 True False False 26d
kube-apiserver 4.10.3 True False False 26d
kube-controller-manager 4.10.3 True False False 26d
kube-scheduler 4.10.3 True False False 26d
kube-storage-version-migrator 4.10.3 True False False 45h
machine-api 4.10.3 True False False 26d
machine-approver 4.10.3 True False False 26d
machine-config 4.10.3 True False False 26d
marketplace 4.10.3 True False False 26d
monitoring 4.10.3 True False False 46h
network 4.10.3 True False False 26d
node-tuning 4.10.3 True False False 46h
openshift-apiserver 4.10.3 True False False 3d7h
openshift-controller-manager 4.10.3 True False False 25d
openshift-samples 4.10.3 True False False 46h
operator-lifecycle-manager 4.10.3 True False False 26d
operator-lifecycle-manager-catalog 4.10.3 True False False 26d
operator-lifecycle-manager-packageserver 4.10.3 True False False 9d
service-ca 4.10.3 True False False 26d
storage 4.10.3 True False False 26d
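Since some Operators are disabled by default, a quick way to spot any operator that is unavailable or degraded is to filter the output above (a small awk sketch; the columns are NAME, VERSION, AVAILABLE, PROGRESSING, DEGRADED, SINCE):
# Print only operators that are not Available or are Degraded
oc get clusteroperators --no-headers | awk '$3 != "True" || $5 == "True"'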
Step 4: Access OpenShift Cluster
You can access the OpenShift cluster deployed locally from CLI or by opening the OpenShift 4.x console on your web browser.
$ oc login -u developer -p developer https://api.crc.testing:6443
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Login successful.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
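As a quick smoke test, you can create a project as the developer user and deploy a sample application. The example below assumes the httpd image stream shipped with the OpenShift samples is available, and the project name is arbitrary:
# Create a test project (name is an example)
oc new-project demo
# Deploy a sample httpd application from the built-in image stream
oc new-app httpd
# Watch the pods come up
oc get pods -w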
Access as admin:
$ oc login -u kubeadmin -p UMeRe-hBQAi-JJ4Bi-8ynRD https://api.crc.testing:6443
Login successful.
You have access to 51 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
To open the console from your default web browser, run:
$ crc console
You can also view the password for the developer and kubeadmin users by running the following command:
crc console --credentials
Log in with the credentials printed earlier. You now have an OpenShift cluster running locally.
Step 5: Stop OpenShift Cluster
To stop your OpenShift cluster, run the command:
$ crc stop
Stopping the OpenShift cluster, this may take a few minutes...
Stopped the OpenShift cluster
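You can confirm the state of the virtual machine and the cluster at any point with the status subcommand listed in the help output earlier:
# Show the state of the CRC VM and the OpenShift cluster
crc status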
The virtual machine can be started at any time by running the command:
$ crc start
INFO Checking if running as non-root
INFO Checking if oc binary is cached
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
INFO Starting CodeReady Containers VM for OpenShift 4.10.3...
INFO Verifying validity of the cluster certificates ...
INFO Check internal and public DNS query ...
INFO Starting OpenShift cluster ... [waiting 3m]
INFO
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, username is 'kubeadmin' and password is UMeRe-hBQAi-JJ4Bi-8ynRD
INFO
...
Step 6: Delete CodeReady Containers virtual machine
If you want to delete an existing CodeReady Containers virtual machine, run:
$ crc delete
This command deletes the CodeReady Containers virtual machine; any data stored in the cluster will be lost.
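If you also want to undo the host configuration changes made during crc setup (such as the libvirt 'crc' network and the NetworkManager and dnsmasq configuration), crc provides a cleanup subcommand, as shown in the help output earlier:
# Undo the host configuration changes made by 'crc setup'
crc cleanup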