How do I install OpenShift Origin on Ubuntu 19.04/18.04/16.04? OpenShift Origin (OKD) is an open source implementation of Red Hat OpenShift. In a nutshell, it is the community distribution of Kubernetes optimized for developing, deploying, and managing container-based applications. OpenShift gives you a self-service platform to create, modify, and deploy applications on demand.
If you’re on CentOS, check: How To Setup Local OpenShift Origin (OKD) Cluster on CentOS 7
Similar: How to run Local Openshift Cluster with Minishift
OpenShift aims to enable faster development and release life cycles. This guide walks you through the installation of a single node OpenShift Origin cluster on Ubuntu 19.04/18.04/16.04. This setup is not recommended for production use; refer to the OpenShift Origin cluster installation documentation for production deployments.
Step 1: Install Docker CE on Ubuntu
A single node installation runs all OKD services in Docker containers, so the Docker Engine runtime is required on the host system.
Import the Docker GPG key.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Once it is imported, add the Docker APT repository to your Ubuntu system.
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
With the Docker repository added, run the commands below to update the system and install Docker CE on Ubuntu.
sudo apt update && sudo apt -y install docker-ce
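The package should start and enable the Docker service automatically on Ubuntu; to be safe, make sure it is running and enabled at boot:
sudo systemctl enable --now docker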
Verify your Docker Engine installation.
$ docker version
Client:
 Version:           18.09.3
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        774a1f4
 Built:             Thu Feb 28 06:53:11 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.3
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       774a1f4
  Built:            Thu Feb 28 05:59:55 2019
  OS/Arch:          linux/amd64
  Experimental:     false
Add your user account to the docker group.
sudo usermod -aG docker $USER
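The new group membership takes effect at your next login; log out and back in, or start a subshell with the group applied:
newgrp docker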
Step 2: Download OpenShift Origin on Ubuntu 19.04/18.04/16.04
Download the OpenShift client utility (oc), which is used to bootstrap OpenShift Origin on Ubuntu. As of this writing, the most recent release is v3.11.0.
wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
Uncompress the downloaded file.
tar xvf openshift-origin-client-tools*.tar.gz
Switch to the created folder and move the kubectl and oc binaries to the /usr/local/bin directory.
cd openshift-origin-client*/
sudo mv oc kubectl /usr/local/bin/
Verify installation of OpenShift client utility.
$ oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
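Optionally, enable Bash tab completion for oc; this assumes the bash-completion package is installed:
oc completion bash | sudo tee /etc/bash_completion.d/oc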
Allow the use of an insecure Docker registry on the 172.30.0.0/16 service network, which OKD uses for its internal registry.
cat << EOF | sudo tee /etc/docker/daemon.json
{
"insecure-registries" : [ "172.30.0.0/16" ]
}
EOF
Restart Docker service after adding the file.
sudo systemctl restart docker
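To confirm that Docker picked up the new setting, check the daemon information; the subnet should be listed under Insecure Registries:
docker info | grep -A 2 'Insecure Registries'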
Step 3: Start OpenShift Origin All-in-One Server
Start OKD server by running the following command:
$ oc cluster up
The command above will:
- Start an OKD cluster listening on the local interface: 127.0.0.1:8443
- Start a web console at /console on the same interface (https://127.0.0.1:8443/console)
- Launch Kubernetes system components
- Provision a registry, router, initial templates, and a default project
There are a number of options that can be applied when setting up OpenShift Origin; view them with:
$ oc cluster up --help
On a successful installation, you should get output similar to below.
Login to server …
Creating initial project "myproject" …
Server Information …
OpenShift server started.
The server is accessible via web console at:
https://127.0.0.1:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin
The example below uses custom options.
$ oc cluster up --routing-suffix=<ServerPublicIP>.xip.io \
  --public-hostname=<ServerPublicDNSName>
Or just a public/private IP address:
oc cluster up --public-hostname=192.168.10.10
OpenShift cluster configuration files will be located inside the openshift.local.clusterup/ directory.
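The oc 3.x client also ships a status sub-command for clusters started this way; you can use it to check that the cluster is up:
$ oc cluster status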
To login as administrator, use:
$ oc login -u system:admin
Logged into "https://116.203.125.128:8443" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project <projectname>':
* default
kube-dns
kube-proxy
kube-public
kube-system
myproject
openshift
openshift-apiserver
openshift-controller-manager
openshift-core-operators
openshift-infra
openshift-node
openshift-service-cert-signer
openshift-web-console
Using project "default".
Change to the default project:
oc project default
Deploy the OKD cluster integrated container image registry if it doesn’t exist.
$ oc adm registry
Docker registry "docker-registry" service exists
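You can verify that the registry pod is up in the default project:
$ oc get pods -n default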
Check current project status.
$ oc status
In project default on server https://192.168.10.10:8443
svc/docker-registry - 172.30.1.1:5000
dc/docker-registry deploys docker.io/openshift/origin-docker-registry:v3.11
deployment #1 deployed about an hour ago - 1 pod
svc/kubernetes - 172.30.0.1:443 -> 8443
svc/router - 172.30.119.192 ports 80, 443, 1936
dc/router deploys docker.io/openshift/origin-haproxy-router:v3.11
deployment #1 deployed about an hour ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
Creating a Project on OKD
Now that we have OKD installed and working, we can test the deployment by creating a test project. Switch to the test user account.
$ oc login
Authentication required for https://116.203.125.128:8443 (openshift)
Username: developer
Password: developer
Login successful.
Confirm that the login was successful.
$ oc whoami
developer
Create a new project using the oc new-project command.
$ oc new-project dev --display-name="Project1 - Dev" --description="My Dev Project"
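oc new-project switches your session to the new project automatically; confirm the active project and list all projects you can access with:
oc project
oc get projects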
Access Admin Console in a browser
OKD includes a web console which you can use for creation and management actions. The web console is accessible on the server IP/hostname on port 8443 via HTTPS.
https://<IP|Hostname>:8443/console
If you are redirected to https://127.0.0.1:8443/ when trying to access OpenShift web console, then do this:
1. Stop the OpenShift cluster.
$ oc cluster down
2. Edit the OKD configuration file.
$ nano ./openshift.local.clusterup/openshift-controller-manager/openshift-master.kubeconfig
Locate the line "server: https://127.0.0.1:8443", then replace it with:
server: https://serverip:8443
3. Then start the cluster again:
$ oc cluster up
You should see an OpenShift Origin window with Username and Password forms, similar to this one:
Login with:
Username: developer
Password: developer
You should see a dashboard similar to below.
A Project can be created from the web console.
Give it a name, an optional Display Name, and a Description. If you click on the project name, you get to the project management dashboard where you can Browse Catalog, Deploy Image, and Import YAML/JSON.
The status of a deployed project can be viewed from the CLI.
$ oc login
$ oc project <projectname>
$ oc status
In project My Project (myproject) on server https://116.203.125.128:8443
svc/parksmap-katacoda - 172.30.144.250:8080
dc/parksmap-katacoda deploys istag/parksmap-katacoda:1.0.0
deployment #1 deployed 4 minutes ago - 1 pod
2 infos identified, use 'oc status --suggest' to see details.
Deploy Test Application on OpenShift Origin
We can now deploy a test application in the cluster.
1. Log in to the OpenShift cluster:
$ oc login
Authentication required for https://127.0.0.1:8443 (openshift)
Username: developer
Password: developer
Login successful.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
2. Create a test project.
$ oc new-project test-project
3. Tag an application image from the Docker Hub registry.
$ oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
Tag deployment-example:latest set to openshift/deployment-example:v2.
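The tag creates an image stream in the current project, which you can list before deploying:
$ oc get imagestreams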
4. Deploy the application to OpenShift.
$ oc new-app deployment-example
--> Found image da61bb2 (3 years old) in image stream "test-project/deployment-example" under tag "latest" for "deployment-example"
* This image will be deployed in deployment config "deployment-example"
* Port 8080/tcp will be load balanced by service "deployment-example"
* Other containers can access this service through the hostname "deployment-example"
* WARNING: Image "test-project/deployment-example:latest" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
deploymentconfig.apps.openshift.io "deployment-example" created
service "deployment-example" created
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/deployment-example'
Run 'oc status' to view your app.
5. Show the application deployment status.
$ oc status
In project test-project on server https://127.0.0.1:8443
svc/deployment-example - 172.30.15.201:8080
dc/deployment-example deploys istag/deployment-example:latest
deployment #1 deployed about a minute ago - 1 pod
2 infos identified, use 'oc status --suggest' to see details.
6. Get detailed service information.
$ oc get svc
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
deployment-example   ClusterIP   172.30.15.201   <none>        8080/TCP   18m
$ oc describe svc deployment-example
Name: deployment-example
Namespace: test-project
Labels: app=deployment-example
Annotations: openshift.io/generated-by=OpenShiftNewApp
Selector: app=deployment-example,deploymentconfig=deployment-example
Type: ClusterIP
IP: 172.30.15.201
Port: 8080-tcp 8080/TCP
TargetPort: 8080/TCP
Endpoints: 172.17.0.12:8080
Session Affinity: None
Events: <none>
7. Test local access to the application.
curl http://172.30.15.201:8080
8. Show pod status.
$ oc get pods
NAME READY STATUS RESTARTS AGE
deployment-example-1-vmf7t 1/1 Running 0 21m
9. Allow external access to the application.
$ oc expose service/deployment-example
route.route.openshift.io/deployment-example exposed
$ oc get routes
NAME                 HOST/PORT                                                    PATH   SERVICES             PORT       TERMINATION   WILDCARD
deployment-example   deployment-example-testproject.services.computingpost.com          deployment-example   8080-tcp                 None
10. Test external access to the application.
Open the URL shown in your browser.
Note that I have a wildcard DNS record for *.services.computingpost.com pointing to the OpenShift Origin server IP address, and --routing-suffix set to 'services.computingpost.com' during deployment.
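You can also test the route from the command line, assuming the route hostname resolves to the server as described above:
curl -I http://deployment-example-testproject.services.computingpost.com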
11. Delete the test application.
$ oc delete all -l app=deployment-example
pod "deployment-example-1-8n8sd" deleted
replicationcontroller "deployment-example-1" deleted
service "deployment-example" deleted
deploymentconfig.apps.openshift.io "deployment-example" deleted
route.route.openshift.io "deployment-example" deleted
$ oc get pods
No resources found.
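The command above removes only the resources labelled with the application name; to drop the test project itself as well, delete it explicitly:
oc delete project test-project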
Read the OpenShift Origin documentation and stay connected for more updates.