Calico Installation: OpenShift (Assisted-Installer)
Author: Brandon B. Jozsa
Table of Contents
- Part I: Create a Cluster in Assisted-Service
- Part II: Edit the Install Config
- Part III: Generating the Assisted-Service ISO
- Part IV: Prepare and Deploy the Calico Manifests
- Part V: Deploying the eBPF Dataplane
Red Hat quietly released a new method for installing bare-metal OpenShift clusters via a tool called Assisted-Installer, which is based on the Assisted-Service project. What makes this installer unique is that it greatly reduces the infrastructure requirements for provisioning bare metal (e.g. IPMI or Redfish management, DHCP, web servers, etc.). Reduction of these traditional bare-metal provisioning requirements opens up some really interesting opportunities for telco and other provider deployments such as uCPE, RAN, CDN, MEC, and many other Edge and FE types of solutions.
So, if the assisted-installer doesn't require any of the traditional bare-metal bootstrap infrastructure, then how does it work? Think of the assisted-installer first in terms of a declarative API, similar to Kubernetes. If you want to make general changes to a cluster, or more specifically to each individual host within a given cluster, you first need to instruct the API of the changes you want. Each bare-metal node boots with a custom-configured liveISO and begins to check in to the API. It will wait in a staged state until it receives instructions from the API. The liveISO works just like any other liveISO you might already be familiar with, in that it's loaded into memory and the user can initiate low-level machine actions, such as reformatting the disk and installing an OS. The assisted-service builds these liveISOs automatically, based on user instructions (more on this below). The liveISO includes instructions to securely connect back to the assisted-service API and wait (in a staged-like state) for further instructions; for example, how each member of a given cluster should be configured. This includes ignition instructions, which have the ability to wipe the disks, deploy the OS, and configure any customizations. This can include per-node agents (via containers), systemd units, OpenShift customizations, Operators, and even other OpenShift-based applications. The options are really quite limitless, and this allows us to build unique and incredible solutions for customers.
This brings me to my main point. How can you deploy a customized OpenShift cluster via the Assisted-Installer? Well, let's get into that now.
This blog post will describe how to install the following:
- OpenShift 4.8 (the examples below use the 4.8.4 release image)
- Single-node (introduced in 4.8.x) or multi-node OpenShift
- Calico as the default CNI (plus instructions on how to enable eBPF)
- All via the Assisted-Installer over REST calls (think automation)
To access the assisted-installer, log into your cloud.redhat.com account. This will work for 60-day evaluations as well. If you have questions, please leave them in the comments below. So let's get started!
Required Links
- Assisted-Service Red Hat Cloud
- Red Hat Token
Part I: Create a Cluster in Assisted-Service
In order to use the Assisted-Service API via cloud.redhat.com, you will need a bearer token. Red Hat provides a user-level "OpenShift Cluster Manager API Token", which can be exchanged for one.
- Once you have created an OpenShift Cluster Manager API Token, use the "Copy to clipboard" function and provide it as the OFFLINE_ACCESS_TOKEN variable:
OFFLINE_ACCESS_TOKEN="<PASTE_TOKEN_HERE>"
export TOKEN=$(curl \
--silent \
--data-urlencode "grant_type=refresh_token" \
--data-urlencode "client_id=cloud-services" \
--data-urlencode "refresh_token=${OFFLINE_ACCESS_TOKEN}" \
https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token | \
jq -r .access_token)
IMPORTANT: This token expires often (every 5 minutes), so if you receive a 400 - Token is expired, reissue the command above.
VERIFY: You can check whether the TOKEN variable is set via the following conditional:
if [ -z "${TOKEN+x}" ]; then
  echo "Token is undefined. Please check formatting before continuing."
else
  echo "Token is ready."
fi
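Since the token expires every few minutes, it can also be convenient to wrap the refresh in a small shell function that you re-run whenever a call starts returning 400. This is just a convenience sketch (the function name refresh_token is my own); it assumes OFFLINE_ACCESS_TOKEN is still set in your shell:
refresh_token() {
  # Exchange the long-lived offline token for a fresh short-lived bearer token
  TOKEN=$(curl \
    --silent \
    --data-urlencode "grant_type=refresh_token" \
    --data-urlencode "client_id=cloud-services" \
    --data-urlencode "refresh_token=${OFFLINE_ACCESS_TOKEN}" \
    https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token | \
    jq -r .access_token)
  export TOKEN
}
With this in place, running refresh_token before each batch of API calls keeps the rest of the commands in this post working without re-pasting anything.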
- Review the following contents, and modify them as needed. These variables will be used throughout the rest of the demonstration and have been tested to work:
ASSISTED_SERVICE_API="api.openshift.com"
CLUSTER_VERSION="4.8"
CLUSTER_IMAGE="quay.io/openshift-release-dev/ocp-release:4.8.4-x86_64"
CLUSTER_NAME="calico-poc"
CLUSTER_DOMAIN="jinkit.com"
CLUSTER_NET_TYPE="Calico"
CLUSTER_CIDR_NET="10.128.0.0/14"
CLUSTER_CIDR_SVC="172.30.0.0/16"
CLUSTER_HOST_NET="192.168.3.0/24"
CLUSTER_HOST_PFX="23"
CLUSTER_WORKER_HT="Enabled"
CLUSTER_WORKER_COUNT="0"
CLUSTER_MASTER_HT="Enabled"
CLUSTER_MASTER_COUNT="0"
CLUSTER_SSHKEY='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDE1F7Fz3MGgOzst9h/2+5/pbeqCfFFhLfaS0Iu4Bhsr7RenaTdzVpbT+9WpSrrjdxDK9P3KProPwY2njgItOEgfJO6MnRLE9dQDzOUIQ8caIH7olzxy60dblonP5A82EuVUnZ0IGmAWSzUWsKef793tWjlRxl27eS1Bn8zbiI+m91Q8ypkLYSB9MMxQehupfzNzJpjVfA5dncZ2S7C8TFIPFtwBe9ITEb+w2phWvAE0SRjU3rLXwCOWHT+7NRwkFfhK/moalPGDIyMjATPOJrtKKQtzSdyHeh9WyKOjJu8tXiM/4jFpOYmg/aMJeGrO/9fdxPe+zPismC/FaLuv0OACgJ5b13tIfwD02OfB2J4+qXtTz2geJVirxzkoo/6cKtblcN/JjrYjwhfXR/dTehY59srgmQ5V1hzbUx1e4lMs+yZ78Xrf2QO+7BikKJsy4CDHqvRdcLlpRq1pe3R9oODRdoFZhkKWywFCpi52ioR4CVbc/tCewzMzNSKZ/3P0OItBi5IA5ex23dEVO/Mz1uyPrjgVx/U2N8J6yo9OOzX/Gftv/e3RKwGIUPpqZpzIUH/NOdeTtpoSIaL5t8Ki8d3eZuiLZJY5gan7tKUWDAL0JvJK+EEzs1YziBh91Dx1Yit0YeD+ztq/jOl0S8d0G3Q9BhwklILT6PuBI2nAEOS0Q=='
VERIFY: Verify that you can talk to the API correctly with the following curl command (install jq before running this command):
curl -s -X GET "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters" \
-H "accept: application/json" \
-H "Authorization: Bearer $TOKEN" \
| jq -r
If you are able to verify/return a response via the command above, you can continue to the next steps and deploy a cluster with modifications!
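If you have more than one cluster registered, a compact listing can make it easier to spot the one you care about. This is a small sketch built on the same endpoint; id, name, and status are standard fields on the returned cluster objects:
curl -s -X GET "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters" \
  -H "accept: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  | jq -r '.[] | "\(.id)  \(.name)  \(.status)"'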
- Next, download your pull_secret from the following URL: https://cloud.redhat.com/openshift/install/pull-secret
HINT: All of these instructions are intended to be run from the current directory. With that in mind, make sure that the pull-secret.txt file and any installation artifacts are where you want them to be before continuing.
- Next, create a variable with the raw contents of your pull-secret.txt file. This is important, because escape characters need to be included as part of this output:
PULL_SECRET=$(cat pull-secret.txt | jq -R .)
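Before moving on, it's worth a quick sanity check that pull-secret.txt is valid JSON and that the variable was actually populated — a minimal check sketch:
# Exits non-zero if the pull secret is not valid JSON
jq -e . pull-secret.txt > /dev/null && echo "pull-secret.txt is valid JSON"
# Print the first few characters of the escaped, quoted value
echo "${PULL_SECRET:0:40}..."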
- Now create an Assisted-Service deployment.json file:
cat << EOF > ./deployment.json
{
"kind": "Cluster",
"name": "$CLUSTER_NAME",
"openshift_version": "$CLUSTER_VERSION",
"ocp_release_image": "$CLUSTER_IMAGE",
"base_dns_domain": "$CLUSTER_DOMAIN",
"hyperthreading": "all",
"cluster_network_cidr": "$CLUSTER_CIDR_NET",
"cluster_network_host_prefix": $CLUSTER_HOST_PFX,
"service_network_cidr": "$CLUSTER_CIDR_SVC",
"user_managed_networking": true,
"vip_dhcp_allocation": false,
"host_networks": "$CLUSTER_HOST_NET",
"hosts": [],
"ssh_public_key": "$CLUSTER_SSHKEY",
"pull_secret": $PULL_SECRET
}
EOF
HINT: If you receive an error that you cannot overwrite the file, make sure to run setopt clobber in your shell.
HINT: There's a new option in OpenShift v4.8.x that allows you to create a single-node OpenShift cluster (referred to as SNO). If this is what you want, you will need to add the following line to the ./deployment.json file:
"high_availability_mode": "None"
As an example, a SNO deployment will look like this:
cat << EOF > ./deployment.json
{
"kind": "Cluster",
"name": "$CLUSTER_NAME",
"openshift_version": "$CLUSTER_VERSION",
"ocp_release_image": "$CLUSTER_IMAGE",
"base_dns_domain": "$CLUSTER_DOMAIN",
"hyperthreading": "all",
"cluster_network_cidr": "$CLUSTER_CIDR_NET",
"cluster_network_host_prefix": $CLUSTER_HOST_PFX,
"service_network_cidr": "$CLUSTER_CIDR_SVC",
"user_managed_networking": true,
"vip_dhcp_allocation": false,
"high_availability_mode": "None",
"host_networks": "$CLUSTER_HOST_NET",
"hosts": [],
"ssh_public_key": "$CLUSTER_SSHKEY",
"pull_secret": $PULL_SECRET
}
EOF
- Create the cluster via the Assisted-Service API:
curl -s -X POST "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters" \
-d @./deployment.json \
--header "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
| jq '.id'
- IMPORTANT: This will generate a CLUSTER_ID, which will need to be exported for future use. Export this variable from the output of the previous command in Step 5:
curl -s -X POST "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters" \
-d @./deployment.json \
--header "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
| jq '.id'
"0da7cf59-a9fd-4310-a7bc-97fd95442ca1"
CLUSTER_ID="0da7cf59-a9fd-4310-a7bc-97fd95442ca1"
We're going to need this CLUSTER_ID variable for the next step, which is where we'll edit the CNI option for the OpenShift Installation Configuration.
Part II: Edit the Install Config
Now you can update the cluster install-config via the Assisted-Service API:
curl \
--header "Content-Type: application/json" \
--request PATCH \
--data '"{\"networking\":{\"networkType\":\"Calico\"}}"' \
-H "Authorization: Bearer $TOKEN" \
"https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/install-config"
VERIFY: You can review your changes by issuing the following curl request:
curl -s -X GET \
--header "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
"https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/install-config" \
| jq -r
Now you can move on to generating the Installation Media (ISO), which can then be mounted on your bare metal machine, or on a guest machine if you're using a hypervisor.
Part III: Generating the Assisted-Service ISO
- Create another json file, this time called iso-params.json, which will be used to generate the deployment ISO:
cat << EOF > ./iso-params.json
{
"ssh_public_key": "$CLUSTER_SSHKEY",
"pull_secret": $PULL_SECRET
}
EOF
- Now use the following command to POST a request for Assisted-Service to build the deployment ISO:
curl -s -X POST "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/downloads/image" \
-d @iso-params.json \
--header "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
| jq '.'
- Use curl to download the ISO you just generated. This ISO will be used to build the OpenShift cluster, as with any other Assisted-Service deployment:
curl \
-H "Authorization: Bearer $TOKEN" \
-L "http://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/downloads/image" \
-o ai-liveiso-$CLUSTER_ID.iso
- Lastly, boot the bare metal instance from the ISO you just downloaded.
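If you're booting a physical server from a USB stick rather than virtual media, you can write the ISO with dd. A minimal sketch; /dev/sdX is a placeholder for your USB device (check with lsblk first, since dd will overwrite whatever you point it at):
# Write the liveISO to a USB device (replace /dev/sdX with the correct device)
sudo dd if=ai-liveiso-$CLUSTER_ID.iso of=/dev/sdX bs=4M status=progress conv=fsync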
Part IV: Prepare and Deploy the Calico Manifests
- Create a manifests folder, and download the manifests for Calico. Be sure to check Calico's documentation for any potential changes:
mkdir manifests
curl https://docs.projectcalico.org/manifests/ocp/crds/01-crd-apiserver.yaml -o manifests/01-crd-apiserver.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/01-crd-installation.yaml -o manifests/01-crd-installation.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/01-crd-imageset.yaml -o manifests/01-crd-imageset.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/01-crd-tigerastatus.yaml -o manifests/01-crd-tigerastatus.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_bgpconfigurations.yaml -o manifests/crd.projectcalico.org_bgpconfigurations.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_bgppeers.yaml -o manifests/crd.projectcalico.org_bgppeers.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_blockaffinities.yaml -o manifests/crd.projectcalico.org_blockaffinities.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_clusterinformations.yaml -o manifests/crd.projectcalico.org_clusterinformations.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_felixconfigurations.yaml -o manifests/crd.projectcalico.org_felixconfigurations.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_globalnetworkpolicies.yaml -o manifests/crd.projectcalico.org_globalnetworkpolicies.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_globalnetworksets.yaml -o manifests/crd.projectcalico.org_globalnetworksets.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_hostendpoints.yaml -o manifests/crd.projectcalico.org_hostendpoints.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_ipamblocks.yaml -o manifests/crd.projectcalico.org_ipamblocks.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_ipamconfigs.yaml -o manifests/crd.projectcalico.org_ipamconfigs.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_ipamhandles.yaml -o manifests/crd.projectcalico.org_ipamhandles.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_ippools.yaml -o manifests/crd.projectcalico.org_ippools.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_kubecontrollersconfigurations.yaml -o manifests/crd.projectcalico.org_kubecontrollersconfigurations.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_networkpolicies.yaml -o manifests/crd.projectcalico.org_networkpolicies.yaml
curl https://docs.projectcalico.org/manifests/ocp/crds/calico/kdd/crd.projectcalico.org_networksets.yaml -o manifests/crd.projectcalico.org_networksets.yaml
curl https://docs.projectcalico.org/manifests/ocp/tigera-operator/00-namespace-tigera-operator.yaml -o manifests/00-namespace-tigera-operator.yaml
curl https://docs.projectcalico.org/manifests/ocp/tigera-operator/02-rolebinding-tigera-operator.yaml -o manifests/02-rolebinding-tigera-operator.yaml
curl https://docs.projectcalico.org/manifests/ocp/tigera-operator/02-role-tigera-operator.yaml -o manifests/02-role-tigera-operator.yaml
curl https://docs.projectcalico.org/manifests/ocp/tigera-operator/02-serviceaccount-tigera-operator.yaml -o manifests/02-serviceaccount-tigera-operator.yaml
curl https://docs.projectcalico.org/manifests/ocp/tigera-operator/02-configmap-calico-resources.yaml -o manifests/02-configmap-calico-resources.yaml
curl https://docs.projectcalico.org/manifests/ocp/tigera-operator/02-tigera-operator.yaml -o manifests/02-tigera-operator.yaml
curl https://docs.projectcalico.org/manifests/ocp/01-cr-installation.yaml -o manifests/01-cr-installation.yaml
curl https://docs.projectcalico.org/manifests/ocp/01-cr-apiserver.yaml -o manifests/01-cr-apiserver.yaml
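Before encoding and uploading, it's worth confirming that every download succeeded and that none of the files is an HTML error page. A quick check — the count of 27 files simply reflects the list above at the time of writing and may change as Calico's documentation evolves:
ls manifests/*.yaml | wc -l        # expect 27 files, based on the list above
grep -L "kind:" manifests/*.yaml   # any file listed here likely failed to download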
- Next, we need to base64-encode each of the Calico manifests and upload them to the Assisted-Installer so that they are included as part of the deployment manifests directory. Before starting, verify whether any manifests have been previously uploaded for this cluster. If nothing (or only empty brackets) is returned, continue; this is what we want:
curl -s -X GET \
--header "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
"https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/manifests"
- Set a variable for the location of the Calico manifests. This will be used in the next command, where we'll base64-encode each of the yaml documents:
MANIFESTS=(manifests/*.yaml)
- Run the following BASH/ZSH loop to POST each base64 encoded manifest to the Assisted-Service API automatically:
total=${#MANIFESTS[@]}
i=0
for file in "${MANIFESTS[@]}"; do
i=$(( i + 1 ))
eval "CALICO_B64_MANIFEST=$(cat $file | base64 -w 0)";
eval "BASEFILE=$(basename $file)";
printf "Processing file: $file \n"
printf "Basename of file: $BASEFILE \n"
curl \
--header "Content-Type: application/json" \
--request POST \
-H "Authorization: Bearer $TOKEN" \
--data "{\"file_name\":\"$BASEFILE\", \"folder\":\"manifests\", \"content\":\"$CALICO_B64_MANIFEST\"}" \
"https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/manifests"
done
printf "Total Manifests: $total \n"
VERIFY: You can review your changes by issuing the following curl request:
curl -s -X GET \
--header "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
"https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/manifests" | jq -r
- IMPORTANT: At this point, you MUST boot each of your servers from the LiveISO downloaded in Part III, Step 3, and verify that each has correctly checked in to the Assisted-Installer UI. DO NOT change any settings, but you can safely click "Next" until you see the list of staged hosts.
- Now it is time to update the cluster with the host-level subnet:
curl -X PATCH \
"https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d "{ \"machine_network_cidr\": \"$CLUSTER_HOST_NET\"}" | jq
NOTE: If you receive an error, be sure that you have hosts staged.
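If you'd rather confirm from the CLI (instead of the UI) that your hosts have checked in, the cluster's hosts can be listed via the API — a hedged sketch, assuming the v1 hosts endpoint and its requested_hostname/status fields:
curl -s -X GET \
  -H "Authorization: Bearer $TOKEN" \
  "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/hosts" \
  | jq -r '.[] | "\(.requested_hostname)  \(.status)"'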
- (a) Finally, it is time to start the installation! There are two ways you can do this; the first is via an API call (of course):
curl -X POST \
"https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/actions/install" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" | jq
- (b) The other method is to confirm the cluster deployment in the Web UI by clicking Next a couple of times, and finally Finish.
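Whichever method you choose, you can follow the installation from the CLI by polling the cluster object — status and status_info are top-level fields on the cluster resource (remember to refresh your token if the install runs longer than a few minutes):
curl -s -X GET \
  -H "Authorization: Bearer $TOKEN" \
  "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID" \
  | jq -r '"\(.status): \(.status_info)"'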
That should do it! You've deployed a custom OpenShift cluster using the Assisted-Installer.
Part V: Deploying the eBPF Dataplane
eBPF is the real deal; however, you will want to explore any potential limitations of implementing eBPF and take into consideration your specific use cases and requirements.
- First, create a ConfigMap in the tigera-operator namespace. For the KUBERNETES_SERVICE_HOST variable used below, make sure to use the IP address that's been assigned to api.<cluster-name>.<example.com> (if you're unsure of the address, see the lookup sketch after this step):
KUBERNETES_SERVICE_HOST=192.168.3.32
KUBERNETES_SERVICE_PORT=6443
cat << EOF > ./02-configmap-endpoint-tigera-operator.yaml
kind: ConfigMap
apiVersion: v1
metadata:
name: kubernetes-services-endpoint
namespace: tigera-operator
data:
KUBERNETES_SERVICE_HOST: "$KUBERNETES_SERVICE_HOST"
KUBERNETES_SERVICE_PORT: "$KUBERNETES_SERVICE_PORT"
EOF
oc apply -f ./02-configmap-endpoint-tigera-operator.yaml
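If you're not sure which address the API name resolves to, you can look it up instead of hard-coding it — a small sketch, assuming the api.<cluster-name>.<domain> DNS record already resolves in your environment and that dig is installed:
# Resolve the API address for this cluster and reuse it for the ConfigMap above
KUBERNETES_SERVICE_HOST=$(dig +short "api.$CLUSTER_NAME.$CLUSTER_DOMAIN" | head -n1)
echo "$KUBERNETES_SERVICE_HOST"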
- IMPORTANT: Because of Kubernetes Issue 30189, wait for 60 seconds before forcing the tigera-operator to perform a rolling update of Calico across each of the nodes:
sleep 60
oc delete pod -n tigera-operator -l k8s-app=tigera-operator
oc wait deployment.apps/calico-kube-controllers --for condition=available -n calico-system
oc wait deployment.apps/calico-typha --for condition=available -n calico-system
oc rollout status daemonset.apps/calico-node -n calico-system
- Now it's time to enable the Calico eBPF dataplane. To do this, enter the following command to patch the linuxDataplane key with the value of BPF for the Tigera Operator:
oc patch installation.operator.tigera.io default --type merge -p '{"spec":{"calicoNetwork":{"linuxDataplane":"BPF", "hostPorts":null}}}'
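After the patch, the Tigera Operator re-renders calico-node with the eBPF dataplane. You can watch the rollout and overall Calico health with the same tools used earlier — the TigeraStatus resource comes from the CRDs downloaded in Part IV:
# Wait for calico-node to finish rolling out with the new dataplane
oc rollout status daemonset.apps/calico-node -n calico-system
# All components should eventually report AVAILABLE as True
oc get tigerastatus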
- With that complete, it's time to remove the OpenShift-provided kube-proxy. Skipping this step can result in deployment confusion within your environment, and it will increase CPU use:
oc patch networks.operator.openshift.io cluster --type merge -p '{"spec":{"deployKubeProxy": false}}'
Optional: If you'd like, you can now enable DSR mode (Direct Server Return). Read more HERE:
calicoctl patch felixconfiguration default --patch='{"spec": {"bpfExternalServiceMode": "DSR"}}'
Reversal
If you want to remove the eBPF dataplane and return to the iptables-based dataplane, follow these steps.
- To reverse the eBPF dataplane as described above, first re-enable the OpenShift-provided kube-proxy service:
oc patch networks.operator.openshift.io cluster --type merge -p '{"spec":{"deployKubeProxy": true}}'
- Lastly, re-enable iptables for the Calico Linux dataplane:
oc patch installation.operator.tigera.io default --type merge -p '{"spec":{"calicoNetwork":{"linuxDataplane":"Iptables"}}}'
That should do it! Enjoy and explore your eBPF-enabled cluster.