SiteConfig Operator Tutorial
I’m excited to try a new feature in ACM 2.12 that's now GA — the SiteConfig Operator.
TL;DR: With the SiteConfig Operator, you can deploy a cluster with just a single YAML file! (And no, this doesn't require the legacy SiteConfig CR, nor does it need the GitOps plugin.)
The SiteConfig Operator brings in a unified ClusterInstance API that simplifies cluster deployment by separating cluster definitions from installation methods. It supports both Git and non-Git workflows, providing scalability and flexibility for cluster management. Overall, this new functionality enables more efficient, scalable, and customizable management of OpenShift clusters.
What Is the SiteConfig Operator?
Here is the description of the SiteConfig Operator, taken from the ACM documentation site.
The SiteConfig Operator improves upon the older SiteConfig API from the SiteConfig Generator Kustomize plugin by introducing several enhancements to cluster management and deployment:
- Isolation: Separates the cluster definition from the deployment method, enabling you to define the cluster using the ClusterInstance CR while the templates manage the architecture and installation process.
- Unification: Supports both Git and non-Git workflows, so you can apply ClusterInstance CRs directly or sync them using a GitOps tool like ArgoCD.
- Consistency: Provides a consistent API for all installation methods, including Assisted Installer, Image-Based Install Operator, and custom templates.
- Scalability: Enhances cluster scalability compared to the SiteConfig Kustomize plugin.
- Flexibility: Allows users to create custom cluster templates for flexible deployments.
- Troubleshooting: Improves troubleshooting by offering more detailed insights into the cluster deployment status and rendered manifests.
In this blog post, I’ll walk you through how to deploy a Single-Node OpenShift (SNO) cluster using the SiteConfig Operator and demonstrate just how easy it is!
For more details on the SiteConfig Operator, check out the official documentation.
Prerequisites and Assumptions
Before you begin, ensure that you have a Red Hat Advanced Cluster Management (ACM) version 2.12 hub cluster running.
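If you're not sure which version your hub is running, you can check the MultiClusterHub resource. This assumes the default resource name and namespace:

$ oc get multiclusterhub multiclusterhub -n open-cluster-management -o jsonpath='{.status.currentVersion}'

The reported version should be 2.12 or later.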
Step 1: Enable the SiteConfig Operator from the MultiClusterHub resource
Enabling the SiteConfig Operator is straightforward. You can enable it by running a single command.
By default, the multiclusterhub CR is installed in the open-cluster-management namespace.
Run the following command to enable the SiteConfig operator:
$ oc patch multiclusterhubs.operator.open-cluster-management.io multiclusterhub -n open-cluster-management --type json --patch '[{"op": "add", "path":"/spec/overrides/components/-", "value": {"name":"siteconfig","enabled": true}}]'
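The patch simply appends a new entry to the component overrides list. After it's applied, the relevant part of the multiclusterhub spec should look roughly like this:

spec:
  overrides:
    components:
    # ...other components...
    - name: siteconfig
      enabled: true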
You can verify the change by checking the pods:
$ oc get pod -n open-cluster-management | grep siteconfig
siteconfig-controller-manager-59c6c6976d-mgzbq   2/2   Running   0   52s
Once the operator is enabled, you'll have the default set of templates available.
In this tutorial, we will be using the Assisted Installer method templates.
You can verify the templates available in the config maps:
$ oc get cm -n open-cluster-management
NAME                       DATA   AGE
ai-cluster-templates-v1    5      78s
ai-node-templates-v1       2      78s
ibi-cluster-templates-v1   3      78s
ibi-node-templates-v1      3      78s
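Each config map bundles the default templates for one installation method. If you're curious about what a template set contains, you can list the keys in its data section (a quick inspection, assuming jq is installed):

$ oc get cm ai-cluster-templates-v1 -n open-cluster-management -o json | jq '.data | keys'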
And that’s it: you're now ready to deploy a cluster with the SiteConfig Operator!
Step 2: Deploy a Single-Node OpenShift (SNO) Cluster
The process to deploy a cluster is incredibly simple and follows the same flow as Zero-Touch Provisioning (ZTP) or any other ACM-based deployment.
- Create a namespace for the cluster.
- Create the BMC and pull secrets required for the installation.
- Create the ClusterInstance CR.
Step 2.1: Create the target namespace
First, create a YAML file for the target namespace. In this example, we will use the same namespace name as in the official documentation. The YAML file is named clusterinstance-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: example-sno
Now, apply the YAML file to create the namespace for the cluster:
$ oc apply -f clusterinstance-namespace.yaml
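Alternatively, you can create the namespace directly without a YAML file:

$ oc create namespace example-sno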
Step 2.2: Create the pull secret
As always, to enable your cluster to pull images from container registries, you need to create a pull secret. This secret contains the necessary credentials for accessing container images.
Here’s an example of what the YAML file (pull-secret.yaml) should look like:
apiVersion: v1
data:
.dockerconfigjson: <encoded_docker_configuration>
kind: Secret
metadata:
name: pull-secret
namespace: example-sno
type: kubernetes.io/dockerconfigjson
After updating the YAML file, apply the pull secret to the cluster:
$ oc apply -f pull-secret.yaml
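Alternatively, if you already have your pull secret saved as a JSON file (for example, downloaded from the Red Hat console), you can create the secret in one step instead of base64-encoding it yourself. The file path below is just a placeholder:

$ oc create secret generic pull-secret -n example-sno --type=kubernetes.io/dockerconfigjson --from-file=.dockerconfigjson=/path/to/pull-secret.json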
Step 2.3: Create the BMC secret
In this step, we’ll create a secret that is required to connect to your Baseboard Management Controller (BMC) when using the Assisted Installer (AI) method.
Here’s a sample YAML file for creating the BMC secret. Save this file as example-bmc-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: example-bmh-secret
namespace: "example-sno"
type: Opaque
data:
username: <base64-encoded-username>
password: <base64-encoded-password>
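To produce the encoded values, run base64 on the plain-text credentials (the username and password below are placeholders; -n avoids encoding a trailing newline):

$ echo -n 'admin' | base64
$ echo -n 'mypassword' | base64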
Once you’ve updated the YAML with the correct encoded values, apply the secret to your cluster:
$ oc apply -f example-bmc-secret.yaml
Step 2.4: Create the ClusterInstance CR
Finally, create the ClusterInstance Custom Resource (CR) to define the cluster configuration.
Below is an example of an SNO (Single-Node OpenShift) configuration using the Assisted Installer (AI) method. Save it as clusterinstance-ai.yaml:
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
name: "example-clusterinstance"
namespace: "example-sno"
spec:
clusterName: example-sno
clusterImageSetNameRef: img4.17.4-x86-64-appsub
baseDomain: cotton.blue
holdInstallation: false
# extraManifestsRefs:
# - name: extra-machine-configs
# - name: enable-crun
machineNetwork:
- cidr: 192.168.1.0/24
networkType: OVNKubernetes
sshPublicKey: "<my public ssh-key>"
pullSecretRef:
name: "pull-secret"
templateRefs:
- name: ai-cluster-templates-v1
namespace: open-cluster-management
nodes:
- role: master
templateRefs:
- name: ai-node-templates-v1
namespace: open-cluster-management
bmcCredentialsName:
name: "example-bmh-secret"
bmcAddress: redfish-virtualmedia+https://192.168.1.102:8000/redfish/v1/Systems/35f0970f-e83a-400a-99b7-8faca85b761f
bootMACAddress: 52:54:00:ac:c6:ae
bootMode: UEFI
hostName: example-sno
nodeNetwork:
interfaces:
- name: "ens1s0"
macAddress: "52:54:00:ac:c6:ae"
config:
interfaces:
- name: ens1s0
type: ethernet
state: up
ipv4:
enabled: true
dhcp: true
dns-resolver:
config:
search:
- cotton.blue
server:
- 192.168.1.1
routes:
config:
- destination: 0.0.0.0/0
next-hop-interface: ens1s0
next-hop-address: 192.168.1.1
table-id: 254
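One thing to double-check before applying: the clusterImageSetNameRef must match a ClusterImageSet that exists on the hub. You can list the available ones with:

$ oc get clusterimageset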
Once you’ve created and updated the YAML file, apply the ClusterInstance CR using the following command:
$ oc apply -f clusterinstance-ai.yaml
At this point, the cluster will appear in the ACM GUI, and you should see the inventory for the new cluster. This process is very similar to what you’d experience with ZTP (Zero-Touch Provisioning) or other ACM-based deployments.
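You can also confirm from the command line that the SiteConfig Operator validated and rendered the installation manifests by inspecting the ClusterInstance status conditions (a quick sketch using jq; the exact condition names may vary by version):

$ oc get clusterinstance example-clusterinstance -n example-sno -o json | jq '.status.conditions[] | {type, status, message}'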
Here are some screenshots of the cluster in the ACM GUI:
ACM dashboard view:
Cluster Inventory Views:
Step 3: Monitor Cluster Installation
The ACM GUI shows the progress of your cluster installation.
In addition, you can monitor the status from the command line using the following command:
$ oc get clusterdeployment example-sno -n example-sno -o json | jq '.status.conditions[] | select(.type == "ClusterInstallCompleted") | .message'
Once the ClusterInstallCompleted condition reports that the installation is complete, the SNO cluster is ready!
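From there, you can retrieve the admin kubeconfig for the new cluster via the ClusterDeployment's metadata (a sketch, assuming the Hive-managed secret reference has been populated):

$ KUBECONFIG_SECRET=$(oc get clusterdeployment example-sno -n example-sno -o jsonpath='{.spec.clusterMetadata.adminKubeconfigSecretRef.name}')
$ oc extract secret/$KUBECONFIG_SECRET -n example-sno --keys=kubeconfig --to=-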
How simple is that?
Conclusion
The SiteConfig Operator provides a powerful, unified API that simplifies and enhances the cluster deployment process in OpenShift. With just a few YAML files, you can easily deploy a Single-Node OpenShift (SNO) cluster, whether you're using GitOps or non-Git workflows.
This feature offers improved scalability, flexibility, and troubleshooting capabilities, making it an excellent tool for managing clusters with Red Hat ACM.
In future posts, I will explore further customization options for the manifests to suit different cluster configurations.