Image-Based Installation for SNO with the RAN DU Profile and GitOps ZTP

Author: Motohiro Abe

Introduction

This post is a follow-up to my previous blog post — consider it Part 2.
In this entry, I’ll walk through an Image-Based Installation (IBI) of a Single Node OpenShift (SNO) using Zero Touch Provisioning (ZTP) with the Telco RAN DU profile.

The most important point to note is that this work closely follows the official Red Hat documentation, which I link in the relevant sections below.

Disclaimer: This blog is based on my personal experiments and testing, following official documentation. It is intended to share insights and practical experience and may not represent a production-ready deployment.

TL;DR: Demo

Here’s the demo recording, covering:

  • Deploying the seed cluster with ZTP
  • Pre-installation steps for the target SNO
  • Deploying the SNO with IBI and ZTP

In this blog, I aim to keep things as simple as possible while highlighting a few key points. The target SNO will be configured with the RAN DU profile—within the limits of my lab environment (running on KVM VMs).

That said, PTP site-specific configuration and SR-IOV virtual functions (VFs) are not included in this example. However, the necessary operators and performance profiles from the RAN DU reference configuration are applied.

The seed SNO will carry:

  • Day 2 operator subscriptions
  • Real-time kernel enabled
  • Performance Profile (a minimal sketch follows this list)
  • SriovOperatorConfig
  • ...everything except storage components, which are omitted based on IBI best practices.
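To make the Performance Profile item concrete, here is a minimal sketch of the kind of PerformanceProfile CR the seed can carry. The CPU ranges are illustrative values sized for a small KVM lab VM, not the RAN DU reference values:

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile
spec:
  cpu:
    # Illustrative split for a 4-vCPU lab VM; size these for your hardware.
    isolated: "2-3"
    reserved: "0-1"
  realTimeKernel:
    enabled: true          # enables the real-time kernel on the node
  nodeSelector:
    node-role.kubernetes.io/master: ""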

I believe this setup is a good starting point to explore the workflow of combining RAN DU profiles with IBI and ZTP.

Note: fine-tuning individual RAN profiles or applying detailed performance optimizations is beyond the scope of this blog.

High-Level Workflow

In this blog, I’ll demonstrate building two clusters using ZTP.

Seed Cluster Creation
   ↓
Generate Seed Image → RAN DU Profile + Operators
   ↓
Pre-install Target Node → Ship to Remote Site
   ↓
Finalize Deployment → ClusterInstance with IBI Template

Please note that the generation of the seed image and the staging of the target node are manual steps, which I described in my previous post; refer to that entry for details.
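For quick reference, the seed image in my setup is produced by the Lifecycle Agent's SeedGenerator CR (the seedgenerator.yaml that appears in the repository tree later in this post). A minimal sketch, with a hypothetical registry path:

apiVersion: lca.openshift.io/v1
kind: SeedGenerator
metadata:
  name: seedimage                              # the Lifecycle Agent expects this exact name
spec:
  seedImage: quay.io/example/seed-sno:4.19.4   # hypothetical registry/repository/tag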

The following sections highlight the key points I want to emphasize.
If you have any questions, feel free to reach out.

ACM Setup for ZTP

Assuming you have Red Hat Advanced Cluster Management (ACM) installed on your bare-metal host, but GitOps ZTP is not yet configured, the primary guide to follow is:

Enabling Assisted Installer Service

This guide provides comprehensive steps to enable the Assisted Installer service, which is crucial for setting up GitOps Zero Touch Provisioning (ZTP) on your hub cluster.
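In practice, enabling the service comes down to creating an AgentServiceConfig CR on the hub. A minimal sketch, with storage sizes that are illustrative rather than recommendations:

apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent                # must be named "agent"
spec:
  databaseStorage:           # PVC for the assisted-service database
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  filesystemStorage:         # PVC for logs and cluster manifests
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
  imageStorage:              # PVC for discovery/boot images
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 50Gi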

ACM IBI Operator

To support ZTP with Image-Based Installation (IBI), you must enable the Image-Based Install Operator in ACM.

This is done by updating the multiclusterengine instance with the following patch:

oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type json \
  --patch '[{"op": "add", "path":"/spec/overrides/components/-", "value": {"name":"image-based-install-operator","enabled": true}}]'
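After applying the patch, you can check that the operator has come up in the multicluster-engine namespace (the exact pod name may vary by release):

oc get pods -n multicluster-engine | grep image-based-install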

Set Up the GitOps Operator and Repository

The official documentation for reference is here:

Preparing the ZTP Git Repository

I highly recommend reviewing this documentation and examining the GitOps repository, which contains reference deployment files, example SiteConfig CRs, and profiles.
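The reference deployment files and source CRs ship inside the ztp-site-generate container image, and you can extract them locally with something like the following (adjust the image tag to match your OpenShift release):

mkdir -p ./out
podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.19 extract /home/ztp --tar | tar x -C ./out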

For simplicity, here is the structure of my repository:

└── ztp-ibi
    ├── argocd_deployment
    │   ├── policies-app.yaml
    │   ├── seed-clusters-app.yaml
    │   └── site1-clusters-app.yaml
    ├── ibi-seed-setup
    │   ├── image-based-installation-config.yaml
    │   └── seedimagegenerator
    │       ├── secret.yaml
    │       └── seedgenerator.yaml
    ├── policies
    │   ├── kustomization.yaml
    │   ├── ns.yaml
    │   ├── seed-common-ranGen.yaml
    │   ├── seed-group-du-sno-ranGen.yaml
    │   ├── site1-sno.yaml
    │   └── source-crs
    │       ├── ClusterLogCatSource.yaml
    │       ├── ClusterLogNS.yaml
    │       ├── ClusterLogOperatorStatus.yaml
    │       ├── ClusterLogOperGroup.yaml

.................

    │       ├── SriovSubscriptionOperGroup.yaml
    │       ├── SriovSubscription.yaml
    │       ├── StorageLVMCluster.yaml
    │       ├── StorageLVMSubscriptionNS.yaml
    │       ├── StorageLVMSubscriptionOperGroup.yaml
    │       ├── StorageLVMSubscription.yaml
    │       └── TunedPerformancePatch.yaml
    └── siteconfig
        ├── seed-sno
        │   ├── clusterinstance.yaml
        │   ├── extra-manifests
        │   │   ├── 01-container-mount-ns-and-kubelet-conf-master.yaml
        │   │   ├── 01-disk-encryption-pcr-rebind-master.yaml
        │   │   ├── 03-sctp-machine-config-master.yaml

...................

        │   │   ├── 98-var-lib-containers-partitioned.yaml
        │   │   ├── 99-crio-disable-wipe-master.yaml
        │   │   ├── 99-sync-time-once-master.yaml
        │   │   └── enable-crun-master.yaml
        │   ├── kustomization.yaml
        │   ├── ns.yaml
        │   └── secrets.yaml
        └── site1
            ├── kustomization.yaml
            └── sno1
                ├── clusterinstance.yaml
                ├── kustomization.yaml
                ├── ns.yaml
                └── secrets.yaml

GitOps Application

In ArgoCD, a Project is a logical grouping of applications.
In this setup, each application represents either a cluster or a set of policy deployments.

For this blog, I define three types of applications:

  • Seed Cluster application → points to the siteconfig/seed-sno folder
    File: ztp-ibi/argocd_deployment/seed-clusters-app.yaml

  • Remote Cluster application → points to the siteconfig/site1 folder, which contains the sno1 deployment manifests (a sketch of this Application follows the list)
    File: ztp-ibi/argocd_deployment/site1-clusters-app.yaml

  • Policies application → points to the policies folder, where the ACM policies are defined. These policies are applied using a LabelSelector
    File: ztp-ibi/argocd_deployment/policies-app.yaml
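For illustration, here is roughly what the site1 clusters Application looks like; the repository URL is a placeholder, and the project name follows the ZTP reference deployment:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: site1-clusters
  namespace: openshift-gitops
spec:
  project: ztp-app-project                             # project name from the ZTP reference deployment
  source:
    repoURL: https://github.com/example/ztp-repo.git   # placeholder: your Git repository
    targetRevision: main
    path: ztp-ibi/siteconfig/site1
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true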

Operators in the SNO Seed Cluster

The screenshot below shows an example of the Seed Cluster after a successful deployment with ACM and ZTP.
At this stage, the Topology Aware Lifecycle Manager (TALM) applies the defined policies, which completes the installation of the required Operators.

In addition to Operators, the associated MachineConfigs are also applied during the deployment:

As part of this configuration, a dedicated container partition is created, along with kernel parameters such as iommu.
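If you want to confirm these settings on the running seed node, a quick check is the kernel command line (the node name here is from my lab):

oc debug node/seed-sno -- chroot /host cat /proc/cmdline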

ClusterInstance Example for SNO1

The following is an example ClusterInstance manifest for sno1, using the Image-Based Install Operator with default templates.
Since the target node has already been pre-installed with the seed image, this YAML only includes site-specific configuration.

apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "sno1-clusterinstance"
  namespace: "sno1"
spec:
  clusterName: sno1
  clusterImageSetNameRef: img4.19.4-x86-64-appsub
  baseDomain: cotton.blue
  holdInstallation: false
  machineNetwork:
    - cidr: 192.168.1.0/24
  networkType: OVNKubernetes
  sshPublicKey: "<SSH KEY>"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  templateRefs:
    - name: ibi-cluster-templates-v1
      namespace: open-cluster-management
  extraLabels:
    ManagedCluster:
      siteName: "site1"
      group-du-sno: ""
      common: "true"
      du-profile: ""
  cpuPartitioningMode: AllNodes
  nodes:
    - role: master
      templateRefs:
        - name: ibi-node-templates-v1
          namespace: open-cluster-management
      bmcCredentialsName:
        name: "sno1-bmh-secret"
      bmcAddress: redfish-virtualmedia+http://192.168.1.105:8000/redfish/v1/Systems/b4dc8485-78df-43fb-96b0-2dbc6d837244
      bootMACAddress: 52:54:00:6a:b8:7d
      bootMode: UEFI
      hostName: sno1
      nodeNetwork:
        interfaces:
          - name: "enp1s0"
            macAddress: "52:54:00:6a:b8:7d"
        config:
          interfaces:
            - name: enp1s0
              type: ethernet
              state: up
              ipv4:
                enabled: true
                address:
                  - ip: "192.168.1.71"
                    prefix-length: 24
                dhcp: false
              ipv6:
                enabled: false
          dns-resolver:
            config:
              search:
              - cotton.blue
              server:
              - 192.168.1.1
          routes:
            config:
            - destination: 0.0.0.0/0
              next-hop-interface: enp1s0
              next-hop-address: 192.168.1.1
              table-id: 254

Conclusion

Between the Seed Cluster deployment and the SNO deployment with ZTP, there are still a few manual steps required—such as seed image generation, installation ISO creation, and staging the target node.
However, these steps can be automated. Once the seed image and installation ISO are prepared, the process can be scaled easily, dramatically reducing the time needed to deploy clusters at remote sites.

In this lab setup, the SNO became available in a fraction of the time a full installation would take.
I believe the image-based installation approach is a versatile option for telecom cloud engineers planning cluster deployments in edge environments, where speed and repeatability are critical.

Thanks for reading!

Note:
Edited with the help of AI tools for clarity