
Workload Onboarding on Managed Clusters through OpenShift GitOps

Kubernetes Jun 27, 2023

Author: Motohiro Abe

Welcome to my technical blog; this is my first entry!

In this write-up, I aim to provide some insight for those interested in leveraging ACM (Red Hat Advanced Cluster Management) and the OpenShift GitOps Operator when deploying OCP (OpenShift Container Platform) clusters, as well as deploying applications onto them. I found that the exact procedure for setting up workloads on spoke clusters as separate applications within the same hub-cluster ArgoCD instance can be a bit tedious to track down, so I wanted to share my experience and shed some light on the process.


For the purpose of this example, it is assumed that the spoke cluster "sno1" has already been deployed and imported into ACM. This step is necessary so that the placement rule can be triggered as part of the subsequent process. Additionally, it is assumed that the deployment of sno1 was carried out using the SiteConfig generator.


First, it is important to ensure that the managed cluster has the "feature.open-cluster-management.io/addon-application-manager": "available" label applied. This label indicates that the Application Manager add-on is available and enabled on the cluster. To verify the label, you can use the following command:

$ oc get managedcluster sno1 -o jsonpath='{.metadata.labels}'

Replace sno1 with the actual name of your managed cluster. The command above retrieves the labels associated with the ManagedCluster resource (which is cluster-scoped). Look for the presence of the "feature.open-cluster-management.io/addon-application-manager" label and verify that its value is set to "available".

If the label is present and set to “available”, it indicates that the Application Manager feature is ready to be utilized on the cluster. If the label is not present or set to a different value, you may need to enable or configure the Application Manager feature on the cluster before proceeding with the next steps.
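As a small sketch of what that check looks like in practice, the snippet below parses the labels JSON returned by the jsonpath query. The output is stubbed with example data rather than taken from a live cluster, so it can be tried anywhere:

```shell
# Stubbed output of:
#   oc get managedcluster sno1 -o jsonpath='{.metadata.labels}'
labels='{"feature.open-cluster-management.io/addon-application-manager":"available","name":"sno1"}'

# Check whether the add-on label carries the value "available".
case "$labels" in
  *'"feature.open-cluster-management.io/addon-application-manager":"available"'*)
    echo "application-manager add-on is available" ;;
  *)
    echo "application-manager add-on is NOT available" ;;
esac
```

On a real hub cluster, you would feed the jsonpath output into the same check instead of the stubbed string.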

If this label is missing, it can be added through siteconfig.yaml together with a KlusterletAddonConfigOverride.yaml.

To update the siteconfig.yaml file and override the klusterletAddonConfig with a separate YAML file, follow these steps:

  1. Add the following lines to the siteconfig.yaml file, under the cluster entry:

     crTemplates:
       KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml"

  2. Next, update or create KlusterletAddonConfigOverride.yaml in the same folder as siteconfig.yaml. In this file, you will override the configuration for the applicationManager component.
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "2"
  name: "{{ .Cluster.ClusterName }}"
  namespace: "{{ .Cluster.ClusterName }}"
spec:
  clusterName: "{{ .Cluster.ClusterName }}"
  clusterNamespace: "{{ .Cluster.ClusterName }}"
  clusterLabels:
    cloud: auto-detect
    vendor: auto-detect
  applicationManager:
    enabled: true        # enable the Application Manager add-on
  certPolicyController:
    enabled: false
  iamPolicyController:
    enabled: false
  policyController:
    enabled: true
  searchCollector:
    enabled: false

After updating the files and syncing the ArgoCD cluster application, the spoke cluster will be labeled and the agent-addon pod will be spawned, enabling centralized management and application deployment.

$ oc get pod -A | grep add
open-cluster-management-agent-addon  application-manager-584f555b87-ll2wq  1/1     Running     0  88s

ManagedClusterSet and binding rules

To create a ManagedClusterSet custom resource (CR), you can use the following example YAML definition:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSet
metadata:
  name: myworkload

Once you have created the ManagedClusterSet CR, you can associate it with relevant ManagedClusters by adding the cluster.open-cluster-management.io/clusterset=myworkload label to those clusters.

To create a ManagedClusterSetBinding that binds the ManagedClusterSet to the namespace where Red Hat OpenShift GitOps is deployed (typically the openshift-gitops namespace), you can use the following example YAML definition:

apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: myworkload
  namespace: openshift-gitops
spec:
  clusterSet: myworkload

This allows you to control which clusters are part of the ManagedClusterSet and can be used for workload placement and management within the GitOps namespace.
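A quick sanity check is that `oc get managedclustersetbinding -n openshift-gitops` lists the binding. The sketch below extracts the binding name from that output (stubbed here, since the real command needs a live hub cluster):

```shell
# Stubbed output of:
#   oc get managedclustersetbinding -n openshift-gitops
output='NAME         AGE
myworkload   12s'

# Print the binding name(s), skipping the header row.
echo "$output" | awk 'NR>1 {print $1}'
```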

To register the managed cluster(s) with the GitOps operator using a Placement custom resource, you can use the following example YAML definition:

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: all-openshift-clusters
  namespace: openshift-gitops
spec:
  tolerations:
  - key: cluster.open-cluster-management.io/unreachable
    operator: Exists
  - key: cluster.open-cluster-management.io/unavailable
    operator: Exists
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
        - key: sites
          operator: In
          values:
          - sno1
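Once the Placement is reconciled, ACM records the selected clusters in a PlacementDecision resource in the same namespace. The sketch below pulls the cluster names out of a stubbed decisions list, mirroring what `oc get placementdecision -n openshift-gitops -o jsonpath='{.items[0].status.decisions}'` would return:

```shell
# Stubbed decisions list from the PlacementDecision status.
decisions='[{"clusterName":"sno1","reason":""}]'

# Extract each clusterName value.
echo "$decisions" | grep -o '"clusterName":"[^"]*"' | cut -d'"' -f4
```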

Finally, create a GitOpsCluster custom resource to register the set of managed clusters from the placement decision to the specified instance of GitOps.

apiVersion: apps.open-cluster-management.io/v1beta1
kind: GitOpsCluster
metadata:
  name: gitops-cluster-sample
  namespace: openshift-gitops
spec:
  argoServer:
    argoNamespace: openshift-gitops
    cluster: local-cluster
  placementRef:
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    name: all-openshift-clusters
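Under the hood, the GitOpsCluster controller registers each selected cluster with ArgoCD by creating a cluster secret in the argoNamespace. The secret name pattern below (cluster name followed by "-cluster-secret") is what I observed in my environment; treat it as an assumption rather than a guarantee. A stubbed sketch:

```shell
# Stubbed output of:
#   oc get secrets -n openshift-gitops -o name
# (assumed secret name pattern: <cluster>-cluster-secret)
secrets='secret/sno1-cluster-secret
secret/openshift-gitops-ca'

# Keep only the ArgoCD cluster secrets and strip the resource prefix.
echo "$secrets" | grep 'cluster-secret' | sed 's|secret/||'
```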

To include the "sno1" cluster in the cluster set using the oc label command, execute the following command (the sites=sno1 label is added at the same time, since the Placement above matches on it):

$ oc label managedcluster sno1 cluster.open-cluster-management.io/clusterset=myworkload sites=sno1 --overwrite
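After labeling, the same jsonpath query used earlier can confirm that the cluster-set membership landed on the cluster. A stubbed sketch of that check:

```shell
# Stubbed output of:
#   oc get managedcluster sno1 -o jsonpath='{.metadata.labels}'
labels='{"cluster.open-cluster-management.io/clusterset":"myworkload","sites":"sno1"}'

# Extract the cluster-set membership value.
echo "$labels" | grep -o '"cluster.open-cluster-management.io/clusterset":"[^"]*"' | cut -d'"' -f4
```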

After executing this command, you should observe the following information in ArgoCD.

(Screenshot: sno1 is shown in the ArgoCD clusters list.)

The connection status for the “sno1” cluster will show as unknown initially since it doesn’t have any applications deployed or being monitored yet.


Great! By defining the placement rules and binding the managed clusters to GitOps, we have laid the foundation for managing and deploying applications in a controlled manner. This setup brings us closer to a streamlined GitOps workflow for our cluster management and application deployment needs.


For more detailed information and specific instructions, I recommend referring to the official product documentation.

Red Hat Advanced Cluster Management for Kubernetes (ACM): Configuring Managed Clusters for OpenShift GitOps operator

Files for this testing can be found here.