
Multicloud Object Gateway for Single-Node OpenShift (SNO)

Storage May 27, 2024

Author: Brandon B. Jozsa

"Remember, there's no such thing as a small act of kindness. Every act creates a ripple with no logical end."
- Scott Adams

Table of Contents

- Part I: Introduction
- Part II: Multicloud Object Storage
- Part III: Installation of MCG
- Part IV: Creating S3 Bucket Claims
- Part V: Using S3 Object Buckets
- Final Thoughts

Part I: Introduction

So you've deployed a fresh, new Single-Node OpenShift environment. Now what do you do? Well, you can only get so far without storage for your workloads. In the past, Red Hat has suggested using the Local Storage Operator (LSO) or Hostpath Provisioner (HPP), but there's one glaring storage option missing, and this is especially true in the age of AI: object storage.

The most common protocol for object storage is, of course, Amazon's S3 API, and what many people may not realize is that you can offer this same object storage option within your own SNO-based clusters, as well as in multi-node environments (compact or full deployments). This leads us to the Multicloud Object Gateway (MCG), provided by Red Hat's OpenShift Data Foundation operator (via the NooBaa project). Let's get started, because this is going to be a fun one today!

Part II: Multicloud Object Storage

There are quite a few benefits to leveraging the MCG solution in a SNO environment. For one, MCG is very lightweight, especially when compared against other solutions, such as Ceph's own S3 offering. Similar to Ceph's Object Gateway, the MCG can be used by applications that already rely on Amazon's AWS S3 SDKs. To use the MCG on premises, for example, you simply target the MCG endpoint and provide the appropriate access key and secret access key (more on this below). Even better, MCG presents the same S3 interface across cloud providers, which helps you avoid cloud lock-in.

This also means that you can run AI workloads on your SNO clusters, such as storing data for training AI models, without having to deploy heavier alternatives such as Ceph or community-based options like MinIO. You still get to use Red Hat's very own OpenShift Data Foundation, but in this case you're deploying only a very small subset of what the operator can manage.

Part III: Installation of MCG

A prerequisite for running the MCG is that you still need storage of some sort installed underneath it. In this guide, that storage will be provided by the LVM Operator from the Red Hat OpenShift team.

STOP: Be sure to review my other blog post for details about installing the LVM Operator on OpenShift.
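
If you want to double-check that prerequisite before moving on, the quick (optional) check below should confirm the StorageClass exists. This assumes the lvms-vg1-immediate name from my LVM Operator post; adjust it if yours is named differently.

    # confirm the LVM-backed StorageClass used throughout this guide exists
    oc get storageclass lvms-vg1-immediate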

  1. With the LVM Operator installed, I am going to assume that you have a StorageClass named lvms-vg1-immediate ready to be used by the MCG.
    STOP AND REVIEW the following manifest. OpenShift Data Foundation uses release versions as part of its Subscription channel (sub.spec.channel), so be absolutely sure that you're either running OpenShift 4.15 or that you change the channel to match your version of OpenShift (e.g. channel: "stable-4.15").

    cat <<EOF | oc apply -f -
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        openshift.io/cluster-monitoring: "true"
      name: openshift-storage
    
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-storage-operatorgroup
      namespace: openshift-storage
    spec:
      targetNamespaces:
      - openshift-storage
    
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: odf-operator
      namespace: openshift-storage
    spec:
      name: odf-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      channel: "stable-4.15"
      installPlanApproval: Automatic
    EOF
    
  2. Once the ODF operator has been installed, you can review the status of the operator with the following command.

    ❯ oc get csv -n openshift-storage | awk 'NR==1 || /odf/'
    NAME                                    DISPLAY                       VERSION        REPLACES                                PHASE
    odf-operator.v4.15.2-rhodf              OpenShift Data Foundation     4.15.2-rhodf   odf-operator.v4.15.1-rhodf              Succeeded
    
  3. Once the ODF operator has reached a PHASE of Succeeded, use the following manifest to install the MCG object storage solution on your SNO deployment. This will not install any other ODF components in your environment; it will only install the NooBaa-based MCG.

    cat <<EOF | oc apply -f -
    apiVersion: ocs.openshift.io/v1
    kind: StorageCluster
    metadata:
      annotations:
        uninstall.ocs.openshift.io/cleanup-policy: delete
        uninstall.ocs.openshift.io/mode: graceful
      name: ocs-storagecluster
      namespace: openshift-storage
    spec:
      arbiter: {}
      encryption:
        kms: {}
      externalStorage: {}
      resourceProfile: "lean"
      enableCephTools: false
      allowRemoteStorageConsumers: false
      managedResources:
        cephObjectStoreUsers: {}
        cephCluster: {}
        cephBlockPools: {}
        cephNonResilientPools: {}
        cephObjectStores: {}
        cephFilesystems: {}
        cephRBDMirror: {}
        cephToolbox: {}
        cephDashboard: {}
        cephConfig: {}
      mirroring: {}
      multiCloudGateway:
        dbStorageClassName: lvms-vg1-immediate
        reconcileStrategy: standalone
        disableLoadBalancerService: true
    EOF
    
  4. Once again, review the CSV installation phases and wait until everything reports Succeeded. After that, there's an optional sanity check of the NooBaa system itself just below.

    ❯ oc get csv -n openshift-storage | awk 'NR==1 || /odf/'
    NAME                                    DISPLAY                       VERSION        REPLACES                                PHASE
    mcg-operator.v4.15.2-rhodf              NooBaa Operator               4.15.2-rhodf   mcg-operator.v4.15.1-rhodf              Succeeded
    ocs-operator.v4.15.2-rhodf              OpenShift Container Storage   4.15.2-rhodf   ocs-operator.v4.15.1-rhodf              Succeeded
    odf-csi-addons-operator.v4.15.2-rhodf   CSI Addons                    4.15.2-rhodf   odf-csi-addons-operator.v4.15.1-rhodf   Succeeded
    odf-operator.v4.15.2-rhodf              OpenShift Data Foundation     4.15.2-rhodf   odf-operator.v4.15.1-rhodf              Succeeded
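
    As that optional sanity check, you can confirm that the NooBaa system created by the StorageCluster is up and running. This is just a quick sketch, and it assumes the default resource names created in the openshift-storage namespace.

    # the StorageCluster should create a NooBaa system named "noobaa"
    oc get noobaa -n openshift-storage

    # the NooBaa pods (operator, core, db, endpoint) should all reach Running
    oc get pods -n openshift-storage | grep noobaa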
    

Part IV: Creating S3 Bucket Claims

Now we come to the really fun part: how to use MCG to create object storage buckets for workloads!

  1. Let's create a project first, and then create a custom ObjectBucketClaim (OBC). We'll use this OBC later in the tutorial.

    ❯ oc create ns ai-testing
    namespace/ai-testing created
    
  2. Next, let's create the ObjectBucketClaim for that project.

    cat <<EOF | oc apply -f -
    ---
    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: test01-obc
      namespace: ai-testing
    spec:
      bucketName: test01-obc
      storageClassName: openshift-storage.noobaa.io 
    EOF
    
  3. Now you want to gather some information, as well as the credentials, for the OBC that you just created. Use the following commands to do this. Note the two variables at the top of the block; everything else is a simple copy and paste. (A sketch of wiring these values into a Pod follows at the end of this section.)

    OBC_NAME=test01-obc
    OBC_NS=ai-testing
    
    BUCKET_HOST=$(oc get -n $OBC_NS configmap $OBC_NAME -o jsonpath='{.data.BUCKET_HOST}')
    BUCKET_NAME=$(oc get -n $OBC_NS configmap $OBC_NAME -o jsonpath='{.data.BUCKET_NAME}')
    BUCKET_PORT=$(oc get -n $OBC_NS configmap $OBC_NAME -o jsonpath='{.data.BUCKET_PORT}')
    BUCKET_KEY=$(oc get secret -n $OBC_NS $OBC_NAME -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
    BUCKET_SECRET=$(oc get secret -n $OBC_NS $OBC_NAME -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
    
    printf "\n"
    printf "BUCKET_HOST: $BUCKET_HOST \n"
    printf "BUCKET_NAME: $BUCKET_NAME \n"
    printf "BUCKET_PORT: $BUCKET_PORT \n"
    printf "BUCKET_KEY: $BUCKET_KEY \n"
    printf "BUCKET_SECRET: $BUCKET_SECRET \n" 
    

    The commands above will give you some very useful details about your object bucket claim. The two most important are:

    • BUCKET_KEY - This is the S3 access key ID (AWS_ACCESS_KEY_ID)
    • BUCKET_SECRET - This is the S3 secret access key (AWS_SECRET_ACCESS_KEY)

    You can also view your object buckets and object bucket claims within the OpenShift UI under Storage > Object Storage.
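
    Beyond pulling the values out by hand, a common pattern is to wire the OBC's ConfigMap and Secret directly into a workload with envFrom. Here's a minimal sketch of a Pod that does exactly that; the image and command are just placeholders, but the ConfigMap and Secret names match the test01-obc claim created above.

    cat <<EOF | oc apply -f -
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: obc-env-demo
      namespace: ai-testing
    spec:
      restartPolicy: Never
      containers:
      - name: demo
        # placeholder image and command; any S3-aware workload would go here
        image: registry.access.redhat.com/ubi9/ubi-minimal
        command: ["/bin/sh", "-c", "env; sleep 3600"]
        envFrom:
        # injects BUCKET_HOST, BUCKET_NAME, and BUCKET_PORT
        - configMapRef:
            name: test01-obc
        # injects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
        - secretRef:
            name: test01-obc
    EOF

    Any workload that consumes these environment variables can talk to the bucket the same way the aws CLI does in the next section.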

Part V: Using S3 Object Buckets

Remember that part about using the AWS S3-based commands with the Multicloud Object Gateway? Well, let's put this to work, and see what we've got.

To use the S3 bucket, you need to export a couple of AWS CLI environment variables and pass the --endpoint-url for your NooBaa instance within OpenShift. The NooBaa/MCG endpoint is a TLS endpoint, and it is formatted like so:

https://s3-openshift-storage.apps.demo.ai.ocp.run

In the example above, demo is the cluster name, while ai.ocp.run is the domain name. Substitute the values from your own MCG deployment.
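
If you don't want to construct this URL by hand, you can also pull it from the route that NooBaa creates for the S3 endpoint. This is just a convenience sketch, and it assumes the default route name (s3) in the openshift-storage namespace:

    # look up the S3 route exposed by NooBaa and build the endpoint URL
    S3_ENDPOINT="https://$(oc get route s3 -n openshift-storage -o jsonpath='{.spec.host}')"
    echo $S3_ENDPOINT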

  1. You can simply use the aws CLI utility to work with the S3 buckets provided by the Multicloud Object Gateway (via NooBaa). Here's an example that lists the buckets deployed to your SNO environment (a quick upload example follows right after):

    export AWS_ACCESS_KEY_ID=$BUCKET_KEY
    export AWS_SECRET_ACCESS_KEY=$BUCKET_SECRET
    
    ❯ aws --endpoint-url https://s3-openshift-storage.apps.demo.ai.ocp.run --no-verify-ssl s3 ls
    2024-05-27 16:49:33 netobserver-loki-obc
    2024-05-27 16:49:33 test01-obc
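
    As a quick follow-on, here's a sketch of pushing an object into the claimed bucket and listing its contents. The file name is just an example, the endpoint matches the demo cluster above (substitute your own), and $BUCKET_NAME is the variable captured back in Part IV.

    # create a small test file and upload it to the claimed bucket
    echo "hello from SNO" > hello.txt
    aws --endpoint-url https://s3-openshift-storage.apps.demo.ai.ocp.run --no-verify-ssl \
      s3 cp hello.txt s3://$BUCKET_NAME/hello.txt

    # list the objects in the bucket to confirm the upload
    aws --endpoint-url https://s3-openshift-storage.apps.demo.ai.ocp.run --no-verify-ssl \
      s3 ls s3://$BUCKET_NAME/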
    

It's really that easy!

Final Thoughts

So, now that you've deployed the Multicloud Object Gateway, can you think of any projects you want to use it for? How about some AI projects? The MCG is a simple, yet great operator to leverage with your OpenShift deployment. And now that you've seen how to use it within your SNO environment, it should open up a whole list of new projects you can try out.
