
LVM Storage Operator for Single-Node OpenShift (SNO)

Storage May 27, 2024

Author: Brandon B. Jozsa

"It is not the beauty of a building you should look at; its the construction of the foundation that will stand the test of time."
- David Allen Coe

Table of Contents

- Part I: Introduction
- Part II: Preparation
- Part III: Operator Installation
- Part IV: LVM Deployment
- Part V: StorageClass Enhancements (Optional)
- Final Thoughts

Part I: Introduction

Installing and using OpenShift can be deceptively easy, right up until you need something more advanced that requires additional file, block, or object storage, like a full-featured Network Observability Operator deployment (INSERT REFERENCE HERE).

Today we're taking a look at running a more realistic storage solution as part of our SNO series, and the only requirement is a second disk, which is a pretty reasonable requirement.

Part II: Preparation

It's always wise to wipe a disk before reusing it, but what if you want to do this from within RHCOS? Let's explore that, and we'll do it in a way that preserves the appliance nature of RHCOS (and yes, we do treat these nodes as appliances). Verifying and wiping the drives is particularly important if a reused drive still has existing Linux filesystems or partitions on it.

  1. As always, verify that you're connected to the correct SNO environment and review the disk topology. Let's use oc debug commands to achieve this task, rather than SSHing directly to the node.

    ❯ oc get nodes
    NAME       STATUS   ROLES                                                        AGE   VERSION
    roderika   Ready    b200-m5-large-worker,control-plane,master,master-rt,worker   25d   v1.28.7+f1b5f6c
    
    cat <<EOF | oc debug node/roderika
    chroot /host
    lsblk -o NAME,ROTA,SIZE,TYPE
    EOF
    

    Running lsblk with an explicit output column list (-o NAME,ROTA,SIZE,TYPE) provides some useful information. For example, I have two disks in this system, and both have different OpenShift installations on them. My primary deployment is on /dev/sda, while the second disk, /dev/sdb, still has an old OpenShift installation on it. In this case, let's wipe /dev/sdb so we can use this device/disk for our LVM deployment.

    sda                                                   1   931G disk
    |-sda1                                                1     1M part
    |-sda2                                                1   127M part
    |-sda3                                                1   384M part
    `-sda4                                                1 930.5G part
    sdb                                                   1   931G disk
    |-sdb1                                                1     1M part
    |-sdb2                                                1   127M part
    |-sdb3                                                1   384M part
    `-sdb4                                                1 930.5G part
    sr0                                                   1  1024M rom
    
  2. You can wipe the disk in a similar way with another direct oc debug command. PLEASE TAKE CARE to change the target drive to match your environment, especially if you copy/paste these commands.

    cat <<EOF | oc debug node/roderika
    chroot /host
    # Remove filesystem signatures from the disk
    sudo wipefs -af /dev/sdb
    # Zap the GPT and MBR partition table structures
    sudo sgdisk --zap-all /dev/sdb
    # Zero out the first 100MiB of the disk for good measure
    sudo dd if=/dev/zero of=/dev/sdb bs=1M count=100 oflag=direct,dsync
    # Discard all blocks (only effective on devices that support discard/TRIM)
    sudo blkdiscard /dev/sdb
    EOF
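
    If you want to confirm the wipe took effect, you can re-run the earlier lsblk check against the target device; the old partitions under /dev/sdb should now be gone. This is just a sanity check, and it reuses the same node name (roderika) from the examples above, so adjust it for your environment.

    cat <<EOF | oc debug node/roderika
    chroot /host
    lsblk -o NAME,ROTA,SIZE,TYPE /dev/sdb
    EOF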
    

Part III: Operator Installation

Installing the LVM Storage Operator (LVMS) couldn't be more straightforward. Since we already wiped the disk in the previous section, let's get right to it.

  1. Please review the manifest below, and note the three primary parts it includes.

    • A Namespace object (or ns)
    • An OperatorGroup object (or og)
    • A Subscription object (or sub)

    You shouldn't need to change any of the options below, but there are two fields in the Subscription you may want to review.

    • spec.installPlanApproval
    • spec.channel

    These are sane defaults for this deployment, but if you want to target a different release channel or manually approve operator upgrades, you can review the CRD documentation for these fields by running a command like the following (provided simply as an example): oc explain sub.spec.installPlanApproval
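
    As a point of reference, if you do switch installPlanApproval to Manual, the operator won't install or upgrade until its generated InstallPlan is approved. A minimal sketch of that workflow, assuming the openshift-storage namespace used below (the InstallPlan name shown is hypothetical), looks like this:

    # List InstallPlans generated for the namespace
    oc get installplan -n openshift-storage

    # Approve a specific InstallPlan (replace install-abc12 with the real name)
    oc patch installplan install-abc12 -n openshift-storage \
      --type merge -p '{"spec":{"approved":true}}'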

    For the sake of this demonstration, please keep things simple and deploy the following manifest "as is".

    cat <<EOF | oc apply -f -
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-storage
      labels:
        openshift.io/cluster-monitoring: "true"

    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-storage-operatorgroup
      namespace: openshift-storage
    spec:
      targetNamespaces:
        - openshift-storage

    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: lvms-operator
      namespace: openshift-storage
    spec:
      channel: stable-4.15
      installPlanApproval: Automatic
      name: lvms-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    EOF
    

    Now you've deployed the LVM Storage Operator. It's really that simple, and it's how all Operator installations work within OpenShift. If you want to see which Operators are subscribed on your cluster, you can run the following command.

    ❯ oc get sub -A
    NAMESPACE                          NAME                                                                         PACKAGE                       SOURCE             CHANNEL
    openshift-gitops-operator          openshift-gitops-operator                                                    openshift-gitops-operator     redhat-operators   latest
    openshift-local-storage            local-storage-operator                                                       local-storage-operator        redhat-operators   stable
    openshift-netobserv-operator       netobserv-operator                                                           netobserv-operator            redhat-operators   stable
    openshift-nmstate                  kubernetes-nmstate-operator                                                  kubernetes-nmstate-operator   redhat-operators   stable
    openshift-operators-redhat         loki-operator                                                                loki-operator                 redhat-operators   stable-5.9
    openshift-sriov-network-operator   sriov-network-operator-subscription                                          sriov-network-operator        redhat-operators   stable
    openshift-storage                  lvms-operator                                                                lvms-operator                 redhat-operators   stable-4.15
    openshift-storage                  mcg-operator-stable-4.15-redhat-operators-openshift-marketplace              mcg-operator                  redhat-operators   stable-4.15
    openshift-storage                  ocs-operator-stable-4.15-redhat-operators-openshift-marketplace              ocs-operator                  redhat-operators   stable-4.15
    openshift-storage                  odf-csi-addons-operator-stable-4.15-redhat-operators-openshift-marketplace   odf-csi-addons-operator       redhat-operators   stable-4.15
    openshift-storage                  odf-operator                                                                 odf-operator                  redhat-operators   stable-4.15
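
    Subscriptions only tell you what was requested. To confirm the operator actually finished installing, you can also check the ClusterServiceVersion in the target namespace; once installation completes, the PHASE column for lvms-operator should read Succeeded.

    oc get csv -n openshift-storage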
    

Part IV: LVM Deployment

With the operator installed, you can customize and deploy an LVMCluster to your SNO cluster, targeting the specific disks you want to use.

  1. Carefully review the following example CR. I am going to show you how you can use multiple disks as part of an LVM deployment: in this case, /dev/sdb and /dev/sdc. You can target these disks by leveraging the spec.storage.deviceSelector options.

    cat <<EOF | oc apply -f -
    apiVersion: lvm.topolvm.io/v1alpha1
    kind: LVMCluster
    metadata:
      name: lvmcluster
      namespace: openshift-storage
    spec:
      storage:
        deviceClasses:
          - deviceSelector:
              paths:
                - /dev/sdb
                - /dev/sdc
            name: vg1
            thinPoolConfig:
              name: thin-pool-1
              overprovisionRatio: 10
              sizePercent: 90
    EOF
    

    With this manifest applied, your LVM volume group should be created and a new StorageClass named lvms-vg1 will be available.

    ❯ oc get sc
    NAME                           PROVISIONER                       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    lvms-vg1                       topolvm.io                        Delete          WaitForFirstConsumer   true                   5d4h
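
    If the StorageClass doesn't appear right away, you can also check on the LVMCluster resource and the LVMS pods directly. Once the volume group has been created on the node, the LVMCluster status should report Ready and the LVMS pods (such as vg-manager) should be Running; the commands below are just a quick sanity check against the openshift-storage namespace used earlier.

    oc get lvmcluster -n openshift-storage
    oc get pods -n openshift-storage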
    

Part V: StorageClass Enhancements (Optional)

One thing you may have noticed about your new StorageClass is that lvms-vg1 is configured with the WaitForFirstConsumer volume binding mode, which means a PersistentVolumeClaim won't be provisioned and bound until a workload (the consumer) actually schedules against it. There's a little trick around this, but it's somewhat situational.

Sometimes you want PersistentVolumeClaims to bind immediately, for example while testing, and some workloads can hang because they expect the claim to be bound up front. Either way, you can get around this by creating another StorageClass that uses the Immediate volume binding mode, with the following parameters (which also makes the new StorageClass the default).

cat <<EOF | oc apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: lvms-vg1-immediate
  annotations:
    description: Provides RWO and RWOP Filesystem & Block volumes
    storageclass.kubernetes.io/is-default-class: 'true'
    storageclass.kubevirt.io/is-default-virt-class: 'true'
provisioner: topolvm.io
parameters:
  csi.storage.k8s.io/fstype: xfs
  topolvm.io/device-class: vg1
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
EOF
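
To prove out the Immediate binding mode, you can create a small throwaway PVC against the new StorageClass; it should go straight to Bound without any pod consuming it. The claim name, namespace, and size below are arbitrary, so adjust them as needed and delete the claim once you've confirmed the behavior.

cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvms-binding-test
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: lvms-vg1-immediate
  resources:
    requests:
      storage: 1Gi
EOF

oc get pvc lvms-binding-test -n default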

Final Thoughts

This is a really simple blog post, perhaps simpler and more direct than some of my previous ones. However, I've been using the LVM Storage Operator in nearly all of my recent deployments, so I figured it would make a great base for my other posts; that way I don't have to repeat this process in every post going forward.

Thanks for reading! Hopefully this post can help you or someone you know when working with OpenShift.
