Emulate SSD/NVMe for OpenShift 4.17+
Author: Brandon B. Jozsa
"The world hates change, yet it is the only thing that has brought progress."
- Charles Kettering
- Part I: Introduction
- Part II: Apply the MachineConfig
- Part III: How it Works
- Part IV: Conclusion
Updated: Nov 16, 2025 (originally published on Dec 21, 2024)
Part I: Introduction
The other day I was working on OpenShift 4.17, and attempting to install the OpenShift Data Foundation (ODF) operator. There was a subtle change made that wasn't called out in the release notes, so I wanted to discuss it here for a moment.
In OpenShift Data Foundation 4.17, for "new deployments" (a critical detail), you can no longer install the operator using rotational disks. ODF has always been clear that solid-state disks are required, but there was a provisional understanding that an administrator could accept the risks associated with rotational disks and continue with the installation, albeit with a stern warning. Red Hat never suggested it would support ODF deployed on rotational disks, but there was also no notice for folks running ODF on rotational disks for POCs.
This creates some issues for users who are genuinely using SSD/NVMe devices with OpenShift Virtualization, because the VM will always report rotational disks from within the guest, regardless of the solid-state devices actually backing it. As a result, I created a pull request on GitHub to address this problem (you can find the PR to the KubeVirt team HERE).
Let me describe my lab setup and why the workaround I am presenting today is acceptable, specifically for my given purposes. I am running a Single Node OpenShift (SNO) deployment on a Dell T550. This server is using Dell-branded Micron 9200 Pro U.2 drives for storage. These are very capable drives, and well within the specification for running ODF. I am running OpenShift Virtualization within this same SNO environment (single hypervisor), but because the VMs report rotational-based drives for their attached storage, I cannot install ODF for testing. This ends here.
This is an ODF-related, self-inflicted problem. Therefore, I created the following MachineConfig workaround which you may also want to use to run ODF in your POC. You no longer need to gather drive details or worry about differences between SATA vs. VirtIO reported devices (like I suggested in my original Dec 21, 2024 post). This new MC manifest covers all of these scenarios. You can even use this solution with real rust spinners if you really want.
So let's get into the new solution (which is much simpler).
Part II: Apply the MachineConfig
That's it. Literally just apply the following MC, and your spinning rust or virtual disks will report as non-rotational disks.
You can copy and paste this right into your terminal. Just be sure to adjust the role (i.e. `master`, `worker`, or `infra`).

```shell
cat <<EOF | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-fake-nonrotational-disks
  labels:
    machineconfiguration.openshift.io/role: master # <--- STOP!!! ADJUST FOR ROLE!! (i.e. master, worker, infra)
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/fake-nonrotational.sh
          mode: 0755
          contents:
            source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCnRhcmdldF9kaXNrcz0kKGxzIC9zeXMvYmxvY2svIHwgZ3JlcCAtRSAnXnNkfF52ZCcpCgpmb3IgZGlzayBpbiAkdGFyZ2V0X2Rpc2tzOyBkbwogICAgZWNobyAiQ2hhbmdpbmcgZGlzazogL2Rldi8kZGlzayB0byBub24tcm90YXRpb25hbC4iCiAgICBlY2hvIDAgPiAvc3lzL2Jsb2NrLyRkaXNrL3F1ZXVlL3JvdGF0aW9uYWwKZG9uZQo=
    systemd:
      units:
        - name: fake-nonrotational.service
          enabled: true
          contents: |
            [Unit]
            Description=Force attached disks to report as non-rotational
            After=local-fs.target
            Wants=local-fs.target

            [Service]
            Type=simple
            ExecStart=/etc/fake-nonrotational.sh
            Restart=always
            RestartSec=5s
            RemainAfterExit=true
            User=root

            [Install]
            WantedBy=multi-user.target
EOF
```
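If you'd rather not trust an opaque base64 blob, you can decode the embedded payload locally before applying the MC and confirm it matches the script discussed in Part III:

```shell
# Decode the Ignition file payload locally to inspect it before applying.
payload="IyEvYmluL2Jhc2gKCnRhcmdldF9kaXNrcz0kKGxzIC9zeXMvYmxvY2svIHwgZ3JlcCAtRSAnXnNkfF52ZCcpCgpmb3IgZGlzayBpbiAkdGFyZ2V0X2Rpc2tzOyBkbwogICAgZWNobyAiQ2hhbmdpbmcgZGlzazogL2Rldi8kZGlzayB0byBub24tcm90YXRpb25hbC4iCiAgICBlY2hvIDAgPiAvc3lzL2Jsb2NrLyRkaXNrL3F1ZXVlL3JvdGF0aW9uYWwKZG9uZQo="
echo "$payload" | base64 -d
```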
Part III: How It Works
This process is so much simpler. It basically looks for any attached sd* or vd*-based device and marks that corresponding device as non-rotational. So far, I haven't found any issues with doing this, but generally speaking none of this is supported anyway, so use at your own risk (as with anything you find on the internet). This whole workaround exists just to keep working, and to get around the soft blocker that prevents you from deploying ODF on rotational devices. Technically, as I've described with my own example, I'm not using spinning rust devices anyway.
But to provide more useful detail into how this works, there's an included script within the MC that runs as a systemd service unit on each of your labeled RHCOS systems. It looks like this:
```bash
#!/bin/bash

target_disks=$(ls /sys/block/ | grep -E '^sd|^vd')

for disk in $target_disks; do
    echo "Changing disk: /dev/$disk to non-rotational."
    echo 0 > /sys/block/$disk/queue/rotational
done
```
This little gem simply looks for any block device that's labeled as sd* or vd*, and then it writes a 0 into the associated $disk rotational status. It's small, and simple. It took more time to update this blog post than it did to write and apply the script to my environment.
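If you want to sanity-check the loop's matching logic without writing to a live node's /sys, here's a small sandbox simulation; the temp directory tree and device names are made up for illustration:

```shell
# Build a fake /sys/block tree in a temp dir (sandbox only; the real
# script writes to /sys/block/<disk>/queue/rotational on the node).
sysblock=$(mktemp -d)
for d in sda vda nvme0n1; do
    mkdir -p "$sysblock/$d/queue"
    echo 1 > "$sysblock/$d/queue/rotational"   # pretend everything starts rotational
done

# Same selection logic as the MC script: only sd*/vd* devices are touched.
target_disks=$(ls "$sysblock" | grep -E '^sd|^vd')
for disk in $target_disks; do
    echo "Changing disk: /dev/$disk to non-rotational."
    echo 0 > "$sysblock/$disk/queue/rotational"
done
```

Note that `nvme0n1` is skipped by the grep, which is fine in practice: NVMe devices already report as non-rotational.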
- Remember that your RHCOS host will reboot after the `MC` is applied.
- Always apply this prior to creating/running a `LocalVolumeDiscovery` object.
- Never use this solution in production (it is NOT supported).
Part IV: Conclusion
Do we really need to say any more? This is an updated solution for an updated time. One year later, and we're doing things better than we did in 2024. I'm happy. Have some wine for the holiday season, and always be good to one another!
- v1k0d3n (Brandon B. Jozsa)