Installing a Secondary NVMe Drive
Author: Brandon B. Jozsa
I talk about my lab a lot throughout this blog. Recently, I've been wanting to recycle an older NVMe drive (a Samsung 970 Evo Plus) in order to create an LVM volume group, which can be resized later. I'm currently using Samsung 980 Pro 1TB drives for my OS, but this new volume will be used as an NFS share for Kubernetes storage.
After the drive has been installed, use the `lsblk` command with `-io` to output the following columns: `NAME,TYPE,SIZE,MOUNTPOINT,FSTYPE,MODEL`. By listing the `NAME` specifically, you should be able to correctly identify the drive you want to use.
```
[bjozsa@galvatron03 ~]$ sudo lsblk -io NAME,TYPE,SIZE,MOUNTPOINT,FSTYPE,MODEL
NAME              TYPE SIZE   MOUNTPOINT FSTYPE      MODEL
nvme1n1           disk 931.5G                        Samsung SSD 980 PRO 1TB
└─nvme1n1p1       part 931.5G            LVM2_member
  └─vgroot-lvroot lvm  931.5G /          ext4
nvme0n1           disk 931.5G                        Samsung SSD 970 EVO Plus 1TB
[bjozsa@galvatron03 ~]$
```
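Since this drive is being recycled, it may still carry old partition or filesystem signatures, and `pvcreate` will warn about (or refuse) a device that has them. If that happens, you can clear the signatures first. This is a destructive operation, so triple-check the device name matches what `lsblk` showed:

```shell
# DESTRUCTIVE: erases all partition/filesystem signatures on the device.
# Make sure /dev/nvme0n1 is really the drive you intend to recycle.
sudo wipefs -a /dev/nvme0n1
```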
Next, use the `pvcreate` command to define the Physical Volume.
```
[bjozsa@galvatron03 ~]$ sudo pvcreate /dev/nvme0n1
  Physical volume "/dev/nvme0n1" successfully created.
[bjozsa@galvatron03 ~]$
```
Now use the `vgcreate` command to define a Volume Group.
```
[bjozsa@galvatron03 ~]$ sudo vgcreate vgnfs /dev/nvme0n1
  Volume group "vgnfs" successfully created
[bjozsa@galvatron03 ~]$
```
To see the changes you made, run `vgs -v`:
```
[bjozsa@galvatron03 ~]$ sudo vgs -v
  VG     Attr   Ext   #PV #LV #SN VSize   VFree   VG UUID                                VProfile
  vgnfs  wz--n- 4.00m   1   0   0 931.51g 931.51g w6c0Fp-MViZ-Lzyd-N50O-ONlW-4EA7-DFYrae
  vgroot wz--n- 4.00m   1   1   0 931.50g      0  f9a8Lq-RzUe-531e-ApxQ-3cCK-mS85-xKuJhx
[bjozsa@galvatron03 ~]$
```
Next, you'll need to create the Logical Volume. In the command below, you can use `-l [percent]`, and as you can see I am choosing to use 100% of the drive. You can also use the `-L` option to declare a size (example: `-L250G`). It's always safer to use less and resize later, but I don't really care about the size right now. Just be aware of your options, and when in doubt be sure to use `--help` with each of these commands.
```
[bjozsa@galvatron03 ~]$ sudo lvcreate -l 100%VG -n lvnfs vgnfs
  Logical volume "lvnfs" created.
[bjozsa@galvatron03 ~]$
```
Great! Let's look at what you created with the `lvdisplay` and `lvs` commands.
```
[bjozsa@galvatron03 ~]$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/vgnfs/lvnfs
  LV Name                lvnfs
  VG Name                vgnfs
  LV UUID                w6c0Fp-MViZ-Lzyd-N50O-ONlW-4EA7-DFYrae
  LV Write Access        read/write
  LV Creation host, time galvatron03, 2021-01-05 17:35:25 +0000
  LV Status              available
  # open                 0
  LV Size                931.51 GiB
  Current LE             238467
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1

  WARNING: PV /dev/nvme1n1p1 in VG vgroot is using an old PV header, modify the VG to update.
  --- Logical volume ---
  LV Path                /dev/vgroot/lvroot
  LV Name                lvroot
  VG Name                vgroot
  LV UUID                6x6ROT-N3gR-vy2m-OZBM-paLY-CS8E-ui082d
  LV Write Access        read/write
  LV Creation host, time galvatron03, 2020-11-27 16:30:09 +0000
  LV Status              available
  # open                 1
  LV Size                931.50 GiB
  Current LE             238465
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

[bjozsa@galvatron03 ~]$ sudo lvs
  LV     VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvnfs  vgnfs  -wi-a----- 931.51g
  lvroot vgroot -wi-ao---- 931.50g
[bjozsa@galvatron03 ~]$
```
Now it's time to create a filesystem within the Logical Volume. Full disclosure: at the time of writing I'm using RHEL 8.3 in a fairly unconventional Ubuntu MaaS setup, which deploys LVM with an ext4 filesystem. Eventually I'll migrate away from MaaS, but since this current deployment is using `ext4` for the primary drive, I'll continue to use that format for the new drive. You can use whatever filesystem format fits your needs. Do this with the `mkfs.[type]` command (example: `mkfs.ext4`), run against the device at `/dev/vgnfs/lvnfs` if you've followed along exactly. This is why I used the `lv` nomenclature throughout this tutorial.
```
[bjozsa@galvatron03 ~]$ sudo mkfs.ext4 /dev/vgnfs/lvnfs
mke2fs 1.45.6 (20-Mar-2020)
Discarding device blocks: done
Creating filesystem with 244190208 4k blocks and 61054976 inodes
Filesystem UUID: df323a65-2f51-489e-8241-9b1b295bc8a8
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

[bjozsa@galvatron03 ~]$
```
Everyone's use case and reasoning is different. For this example, I am going to create a mount folder for this NFS server under the `/opt` directory.
```
[bjozsa@galvatron03 ~]$ sudo mkdir -p /opt/nfs/
[bjozsa@galvatron03 ~]$ sudo mount /dev/vgnfs/lvnfs /opt/nfs
[bjozsa@galvatron03 ~]$ sudo df -H
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                    34G     0   34G   0% /dev
tmpfs                       34G     0   34G   0% /dev/shm
tmpfs                       34G  9.7M   34G   1% /run
tmpfs                       34G     0   34G   0% /sys/fs/cgroup
/dev/mapper/vgroot-lvroot  985G   50G  886G   6% /
tmpfs                      6.7G     0  6.7G   0% /run/user/1001
/dev/mapper/vgnfs-lvnfs    984G   80M  934G   1% /opt/nfs
[bjozsa@galvatron03 ~]$
```
Use your favorite variant of `df` to verify that everything mounted correctly.
WARNING: Debates welcome. The last step is to make the mount point permanent by writing the change to `/etc/fstab`, so the mount persists across reboots.
Take a look at your current `/etc/fstab` file. Depending on the OS, you can mount either by UUID (which some recommend) or by the `/dev/mapper` path, which others recommend when using LVM volumes. For simplicity, I will use `/dev/mapper`, but I will write a future article (and link it here) when I have more time.
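If you do prefer the UUID route, `blkid` can print the filesystem UUID of the new Logical Volume. A quick sketch, assuming the `vgnfs`/`lvnfs` names used in this tutorial:

```shell
# Print only the filesystem UUID for the new Logical Volume.
sudo blkid -s UUID -o value /dev/vgnfs/lvnfs
# The printed value would then go into /etc/fstab as, for example:
#   UUID=<that-uuid>  /opt/nfs  ext4  defaults  0 0
```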
When looking at `/dev/mapper/`, you will notice some symbolic links. In my case, my new mount link is `/dev/mapper/vgnfs-lvnfs`. This is what I will use in my `/etc/fstab`, like below.
```
# /etc/fstab: static file system information.
/dev/disk/by-id/dm-uuid-LVM-f9a8LqRzUe531eApxQ3cCKmS85xKuJhx6x6ROTN3gRvy2mOZBMpaLYCS8Eui082d / ext4 defaults 0 0
/swap.img	none	swap	sw	0 0
#
# NFS Share
/dev/mapper/vgnfs-lvnfs	/opt/nfs	ext4	defaults	0 0
```
Add that last line, or similar depending on your own specific needs, and your mount should come up after reboots.
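Before rebooting, it's worth confirming that the new entry parses cleanly, since a typo in `/etc/fstab` can leave a system stuck at boot. A quick sanity check (assuming the `/opt/nfs` mount point from above):

```shell
# Re-read /etc/fstab and mount anything listed that isn't already mounted;
# any error printed here usually means a typo in the new entry.
sudo mount -a

# Confirm the mount is active at the expected path.
findmnt /opt/nfs
```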
I'll talk about NFS and my NFS use case in a later post.