Static Networking with Assisted-Installer

assisted-installer Sep 1, 2021

Author: Brandon B. Jozsa

So you've done it. You've finally installed a custom Assisted-Installer deployment using Red Hat's Assisted-Service API. I just finished writing a couple of articles about creating custom Red Hat OpenShift Assisted-Installer clusters with Calico or Cilium, and then the question comes up: "but how can I do this with custom IP addresses?" Yes, I realize that most of us live in the real world, where DHCP is a luxury. But you really should consider using it if you plan on autoscaling your bare metal infrastructure - but I digress. Let's get straight into our topic for today.

Table of Contents

- Part I: Guest Deployments (Optional)
- Part II: Defining the NMState
- Part III: JSON the YAML
- Part IV: Apply the Static Networking

Part I: Guest Deployments (Optional)

As with any of my other articles, I want to start off with something tangible, something that you can see from beginning to end. If you already have bare metal hosts or virtual machines that you wish to use (that is, you don't need to create virtual machines just for demonstration purposes), then you can skip this section entirely. If you want to see the process from start to finish, or you want to use a hypervisor like the one I described in my previous article, "Virtualization Management on RHEL, CentOS, and Fedora", then you can start at "Part IV: Create Virtual Machines" in that article.

So we're going to create three virtual machines using virt-install, just like we did in the previous article. Let's start with some variables, which you can edit to match your environment. But first, let me call out a few very important things.

  • I am not going to create a CD device. More on this later.
  • Be sure to create a predictable MAC address. Again, more on this later.
  • It's important to match or exceed Assisted-Installer's hardware requirements.

With this in mind, we can get started. Here are the variables we'll need.

OCP_NODE="01"
VM_NAME="ocp$OCP_NODE"
VM_UUID="00000000-0000-0000-0000-0000000000$OCP_NODE"
VM_POOL="itamae"
VM_BOOT_PATH="/var/lib/libvirt/boot"
VM_BOOT_NAME="rhcos-discovery.iso"
VM_DISK_PATH="/home/libvirt/pool/itamae"
VM_DISK_SIZE="200"
VM_RAM="32768"
VM_CPU=8
VM_MAC="52:54:00:00:00:$OCP_NODE"
VM_BRIDGE="br3"

To make this easier for demonstration purposes, the only thing you need to change between deployments is the first variable, OCP_NODE. So you can create 3 virtual machines like below.

virt-install \
  --import \
  --uuid=$VM_UUID \
  --name=$VM_NAME \
  --ram=$VM_RAM \
  --vcpus=$VM_CPU \
  --cpu host-passthrough \
  --os-type linux \
  --os-variant rhel8.4 \
  --noreboot \
  --events on_reboot=restart \
  --noautoconsole \
  --boot hd,cdrom \
  --disk path=$VM_DISK_PATH/$VM_NAME.qcow2,size=$VM_DISK_SIZE,pool=$VM_POOL \
  --disk $VM_BOOT_PATH/$VM_BOOT_NAME,device=cdrom \
  --network type=direct,source=$VM_BRIDGE,mac=$VM_MAC,source_mode=bridge,model=virtio

Next, deploy the same thing, reusing the variables from above, but changing OCP_NODE="01" to OCP_NODE="02", and then OCP_NODE="03". In the end you should have 3 virtual machines, like below.
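If you'd rather not re-run the variables by hand three times, you can script the repetition. This sketch redefines the shared values from above and writes all three virt-install commands to a helper script (create-vms.sh is just an illustrative file name) so you can review them before running `sh create-vms.sh`:

```shell
# Shared values from the variables above; edit to match your environment.
VM_POOL="itamae"
VM_BOOT_PATH="/var/lib/libvirt/boot"
VM_BOOT_NAME="rhcos-discovery.iso"
VM_DISK_PATH="/home/libvirt/pool/itamae"
VM_DISK_SIZE="200"
VM_RAM="32768"
VM_CPU=8
VM_BRIDGE="br3"

# Write one virt-install invocation per node; only the zero-padded
# node number (UUID, name, disk, MAC) changes between them.
: > create-vms.sh
for i in 1 2 3; do
  OCP_NODE=$(printf "%02d" "$i")
  cat >> create-vms.sh << EOF
virt-install \\
  --import \\
  --uuid=00000000-0000-0000-0000-0000000000$OCP_NODE \\
  --name=ocp$OCP_NODE \\
  --ram=$VM_RAM \\
  --vcpus=$VM_CPU \\
  --cpu host-passthrough \\
  --os-type linux \\
  --os-variant rhel8.4 \\
  --noreboot \\
  --events on_reboot=restart \\
  --noautoconsole \\
  --boot hd,cdrom \\
  --disk path=$VM_DISK_PATH/ocp$OCP_NODE.qcow2,size=$VM_DISK_SIZE,pool=$VM_POOL \\
  --disk $VM_BOOT_PATH/$VM_BOOT_NAME,device=cdrom \\
  --network type=direct,source=$VM_BRIDGE,mac=52:54:00:00:00:$OCP_NODE,source_mode=bridge,model=virtio
EOF
done
```

Writing the commands out first is a cheap dry run: you can eyeball the generated UUIDs and MAC addresses before libvirt ever sees them.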

Example output:

[root@itamae ~]# virsh list --all
 Id   Name         State
-----------------------------
 12   cuttlefish   running
 43   tuna         running
 -    ocp01        shut off
 -    ocp02        shut off
 -    ocp03        shut off
 -    sno004       shut off

[root@itamae ~]#

An important thing to note here is that --import is used to create the VM while still allowing the device=cdrom disk to act as installation media, without actually starting the virtual machine (which is what virt-install would do by default). I learned about this recently while researching this article.

Part II: Defining the NMState

Since we know the MAC addresses for each of the hosts, this part will be fairly easy. Since OpenShift 4.8.x, NMState can be defined as part of the Assisted-Service deployment. If you've never done this before, the details can seem a little unclear, but the process is actually straightforward; that is the whole purpose of writing this article. Let's begin by mapping out the NMState files for each of the servers we just deployed. If you're using real server hardware (bare metal), there are really no differences other than the fact that you will have more interfaces to define. You may or may not want them all to be enabled, but feel free to ask me questions on Twitter @v1k0d3n.

As before, let's first define some variables that will be used to generate each of the NMState YAML docs. You will generate one for each host (3 total). Here's an example of the first one:

OCP_NODE="01"
VM_NAME="ocp$OCP_NODE"
NET_GW="192.168.3.1"
NET_DNS="192.168.1.70"
NET_IFACE="enp1s0"
NET_IPADDR="192.168.3.231"
NET_MASK="24"
NET_TID="254"

Using the variables from above, let's write out a YAML doc for each of our hosts. I only needed to edit OCP_NODE and NET_IPADDR between each of my files.

cat << EOF > ./$VM_NAME.yaml
dns-resolver:
  config:
    server:
    - $NET_DNS
interfaces:
- name: $NET_IFACE
  ipv4:
    address:
    - ip: $NET_IPADDR
      prefix-length: $NET_MASK
    dhcp: false
    enabled: true
  state: up
  type: ethernet
routes:
  config:
  - destination: 0.0.0.0/0
    next-hop-address: $NET_GW
    next-hop-interface: $NET_IFACE
    table-id: $NET_TID
EOF
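If you'd rather not edit the variables by hand for each host, the same heredoc can be wrapped in a loop. This is a sketch that assumes the node IPs are sequential (.231 through .233), as in this example:

```shell
# Shared network values from above.
NET_GW="192.168.3.1"
NET_DNS="192.168.1.70"
NET_IFACE="enp1s0"
NET_MASK="24"
NET_TID="254"

# Generate one NMState YAML per node; only OCP_NODE and NET_IPADDR vary.
for i in 1 2 3; do
  OCP_NODE=$(printf "%02d" "$i")
  NET_IPADDR="192.168.3.23$i"
  cat << EOF > ./ocp$OCP_NODE.yaml
dns-resolver:
  config:
    server:
    - $NET_DNS
interfaces:
- name: $NET_IFACE
  ipv4:
    address:
    - ip: $NET_IPADDR
      prefix-length: $NET_MASK
    dhcp: false
    enabled: true
  state: up
  type: ethernet
routes:
  config:
  - destination: 0.0.0.0/0
    next-hop-address: $NET_GW
    next-hop-interface: $NET_IFACE
    table-id: $NET_TID
EOF
done
```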

You should have 3 YAML files now.

Example output:

❯ ls -asl
total 12
0 drwxr-xr-x  5 bjozsa staff 160 Sep  1 14:23 .
0 drwxr-xr-x 15 bjozsa staff 480 Sep  1 14:23 ..
4 -rw-r--r--  1 bjozsa staff 342 Sep  1 14:23 ocp01.yaml
4 -rw-r--r--  1 bjozsa staff 342 Sep  1 14:23 ocp02.yaml
4 -rw-r--r--  1 bjozsa staff 342 Sep  1 14:23 ocp03.yaml
❯ 

That wasn't very hard. Let's move on to the next section.

Part III: JSON the YAML

Now this part gets a little bit tricky, so I'll try to explain it the best way that I can. Generally, you are going to use the tool jq to convert the 3 YAML documents created previously into a JSON file, which will be sent to the Assisted-Service API as an HTTP request body. Simply put, we will convert the YAML documents into machine-readable data that the API can associate with our OpenShift deployment.

It's going to be hard to use variables for this part, but I want you to have a solid grasp of the general flow so you can write more complex deployments in the future. So let's break this down. There is one variable I want you to use for your SSH key (read the notes).

CLUSTER_SSHKEY='YOUR_SSH_PUB_KEY_IN_SINGLE_QUOTES'

jq -n --arg SSH_KEY "$CLUSTER_SSHKEY" --arg NMSTATE_YAML1 "$(cat ocp01.yaml)" --arg NMSTATE_YAML2 "$(cat ocp02.yaml)" --arg NMSTATE_YAML3 "$(cat ocp03.yaml)" \
'{
  "ssh_public_key": $SSH_KEY,
  "image_type": "full-iso",
  "static_network_config": [
    {
      "network_yaml": $NMSTATE_YAML1,
      "mac_interface_map": [{"mac_address": "52:54:00:00:00:01", "logical_nic_name": "enp1s0"}]
    },
    {
      "network_yaml": $NMSTATE_YAML2,
      "mac_interface_map": [{"mac_address": "52:54:00:00:00:02", "logical_nic_name": "enp1s0"}]
    },
    {
      "network_yaml": $NMSTATE_YAML3,
      "mac_interface_map": [{"mac_address": "52:54:00:00:00:03", "logical_nic_name": "enp1s0"}]
    }
  ]
}' > msg_body.json

So let's break this down:

jq will take arguments from your SSH key (typically ${HOME}/.ssh/id_rsa.pub or similar) and from each of the YAML documents created above. Next, it will insert these arguments into a JSON body, which produces the following.
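If you want to convince yourself that the YAML survives the trip through jq intact, here is a small self-contained round-trip check (sample.yaml and roundtrip.yaml are throwaway file names for illustration; jq is assumed to be installed):

```shell
# Create a small YAML fragment, embed it as a JSON string with --arg
# (which handles all escaping), then extract it back out with -r.
printf 'dns-resolver:\n  config:\n    server:\n    - 192.168.1.70\n' > sample.yaml
jq -n --arg y "$(cat sample.yaml)" '{network_yaml: $y}' \
  | jq -r '.network_yaml' > roundtrip.yaml

# The extracted document should be identical to the original.
diff sample.yaml roundtrip.yaml && echo "round trip OK"
```

This is exactly what happens to each ocp0X.yaml file above: --arg turns the multi-line YAML into a single escaped JSON string, and the Assisted-Service API unpacks it on the other side.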

Example output:

{
  "ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDE1F7Fz3MGgOzst9h/2+5/pbeqCfFFhLfaS0Iu4Bhsr7RenaTdzVpbT+9WpSrrjdxDK9P3KProPwY2njgItOEgfJO6MnRLE9dQDzOUIQ8caIH7olzxy60dblonP5A82EuVUnZ0IGmAWSzUWsKef793tWjlRxl27eS1Bn8zbiI+m91Q8ypkLYSB9MMxQehupfzNzJpjVfA5dncZ2S7C8TFIPFtwBe9ITEb+w2phWvAE0SRjU3rLXwCOWHT+7NRwkFfhK/moalPGDIyMjATPOJrtKKQtzSdyHeh9WyKOjJu8tXiM/4jFpOYmg/aMJeGrO/9fdxPe+zPismC/FaLuv0OACgJ5b13tIfwD02OfB2J4+qXtTz2geJVirxzkoo/6cKtblcN/JjrYjwhfXR/dTehY59srgmQ5V1hzbUx1e4lMs+yZ78Xrf2QO+7BikKJsy4CDHqvRdcLlpRq1pe3R9oODRdoFZhkKWywFCpi52ioR4CVbc/tCewzMzNSKZ/3P0OItBi5IA5ex23dEVO/Mz1uyPrjgVx/U2N8J6yo9OOzX/Gftv/e3RKwGIUPpqZpzIUH/NOdeTtpoSIaL5t8Ki8d3eZuiLZJY5gan7tKUWDAL0JvJK+EEzs1YziBh91Dx1Yit0YeD+ztq/jOl0S8d0G3Q9BhwklILT6PuBI2nAEOS0Q==",
  "image_type": "full-iso",
  "static_network_config": [
    {
      "network_yaml": "dns-resolver:\n  config:\n    server:\n    - 192.168.1.70\ninterfaces:\n- name: enp1s0\n  ipv4:\n    address:\n    - ip: 192.168.3.231\n      prefix-length: 24\n    dhcp: false\n    enabled: true\n  state: up\n  type: ethernet\nroutes:\n  config:\n  - destination: 0.0.0.0/0\n    next-hop-address: 192.168.3.1\n    next-hop-interface: enp1s0\n    table-id: 254",
      "mac_interface_map": [
        {
          "mac_address": "52:54:00:00:00:01",
          "logical_nic_name": "enp1s0"
        }
      ]
    },
    {
      "network_yaml": "dns-resolver:\n  config:\n    server:\n    - 192.168.1.70\ninterfaces:\n- name: enp1s0\n  ipv4:\n    address:\n    - ip: 192.168.3.232\n      prefix-length: 24\n    dhcp: false\n    enabled: true\n  state: up\n  type: ethernet\nroutes:\n  config:\n  - destination: 0.0.0.0/0\n    next-hop-address: 192.168.3.1\n    next-hop-interface: enp1s0\n    table-id: 254",
      "mac_interface_map": [
        {
          "mac_address": "52:54:00:00:00:02",
          "logical_nic_name": "enp1s0"
        }
      ]
    },
    {
      "network_yaml": "dns-resolver:\n  config:\n    server:\n    - 192.168.1.70\ninterfaces:\n- name: enp1s0\n  ipv4:\n    address:\n    - ip: 192.168.3.233\n      prefix-length: 24\n    dhcp: false\n    enabled: true\n  state: up\n  type: ethernet\nroutes:\n  config:\n  - destination: 0.0.0.0/0\n    next-hop-address: 192.168.3.1\n    next-hop-interface: enp1s0\n    table-id: 254",
      "mac_interface_map": [
        {
          "mac_address": "52:54:00:00:00:03",
          "logical_nic_name": "enp1s0"
        }
      ]
    }
  ]
}

Now, if you have multiple interfaces on your host, no problem! Simply add additional entries to the YAML file above.

Example:

- name: enp1s0
  ipv4:
    address:
    - ip: 192.168.3.231
      prefix-length: 24
    dhcp: false
    enabled: true
  state: up
  type: ethernet
- name: enp2s0
  ipv4:
    address:
    - ip: 192.168.60.31
      prefix-length: 24
    dhcp: false
    enabled: true
  state: up
  type: ethernet

Then define the corresponding MAC address in the JSON body.

Example:

    {
      "network_yaml": $NMSTATE_YAML1,
      "mac_interface_map": [{"mac_address": "52:54:00:00:00:01", "logical_nic_name": "enp1s0"},{"mac_address": "52:54:00:00:01:01", "logical_nic_name": "enp2s0"}]
    },

That's it! You've got the hard part out of the way, and now it's time to work with the assisted-installer API.

Part IV: Apply the Static Networking

With all of this setup out of the way, it's finally time to apply the custom NMState configuration to the cluster. This can be done with the following command:

curl -s -X POST "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/downloads/image" \
  -d @./msg_body.json \
  --header "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  | jq '.'

The API should return a JSON response, letting you know that the cluster has been updated with your custom network layout for each of the applied servers.

The last thing to do is to download the ISO installation media and load it into each of the servers (this can be done with Redfish, as an example).

curl \
  -H "Authorization: Bearer $TOKEN" \
  -L "http://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/downloads/image" \
  -o ai-liveiso-$CLUSTER_ID.iso

References

  • Assisted-Service Restful API Guide (GitHub)
