Deploying a Private OpenShift Cluster on Azure Using an Existing VNet
Author: Motohiro Abe
Introduction
You can install a private OpenShift cluster into an existing Azure Virtual Network (VNet) using the installer-provisioned infrastructure (IPI) method.
I recently implemented this setup in a project and wanted to share some insights.
With this method, the cluster remains fully private and integrates seamlessly with your existing network architecture and security policies.
As always, I find that seeing a working example makes it much easier to navigate the official documentation, which can sometimes be confusing.
This post is meant to provide a quick overview that may help others get started, and of course serve as a personal reference for future projects.
Note: This is a personal blog based on my experience. Please consult the official OpenShift documentation for full guidance.
Prerequisites
Before we begin, make sure the following requirements are met:
- Azure CLI is installed and available on your system. (Reference: How to install the Azure CLI on RHEL)
- You have the following Azure credentials and configuration ready (see the service principal sketch after this list):
  - client-id
  - client-secret
  - tenant-id
  - subscription-id
  - Target resource group
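If you still need to create a service principal for the installer, the sketch below shows one way to do it with the Azure CLI. The name and scope are placeholders, and additional role assignments beyond Contributor (for example, User Access Administrator) may be required depending on your deployment, so check the official documentation for the exact permissions.
# Create a service principal scoped to the subscription (name and scope are placeholders).
# In the output, appId, password, and tenant correspond to client-id, client-secret, and tenant-id.
az ad sp create-for-rbac \
  --name my-ocp-installer-sp \
  --role Contributor \
  --scopes /subscriptions/<subscription-id>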
Steps
The following steps outline the high-level process for deploying a private OpenShift cluster using an existing Azure VNet:
- Create the control plane and worker subnets
  Ensure that the required subnets (e.g., for workers and control plane) are created within the existing VNet if they do not already exist.
- Set up a NAT gateway (optional)
  Configure a NAT gateway and associate it with the appropriate subnets to enable outbound internet access for the private cluster.
- Prepare the install-config.yaml file
  Customize the OpenShift installation configuration to reference the existing VNet, subnets, and service principal credentials.
- Deploy the cluster
  Run the OpenShift installer to provision the cluster infrastructure and complete the deployment.
Of course, always use the official Red Hat documentation as your primary guide:
Installing a cluster on Azure into an existing VNet
Let's get started.
Create the VNet and Subnets
First, create a Virtual Network (VNet). For the sake of this tutorial, we’ll define a few variables as we go.
You can create the VNet using the Azure portal (web console), or via the Azure CLI. At this stage, you can run these commands from anywhere that has access to the Azure API—for example, your local laptop.
Since the OpenShift cluster will be deployed as private, we’ll later use a VM as a bastion host to access resources inside the VNet.
The installer will also run from this bastion VM to directly interact with the private network.
The following commands define environment variables that will be used throughout the deployment process.
export NETWORK_RG=my-ocp-vnet-rg
export VNET=my-ocp-vnet
export LOCATION=eastus
NETWORK_RG: The name of the resource group that will contain your VNet
VNET: The name of the Virtual Network
LOCATION: The Azure region where the resources will be created
The following commands create a new Virtual Network and define two subnets:
one for the control plane (master-subnet) and one for the compute nodes (worker-subnet).
az group create --name $NETWORK_RG --location $LOCATION
az network vnet create \
--resource-group $NETWORK_RG \
--name $VNET \
--address-prefixes 10.0.0.0/16
az network vnet subnet create \
--resource-group $NETWORK_RG \
--vnet-name $VNET \
--name master-subnet \
--address-prefixes 10.0.0.0/24
az network vnet subnet create \
--resource-group $NETWORK_RG \
--vnet-name $VNET \
--name worker-subnet \
--address-prefixes 10.0.1.0/24
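To confirm the subnets look as expected before moving on, you can list them; the table output format is just a preference:
# List the subnets in the VNet to verify names and address prefixes.
az network vnet subnet list \
  --resource-group $NETWORK_RG \
  --vnet-name $VNET \
  --output table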
Set Up NAT Gateway
To allow the private OpenShift cluster to access the internet—for example, to pull container images during installation—we set up a NAT gateway and associate it with the necessary subnets.
However, this step may not be required in all environments.
If you're deploying in a fully disconnected setup with a private image registry and mirror configuration, you can skip the NAT gateway entirely.
az network public-ip create \
--resource-group $NETWORK_RG \
--name my-nat-public-ip \
--sku Standard \
--allocation-method Static
az network nat gateway create \
--resource-group $NETWORK_RG \
--name my-nat-gateway \
--public-ip-addresses my-nat-public-ip \
--idle-timeout 4
az network vnet subnet update \
--name master-subnet \
--vnet-name $VNET \
--resource-group $NETWORK_RG \
--nat-gateway my-nat-gateway
az network vnet subnet update \
--name worker-subnet \
--vnet-name $VNET \
--resource-group $NETWORK_RG \
--nat-gateway my-nat-gateway
If you created additional subnets in the VNet (for example, a bastion-subnet for a jump box or a proxy-subnet for an egress proxy), associate them with the NAT gateway as well:
az network vnet subnet update \
--name bastion-subnet \
--vnet-name $VNET \
--resource-group $NETWORK_RG \
--nat-gateway my-nat-gateway
az network vnet subnet update \
--name proxy-subnet \
--vnet-name $VNET \
--resource-group $NETWORK_RG \
--nat-gateway my-nat-gateway
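You can verify that a subnet is now associated with the NAT gateway by inspecting it; the --query expression below is one convenient way to narrow the output:
# Show the NAT gateway associated with the master subnet (empty output means no association).
az network vnet subnet show \
  --resource-group $NETWORK_RG \
  --vnet-name $VNET \
  --name master-subnet \
  --query natGateway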
Prepare the install-config.yaml File
To keep the installation repeatable and cleanly separated, create a dedicated resource group for the OpenShift cluster itself—separate from the one used for networking components.
az group create --name my-ocp-rg --location eastus
Below is an example install-config.yaml file configured for a private cluster using an existing VNet:
additionalTrustBundlePolicy: Proxyonly
apiVersion: v1
baseDomain: example.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: mycluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: openenv-dqrr5
    cloudName: AzurePublicCloud
    region: eastus
    networkResourceGroupName: my-ocp-vnet-rg
    resourceGroupName: my-ocp-rg
    virtualNetwork: my-ocp-vnet
    controlPlaneSubnet: master-subnet
    computeSubnet: worker-subnet
    outboundType: UserDefinedRouting
publish: Internal
pullSecret: '<PULL SECRET>'
sshKey: |
  <SSH PUBLIC KEY>
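If you do not already have an SSH key pair to paste into the sshKey field, you can generate one; the key type and file path below are just one reasonable choice:
# Generate a key pair without a passphrase and print the public key for install-config.yaml.
ssh-keygen -t ed25519 -N '' -f ~/.ssh/ocp_installer_key
cat ~/.ssh/ocp_installer_key.pub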
Key Settings for a Private Cluster
Here are some important fields that control the cluster’s privacy and networking:
networkResourceGroupName
This is the name of the resource group where your VNet and subnets reside.
resourceGroupName
This is the resource group that will contain OpenShift cluster resources (VMs, disks, etc.).
outboundType: UserDefinedRouting
This prevents Azure from assigning public IPs to cluster components, enabling true private networking. It assumes you've configured NAT or a private outbound path.
publish: Internal
This ensures that the cluster API and application routes are exposed only within the VNet, not to the public internet.
(Default is External, which exposes endpoints publicly.)
Optional: Set Up a Bastion Host
Since the cluster will be deployed into a private network, the OpenShift installer must have direct access to the VNet.
The simplest way to enable this is by provisioning a bastion host (a jump box) inside the VNet, which acts as the control point for installation.
Here’s a brief walkthrough to prepare the bastion VM for installation.
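If you do not already have a jump box in the VNet, one minimal way to create it is sketched below. The bastion-subnet range, RHEL 9 image URN, and VM size are illustrative assumptions, and you will still need to arrange SSH access to the VM (public IP, VPN, or Azure Bastion) according to your environment.
# Create a subnet for the bastion host (address range is an example).
az network vnet subnet create \
  --resource-group $NETWORK_RG \
  --vnet-name $VNET \
  --name bastion-subnet \
  --address-prefixes 10.0.2.0/24
# Create a RHEL 9 jump box in that subnet. The image URN and size are illustrative;
# check `az vm image list --publisher RedHat --output table` for current offers.
az vm create \
  --resource-group $NETWORK_RG \
  --name ocp-bastion \
  --image RedHat:RHEL:9-lvm-gen2:latest \
  --size Standard_D4s_v5 \
  --vnet-name $VNET \
  --subnet bastion-subnet \
  --admin-username azureuser \
  --generate-ssh-keys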
Install Azure CLI on RHEL
On the bastion VM (RHEL 9), install the Azure CLI with the following commands:
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo dnf install -y https://packages.microsoft.com/config/rhel/9.0/packages-microsoft-prod.rpm
sudo dnf install azure-cli
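Once the CLI is installed, a quick way to confirm it works on the bastion is to log in with the service principal credentials from the prerequisites (values shown as placeholders):
# Confirm the CLI is installed, then authenticate with the service principal.
az --version
az login --service-principal \
  --username <client-id> \
  --password <client-secret> \
  --tenant <tenant-id>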
Download OpenShift Installer and CLI Tools
You can find the OpenShift installer and CLI tools for your specific version at:
https://mirror.openshift.com/pub/openshift-v4/clients/ocp/
In this example, we’ll use version 4.18.20.
Download the CLI and installer on the bastion host:
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.18.20/openshift-client-linux-4.18.20.tar.gz
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.18.20/openshift-install-linux-4.18.20.tar.gz
Extract the downloaded files in your working directory:
tar -xvzf openshift-client-linux-4.18.20.tar.gz
tar -xvzf openshift-install-linux-4.18.20.tar.gz
This will give you access to the oc CLI and openshift-install commands. Make sure they are in your PATH:
sudo mv oc kubectl openshift-install /usr/local/bin/
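A quick sanity check that both binaries are on the PATH and match the expected release:
oc version --client
openshift-install version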
Deploy the Cluster
First, create a working directory to hold the install-config.yaml file, then place the file there:
mkdir azure
cp install-config.yaml ./azure/
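Note that the installer consumes install-config.yaml from the working directory during the run, so it is worth keeping a copy elsewhere if you expect to reinstall:
cp install-config.yaml install-config.yaml.bak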
To start the installation, run:
./openshift-install create cluster --dir azure --log-level=debug
The installer will prompt you for Azure credentials: the subscription ID, tenant ID, client ID, and client secret of the service principal.
These values are saved as osServicePrincipal.json under the .azure directory in your home directory. On future runs, the installer reuses this file, so you won't need to re-enter the credentials.
Ensure the target resource group (defined in install-config.yaml) is already created before running the installer.
If everything is set up correctly, the cluster will be ready in about 50 minutes.
Because the installer takes time and must run uninterrupted, it's recommended to use a terminal multiplexer like tmux or screen to keep the process alive even if your SSH session disconnects.
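For example, with tmux:
tmux new -s ocp-install        # start a named session and run the installer inside it
# ...detach with Ctrl-b d; later, reattach with:
tmux attach -t ocp-install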
Access the Cluster
After the installation completes, the installer generates authentication files under the auth/ directory inside your working folder.
You can use the kubeconfig file like this:
export KUBECONFIG=./azure/auth/kubeconfig
oc whoami
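From there, a couple of quick checks confirm the cluster is healthy and reachable from inside the VNet:
oc get nodes
oc get clusteroperators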
Conclusion
The OpenShift installer provides a flexible and powerful way to deploy clusters, including into existing Azure infrastructure.
In this tutorial, we demonstrated how to deploy a private OpenShift cluster into an existing Azure Virtual Network (VNet) using the IPI method.
The installer automatically handles tasks such as provisioning compute resources, configuring load balancers, and creating required DNS records.
This approach makes it significantly easier to bring OpenShift into enterprise environments with strict networking and security requirements.
Thanks for reading!
Note:
Edited with the help of AI tools for clarity.