Installation#

Prerequisites#

Important

Ensure that the hardware, BIOS, and jump node requirements have been met.

Before proceeding with the automated cluster installation using the downloaded Ansible playbooks, prepare the following items.

OpenShift Access Token#

Generate an offline token for authentication with the Red Hat Assisted Installer API.

  1. Log in to the OpenShift Cluster Manager as a user with cluster creation privileges.

  2. Click Downloads in the left-hand menu bar.

  3. Scroll to the bottom of the web page and click View API Token in the Tokens section.

  4. Click use API tokens to authenticate.

    Red Hat Console - Offline tokens to authenticate
  5. Click Load Token. Copy the token and save it to use during cluster installation.
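Optionally, you can verify the offline token before starting the installation by exchanging it for a short-lived access token, following the token flow documented for the Red Hat Assisted Installer API. This sketch assumes curl and jq are installed on the jump node:

    curl -s https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
      --data-urlencode "grant_type=refresh_token" \
      --data-urlencode "client_id=cloud-services" \
      --data-urlencode "refresh_token=<token>" | jq -r .access_token

If the command prints a long JWT string, the token is valid.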

IP Addresses and DNS Records#

The IP address of each node’s IPMI/BMC interface is required at installation time. Each node also requires a DHCP-assigned IP address for its node management interface. It is recommended to create DHCP reservations for these Node IP addresses so that they remain stable.

All OpenShift clusters have two important external endpoints:

  • Kubernetes API: This provides a single, stable entry point for clients to interact with the OpenShift Container Platform API. This includes operations like managing cluster resources, deploying applications, and retrieving cluster status.

  • Ingress: This serves as the entry point for external traffic destined for applications running within the OpenShift cluster.

For single node clusters, these endpoints are exposed on the Node IP address. For compact (3-node) and standard (5+ nodes) clusters, a static Virtual IP (VIP) address is required for each of these endpoints.

The DNS names for these endpoints have the following format:

  • Kubernetes API: api.<name>.<base_dns_domain>

  • Ingress: *.apps.<name>.<base_dns_domain> (wildcard)

The Cluster Details section below describes how the name and base_dns_domain are configured at installation time.

Add DNS A records to your DNS server to map these DNS names to the VIP addresses or, in the case of a single-node cluster, the Node IP address. Alternatively, you can add entries to your local hosts file; however, the hosts file does not support DNS wildcards, so an entry must be created for every application ingress endpoint and kept up to date as new applications are deployed.
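For example, with the cluster name h4m-cluster and base DNS domain example.com used later in this guide, the corresponding BIND-style zone entries would look like the following sketch (the 192.0.2.x addresses are placeholders; substitute your actual VIPs):

    ; Kubernetes API and wildcard Ingress records (placeholder addresses)
    api.h4m-cluster.example.com.      IN A 192.0.2.10
    *.apps.h4m-cluster.example.com.   IN A 192.0.2.11

The hosts-file equivalent for a single-node cluster, where both endpoints resolve to the Node IP address, must list each application hostname explicitly; console-openshift-console and oauth-openshift are typical OpenShift routes:

    192.0.2.5 api.h4m-cluster.example.com
    192.0.2.5 console-openshift-console.apps.h4m-cluster.example.com
    192.0.2.5 oauth-openshift.apps.h4m-cluster.example.com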

Configuring Installation Playbooks#

Update the properties in the following asset files used by the installation playbook. These files are located in the installation/assets directory:

Asset Files#

  • cluster.json: Defines cluster-level properties including name, base domain, OpenShift version, CPU architecture, high availability mode, and API/Ingress VIP addresses.

  • cluster_details.json: Contains server-specific information including IPMI/BMC credentials, hostnames, and node roles for each physical server in the cluster.

Cluster Details#

Open the cluster.json file and add the following details:

cluster.json properties#

  • name: Name of the cluster. For example, h4m-cluster.

  • base_dns_domain: Base DNS domain of the cluster. For example, example.com.

  • api_vips: API virtual IP address. For a single-node cluster, this field is ignored.

  • ingress_vips: Ingress virtual IP address. For a single-node cluster, this field is ignored.

With the example name and base_dns_domain values:

  • Kubernetes API endpoint: https://api.h4m-cluster.example.com:6443

  • Ingress endpoints: *.apps.h4m-cluster.example.com, for example, https://console-openshift-console.apps.h4m-cluster.example.com
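A minimal cluster.json sketch using these example values is shown below. The property names come from the list above; everything else (value types and any additional fields such as OpenShift version or CPU architecture) is an assumption, so verify against the template shipped in installation/assets before editing. The VIPs are written as single-element lists to match the plural property names, with placeholder addresses:

    {
      "name": "h4m-cluster",
      "base_dns_domain": "example.com",
      "api_vips": ["192.0.2.10"],
      "ingress_vips": ["192.0.2.11"]
    }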

Server Details#

Open the cluster_details.json file and provide the Intelligent Platform Management Interface (IPMI) login credentials for each server.
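Before filling in the credentials, it can be useful to confirm that each BMC is reachable and the login works. A minimal check against the standard Redfish service endpoint, assuming curl is available (the -k flag skips TLS verification, which is common for BMCs with self-signed certificates):

    curl -k -u <username>:<password> https://<ipmi-ip>/redfish/v1/Systems

A successful response returns a JSON collection of systems; an authentication error indicates incorrect credentials.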

Cluster Size Requirements:

  • Single Node Cluster: Add details of one host

  • Compact Cluster (3-node): Add details of three hosts (all with master role)

  • Standard Cluster (5+ nodes): Add details of five or more hosts (three with master role, the rest with worker role)

cluster_details.json properties#

  • ipaddress: IPMI/BMC IP address of the server (used for Redfish API access to control server power and virtual media).

  • username: IPMI/BMC username for authentication.

  • password: IPMI/BMC password for authentication.

  • hostname: Name assigned to the server during OpenShift cluster installation, for example, myhost-a.example.com. This hostname applies to the node management interface, not the IPMI interface.

  • role: Node role in the cluster: master for control plane nodes or worker for compute nodes. For single-node and compact clusters, all nodes must be master.
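As an illustration, a cluster_details.json entry for one host of a compact cluster might look like the sketch below. The property names come from the list above; the top-level structure (here a plain list of hosts) and all values are assumptions, so check the template in installation/assets:

    [
      {
        "ipaddress": "192.0.2.21",
        "username": "admin",
        "password": "changeme",
        "hostname": "myhost-a.example.com",
        "role": "master"
      }
    ]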

Run the Playbook#

The Ansible playbooks need sudo access to copy the ISO file to the Apache HTTP server directory.

  1. To run any Ansible command, activate the Python virtual environment previously created in the Software Prerequisites section:

    source .h4m/bin/activate
    

    Note

    To exit the virtual environment at any time run:

    deactivate
    
  2. In the same terminal, export the offline access token generated earlier (refer to OpenShift Access Token).

    export OFFLINE_ACCESS_TOKEN=<token>
    
  3. In the same terminal, run the Ansible playbook using the following command:

    ANSIBLE_LOG_PATH=./installation_$(date +'%Y%m%d_%H%M%S').log ansible-playbook main.yml --ask-become-pass -vv
    
    • The -vv flag enables verbose output for debugging. Use -v for less detail, or -vvv or -vvvv for progressively more detail.

  4. When prompted, enter the current user password to grant sudo access.

After the playbook completes, the final Ansible task prints the cluster’s OpenShift web console login details in the terminal:

Installation automation completion confirmation

Save the console URL, username, and password. They are used later to generate the OpenShift API Token from the OpenShift web console.
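If the oc CLI is installed on the jump node, you can also log in immediately to confirm the credentials. A sketch, assuming the example cluster name from this guide and the default kubeadmin user; substitute the values printed by the playbook:

    oc login https://api.h4m-cluster.example.com:6443 -u kubeadmin -p '<password>'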

Download Kubeconfig#

It is recommended to download the kubeconfig file promptly, as Red Hat retains it for only 20 days after cluster installation.

  1. Login to the Red Hat console and click Cluster List.

    Red Hat Console - Select Cluster List
  2. Click the name of the cluster.

    Red Hat Console - Cluster list showing Ready status
  3. Click the Download kubeconfig button to download the kubeconfig file.

    Red Hat Console - Download kubeconfig button
  4. Copy the kubeconfig file to ~/.kube/config on the jump node to access the cluster.
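To verify access, copy the downloaded file into place and query the cluster; the download path below is an assumption, so adjust it to wherever your browser saved the file:

    mkdir -p ~/.kube
    cp ~/Downloads/kubeconfig ~/.kube/config
    oc get nodes

All nodes should report a Ready status.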