Note: the deployment below was done in a virtual environment. Either vbmc or sushy-tools may be used as the virtual BMC, per your preference; both can be installed with pip3 install <module>.
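As an illustration, a typical vbmc setup might look like the following sketch. The libvirt domain name, port, and credentials here are assumptions; they must match the bmc entries in your install-config.yaml (e.g. ipmi://10.102.17.23:6235 with admin/admin in the sample further below).

```shell
# Install either emulator (this sketch assumes the vbmc/IPMI path)
pip3 install virtualbmc      # IPMI BMC emulation for libvirt domains
pip3 install sushy-tools     # Redfish BMC emulation (alternative)

# Register one IPMI endpoint per VM; port/credentials must match the
# bmc.address, bmc.username, and bmc.password entries in install-config.yaml
vbmc add openshift-master-0 --port 6235 --username admin --password admin
vbmc start openshift-master-0
vbmc list                    # verify the endpoint is running
```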
Prerequisites:
Each node should be equipped with 2 network ports: one routable to the internet, the other dedicated to provisioning.
DHCP: assigns a static IP to the network port used for internet access.
DNS: only 2 records are required, api.<cluster>.<domain> and *.apps.<cluster>.<domain>.
A sample zone configuration is below (10.7.21.51 is the address of my DNS server):
$TTL 600
@ IN SOA test.ocp.qct. com.www.ocp.qct. (
2019052801 3H 15M 1W 1D ) ;
@ IN NS test.ocp.qct. ; DNS
test.ocp.qct. IN A 10.7.21.51
api.test.ocp.qct. IN A 10.102.17.10
*.apps.test.ocp.qct. IN A 10.102.17.15
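The two records can be sanity-checked with dig against the DNS server, using the addresses from the sample zone above (the hostname queried under *.apps is arbitrary, since it is a wildcard record):

```shell
# Both queries should return the A records defined in the zone above
dig +short api.test.ocp.qct @10.7.21.51          # expect 10.102.17.10
dig +short anything.apps.test.ocp.qct @10.7.21.51  # expect 10.102.17.15
```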
A sample install-config.yaml is below.
(machineCIDR is the subnet used for internet access. Two additional IPs from it are required, for the API and Ingress VIPs.)
(the cluster FQDN is <metadata.name>.<baseDomain>; in the example below, the FQDN is test.ocp.qct.)
(provisioningNetworkInterface is the interface name used for node provisioning. The provisioning service runs on the controller/master nodes, so the interface name must be the same on every controller/master node.)
(compute replicas: when set to 0, all nodes are provisioned in master/worker hybrid mode, i.e. each node acts as both master and worker.)
apiVersion: v1
baseDomain: ocp.qct
metadata:
  name: test
networking:
  machineCIDR: 10.102.17.0/24
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 3
  platform:
    baremetal: {}
platform:
  baremetal:
    apiVIP: 10.102.17.10
    ingressVIP: 10.102.17.15
    provisioningNetworkCIDR: 172.22.0.0/24
    provisioningNetworkInterface: enp1s0
    provisioningDHCPRange: 172.22.0.10,172.22.0.100
    bootstrapOSImage: http://10.102.17.23:8080/rhcos-47.83.202105220305-0-qemu.x86_64.qcow2.gz?sha256=d3e6f4e1182789480dcb81fc8cdf37416ec9afa34b4be1056426b21b62272248
    clusterOSImage: http://10.102.17.23:8080/rhcos-47.83.202105220305-0-openstack.x86_64.qcow2.gz?sha256=94058cc4cff50e63ebeba8e044215c1591d0a4daea2ffdb778836d013290868e
    hosts:
    - name: openshift-master-0
      role: master
      bmc:
        address: ipmi://10.102.17.23:6235
        username: admin
        password: admin
      bootMACAddress: 52:54:00:64:a1:ab
      bootMode: legacy
      rootDeviceHints:
        minSizeGigabytes: 10
      hardwareProfile: default
    - name: openshift-master-1
      role: master
      bmc:
        address: ipmi://10.102.17.23:6236
        username: admin
        password: admin
      bootMACAddress: 52:54:00:eb:31:6f
      bootMode: legacy
      rootDeviceHints:
        minSizeGigabytes: 10
      hardwareProfile: default
    - name: openshift-master-2
      role: master
      bmc:
        address: ipmi://10.102.17.23:6237
        username: admin
        password: admin
      bootMACAddress: 52:54:00:88:36:ca
      bootMode: legacy
      rootDeviceHints:
        minSizeGigabytes: 10
      hardwareProfile: default
    - name: openshift-worker-0
      role: worker
      bmc:
        address: ipmi://10.102.17.23:6245
        username: admin
        password: admin
      bootMACAddress: 52:54:00:46:59:73
      bootMode: legacy
      rootDeviceHints:
        minSizeGigabytes: 10
      hardwareProfile: unknown
    - name: openshift-worker-1
      role: worker
      bmc:
        address: ipmi://10.102.17.23:6246
        username: admin
        password: admin
      bootMACAddress: 52:54:00:f5:db:a0
      bootMode: legacy
      rootDeviceHints:
        minSizeGigabytes: 10
      hardwareProfile: unknown
pullSecret: ''
sshKey: ''
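Not part of the installer, but a quick sketch for sanity-checking the addressing in the sample above: the two VIPs must fall inside machineCIDR, and the DHCP range must fall inside provisioningNetworkCIDR. The values are copied from the sample config.

```python
# Sanity-check sketch for the sample install-config.yaml network values.
import ipaddress

machine_cidr = ipaddress.ip_network("10.102.17.0/24")
api_vip = ipaddress.ip_address("10.102.17.10")
ingress_vip = ipaddress.ip_address("10.102.17.15")

prov_cidr = ipaddress.ip_network("172.22.0.0/24")
dhcp_start, dhcp_end = (ipaddress.ip_address(a)
                        for a in "172.22.0.10,172.22.0.100".split(","))

# Both VIPs must live in the internet-facing (machine) subnet
assert api_vip in machine_cidr and ingress_vip in machine_cidr

# The provisioning DHCP range must live in the provisioning subnet,
# with start below end
assert dhcp_start in prov_cidr and dhcp_end in prov_cidr
assert dhcp_start < dhcp_end
print("install-config network values are consistent")
```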
You may want to power off a machine after introspection. Issue the following command to do so (set "online": true to power it back on):
oc -n openshift-machine-api patch bmh openshift-worker-2 -p '{"spec":{"online":false}}' --type=merge
Scale up/down
List the machinesets first to get the name, then scale:
oc get machineset -n openshift-machine-api
oc scale --replicas=<num> machineset <machineset> -n openshift-machine-api