1.5 Set up High Availability Red Hat Clusters

Red Hat High Availability allows you to connect a group of computers (called nodes) to work together as a cluster. Starting with version 8.0 SP2, Reflection for Secure IT is supported in a Red Hat Enterprise Linux 7 cluster environment. Refer to the Red Hat documentation for information about setting up and configuring the cluster. Review the following information to install and set up Reflection for Secure IT to run in the cluster.

Before you begin

Review the files used by the Reflection for Secure IT server to determine which files are required by your configuration. For a list of these files, see Files Used by the Server. Make a plan for where these files will be located in your cluster environment. For example:

  • If you plan to keep all server files on the nodes, you will need to copy the required files (such as the host key and the files required for user public key authentication) to each node to ensure a seamless transition when the cluster fails over to a new node.

  • If you plan to locate some files on a shared file server, you can modify settings in sshd2_config (such as HostKeyFile and UserConfigDirectory) to point to the shared location.
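For example, if shared storage were mounted at a hypothetical path such as /mnt/cluster, the relevant sshd2_config keywords might look like the following sketch. The mount point and directory layout are illustrative, not product defaults; check the keyword reference in your server documentation for the exact syntax and supported path modifiers.

```
# Illustrative sshd2_config fragment; /mnt/cluster is a hypothetical
# shared mount point, not a product default.
HostKeyFile             /mnt/cluster/ssh2/hostkey
UserConfigDirectory     /mnt/cluster/ssh2/users/%U
```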

To install Reflection for Secure IT in a Red Hat Cluster

  1. Install Reflection for Secure IT on each node.

  2. Copy the required server files to each node. For example, if you have kept the default value for the HostKeyFile keyword, copy an identical host key file to the default location (/etc/ssh2/hostkey) on each node. If you have modified sshd2_config, copy the modified file to each node.
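The copying in step 2 can be scripted. A minimal sketch, assuming hypothetical node names node1 and node2 and the default file locations; it only builds and prints the scp commands so you can review them before running anything against your nodes:

```shell
#!/bin/bash
# Sketch: build the scp commands that would copy the shared host key
# and sshd2_config to each node. node1/node2 are placeholder names;
# substitute your actual cluster node names.
nodes=(node1 node2)
files=(/etc/ssh2/hostkey /etc/ssh2/sshd2_config)

cmds=()
for node in "${nodes[@]}"; do
  for file in "${files[@]}"; do
    # Copy each file to the same path on the remote node,
    # preserving permissions and timestamps (-p).
    cmds+=("scp -p $file root@$node:$file")
  done
done

# Print the commands for review instead of executing them.
printf '%s\n' "${cmds[@]}"
```

Remember that the host key must be identical on every node, so clients do not see a host key change when the cluster fails over.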

  3. Disable the Reflection for Secure IT server from automatically starting at reboot. The cluster will manage starting the server. For example:

    $ sudo chkconfig --del sshd

  4. List the LSB agents and confirm that sshd is there. For example:

    $ sudo crm_resource --list-agents lsb
    netconsole
    network
    rhnsd
    sshd

  5. Create a pcs resource for sshd as an lsb agent. For example:

    $ sudo pcs resource create sshd_svc lsb:sshd op monitor interval=30s

  6. List the defined resources and confirm that the new sshd_svc resource is listed. For example:

    $ sudo crm_resource --list
    dummy_svc (ocf::heartbeat:Dummy): Started
    sshd_svc (lsb:sshd): Started

  7. Stop the sshd service on each node. For example:

    $ sudo /etc/init.d/sshd stop