HCL Workload Automation, Version 9.4

Scenario: Shared disk, passive–active failover on a master domain manager

This scenario describes how to configure HCL Workload Automation and a remote or local DB2® database so that an HACMP cluster can manage the failover of the active master domain manager.

Configuring HCL Workload Automation and a remote DB2 database

The following procedure explains how to configure HCL Workload Automation and a remote DB2 database so that a passive, idle node in the cluster can take over from an active master domain manager that has failed. The prerequisite for this procedure is that you have already configured HACMP.

Ensure that Installation Manager is installed in directories that are mounted on a shared disk and shared among all the cluster nodes. If you do not change them during the installation, the default installation directories are:
 /var/hcl/InstallationManager
 /opt/HCL/InstallationManager
 /opt/HCL/IMShared
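
For example, on the node that currently owns the shared volume group, you can check that these directories actually reside on the shared disk before you start the installation. The following check is a minimal sketch that assumes the default directories listed above:

    # Verify that the Installation Manager directories are mounted from the shared disk
    df -g /var/hcl/InstallationManager /opt/HCL/InstallationManager /opt/HCL/IMShared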

Install HCL Workload Automation using one of the installation methods described in the Planning and Installation Guide.

Install HCL Workload Automation using a shared Installation Manager instance, so that any patching, installing, uninstalling, or upgrading activity can be performed on any HCL Workload Automation node.

During the installation, perform the following configuration steps:
  1. Create the same TWS administrator user and group on all the nodes of the cluster. Ensure that the user has the same ID on all the nodes and points to the same home directory on the shared disk where you are going to install HCL Workload Automation.
    Example: You want to create the group named twsadm for all HCL Workload Automation administrators, and the TWS administrator user named twsusr with user ID 518 and home directory /cluster/home/twsusr on the shared disk:
    mkgroup id=518 twsadm
    mkuser id=518 pgrp=twsadm home=/cluster/home/twsusr twsusr
    passwd twsusr
    To install HCL Workload Automation in a directory other than the user home on the shared disk, ensure that the directory structure is the same on all nodes and that the useropts file is available to all nodes. Ensure also that the user has the same ID on all the nodes of the cluster.
  2. Start the node that you want to use to run the installation of HCL Workload Automation and set the parameters so that HACMP mounts the shared disk automatically.
  3. Install the DB2 administrative client on both nodes, or on a shared disk, and configure it for failover as described in the DB2 documentation.
  4. Create the db2inst1 instance on the active node to establish a direct link between HCL Workload Automation and the remote DB2 server (see the sketch after this list).
  5. Proceed with the HCL Workload Automation installation, using the twsusr user home directory and the local db2inst1 instance.
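
The following commands are a minimal sketch of step 4 (creating the db2inst1 instance and cataloging the remote DB2 server). The remote host name (db2server.example.com), port (50000), node alias (TWSNODE), and database name (TWS) are assumptions: replace them with the values of your remote DB2 server and of the HCL Workload Automation database.

    # As root, from the instance directory of the DB2 client installation
    # (the path depends on the DB2 version), create the db2inst1 client
    # instance; the db2inst1 operating system user must already exist:
    db2icrt -s client db2inst1

    # As the db2inst1 instance owner, catalog the remote DB2 server and database:
    db2 catalog tcpip node TWSNODE remote db2server.example.com server 50000
    db2 catalog database TWS at node TWSNODE
    db2 terminate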

After you install HCL Workload Automation, run the cluster collector tool to automatically collect files from the active master domain manager. These files include the registry files, the Software Distribution catalog, and the HCL Workload Automation external libraries. The cluster collector tool creates a .tar file containing the collected files. To copy these files to the passive nodes, extract this .tar file on them.

To configure HCL Workload Automation for HACMP, perform the following steps:

  1. Run the cluster collector tool on the active master domain manager.
  2. From the TWA_home/TWS/bin directory, run:
    ./twsClusterCollector.sh -collect -tarFileName tarFileName

    where tarFileName is the complete path where the archive is stored.

  3. Copy the tws_user_home/useropts_twsuser file from the active node to the other nodes, from both the root and the user home directories.
  4. Replace the node hostname with the service IP address in the master domain manager definitions, the WebSphere Application Server, the Dynamic workload broker, and the agent, as described in the Administration Guide, section Changing the workstation host name or IP address.
  5. Copy the start_tws.sh and stop_tws.sh scripts from TWA_home/TWS/config to the TWA_home directory.
  6. Customize the start_tws.sh and stop_tws.sh scripts by setting the DB2_INST_USER parameter, which is used to start and stop the DB2 instance during the failover phase (see the sketch after this list).
  7. Run the start_tws.sh and stop_tws.sh scripts to verify that HCL Workload Automation starts and stops correctly.
  8. Move the shared volume to the second cluster node (if you have already defined the cluster resource group, you can move it by using the clRGmove HACMP command; see the example after this list).
  9. Run the cluster collector tool to extract the HCL Workload Automation libraries. From the TWA_home/TWS/bin directory, run:
    ./twsClusterCollector.sh -deploy -tarFileName tarFileName
    where tarFileName is the complete path where the archive is stored.
  10. Configure a new Application Controller resource on HACMP using the customized start_tws.sh and stop_tws.sh scripts.
When invoked by HACMP during failover, the scripts automatically start or stop the WebSphere Application Server and HCL Workload Automation, and link or unlink all the workstations.
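
For step 6, the exact content of start_tws.sh and stop_tws.sh depends on your installation; the fragment below is only a sketch of the kind of customization involved, assuming that db2inst1 is the DB2 instance owner:

    # In start_tws.sh and stop_tws.sh, set the user that owns the DB2 instance
    DB2_INST_USER=db2inst1

    # During failover the scripts use this user to start or stop the instance:
    su - "$DB2_INST_USER" -c "db2start"    # in start_tws.sh
    su - "$DB2_INST_USER" -c "db2stop"     # in stop_tws.sh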
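
For step 8, a typical clRGmove invocation looks like the following example; the resource group name (tws_rg) and node name (node2) are assumptions, and the exact options can vary with the HACMP or PowerHA version:

    # Move the resource group that owns the shared volume to the second node
    /usr/es/sbin/cluster/utilities/clRGmove -g tws_rg -n node2 -m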

Configuring HCL Workload Automation and a local DB2 database

This scenario includes all of the steps described in Configuring HCL Workload Automation and a remote DB2 database, but you must also perform the following additional steps:
  1. Install DB2 locally on both nodes or on the shared disk, without creating a new instance.
  2. Create a new instance on the shared disk, define all the DB2 users also on the second node, and modify the following two files:
    • /etc/hosts.equiv

      Add a new line with just the Service IP address value.

    • <db2-instance-home>/sqllib/db2nodes.cfg

      Add a new line similar to the following line:

      0 <Service IP address> 0

  3. To start and stop the monman process used for Event Driven Workload Automation, add "conman startmon" to the start_tws.sh script and "conman stopmon" to the stop_tws.sh script (see the sketch below).
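
As a sketch of step 3, assuming the twsusr user created earlier and that its profile sets the HCL Workload Automation environment, the additions could look like the following lines:

    # In start_tws.sh, after HCL Workload Automation is started:
    su - twsusr -c "conman startmon"

    # In stop_tws.sh, before HCL Workload Automation is stopped:
    su - twsusr -c "conman stopmon"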