HCL Workload Automation, Version 9.4

Configuring High Availability for Dynamic Workload Console

You can configure a cluster of console nodes in High Availability with identical configurations to evenly distribute user sessions.

Before you begin configuring your nodes in High Availability, refer to the section on configuring High Availability in the Dynamic Workload Console User's Guide.

By leveraging the High Availability configuration of Dashboard Application Services Hub, you can meet the High Availability requirements of the Dynamic Workload Console. The following topics describe how to set up a High Availability configuration for Dashboard Application Services Hub, how to customize it for the Dynamic Workload Console, and how to upgrade an existing High Availability configuration on Dashboard Application Services Hub.

High Availability is ideal for Dashboard Application Services Hub installations with a large user population. When a node fails, new user sessions are directed to other active nodes.

You can create a High Availability configuration from an existing stand-alone Jazz for Service Management instance, but you must export its custom data before you configure it for High Availability. The custom data is added to the central repository and subsequently replicated to new nodes as they are added to the cluster. The exported data is later imported to one of the nodes in the cluster so that it is replicated across the other nodes.

The workload is distributed by session, not by request. If a node fails, users who are in session with that node must log back in to access the Dashboard Application Services Hub. Any unsaved work is not recovered.

Restriction: Before installing a fix pack in a load-balanced environment, you must remove all nodes from the load-balanced cluster. Then install the fix pack on each node so that all nodes are at the same release level of Dashboard Application Services Hub. After each node is updated, you can re-create the load-balanced cluster.

Synchronized data

After High Availability is set up, changes in the Dynamic Workload Console that are stored in global repositories are synchronized to all of the nodes in the configuration using a common database. The following actions cause changes to the global repositories used by the Dynamic Workload Console. Most of these changes are caused by actions in the Settings folder in the console navigation.
  • Creating, restoring, editing, or deleting a page.
  • Creating, restoring, editing, or deleting a view.
  • Creating, editing, or deleting a preference profile or deploying preference profiles from the command line.
  • Copying a portlet entity or deleting a portlet copy.
  • Changing access to a portlet entity, page, external URL, or view.
  • Creating, editing, or deleting a role.
  • Changes to portlet preferences or defaults.
  • Changes from the Users and Groups applications, including assigning users and groups to roles.
Note: Global repositories must never be updated manually.

During normal operation within a High Availability configuration, updates that require synchronization are first committed to the database. At the same time, the node that submits the update for the global repositories notifies all other nodes in the High Availability configuration about the change. As the nodes are notified, they get the updates from the database and commit the change to the local configuration.

If data fails to be committed on any given node, a warning message is logged in the log file and the node is prevented from making its own updates to the database. Restarting the Dashboard Application Services Hub instance on the node resolves most synchronization issues; if it does not, remove the node from the High Availability configuration and take corrective action.
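
For example, on a UNIX node you can restart the Dashboard Application Services Hub instance with the standard WebSphere Application Server server commands. This is a minimal sketch: the profile path (/opt/IBM/JazzSM/profile), the server name (server1), and the administrator credentials are typical Jazz for Service Management defaults and might differ in your installation.

    # Stop and restart the Dashboard Application Services Hub server on the affected node.
    # Profile path, server name, and credentials are examples; adjust them to your environment.
    /opt/IBM/JazzSM/profile/bin/stopServer.sh server1 -username wasadmin -password waspassword
    /opt/IBM/JazzSM/profile/bin/startServer.sh server1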

Note: If the database server restarts, all connections between it and the High Availability configuration are lost. It can take up to five minutes for the connections to be restored and for users to be able to resume update operations, for example, modifying or creating views or pages.

Manual synchronization and maintenance mode

Updates to deploy, redeploy, or remove console modules are not automatically synchronized within the High Availability configuration. These changes must be performed manually on each node. For deploy and redeploy operations, the console module package must be identical at each node.

When one of the deployment commands is started on the first node, the system enters maintenance mode and changes to the global repositories are locked. After you finish the deployment changes on each of the nodes, the system returns to an unlocked state. There is no restriction on the order in which modules are deployed, removed, or redeployed on the nodes.

While in maintenance mode, any attempt to make changes in the portal that affect the global repositories is prevented and an error message is returned. The only changes to the global repositories that are allowed during maintenance mode are changes to a user's personal portlet or widget preferences. Any changes outside the control of the console, for example, a form submission in a portlet to a remote application, are processed normally.

The following operations are also not synchronized within the High Availability configuration and must be performed manually at each node. These updates do not place the High Availability configuration in maintenance mode.
  • Deploying, redeploying, and removing wires and transformations
  • Customization changes to the Dynamic Workload Console user interface (for example, custom images or style sheets) using consoleProperties.xml.
To reduce the chance that users establish sessions with nodes that have different wire and transformation definitions or user interface customizations, schedule these changes to coincide with console module deployments.
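
Because these customizations are not synchronized automatically, one practical approach is to copy the same customization files to every node yourself. The following sketch assumes a UNIX environment; the node names, the target directory, and the file names are examples only and must be adapted to your Dashboard Application Services Hub installation.

    # Copy user interface customization files (custom images, style sheets, consoleProperties.xml)
    # to every node in the High Availability configuration.
    # Node names, target directory, and file names are examples only.
    NODES="node1.example.com node2.example.com node3.example.com"
    CUSTOM_DIR=/opt/IBM/JazzSM/ui/custom
    for NODE in $NODES; do
        scp consoleProperties.xml custom.css logo.png ${NODE}:${CUSTOM_DIR}/
    done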

Requirements

The following requirements must be met before High Availability can be enabled.
  • Install the software requirements:
    1. Install IBM Installation Manager.
    2. Install WebSphere Application Server.
    3. Install Jazz for Service Management with Dashboard Application Services Hub.
    4. Install the Dynamic Workload Console. All the nodes in the High Availability configuration must be at the same release level, must have synchronized clocks, and must be installed with the same cell name. To set the cell name on each node after installing the Dynamic Workload Console, use the -cellName parameter of the manageprofiles command (see the example after this list).

      If you are creating a High Availability configuration from a stand-alone instance of the Dynamic Workload Console, you must export its custom data before you configure it for High Availability. The custom data is added to the central repository and subsequently replicated to new nodes as they are added to the High Availability configuration. When you have configured the nodes, you can import the data to one of the nodes for it to be replicated across the other nodes.

    5. A High Availability configuration requires that the Dynamic Workload Console is configured to use an external Lightweight Directory Access Protocol (LDAP) server. LDAP must be installed and configured as the user repository for each node in the High Availability configuration. Each node in the High Availability configuration must be enabled to use the same LDAP server with the same user and group configuration. For information about configuring the Dynamic Workload Console with LDAP, see Configuring authentication using the WebSphere Administrative Console.

      For information about which LDAP servers you can use, see List of supported software for WebSphere® Application Server V8.5. For information about how to enable LDAP for each node, see Configuring LDAP user registries.

    6. Create a new database or use an existing one. A supported version of DB2 must be installed within the network to synchronize the global repositories for the nodes defined in the Dynamic Workload Console High Availability configuration. Refer to the System Requirements Document at https://workloadautomation.hcldoc.com/help/topic/com.hcl.wa.doc_9.4/distrDDguides.html for the list of supported database versions. To create a new database, see Creating databases; a minimal command example also follows this list. To use an existing database, see Changing settings repository.
    7. Create the WebSphere variables, the JDBC provider, and the data source.
    8. Enable server to server trust. See Enabling server-to-server trust.
    9. Install any subsequent fix packs. The WebSphere Application Server and Jazz™ for Service Management application server versions must be at the same release level, including any fix packs. Fixes and upgrades for the run time must be applied manually on each node.
    10. Verify the configuration. See Verifying a successful High Availability configuration.
  • Update the WebSphere Application Server services with the new Administrative user, specifying the new LDAP user ID as the WAS_user and the new LDAP password as the WAS_user_password. For more information about updating WebSphere Application Server services, see updateWasService.
  • A front-end Network Dispatcher (for example, IBM HTTP Server) must be set up to handle and distribute all incoming session requests. For more information about this task, see Setting up intermediary services.
  • Before joining nodes to a High Availability configuration, make sure that each node uses the same file-based repository user ID, which was assigned the role of iscadmins.
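
As an illustration of the cell name requirement in step 4, the manageprofiles command of WebSphere Application Server accepts a -cellName parameter when a profile is created. This is only a sketch: the installation path, profile name, node name, cell name, and host name are assumptions and must match the values used by your Dynamic Workload Console installation.

    # Create the application server profile on a node, assigning the cell name shared by all nodes.
    # All names and paths shown here are examples.
    /opt/IBM/WebSphere/AppServer/bin/manageprofiles.sh -create \
        -profileName JazzSMProfile \
        -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/default \
        -nodeName node1 \
        -cellName TDWCCell \
        -hostName node1.example.com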
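
Similarly, if you create a new DB2 database for step 6 to hold the synchronized global repositories, the commands might resemble the following minimal sketch. The database name (TDWCHA) and authorization ID (dashuser) are examples only; for the supported versions and the complete procedure, see the topics referenced in step 6.

    # Create a DB2 database for the Dashboard Application Services Hub global repositories.
    # Database name and user are examples only.
    db2 "CREATE DATABASE TDWCHA USING CODESET UTF-8 TERRITORY US"
    db2 "CONNECT TO TDWCHA"
    db2 "GRANT DBADM ON DATABASE TO USER dashuser"
    db2 "CONNECT RESET"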