Configuring High Availability for Dynamic Workload Console
You can configure a cluster of identically configured console nodes in High Availability to evenly distribute user sessions.
Before you begin configuring your nodes in High Availability, refer to the section on configuring High Availability in the Dynamic Workload Console User's Guide.
Because the Dynamic Workload Console runs on Dashboard Application Services Hub, you can meet its High Availability requirements by leveraging the High Availability configuration of Dashboard Application Services Hub. The following topics describe how to set up a High Availability configuration for Dashboard Application Services Hub, how to customize it for the Dynamic Workload Console, and how to upgrade an existing High Availability configuration on Dashboard Application Services Hub.
High Availability is ideal for Dashboard Application Services Hub installations with a large user population. When a node fails, new user sessions are directed to other active nodes.
You can create a High Availability configuration from an existing stand-alone Jazz for Service Management instance, but you must export its custom data before you configure it for High Availability. The exported data is later imported to one of the nodes in the cluster, where it is added to the central repository and replicated across the other nodes, including new nodes as they are added to the cluster.
The workload is distributed by session, not by request. If a node fails, users who are in session with that node must log back in to access the Dashboard Application Services Hub. Any unsaved work is not recovered.
Synchronized data
- Creating, restoring, editing, or deleting a page.
- Creating, restoring, editing, or deleting a view.
- Creating, editing, or deleting a preference profile or deploying preference profiles from the command line.
- Copying a portlet entity or deleting a portlet copy.
- Changing access to a portlet entity, page, external URL, or view.
- Creating, editing, or deleting a role.
- Changes to portlet preferences or defaults.
- Changes from the Users and Groups applications, including assigning users and groups to roles.
During normal operation within a High Availability configuration, updates that require synchronization are first committed to the database. At the same time, the node that submits the update to the global repositories notifies all other nodes in the High Availability configuration about the change. As the nodes are notified, they retrieve the updates from the database and commit the change to their local configuration.
If data fails to be committed on any given node, a warning message is logged in the log file and the node is prevented from making its own updates to the database. Restarting the Dashboard Application Services Hub instance on the node resolves most synchronization issues; if it does not, remove the node from the High Availability configuration for corrective action.
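To restart the instance on the affected node, a minimal sketch follows, assuming the default server name (server1) and an illustrative Jazz for Service Management profile path and credentials; adjust all three to your installation.

    # Restart the Dashboard Application Services Hub instance on the node.
    # server1 is the default server name; path and credentials are assumptions.
    cd /opt/IBM/JazzSM/profile/bin
    ./stopServer.sh server1 -username wasadmin -password secret
    ./startServer.sh server1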
Manual synchronization and maintenance mode
Updates to deploy, redeploy, or remove console modules are not automatically synchronized within the High Availability configuration. These changes must be performed manually on each node. For deploy and redeploy operations, the console module package must be identical at each node.
When one of the deployment commands is started on the first node, the system enters maintenance mode and changes to the global repositories are locked. After you finish the deployment changes on each of the nodes, the system returns to an unlocked state. There is no restriction on the order in which modules are deployed, removed, or redeployed on each of the nodes; a sketch of the per-node sequence follows the list below.
While in maintenance mode, any attempt to make changes in the portal that affect the global repositories is prevented and an error message is returned. The only changes to the global repositories that are still allowed are changes to a user's personal portlet or widget preferences. Any changes outside the control of the console, for example, a form submission in a portlet to a remote application, are processed normally. In addition to console modules, the following updates must also be synchronized manually on each node:
- Deploying, redeploying, and removing wires and transformations
- Customization changes to the Dynamic Workload Console user interface (for example, custom images or style sheets) using consoleProperties.xml.
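The per-node sequence can be sketched as follows. The deploy-module.sh command and its path are hypothetical stand-ins for your console module deployment command, and the node names are examples; the point is that an identical package is applied on every node, one node at a time.

    # Hypothetical sketch: deploy-module.sh stands in for the product's console
    # module deployment command; node names and paths are examples.
    PACKAGE=/install/media/console-module.war   # must be identical on all nodes
    for NODE in node1 node2 node3; do
        ssh "$NODE" /opt/IBM/JazzSM/ui/bin/deploy-module.sh "$PACKAGE"
    done
    # Maintenance mode locks the global repositories when the first deployment
    # starts and is released after the operation completes on the last node.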
Requirements
- Install the prerequisite software:
- Install IBM Installation Manager.
- Install WebSphere Application Server.
- Install Jazz for Service Management with Dashboard Application Services Hub.
- Install the Dynamic Workload Console.
All the nodes in the High Availability configuration must be at the same release level, must have synchronized clocks, and must be installed in the same cell name. To assign the same cell name to each node, use the -cellName parameter of the manageprofiles command after you install the Dynamic Workload Console on each node (see the manageprofiles sketch after this list).
If you are creating a High Availability configuration from a stand-alone instance of the Dynamic Workload Console, you must export its custom data before you configure it for High Availability. The custom data is added to the central repository and subsequently replicated to new nodes as they are added to the High Availability configuration. When you have configured the nodes, you can import the data to one of the nodes for it to be replicated across the other nodes.
- Configure the Dynamic Workload Console in LDAP. Lightweight Directory Access Protocol (LDAP) must be installed and configured as the user repository for each node in the High Availability configuration. Each node in the High Availability configuration must be enabled to use the same LDAP server with the same user and group configuration (see the LDAP sketch after this list). See Configure the Dynamic Workload Console in LDAP.
For information about which LDAP servers you can use, see List of supported software for WebSphere® Application Server V8.5. For information about how to enable LDAP for each node, see Configuring LDAP user registries.
- Create a new database or use an existing one. A supported version of DB2 must be installed within the network to synchronize the global repositories for the nodes defined in the Dynamic Workload Console High Availability configuration. Refer to the System Requirements Document at https://workloadautomation.hcldoc.com/help/topic/com.hcl.wa.doc_9.4/distrDDguides.html for the list of supported database versions. To create a new database, see Creating databases and the DB2 sketch after this list. To use an existing database, see Changing settings repository.
- Create the WebSphere variables, the JDBC provider, and the data source (see the JDBC provider sketch after this list).
- Enable server to server trust. See Enabling server-to-server trust.
- Install any subsequent fix packs. The WebSphere Application Server and Jazz™ for Service Management application server versions must have the same release level, including any fix packs. Fixes and upgrades for the run time must be applied manually at each node.
- Verify the configuration. See Verifying a successful High Availability configuration.
- Update the WebSphere Application Server services with the new Administrative user, specifying the new LDAP user ID as the WAS_user and the new LDAP password as the WAS_user_password. For more information about updating WebSphere Application Server services, see updateWasService.
- A front-end Network Dispatcher (for example, IBM HTTP Server) must be set up to handle and distribute all incoming session requests. For more information about this task, see Setting up intermediary services and the plug-in sketch after this list.
- Before joining nodes to a High Availability configuration, make sure that each node uses the same file-based repository user ID, which was assigned the role of iscadmins.
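The following manageprofiles sketch creates the application server profile with an identical cell name on every node. The profile name, paths, node name, and host name are illustrative assumptions; only the -cellName value must be the same on all nodes.

    # Sketch: create the profile with the shared cell name on each node.
    # All values except -cellName vary per node; -cellName must be identical.
    /opt/IBM/WebSphere/AppServer/bin/manageprofiles.sh -create \
        -profileName JazzSMProfile \
        -profilePath /opt/IBM/JazzSM/profile \
        -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/default \
        -nodeName JazzSMNode01 \
        -cellName JazzSMCell \
        -hostName node1.example.com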
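The LDAP sketch below registers the LDAP server in the federated repositories of a node with wsadmin. It is a minimal sketch, assuming an IBM Tivoli Directory Server (IDS) server type and illustrative host, bind DN, and base entry values; run it with identical values on every node.

    # Sketch: register the same LDAP server in the federated repositories.
    # Repository ID, host, bind credentials, and base entry are assumptions.
    cd /opt/IBM/JazzSM/profile/bin
    ./wsadmin.sh -lang jython -user wasadmin -password secret \
        -c 'AdminTask.createIdMgrLDAPRepository("[-id LDAP1 -ldapServerType IDS -adapterClassName com.ibm.ws.wim.adapter.ldap.LdapAdapter]")' \
        -c 'AdminTask.addIdMgrLDAPServer("[-id LDAP1 -host ldap.example.com -port 389 -bindDN cn=root -bindPassword ldappwd]")' \
        -c 'AdminTask.addIdMgrRepositoryBaseEntry("[-id LDAP1 -name o=example]")' \
        -c 'AdminConfig.save()'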
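If you create a new database, the DB2 sketch below shows the minimal steps; the database name, code set, and grantee are illustrative assumptions. Run it as the DB2 instance owner on the database server.

    # Sketch: create the settings database used to synchronize the global
    # repositories. Names are assumptions; align them with your standards.
    db2 "CREATE DATABASE TIPDB USING CODESET UTF-8 TERRITORY US"
    db2 "CONNECT TO TIPDB"
    db2 "GRANT DBADM ON DATABASE TO USER tipuser"
    db2 "CONNECT RESET"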
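The JDBC provider sketch below creates the WebSphere variable and the JDBC provider with wsadmin; the cell name, credentials, and driver path are illustrative assumptions. The data source is then created against the provider with AdminTask.createDatasource, passing the provider configuration ID returned by the createJDBCProvider command.

    # Sketch: point WebSphere at the DB2 JDBC driver and create the provider.
    # Cell name, credentials, and driver location are assumptions.
    cd /opt/IBM/JazzSM/profile/bin
    ./wsadmin.sh -lang jython -user wasadmin -password secret \
        -c 'AdminTask.setVariable("[-variableName DB2UNIVERSAL_JDBC_DRIVER_PATH -variableValue /opt/ibm/db2/java -scope Cell=JazzSMCell]")' \
        -c 'AdminTask.createJDBCProvider("[-scope Cell=JazzSMCell -databaseType DB2 -providerType \"DB2 Universal JDBC Driver Provider\" -implementationType \"Connection pool data source\" -name \"DB2 Universal JDBC Driver Provider\" -classpath [${DB2UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc4.jar]]")' \
        -c 'AdminConfig.save()'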
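For the front-end HTTP server, the plug-in sketch below generates the web server plug-in configuration on a node and copies it to the HTTP server machine. Paths, the web server definition name, and the host name are illustrative assumptions.

    # Sketch: generate plugin-cfg.xml and copy it to the front-end web server.
    # Paths, web server name, and host are assumptions.
    /opt/IBM/JazzSM/profile/bin/GenPluginCfg.sh
    scp /opt/IBM/JazzSM/profile/config/cells/plugin-cfg.xml \
        webserver.example.com:/opt/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml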