Defining dependencies
About this task
- On successful completion of jobs and job streams: a job or a job stream, called the successor, must not begin processing until other jobs and job streams, called the predecessors, have completed successfully. For more information, see follows. An example is shown in the first sketch after this list.
- On satisfaction of specific conditions by jobs and job streams: a job or a job stream, called the successor, must not begin processing until other jobs and job streams, called the predecessors, have met one, all, or a subset of specific conditions. The conditions can be related to the status of the job or job stream, the return code, output variables, or job log content. When the predecessor does not meet the conditions, any successor jobs with a conditional dependency associated with them are put in suppress state. Successor jobs with a standard dependency are evaluated as usual.
You can also join or aggregate conditional dependencies related to different predecessors into a single join dependency. A join contains multiple dependencies, but you decide how many of them must be satisfied for the join to be considered satisfied. You can define an unlimited number of conditional dependencies, standard dependencies, or both in a join. Ensure that all the components in the IBM® Workload Scheduler environment are at version 9.3 Fix Pack 1, or later. This dependency type is not supported on the Limited Fault-Tolerant Agent for IBM i. For more information, see Applying conditional branching logic, follows, and join. An example is shown in the second sketch after this list.
- Resource: a job or a job stream needs one or more resources to be available before it can begin to run. For more information, refer to needs.
- File: a job or a job stream needs to have access to one or more files before it can begin to run. For more information, refer to opens.
- Prompt: a job or a job stream needs to wait for an affirmative response to a prompt before it can begin processing. For more information, refer to Prompt definition and prompt.
You can define up to 40 dependencies for a job or job stream. If you need to define more than 40 dependencies, you can group them in a join dependency. In this case, the join is used simply as a container of standard dependencies, so any standard dependencies in it that are not met are processed as usual and do not cause the join dependency to be considered suppressed. For more information about join dependencies, see Joining or combining conditional dependencies and join.
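The following is a minimal sketch of a job stream definition that combines the standard dependency types described above, written in the composer scheduling language. All workstation, job stream, job, resource, file, and prompt names (MYCPU, PAYROLL, TAPES, OPERATOR_OK, and so on) are hypothetical and only show where each keyword appears; lines beginning with # are assumed to be comments.

  # Hypothetical job stream on workstation MYCPU with one dependency of each type
  SCHEDULE MYCPU#PAYROLL
  # Standard dependency: wait for job LOADDATA in job stream EXTRACT to complete successfully
  FOLLOWS MYCPU#EXTRACT.LOADDATA
  # Resource dependency: two units of the TAPES resource must be available
  NEEDS 2 MYCPU#TAPES
  # File dependency: the input file must be accessible before the job stream starts
  OPENS MYCPU#"/data/input/payroll.dat"
  # Prompt dependency: wait for an affirmative reply to the OPERATOR_OK prompt
  PROMPT OPERATOR_OK
  :
  MYCPU#CALCPAY
  END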
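The second sketch, under the same assumptions, shows a conditional dependency and a join dependency. The output condition names STATUS_OK and RC_ZERO are hypothetical and would have to be defined as output conditions in the corresponding predecessor job definitions.

  # Hypothetical successor job stream using conditional and join dependencies
  SCHEDULE MYCPU#REPORTING
  # Conditional dependency: satisfied only if LOADDATA meets the STATUS_OK output condition
  FOLLOWS MYCPU#EXTRACT.LOADDATA IF STATUS_OK
  # Join dependency: satisfied when at least 2 of the 3 listed dependencies are met
  JOIN CHECKS 2 OF
    FOLLOWS MYCPU#EXTRACT.VALIDATE IF RC_ZERO
    FOLLOWS MYCPU#EXTRACT.AUDIT IF RC_ZERO
    FOLLOWS MYCPU#EXTRACT.CLEANUP IF RC_ZERO
  ENDJOIN
  :
  MYCPU#BUILDREPORT
  END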
- Internetwork dependency
- It is a simpler implementation, based on the distributed product. Use this type of dependency (shown in the sketch at the end of this section) when:
- The local HCL Workload Automation environment is distributed.
- You want to search for a remote predecessor job instance only in the plan currently running (production plan) on the remote environment.
- You need to match any predecessor instance in the remote plan, rather than one specific predecessor instance.
- You can accept waiting for the polling interval to expire before being updated about remote job status transitions.
- You do not mind using different syntaxes and configurations depending on whether the remote HCL Workload Automation environment is distributed or z/OS.
- You do not mind using a proprietary connection protocol for communicating with the remote engine.
- Cross dependency
- It is a more comprehensive implementation. Use this type of dependency when:
- Your local HCL Workload Automation environment can be either distributed or z/OS.
- You want to search for the remote predecessor instance also among the scheduled instances that are not yet included in the plan currently running on the remote engine.
- You want to match a precise remote predecessor instance in the remote engine plan. To do this, you can use different out-of-the-box matching criteria.
- You want your dependency to be updated as soon as the remote job instance changes status. To do this, the product uses asynchronous notifications from the remote engine to the local engine.
- You want to use the same syntax and configuration regardless of whether the local HCL Workload Automation environment is distributed or z/OS.
- You want to use HTTP or HTTPS connections for communicating with the remote engine.
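As a sketch of the simpler case, an internetwork dependency is expressed with the follows keyword and a network agent workstation, whose name is separated from the remote predecessor by two colons. The names NWAGENT, REMCPU, and the job stream and job names below are hypothetical, and the network agent workstation is assumed to be already defined and pointing to the remote engine. Cross dependencies are defined differently (typically through a shadow job on the local engine that is bound to the remote predecessor instance) and are not shown here.

  # Hypothetical internetwork dependency: NWAGENT is a network agent workstation
  # that points to the remote engine; the string after :: identifies the remote
  # predecessor in the remote production plan
  SCHEDULE MYCPU#LOCALJS
  FOLLOWS NWAGENT::REMCPU#EXTRACT.LOADDATA
  :
  MYCPU#LOCALJOB
  END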