Customizing your job stream

This scenario describes how you can use the Dynamic Workload Console to create a job stream, schedule it to run at specified times, and change its behavior based on the day when it is scheduled to run.

Overview

A sales department manager needs to collect sales report data both at the business unit level and at the corporate level. For this reason, the department manager needs reports on both a weekly and a monthly basis. Report data is stored in two different directories, as follows:
  • Data for weekly reports is stored in a set of files located in the directory /reports/weekly.
  • Data for monthly reports is stored in a set of files located in the directory /reports2/monthly.

The job stream used for generating the reports has a dependency on the presence of these files. To collect the required data, the HCL Workload Automation administrator creates one job stream with two different run cycles, one scheduled to run on a weekly basis and one scheduled to run on a monthly basis.

Each run cycle references a different variable table, which contains the variable and the related value used to define the path where the correct input files are located.
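In the composer scheduling language of HCL Workload Automation, the two variable tables could be sketched as follows. This is an illustrative fragment only; exact keywords may vary by product version:

```
VARTABLE SC1_WEEKLY_DATA_TABLE
ISDEFAULT
MEMBERS
REP_PATH "/reports/weekly"
END

VARTABLE SC1_MONTHLY_DATA_TABLE
MEMBERS
REP_PATH "/reports2/monthly"
END
```

The same REP_PATH variable name appears in both tables; which value is used depends on which table is in effect when the variable is resolved.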

Creating the job stream and the related objects

To create all the database objects required to reach the business goal, the HCL Workload Automation administrator uses the Workload Designer on the Dynamic Workload Console.
  1. He logs in to the Dynamic Workload Console and, from the navigation toolbar, he clicks Administration > Workload Design > Manage Workload Definitions. The Workload Designer opens.

    Using the New menu in the Working List section, the administrator can create all the necessary objects. He can also search for existing objects to edit and insert in the plan.

  2. The administrator selects New > Variable Table to create the two variable tables required to provide the two different values for the input file paths.
    1. He creates a variable table with name SC1_WEEKLY_DATA_TABLE and sets it as the default table. The path to the files required to generate the weekly reports is indicated by the REP_PATH variable, to which he assigns the value "/reports/weekly".
    2. He creates a variable table with name SC1_MONTHLY_DATA_TABLE. The path to the files required to generate the monthly reports is indicated by the REP_PATH variable, to which he assigns the value "/reports2/monthly".
  3. The administrator selects New > Job Definition > UNIX Job Definition to create the job definitions that generate the reports. Each job definition runs a script that receives the value of the REP_PATH variable as an input argument. He creates the following job definitions:
    1. The job definition named SC1_PARSE_DATA runs on the relevant workstation, logging in as root. Its task string is: "/reportApp/parseData.sh ^REP_PATH^".
    2. The job definition named SC1_PROCESS_DATA runs on the relevant workstation, logging in as root. Its task string is: "/reportApp/processData.sh ^REP_PATH^".
    3. The job definition named SC1_CREATE_REPORTS runs on the relevant workstation, logging in as root. Its task string is: "/reportApp/createReports.sh ^REP_PATH^".
  4. The administrator selects New > Job Stream to create the job stream which contains the jobs. The job stream is named SC1_RUN_REPORTS and runs on the relevant workstation.
  5. The administrator selects Add to Selected > Run Cycle > Inclusive to define two run cycles for the job stream, as follows:
    1. The run cycle named SC1_WEEKLY_RCY uses the variable table SC1_WEEKLY_DATA_TABLE, which contains the value for the file path to be used for generating the weekly report. The run cycle also specifies that the job stream is to run once per week on Friday.
    2. The run cycle named SC1_MONTHLY_RCY uses the variable table SC1_MONTHLY_DATA_TABLE, which contains the value for the file path to be used for generating the monthly report. The run cycle also specifies that the job stream is to run once per month on the 27th day.
  6. The administrator selects Add to Selected > Add Dependency > File to specify a dependency on the files containing the data used for report generation. He uses the REP_PATH variable to define the path to the required files.
  7. The administrator searches for the job definitions he previously created (SC1_PARSE_DATA, SC1_PROCESS_DATA, SC1_CREATE_REPORTS) and adds them to the job stream.
  8. The administrator creates a plan lasting 30 days to generate multiple instances of the job stream.
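The objects built in the steps above could be expressed in the composer scheduling language roughly as follows. This is a sketch: the workstation name WKS1 and the input file name sales_data.csv are placeholders, and keyword details may vary by product version. Only one of the three job definitions is shown; the other two follow the same pattern:

```
$JOBS
WKS1#SC1_PARSE_DATA
 SCRIPTNAME "/reportApp/parseData.sh ^REP_PATH^"
 STREAMLOGON root

SCHEDULE WKS1#SC1_RUN_REPORTS
ON RUNCYCLE SC1_WEEKLY_RCY VARTABLE SC1_WEEKLY_DATA_TABLE
   "FREQ=WEEKLY;INTERVAL=1;BYDAY=FR"
ON RUNCYCLE SC1_MONTHLY_RCY VARTABLE SC1_MONTHLY_DATA_TABLE
   "FREQ=MONTHLY;INTERVAL=1;BYMONTHDAY=27"
OPENS "^REP_PATH^/sales_data.csv"
:
WKS1#SC1_PARSE_DATA
WKS1#SC1_PROCESS_DATA
WKS1#SC1_CREATE_REPORTS
END
```

The OPENS keyword expresses the file dependency; because its path also uses ^REP_PATH^, each instance waits for files in the directory that matches its own run cycle.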

As a result, because each run cycle references its own variable table, the REP_PATH variable is assigned a different value depending on which run cycle applies.

In this way, the job stream instances have a dependency on a different set of files depending on the type of report they have to produce, either monthly or weekly, as follows:
  • The job stream instances that generate the weekly report have a dependency on the files located in the /reports/weekly directory.
  • The job stream instances that generate the monthly report have a dependency on the files located in the /reports2/monthly directory.
Furthermore, the name of the target directory is correctly substituted into the task string of the three jobs run by every job stream instance, as follows:
  • The jobs run by job stream instances that generate the weekly report run shell scripts with the directory /reports/weekly as an input argument.
  • The jobs run by job stream instances that generate the monthly report run shell scripts with the directory /reports2/monthly as an input argument.
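Concretely, after variable substitution the task string of the SC1_PARSE_DATA job resolves as shown below; the other two jobs behave the same way:

```
Weekly instance  (SC1_WEEKLY_DATA_TABLE applies):  /reportApp/parseData.sh /reports/weekly
Monthly instance (SC1_MONTHLY_DATA_TABLE applies): /reportApp/parseData.sh /reports2/monthly
```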
The administrator can therefore define a single job stream with two different run cycles and ensure that the appropriate reports are generated on the correct dates without further user intervention.