Installation

Compatibility

The IJP Demo Job Scripts require an IJP version of 2.0.0 or greater.

Prerequisites

  • The IJP Demo Scripts distribution: ijp.demo-1.0.0-distro.zip
  • An ICAT Job Portal (server) installation, from which jobs can be submitted. This could be on the batch server itself (e.g. when using unixbatch) or could be remote.
  • A local IJP batch server. The sample unixbatch server may be useful for local development and testing.
  • The IJP server should be configured to submit jobs to the batch server. When testing, the IJP should be configured with only this batch server, so it is guaranteed to "choose" to submit jobs to it.
  • The batch server should be configured to receive jobs from the IJP server.
  • Python (version 2.4 to 2.7) installed on the batch server.
  • sleepcount is a bash script, so requires bash on the batch server.
  • An IDS client installed on the batch server.
  • An ICAT client installed on the batch server.
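The Python and bash requirements above can be checked on the batch server with a short script such as the following. This is a hypothetical helper, not part of the distribution; it is written to run under Python 2.4–2.7 as well as Python 3.

```python
# Illustrative prerequisite check for the batch server (hypothetical
# helper; not part of the IJP Demo Scripts distribution).
import os
import sys

def python_version_ok(version_info=sys.version_info):
    """True if the interpreter is in the 2.4-2.7 range required above."""
    major, minor = version_info[0], version_info[1]
    return major == 2 and 4 <= minor <= 7

def bash_available(path_env=None):
    """True if a 'bash' executable is found on PATH (needed by sleepcount)."""
    if path_env is None:
        path_env = os.environ.get("PATH", "")
    for d in path_env.split(os.pathsep):
        candidate = os.path.join(d, "bash")
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return True
    return False

if __name__ == "__main__":
    print("Python 2.4-2.7: %s" % python_version_ok())
    print("bash on PATH:   %s" % bash_available())
```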

Summary of steps

  1. Note that when the IJP server and batch server are separate systems, the installation must be performed on both.
  2. Follow the generic installation instructions.
  3. Update the demojobs.properties file with the target locations for the jobtype descriptions and batch scripts. The scripts must be installed on the batch server; the jobtypes must be installed in the IJP configuration in the glassfish domain. When the IJP and batch server are separate systems, run the installation once on each: define only the glassfish location and admin port for the IJP server, and only the scripts location for the batch server.
  4. Change the dataset types, facility name and data format details in the properties file; the chosen values must be existing entities in the ICAT instance used by the IJP. See below for more details.
  5. Glassfish must be running when the jobtype descriptions are installed.
  6. The batch scripts folder must be visible to the batch system userids that will be used to run jobs (it should be in their PATH). Current IJP batch servers use sudo to execute scripts; one consequence is that the scripts folder must also be permitted by the sudoers security policy (typically, by listing it in secure_path in /etc/sudoers).
  7. Check that it works.
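For step 6, the secure_path entry in /etc/sudoers might look like the following, assuming the scripts have been installed in /opt/ijp/bin (the other paths are illustrative defaults and will vary by system):

```
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/opt/ijp/bin
```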

The demojobs.properties file

glassfish
When installing the jobtype descriptions in an IJP server, set this to the path to the glassfish instance in which the IJP server is deployed, e.g. /home/ijp/glassfish4. This property should be omitted when installing on a batch server that is not also an IJP server.
port
The admin port of the glassfish instance, e.g. 4848. When the glassfish property is specified, this must be specified as well.
ijp.scripts.dir
When installing on a batch server, set this to the full path of the folder into which the job scripts should be placed, e.g. /opt/ijp/bin. This property should be omitted when installing on an IJP server that is not also a batch server.
dataset.type.1
Should be the name of an existing ICAT dataset type. Most of the jobtype descriptions refer to this. It is used to populate the list of available dataset types when the jobtype is selected in the IJP. The initial value is TestDatasetType.
dataset.type.2
Should be the name of an existing ICAT dataset type. Several of the jobtype descriptions refer to this (in addition to dataset.type.1). It is used to populate the list of available dataset types when the jobtype is selected in the IJP. The initial value is TestDatasetType2.
facility.name
Should be the name of an existing Facility in ICAT; it is used by the create_datafile and copy_datafile job scripts. The initial value is TestFacility.
data.format.name
Should be the name of an existing ICAT data format; it is used by the create_datafile job script. The initial value is TestDataFormat.
data.format.version
Corresponding version-string for data.format.name. The initial value is 0.1.
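Putting the properties together, a demojobs.properties file for a combined installation (where the IJP server and batch server are the same system) might look like this; the values shown are the example paths and initial values from the descriptions above, and should be adjusted to match your installation:

```
# demojobs.properties - example for a combined IJP + batch server.
# Omit glassfish/port on a batch-only server; omit ijp.scripts.dir
# on an IJP-only server.
glassfish           = /home/ijp/glassfish4
port                = 4848
ijp.scripts.dir     = /opt/ijp/bin
dataset.type.1      = TestDatasetType
dataset.type.2      = TestDatasetType2
facility.name       = TestFacility
data.format.name    = TestDataFormat
data.format.version = 0.1
```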

Check that the job scripts work

In a command shell on the batch server, go to the scripts installation directory (e.g. /opt/ijp/bin) and run:

test_args.py --datasetIds=1,2,3 --datafileIds=4,5,6

This should succeed and report the supplied dataset/datafile IDs. (It will also report that other properties have not been supplied, but this is OK.)
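The sketch below shows one way a job script might split IJP-style --datasetIds/--datafileIds options into lists of IDs. It is a hypothetical illustration of the argument format, not the actual test_args.py, and is written to run under Python 2.4–2.7 as well as Python 3.

```python
# Hypothetical sketch of parsing IJP-style job options; the real
# test_args.py may differ.
def parse_id_option(arg, name):
    """Return the integer IDs from an argument '--name=1,2,3', or []."""
    prefix = "--%s=" % name
    if not arg.startswith(prefix):
        return []
    value = arg[len(prefix):]
    return [int(part) for part in value.split(",") if part]

def parse_job_args(argv):
    """Collect dataset and datafile IDs from a job's argument list."""
    dataset_ids, datafile_ids = [], []
    for arg in argv:
        dataset_ids.extend(parse_id_option(arg, "datasetIds"))
        datafile_ids.extend(parse_id_option(arg, "datafileIds"))
    return dataset_ids, datafile_ids

if __name__ == "__main__":
    ds, df = parse_job_args(["--datasetIds=1,2,3", "--datafileIds=4,5,6"])
    print("datasetIds:  %s" % ds)
    print("datafileIds: %s" % df)
```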

In a browser on any suitable system, launch the IJP by visiting https://ijp-server-name:8181/ijp (where ijp-server-name is the domain name of the IJP server), and login with the credentials defined in ijp.properties. Then:

  • Choose "date" from the list of Job Types
  • Click "Submit Job"; this should bring up a dialog "date Options" with buttons labelled Submit and Close.
  • Click "Submit". The resulting dialog should display a single Submitted Job ID. Note the ID, and click Close.
  • Click on "Show job status panel"; this should launch a separate dialog listing known jobs and their statuses.
  • At or near the top of the list should be an entry for the noted ID, with name "date". If the Status is Queued, wait until it changes to Completed. Once Completed, click Display Job Output.
  • The output display should show that the job has run; at or near the end should be a line that ends with "date ending with code 0".

Repeat for other job types to test other aspects of the IJP, e.g.:

sleepcount
Asks for a sleep duration and number of iterations. It should be possible to observe the output before the job completes.
test_args_multi
Allows the user to select multiple datasets and datafiles and submit them either to a single job, or one job per selection. Output reports the IDs of the selected datasets/datafiles.
create_datafile
Allows the user to select one or more datasets; requests a filename and (one-line) contents. For each selected dataset, runs a (separate) job that creates the file in that dataset.
copy_datafile
The user should select a single target dataset and a single datafile (in a different dataset); the job creates a copy of the datafile in the target dataset. Note that the IJP allows selection of multiple datasets and/or datafiles and will run multiple jobs, but in those cases the jobs will fail.