Installation

Compatibility

The IDS will work with ICAT version 4.3.0 or greater and requires plugins implementing version 1.3.x of the IDS plugin interface.

Prerequisites

  • The ids distribution: ids.server-1.7.0-distro.zip
  • A suitable deployed container to support a web application. This is assumed here to be glassfish; the setup also recognises wildfly, though only glassfish is fully supported at present. Testing has been carried out with Glassfish 4.0. Glassfish installation instructions are available.
  • A deployed plugin or plugins for the storage mechanism you wish to use. Please see the plugin documentation for the interface you must implement. You might also like to look at the file storage plugin as an example.
  • Python (version 2.4 to 2.7) installed on the server. A quick command line check of the container tooling and Python is sketched below.
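Before going further it can be worth confirming the prerequisites from the command line. The commands below are only an illustrative sanity check (asadmin is the Glassfish administration tool; adjust the path if it is not already on your PATH):

  # Should report a Python between 2.4 and 2.7
  python -V

  # Should report the Glassfish version if the container tooling is installed
  asadmin version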

Summary of steps

  1. Please follow the generic installation instructions; a rough outline is sketched below.
  2. Check that it works.
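The generic installation instructions are the authoritative reference; as a rough sketch, and assuming the standard ICAT setup script shipped in the distribution, the sequence is typically:

  # Unpack the distribution and move into the unpacked directory
  # (its exact name may vary)
  unzip ids.server-1.7.0-distro.zip
  cd ids.server

  # Edit ids-setup.properties and ids.properties as described below, then
  ./setup configure
  ./setup install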

The ids-setup.properties file

container
Values must be chosen from the TargetServer enumeration, though only Glassfish is working properly at the moment.
home
is the top level of the container installation. For glassfish it must contain "glassfish/domains" and for wildfly it must contain jboss-modules.jar.
port
is the administration port of the container, which is typically 4848 for glassfish and 9990 for wildfly.
secure
must be set to true or false. If true then only https and not http connections will be allowed.
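Putting these together, an illustrative ids-setup.properties for a glassfish installation might look like the following. These are example values only; the home path in particular must be adjusted for your machine, and the generic installation instructions describe the exact form accepted for each property:

  container = glassfish
  home      = /home/glassfish/glassfish4
  port      = 4848
  secure    = true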

The ids.properties file

General Properties

icat.url
The url of the machine hosting the icat service. It should normally just have the scheme, the hostname and the port. For example: https://example.com:443
plugin.zipMapper.class
The class name of the ZipMapper which defines the Zip file structure you want. The class must be deployed in the lib/applibs directory of your domain and must be packaged with all its dependencies.
plugin.main.class
The class name of the main storage plugin. The class must be deployed in the lib/applibs directory of your domain and must be packaged with all its dependencies.
plugin.main.properties
Optional property file for the main storage plugin.
cache.dir
The location (absolute or relative to the config directory of the domain) of a directory to hold mostly zip files.
preparedCount
The number of preparedId values from prepareData calls to remember.
processQueueIntervalSeconds
The frequency of checking the process queue. This is used both for cleaning old information from memory and for triggering movements between main and archive storage (if selected).
rootUserNames
A space separated list of users allowed to make the getServiceStatus call. The user name must include the mechanism if the authenticators have been configured that way.
sizeCheckIntervalSeconds
How frequently to check the cache sizes and clean up if necessary.
readOnly
If true, write operations (put and delete) are disabled.
linkLifetimeSeconds
The length of time in seconds to keep the links established by the getLink call. If this is set to zero then the getLink call will return a NotImplementedException.
reader
Space separated ICAT authenticator plugin mnemonic and credentials for a user permitted to read all datasets, datafiles, investigations and facilities. For example: db username root password secret.
key
Optional key value. If specified this contributes to the computation of a cryptographic hash added to the location value in the database. The ids plugins do not see the hash. The key must of course be long enough to be secure and must be kept private.
maxIdsInQuery
The number of literal id values to be generated in an ICAT query. For Oracle this must not exceed 1000.
log.list
Optional. If present it specifies a set of call types to log via JMS calls. The types are specified by a space separated list of values taken from READ, WRITE, LINK, MIGRATE, PREPARE and INFO.
jms.topicConnectionFactory
Optional. If present it overrides the default JMS connection factory.
logback.xml
This is optional. If present it must specify the path to a logback.xml file. The path may be absolute or relative to the config directory. The file ids.logback.xml.example may be renamed to ids.logback.xml to get started.
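As a sketch, the general section of an ids.properties might look like the following. The plugin class names are purely illustrative placeholders: substitute the classes provided by the plugin you have actually deployed. All other values are examples to adapt, and the optional properties (key, log.list, jms.topicConnectionFactory and logback.xml) are omitted here:

  icat.url = https://example.com:443
  plugin.zipMapper.class = org.example.ids.MyZipMapper
  plugin.main.class = org.example.ids.MyMainStorage
  plugin.main.properties = main.storage.properties
  cache.dir = /home/glassfish/ids/cache
  preparedCount = 10000
  processQueueIntervalSeconds = 5
  rootUserNames = db/root
  sizeCheckIntervalSeconds = 60
  readOnly = false
  linkLifetimeSeconds = 3600
  maxIdsInQuery = 500
  reader = db username root password secret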

Properties for archive storage

If you are not using archive storage then all of these properties should be omitted.

plugin.archive.class
The class name of the archive storage plugin. The class must be deployed in the lib/applibs directory of your domain and must be packaged with all its dependencies.
plugin.archive.properties
Optional property file for the archive storage plugin.
writeDelaySeconds
The amount of time to wait before writing to archive storage. This exists to allow enough time for all the datafiles to be added to a dataset before it is zipped and written.
startArchivingLevel1024bytes
If the space used in main storage exceeds this then datasets will be archived (oldest first) until the space used is below stopArchivingLevel1024bytes.
stopArchivingLevel1024bytes
See startArchivingLevel1024bytes.
storageUnit
May be dataset or datafile and is not case sensitive. A value of "dataset" means that a whole dataset of files is zipped up and stored as a single file, whereas "datafile" causes datafiles to be stored individually.
tidyBlockSize
The number of datafiles or datasets to retrieve in one call when requesting archiving because space on main storage is low.
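For example, with dataset storage and archiving triggered at roughly 5 GB of main storage usage (the *1024bytes values are counts of 1024-byte units, so 5000000 corresponds to about 5.1 GB), an illustrative archive section might be:

  plugin.archive.class = org.example.ids.MyArchiveStorage
  plugin.archive.properties = archive.storage.properties
  writeDelaySeconds = 60
  startArchivingLevel1024bytes = 5000000
  stopArchivingLevel1024bytes = 4000000
  storageUnit = dataset
  tidyBlockSize = 500

Again the class name is a placeholder for whichever archive plugin you have deployed.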

Properties for file checking

When a datafile is added to the IDS its length and checksum are computed and stored in ICAT. File checking, if enabled, cycles through all the stored data making sure that they can be read and that files have the expected size and checksum.

filesCheck.parallelCount
This must always be set; if non zero then the readability of the data will be checked. The behaviour depends upon whether or not archive storage has been requested. In the case of single level storage the check is done in groups of files, where the group size is defined by this parameter. If archive storage has been requested then only the archive is checked; each file in the archive holds a complete dataset and this parameter then defines how many dataset files will be checked in parallel.

In the case of checking datasets in the archive storage these are unzipped on the fly to compute the checksum of each file inside the zip file as well as its length.

If the archive storage has a long latency then it is useful to have a "large" value; however, a thread is started for each stored file, so the value of this parameter should not be too large.

filesCheck.gapSeconds
The number of seconds to wait before launching a check of the next batch of datafiles or datasets.
filesCheck.lastIdFile
The location of a file which is used to store the id value of the last datafile or dataset to be checked. This is so that if the IDS is restarted it will continue checking where it left off. If this file is deleted the IDS will restart checking from the beginning. The parameters filesCheck.parallelCount and filesCheck.gapSeconds should be set so that the data are all checked with the desired frequency but without excessive I/O. A nagios plugin might check that this file is being written periodically and that its contents change.
filesCheck.errorLog
The file with a list of errors found. The file is not kept open but instead is opened in append mode each time a problem is spotted and then closed. A nagios plugin might be set up to watch this file. Entries in the file are date stamped and new entries are simply appended without regard for the existence of an entry for the same file.
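An illustrative file checking section, in which a batch of five datasets (or a group of five files for single level storage) is checked roughly every 30 seconds:

  filesCheck.parallelCount = 5
  filesCheck.gapSeconds = 30
  filesCheck.lastIdFile = /home/glassfish/ids/lastIdFile
  filesCheck.errorLog = /home/glassfish/ids/errorLog

The two file paths are examples; any location writable by the container user will do.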

Check that the ids server works

Enter a url of the form https://example.com:443/ids/ping into a web browser and it should respond: IdsOK. Note that the url is that of the machine hosting the IDS followed by "/ids/ping".
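The same check can be made from the command line, for example with curl (the -k flag skips certificate verification and is only appropriate for a quick test against a self-signed certificate):

  curl -k https://example.com:443/ids/ping

The expected response is IdsOK.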