Overview

This document has four sections, describing installation, post-installation work, performance and the admin interface.

Installation

Prerequisites

  • The icat distribution: icat.server-4.8.0-distro.zip
  • Java 8
  • A suitable deployed container. The installation scripts support Glassfish and to a lesser extent WildFly. Testing has been carried out with Glassfish 4.0. Glassfish installation instructions are available.
  • A database, as described in the Database installation instructions, installed on the server
  • Deployed ICAT authenticators.
  • Python (version 2.4 to 2.7) and the python-suds package installed on the server.
  • MySQL-python must be installed if you have a MySQL 4.2.x schema you need to upgrade.
  • cx_Oracle must be installed (in addition to an Oracle client) if you have an Oracle 4.2.x schema you need to upgrade.
  • icat-setup version 1.1.x or higher must be installed if you have a 4.2.x schema you need to upgrade and you wish to preserve existing rules.

Summary of steps

  1. Upgrade the database schema if you already have an ICAT installation.
  2. If you wish to install multiple servers each running an ICAT connected to the same database please see Installing a group of ICATs.
  3. Follow the generic installation instructions.
  4. Check that it works.

Installing a group of ICATs

If your facility depends upon a single ICAT instance then ingestion of data can be held up by a user making an expensive query.

To avoid this it is suggested that you install multiple servers each running a Glassfish with an ICAT but all sharing one database. However there are a number of opportunities to get things wrong in the setup so it is recommended that you install the central machine first and make sure that it works before adding in the satellites. Ingestion can be directed to one node and the other nodes can be load balanced for user access by, for example, an Apache web server. In this documentation one machine is referred to as the central one and the others are referred to as satellites.

All machines must use the same database. If the central machine is running the database then ports must be opened to the satellite machines. By default this is 3306 for MySQL and 1521 for Oracle. Ports must also be opened on the central machine for JMS and IIOP, which are by default 7676 and 3700 respectively. In addition it seems that the ORB (IIOP) makes unusual use of the ephemeral ports, which requires that they are also open on the central machine. The set of ephemeral ports is large; for unix it may be found at /proc/sys/net/ipv4/ip_local_port_range. Security is not enabled for either JMS or IIOP communication so the firewalls should normally be configured to only allow traffic from the set of satellite machines.

Authentication can either be carried out on the central machine, which has the advantage that authenticator logs only build up on that machine, or it can be handled by the satellites, so distributing the load better. Even if you choose to use central authentication the authenticators must, unfortunately, still be installed on the satellites: Glassfish seems to get confused if the JNDI resources are not recognised locally on the satellites. To enable central authentication you need a line in the icat.properties file on each satellite machine for each authenticator of the form: authn.XXX.hostPort = <central machine>:3700 where XXX is one of the values listed in the authn.list property.

Lucene calls must also be directed to the central machine with a line in each satellite's icat.properties file of the form: lucene.hostPort = <central machine>:3700. This is the only lucene property that should be set on the satellites. It is essential that this property is set; otherwise each node will only see changes made via that machine.

JMS messages must all be sent to the central machine. This is important; otherwise cache refresh messages may be missed. All machines may produce these messages and all machines listen for them. To configure this, add an entry to the icat-setup.properties file (not the icat.properties file) of the satellite machines of the form: jmsHostPort = <central machine>:7676
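Putting the satellite settings together, a satellite's configuration might look like the following sketch. The host name central.example.com is illustrative, the authn.db.hostPort line applies only if you opt for central authentication, and it must be repeated for each mnemonic in authn.list:

```properties
# icat.properties on each satellite (illustrative values)
authn.db.hostPort = central.example.com:3700
lucene.hostPort = central.example.com:3700

# icat-setup.properties on each satellite
jmsHostPort = central.example.com:7676
```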

You could then set up an Apache front end to do load balancing. This will probably just connect to the satellites leaving the central machine to handle ingestion of data. See Apache front end for one way of doing this.

Schema upgrade

Any existing lucene database should be removed. See the icat.properties file for the value of lucene.directory and ensure that the directory specified there is empty.
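The check above can be scripted. Below is a minimal sketch, assuming a Java-style icat.properties file in the current directory (adjust the path for your installation); it is not part of the distribution:

```python
# Illustrative sketch: read lucene.directory from icat.properties and report
# whether the index directory still contains files that should be removed.
import os


def read_property(path, key):
    """Return the value of `key` from a Java-style properties file, or None."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("#") or "=" not in line:
                continue
            k, _, v = line.partition("=")
            if k.strip() == key:
                return v.strip()
    return None


if __name__ == "__main__":
    lucene_dir = read_property("icat.properties", "lucene.directory")
    if lucene_dir and os.path.isdir(lucene_dir) and os.listdir(lucene_dir):
        print("lucene.directory %s is not empty - remove its contents" % lucene_dir)
```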

The database schema must be upgraded in steps depending upon how old your icat installation is.

Upgrade 4.2.5 schema to 4.3.x

This is for upgrading a 4.2.5 schema to 4.3.2. If you have already upgraded to 4.3.x skip this step. Do not attempt to use this procedure on a 4.3.x schema!

  1. Back up the database in case it should get into a state from which recovery is impractical.
  2. Run the get_rules program to save the rules in the format accepted by icat-setup. This must be run as somebody who has read access to the rules. For example:

    ./get_rules.py https://example.com:8181 db username root password password

    The program should report how many rules it has saved and where.
  3. Ensure that nobody tries to use ICAT while it is being upgraded - the simplest approach is to undeploy the old one which can be done from the command line or by using a web browser and connecting on the admin port (typically 4848) and undeploying from there.
  4. For MySQL edit username, password, schema and dbhost at the top of the file ./upgrade_mysql_4_2_5.py and run it, or for Oracle edit username, password and db at the top of the file ./upgrade_oracle_4_2_5.py and run that. Note that the procedure has been tested on ICAT 4.2.5 but should work for earlier 4.2 versions. The script will first check that everything should work. If it reports problems fix them and try again. Once it gets past the checking stage it starts the conversion, which can take a long time (many hours for a production system). At the end you should have a 4.3.2 database. Any indices which had been created manually will have been removed.
  5. Install the new icat
  6. Restore the rules using the icat-setup tool. For example:
    icat-setup -f rules.authz https://example.com:8181 db username root password secret

    This assumes that you are in the directory where you ran get_rules.py which will have created a file rules.authz. The credentials (keyword value pairs following the authenticator mnemonic) should be those of one of the users specified in the rootUserNames of the icat.properties file.

    Please check rules.authz first as it will not work if it references entities that no longer exist. For example "Group" must be replaced by "Grouping" and InputDatafile is no longer part of ICAT. Also, because some problems have been found with rule conditions containing dots (such as "InvestigationUser [user.name='fred']"), these must now be re-expressed without dots.

Upgrade 4.3.x schema to 4.4.0

You may increase the size of the "what" column of the Rule table to 1024 to match the size the column has on a brand new installation, and you must modify the INVESTIGATIONUSER table as role values may no longer be null. Choose a name to use for the default role, for example 'member'. For MySQL:

UPDATE INVESTIGATIONUSER SET ROLE = 'member' WHERE ROLE IS NULL;
ALTER TABLE INVESTIGATIONUSER MODIFY COLUMN ROLE varchar(255) NOT NULL;
ALTER TABLE INVESTIGATIONUSER DROP FOREIGN KEY FK_INVESTIGATIONUSER_USER_ID;
ALTER TABLE INVESTIGATIONUSER DROP INDEX UNQ_INVESTIGATIONUSER_0;
ALTER TABLE INVESTIGATIONUSER ADD CONSTRAINT UNQ_INVESTIGATIONUSER_0 UNIQUE (USER_ID, INVESTIGATION_ID, ROLE);
ALTER TABLE INVESTIGATIONUSER ADD CONSTRAINT FK_INVESTIGATIONUSER_USER_ID FOREIGN KEY (USER_ID) REFERENCES USER_ (ID);

or for Oracle:

UPDATE INVESTIGATIONUSER SET ROLE = 'member' WHERE ROLE IS NULL;
ALTER TABLE INVESTIGATIONUSER MODIFY (ROLE varchar2(255) NOT NULL);
ALTER TABLE INVESTIGATIONUSER DROP CONSTRAINT UNQ_INVESTIGATIONUSER_0;
ALTER TABLE INVESTIGATIONUSER ADD CONSTRAINT UNQ_INVESTIGATIONUSER_0 UNIQUE (USER_ID, INVESTIGATION_ID, ROLE);

Upgrade 4.4.0 schema to 4.5.0

The mechanism for assigning unique id values for each entity in ICAT has been changed. Previously a sequence table (called SEQUENCE) was used to hold the last value used. This has been changed to use the native DBMS mechanism: making the id columns AUTO_INCREMENT for MySQL, or using a Sequence (rather than a table called SEQUENCE) in the case of Oracle. Do not omit this step; otherwise the id values for new rows will not be set correctly and you will run into problems with duplicate values. For MySQL run:

    mysql -u icat -p icat < upgrade_mysql_4_4.sql

or for Oracle run:

    sqlplus icat @upgrade_oracle_4_4.sql

In both cases it is assumed that the tables are owned by user "icat". The MySQL script is simply a list of ALTER TABLE statements for each table in the 4.4.0 schema. The oracle script takes the last sequence number from the SEQUENCE table and uses this to initialize a sequence. Note that the increment for the sequence must be exactly 50 and the start value must be at least 51 more than the number in the old SEQUENCE table.
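For illustration only (run the supplied script, not the statement below), the MySQL alterations take roughly this form for each table, with DATAFILE standing in for each table name:

```sql
-- Illustrative sketch: the real upgrade_mysql_4_4.sql covers every table.
ALTER TABLE DATAFILE MODIFY COLUMN ID BIGINT NOT NULL AUTO_INCREMENT;
```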

Upgrade 4.5.x or 4.6.x schema to 4.7.0

The Rule and DataCollection tables have changed. First dump the existing rules with:

    ./rules.py dump https://example.com:8181 db username root password secret > rules.ie

i.e. run the rules.py script in the unpacked distribution directory with the "dump" parameter, the url of your icat and then the authenticator plugin and credentials to identify a user specified in the rootUserNames list in icat.properties. This will redirect the dump into a file called rules.ie. Then undeploy the existing icat:

    ./setup uninstall -v

Then for MySQL run:

    mysql -u icat -p icat < upgrade_mysql_4_7.sql

or for Oracle run:

    sqlplus icat @upgrade_oracle_4_7.sql

In both cases it is assumed that the tables are owned by user "icat". The script will drop the Rule table and add a DOI column to the DataCollection table. Next install the new ICAT (which will recreate an empty Rule table). Finally run:

    ./rules.py load https://example.com:8181 db username root password secret < rules.ie

which will populate the Rule table. Only run this script once or you will get duplicate entries in the table.

In addition the Log table is no longer used and may be dropped after you have extracted any information from it that you need.

Upgrade 4.7.0 schema to 4.8.0

A column must be added to the RULE_ table. For MySQL or MariaDB run:

    mysql -u icat -p icat < upgrade_mysql_4_8.sql

or for Oracle run:

    sqlplus icat @upgrade_oracle_4_8.sql

In both cases it is assumed that the tables are owned by user "icat". This can be done while icat.server 4.7.0 is running as the old code is unaware of the addition of the new column.

The icat-setup.properties file

container
Values must be chosen from: TargetServer. Currently only Glassfish works properly.
home
is the top level of the container installation. For Glassfish it must contain "glassfish/domains" and for JBoss (wildfly) it must contain jboss-modules.jar.
port
is the administration port of the container which is typically 4848 for Glassfish and 9990 for JBoss.
secure
must be set to true or false. If true then only https and not http connections will be allowed.
db.driver
is the name of the jdbc driver, which must match the jar file installed in the container and your database.
db.url
url to connect to your database. For example: jdbc:mysql://localhost:3306/icat
db.username
username to connect to your database.
db.password
password to connect to your database.
db.target
This is optional and may be used to control the SQL generated by the JPA. Values must be chosen from: TargetDatabase
db.logging
This is optional; if set to one of the values in Eclipse Link logging.level it controls the logging of JPA generated SQL statements.
jmsHostPort
This is optional. It should only be specified on satellite machines when you have a group of machines working together as described at Installing a group of ICATs. It takes the form machine:port where the port will normally be 7676.

The icat.properties and icat.logback.xml files

If you wish to modify the provided logging levels then rename icat.logback.xml.example to icat.logback.xml and update the icat.properties file to reference it as explained below. The icat.properties file may need other changes:

lifetimeMinutes
Defines the lifetime of an ICAT sessionid. You should avoid making it have a long duration as this increases the risk if it is intercepted, lost or stolen.
rootUserNames
Is a space separated list of user identifiers having full access to all tables. The format of the user identifier is determined by the chosen authentication plugin. The authn_db and authn_ldap plugins may be configured to either return the simple user name or to prepend it with a name identifying the mechanism. For example, if there is an entry "root" in the database and the authn_db authenticator is configured without a mechanism then the user name to consider will be just "root"; however if it has been configured with a mechanism of "db" then the string "db/root" must be specified.
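For example, assuming authn_db is configured with mechanism "db" (an illustrative setup), the entry might read:

```properties
# Space separated; each identifier must match what the plugin returns
rootUserNames = db/root
```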
maxEntities
Restrict total number of entities to return in a search or get call. This should be set as small as possible to protect the server from running out of memory. However if you set it too small it may prevent users from doing reasonable things.
maxIdsInQuery
For handling INCLUDEs, ICAT may generate queries which are not acceptable to the database system. To avoid this problem such queries are broken down. This is the maximum size of each chunk which must not exceed 1000 for Oracle.
importCacheSize
the size of a cache used during import to avoid an excessive number of calls to the database. The cache is dropped after each call to import to ensure that authorization rules are enforced. As the cache is short-lived, modifications to ICAT are unlikely to result in stale information being used from the cache.
exportCacheSize
the size of a cache used during export to avoid an excessive number of calls to the database. The cache is dropped after each call to export to ensure that authorization rules are enforced. As the cache is short-lived, modifications to ICAT are unlikely to result in stale information being used from the cache.
authn.list
is a space separated set of mnemonics for the user to select the plugin in the login call. This must not reference plugins which are not installed, as plugins are checked when ICAT performs its initialisation; if plugins are missing ICAT will not start.
authn.<mnemonic>.jndi
is the jndi name to locate the plugin. When you installed the plugin a message would have appeared in the server.log stating the JNDI names. For example for authn_db you would expect to see java:global/authn_db.ear-1.0.0/authn_db.ejb-1.0.0/DB_Authenticator. There must be one such entry for each plugin.
authn.<mnemonic>.friendly
is optional. It gives a name that a tool might use to label the plugin.
authn.<mnemonic>.admin
is optional. Set to true if you wish to indicate that this authenticator should only be advertised to administration tools.
authn.<mnemonic>.hostPort
is optional. It should only be specified on satellite machines when you have a group of machines working together as described at Installing a group of ICATs and when you want to perform authentication on the central machine. The value takes the form machine:port where the port will normally be 3700.
notification.list
is optional. It is a space separated set of Entity names for which you wish to generate notifications. For each one there must be another line saying under what conditions you wish to generate a notification for the entity.
notification.<entity name>
a string of letters taken from the set "C" and "U" indicating the operations (create and/or update) for which you wish to be notified for that entity.
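For example (entity names chosen for illustration), to be notified of Dataset creations and updates but only Datafile creations:

```properties
notification.list = Dataset Datafile
notification.Dataset = CU
notification.Datafile = C
```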
log.list
is optional. If present it specifies a set of call types to log via JMS calls. The types are specified by a space separated list of values taken from READ, WRITE, SESSION, INFO.
logback.xml
This is optional. If present it must specify the path to a logback.xml file. The path may be absolute or relative to the config directory.
lucene.directory
This is optional. It is the path to a directory (whose parent must exist) in which to store the lucene index. If this is specified then lucene.commitSeconds and lucene.commitCount must both be specified. If it is omitted and lucene.hostPort is also omitted then lucene indices will not be created and the searchText() call will return nothing.
lucene.commitSeconds
the interval in seconds between committing lucene changes to disk and updating the index. If you set it to 300 then searchText() calls will see what was available at some time in the past (up to 5 minutes ago) and which is also currently present.
lucene.commitCount
the number of changes to accumulate before committing them to disk. If the number is too high there can be memory problems. Currently this is only used by the call to lucenePopulate.
lucene.hostPort
This is optional; if set, any other lucene settings will be ignored. It should only be specified on satellite machines when you have a group of machines working together as described at Installing a group of ICATs. The value takes the form machine:port where the port will normally be 3700.
key
This is optional but if there is an IDS server in use and it has a key for digest protection of Datafile.location then this key value must be identical.

Check that ICAT works

A small test program, testicat, will have been installed for you. This is a python script which requires that the suds client is available. It connects as one of the root users you defined in 'rootUserNames' in the icat.properties file. Invoke the script specifying the url of the machine on which the ICAT service is deployed (something like https://example.com:8181), the mnemonic for the chosen authentication plugin followed by the credentials for one of the root user names supported by that plugin. These credentials should be passed in as pairs of parameters with key followed by value. For example:

    testicat https://example.com:8181 db username root password secret

It should report:

Logged in as ... with 119.9... minutes to go
Login, search, create, delete and logout operations were all successful.

This script can be run at any time as it is almost harmless - it simply creates a "Group" with an unlikely name and removes it again.

In case of problems, first erase the directory /tmp/suds and try the testicat again. If it still fails, look at the log files: server.log and icat.log which can both be found in the logs directory below your domain. Look also at the relevant authenticator log.

Post-installation work

Fresh Install

If this is a fresh install then you can use the import facility to do the initial icat population or you could use the icat manager to create rules, a Facility and other high level entities.

If you are using Oracle, the type NUMBER(38, 19) will have been used for all floating point numbers. This constrains the values that can be stored; they may be truncated or rejected. To fix this please execute the SQL statements in fix_floats_oracle.sql.

In all cases

Populate the lucene index by using the icatadmin tool.

Performance

To improve performance:
  • Consider creating the indices defined in indices.sql. Indices can make a huge difference to the database performance but there is also a small cost for each index.
  • Make entities readable by anyone if they contain no sensitive information. This is generally the case for those entities that implement a many-to-many relationship. For example InvestigationUser relates Investigation to User but has no attributes. By making it world readable no access to Investigation or User is granted. An in memory cache of world readable entities is maintained by ICAT.
  • Add entries to PublicStep to allow the INCLUDE mechanism to be less costly. PublicStep is explained in the ICAT Java Client User Manual. Its contents are also held in an in-memory cache for performance.

The icatadmin tool

Administration operations have been added to the ICAT API and are accessible via the icatadmin tool which will have been installed by the setup.py script. It should be invoked as:

icatadmin <url> <plugin> <credentials>... -- <command> <args>...

to run a single command or

icatadmin <url> <plugin> <credentials>...

to be prompted for a series of commands as shown below. In either case if you specify '-' as the password you will be prompted for it. Note that in the single command case the "--" marker is needed to terminate the list of credentials. For example:

icatadmin https://example.com:8181 db username root password secret -- properties
Only users mentioned in the rootUserNames of the icat.properties file are authorized to use this command.
populate [<entity name>]
re-populates lucene for the specified entity name. This is useful if the database has been modified directly rather than by using the ICAT API. This call is asynchronous and simply places the request in a set of entity types to be populated. When the request is processed all lucene entries of the specified entity type are first cleared then the corresponding icat entries are scanned to re-populate lucene. To find what it is doing please use the "populating" operation described below. It may also be run without an entity name, in which case it will process all entities. The new lucene index will not be seen until it is completely rebuilt. While the index is being rebuilt ICAT can be used as normal, as any lucene updates are stored to be applied later.
populating
returns a list of entity types to be processed for populating lucene. Normally the first item returned will be being processed currently.
commit
instructs lucene to update indices. Normally this is not needed as it will be done periodically according to the value of lucene.commitSeconds.
clear
stops any population and clears all the lucene indices.