This document builds on the Core Fascinator documentation regarding the
07 October 2011
The easy answer is 'everything', and it is generally not a bad idea to take a complete copy of the entire application periodically, just in case something happens that this procedure does not account for.
This may not be possible for everyone, however, and even where it is, a full copy includes a lot of material that is not particularly important, so it may not be feasible for frequent (e.g. nightly) backups. This section therefore details the really critical parts of the system, from which you can perform a complete system rebuild, upgrade and/or migration.
In order of priority:
NOTE: Because pre-v1.1 installs used a much older version of Solr (1.4 vs. 3.3+), it is possible, although untested, that you can't just drop old audit logs in place on newer versions. That said, only from v1.2 has ReDBox started to submit meaningful ReDBox-specific audit entries to the log; older logs contain only the system-generated entries from the core Fascinator platform. Because these don't provide a lot of useful information, we didn't even recommend backing up audit logs on those versions of the product.
There are three types of security plugins used in Mint/ReDBox. If you have made significant customisations in this area, you may want to back up some additional files.
The ActiveMQ messaging system that ReDBox and Mint use to send system messages to each other, and internally for their own processes, is backed by a datastore on disk. Obviously it would be bad to destroy or lose this datastore while there are enqueued messages waiting to be processed. For the most part this isn't an issue though, since a build-up of enqueued messages indicates heavy load, something you should only ever see when sysadmins are performing administrative actions like a harvest or a system-wide reindex. Just don't back up your system whilst you are doing this kind of thing to it... but I'm fairly certain that's obvious.
You can see message activity on the administrative queues in the web portal, at a URL like /redbox/default/queues (you need to be logged in as an admin). You want to see '0' for every entry in the 'Size' column.
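If you want to automate this check before a backup, something along the following lines may help. Everything here is an assumption, not part of the product: the portal URL, the saved admin session cookie, and the grep pattern for scraping the Size column all depend on your install and the page's markup.

```shell
#!/bin/sh
# sizes_nonzero reads queue 'Size' values (one per line) on stdin and
# succeeds (exit 0) only if at least one of them is non-zero, i.e. if
# there are still enqueued messages and the backup should be delayed.
sizes_nonzero() {
    grep -qv '^0$'
}

# Usage sketch (hypothetical URL and cookie file; the page requires an
# admin login, and the scraping pattern must match your page's markup):
#   curl -s -b cookies.txt http://localhost:9000/redbox/default/queues \
#     | grep -Eo '[0-9]+' | sizes_nonzero && echo "queues busy - delay backup"
```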
Presumably most backups will be scheduled during off-peak periods, so you may even want to consider scheduling a graceful shutdown of the server beforehand. This serves a dual purpose: it prevents message loss, and it avoids active file locks from the web server in storage conflicting with your backup. Locks might occur if (for example) you have an ANDS harvester hitting you at night time and touching numerous objects in storage. These sorts of considerations are really up to individual institutions, since your backup process may already be careful enough to avoid file-lock issues anyway.
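As a rough sketch of such an off-peak window: stop both apps, copy the critical directories, restart. The install paths, the 'tf.sh' control script name, and the rsync destination below are all illustrative assumptions; substitute whatever your institution actually uses.

```shell
#!/bin/sh
# Sketch of a graceful backup window. Paths and the control script are
# assumptions - adjust them to match your installation.
REDBOX_HOME=${REDBOX_HOME:-/opt/redbox}
MINT_HOME=${MINT_HOME:-/opt/mint}
DEST=${DEST:-/backup/$(date +%F)}
RUN=${RUN:-}          # set RUN=echo for a dry run that only prints commands

do_backup() {
    # graceful shutdown: no in-flight messages, no web-server file locks
    $RUN "$REDBOX_HOME/server/tf.sh" stop
    $RUN "$MINT_HOME/server/tf.sh" stop
    # copy the critical data (storage here; add solr/activemq dirs as needed)
    $RUN mkdir -p "$DEST"
    $RUN rsync -a "$REDBOX_HOME/storage/" "$DEST/redbox-storage/"
    $RUN rsync -a "$MINT_HOME/storage/" "$DEST/mint-storage/"
    # bring everything back up
    $RUN "$REDBOX_HOME/server/tf.sh" start
    $RUN "$MINT_HOME/server/tf.sh" start
}
```

Invoke `do_backup` from a nightly cron entry; running it with `RUN=echo` prints the commands instead of executing them, which is handy for checking the paths first.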
We have deliberately avoided the issue of local customisations and configuration above. It is presupposed that whatever you do to an installation after building it is either:
This is generally pretty simple; out-of-the-box from v1.2 the default configuration looks like this:
It assumes that you are rebuilding a system in place (no harvest remap), with the possibility of a version upgrade. The migration script is smart enough to notice when form data is out-of-date and will try to upgrade it to v1.2. It will also back up your form data inside the object as a separate datastream before making any alterations, and add a line to the object's audit log letting you know it was altered by the upgrade script.
Migration - If you'd like to perform a migration to another system, another path on your server, or change the username under which you run the server, you MUST enable the '
Hopefully, this is actually the easy part. If you've gone to the trouble of ensuring that your backups are performed correctly, a system rebuild should be nice and smooth, with little extra overhead beyond a standard installation:
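To make the restore side concrete, here is a hedged sketch of unpacking a backed-up directory into a fresh install. The tarball layout, the paths, and the follow-up step of triggering a reindex from the admin portal are assumptions about how you took the backup, not a prescribed procedure.

```shell
#!/bin/sh
# restore_tarball DEST TARBALL - unpack one backed-up directory into a
# fresh install. Illustrative only; your backup format may differ.
restore_tarball() {
    dest=$1
    tarball=$2
    mkdir -p "$dest"
    tar -xzf "$tarball" -C "$dest"
}

# Usage sketch (hypothetical paths):
#   restore_tarball /opt/redbox/storage /backup/2011-10-07/redbox-storage.tar.gz
#   ...then start the server and, as admin, trigger a system-wide reindex.
```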
Presuming nothing explodes and the universe remains on an even keel, you'll find some logging output in '
NOTE: There were substantial changes to logging with the release of v1.1, and during the dev server upgrade prior to release I noticed this traffic going to the wrong log ('