I am going to blog about this parallel session in reverse order, because that way it makes more sense to me.
At the end of the session (but at the beginning of this post!) Devan Ray Donaldson of the University of Michigan reminded us what ‘trust’ (as in Trustworthy Digital Repositories, or TDRs) is all about: end users (those who have had no involvement in either the production or the archiving of a document) need some assurance that the document they get from an archive is, in fact, authentic: that it is what it is supposed to be and has not been tampered with or altered in any way. [BTW: that does not mean the archive guarantees that the information in the document is reliable. The archive does not know that. The only thing an archive can do is assure that what the end user gets is the same thing that originally came into the archive.]
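None of the speakers showed code, but the standard way archives back up that assurance is worth a quick sketch: record a checksum at ingest, recompute it later, and compare. The function names below are my own invention, not from any repository software mentioned in the session.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_fixity(path: str, recorded_checksum: str) -> bool:
    """True if the file still matches the checksum recorded at ingest."""
    return sha256_of(path) == recorded_checksum
```

If the stored checksum and the freshly computed one match, the archive can tell the end user that the bits are unchanged since ingest; it still says nothing about whether the content was reliable to begin with.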
Archives know that end users care about trust, about authenticity. So Donaldson wants to study how we communicate with end users about that authenticity. If we put a seal of approval on a document, will the end user trust it more than if we do not? That is an interesting question. Donaldson intends to use HathiTrust documents to test this, and to me that is the only ‘flaw’ in his plan, if ‘flaw’ is the word. HathiTrust contains digitized book pages, and that type of document is much easier to trust and regard as “authentic” than, e.g., e-mail. Donaldson agreed, but, as he said: you’ve got to start somewhere.
Next (in whichever order) came Olivier Rouchon of CINES, a large data centre in France (photo right; “Can’t I even have lunch without being photographed?” :-)). CINES finds itself in a strange political situation: as an organization CINES has a remit for only four years, but it also has the express mandate to do long-term preservation, and its clients ask for 30-year guarantees. Faced with this dichotomy, CINES has decided to seek certification as a trusted repository to a) lock in its mission, and b) attract larger volumes of data to be preserved.
CINES went through various (self-)audits to attain ever higher levels of certification. That took a lot of work. Rouchon estimates that 1 FTE of his 11 FTEs is constantly busy with audits. But, says Rouchon, ‘that should not stop you from doing it.’ First of all, it is mostly a lot of work the first time around. Once you have a good system in place, subsequent audits become business as usual. Secondly, CINES uses the audit system as an internal quality assessment instrument to keep improving the quality of the service. By comparing the outcomes of audits over time the organization can measure its progress.
The EU is now building a three-tiered certification system: the first level is the relatively lightweight Data Seal of Approval, then comes a self-audit, and the highest level of certification is awarded by an external audit. The APARSEN project recently did a number of test audits, among others at CINES, and will publish the results shortly.
Steve Knight from the National Library of New Zealand asked how we know that we can trust the auditors themselves. Rouchon trusts his own (internal) auditors, and part of the aim of the APARSEN test audits was to train auditors.
Having talked about trust, and about auditing trust, I now come to the last (first) presentation. Basically, it was about building all the capabilities you need to assure trust and prove trustworthiness into your system. It was also about not treating digital preservation as an issue (and a system!) that stands apart from the rest of your organization, but building an information system for your organization that integrates digital preservation requirements, making them ‘ubiquitous’.

Christoph Becker of TU Wien (photo right) told his audience that we have lots of models, concepts, and frameworks (OAIS, TRAC, RAC, DRAMBORA, PLATTER, etc. etc.), but ‘we still lack a holistic view.’ His team takes its cue from frameworks from the IT industry, such as ‘enterprise architecture’ and COBIT (goal-oriented, process-oriented, control-based), to build a maturity model based on CMM: you measure your maturity against a set of criteria to identify places for improvement … and then I lost the story. My mind tends to switch off when the discussion becomes abstract and high-level. It is a flaw, I know, but one I have to learn to live with.

The basic idea, however, integrating digital preservation, is a good one, and so is using existing industry frameworks, so for those of you who are better at high-level discussions, do check out Becker’s paper in the proceedings, which will come online soon. The paper is called “A Capability Model for Digital Preservation: Analysing Concerns, Drivers, Constraints, Capabilities and Maturities”.
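For readers less allergic to abstraction than I am, the core mechanic of a CMM-style maturity assessment is simple enough to sketch. Everything below is invented for illustration: the capability names, scores, and target level are mine, not taken from Becker’s model.

```python
# CMM-style maturity levels typically run 1 (initial) to 5 (optimizing).
# These capabilities and scores are hypothetical, purely for illustration.
TARGET_LEVEL = 3

assessment = {
    "preservation planning": 4,
    "ingest control": 2,
    "fixity monitoring": 5,
    "metadata management": 2,
}

def improvement_candidates(scores: dict, target: int) -> list:
    """Return the capabilities scored below the target maturity level."""
    return sorted(name for name, level in scores.items() if level < target)
```

The point of the exercise is exactly what Becker described: repeat the assessment periodically and the gap list tells you where to invest next.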