More Metadata, Less Process?
The following post was submitted by students enrolled in LIS2407 – Metadata at the University of Pittsburgh School of Information Sciences. For more information on the series, see the introductory post.
By Kira Condee-Padunova and Laureen Wilson
While there are compelling arguments for the use of standardized metadata schemas, many institutions still choose to rely on local metadata creation rules. This blog post discusses the pros and cons of standardized and local metadata schemas through the lens of “More Product, Less Process: Revamping Traditional Archival Processing,” an approach first introduced by Greene and Meissner in a 2005 article in The American Archivist regarding processing standards in archival repositories.
Greene and Meissner’s article has had a major influence on the archives field since its publication ten years ago. However, the concepts of MPLP, as the approach is often abbreviated, have not permeated the related, and sometimes overlapping, metadata field. Both archivists and metadata creators deal with backlogs of work that need to be completed in a timely manner, and neither starts a project in order to lock items up in some kind of processing purgatory; the goal is to make information accessible to users. Of course, many of the issues Greene and Meissner cite with archival processing procedures, such as the fastidious removal of metal fasteners, have no relevance for metadata creators. Even so, it may be worthwhile for metadata creators to consider the broader implications of MPLP and determine what would constitute a “golden minimum” for metadata creation.
The purpose of this blog post is not to recreate Greene and Meissner’s work in the metadata field, but, rather, to reconsider common practices among metadata creators and question whether the results they produce are worth the time they take. Perhaps the most common of these practices is the preference for standardized metadata schemas over locally created ones. Standardized metadata schemas have obvious benefits, but they can also consume large amounts of time and create problems for their users.
An institution that wishes to use a standardized metadata schema when creating its own records must first choose which standard to use. This requires research on the part of staff, which will likely be time-consuming, especially if the institution has no staff members with experience in the metadata field. Even with careful research, there is no guarantee that the standard chosen will be the best choice for the institution, and institutions may have a difficult time finding a standard that contains all the information relevant to their particular needs. Committing to a standard also requires that the institution remain up to date with any changes made to that standard.
Smaller institutions are more likely to lack a dedicated metadata employee and often rely on a small number of employees or volunteers to cover all types of work. Because of this, the time and effort involved in choosing and applying a standardized metadata schema may not be worth the potential benefits. Large institutions may have their own problems with standardized metadata schemas. Although more likely to have staff trained in and dedicated to creating metadata, large institutions are also more likely to rely on their IT departments to assist in metadata creation – departments that might not even be familiar with standardized metadata schemas.
On the other hand, relying on a local metadata schema can create its own problems for the institution and delay the release of items to users. The beauty of standardized metadata schemas is that everything is already set up. Instead of having to discuss which fields to add to a record, the most common fields are already there, waiting for data. There is no risk of new metadata creators coming in and complicating the record by adding more descriptors. Once each field has been populated (or not, as the case may be), the record can be published and the item made available to users. A standardized metadata schema might not provide the most in-depth information about an object, but it gives users enough to find the object they need.
Even in situations where a local metadata schema is already in use, switching over to a standardized metadata schema should improve the ability of users to discover relevant results. Indeed, metadata schemas are often very friendly to both creators and researchers. Dublin Core is lauded for using easy-to-understand descriptors so that those filling out a record can immediately understand what belongs in each field. This allows for simplicity on both ends: users can not only access the products faster, but also understand the label on each description to see whether the item is what they need. Since the descriptors are also clear from the creator’s side, the creation of metadata records can be left to volunteers or interns – after a sufficient training period – so that metadata professionals can focus on curating and maintaining their collection.
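To make that simplicity concrete, the sketch below builds a flat Dublin Core record using only Python’s standard library. The element names (title, creator, date, subject) are genuine Dublin Core terms; the enclosing record element and the sample values are hypothetical, chosen only for illustration:

```python
# A minimal sketch of a flat Dublin Core record, using only the
# Python standard library. The <record> wrapper and sample values
# are illustrative; title/creator/date/subject are real DC elements.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def make_record(fields):
    """Build a flat Dublin Core record from a dict of field -> value."""
    record = ET.Element("record")
    for name, value in fields.items():
        # Each descriptor is just one namespaced element with text.
        elem = ET.SubElement(record, f"{{{DC_NS}}}{name}")
        elem.text = value
    return record

record = make_record({
    "title": "Photograph of Main Street",
    "creator": "Unknown",
    "date": "1923",
    "subject": "Streets -- Pennsylvania",
})
print(ET.tostring(record, encoding="unicode"))
```

Because each descriptor is a single, self-describing field, a volunteer filling in the dictionary above needs no schema expertise, which is precisely the appeal discussed in this post.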
If the focus is less on perfection and more on production – essentially More Product, Less Process – then standardized metadata schemas offer more freedom to institutions. With any new project there will be growing pains, but, once over the hump of implementing a new schema, standardized metadata schemas allow institutions to collaborate more efficiently. The simplicity of the generalized descriptors may require some institutions to fit their materials into a narrower terminology, but the end result could be more helpful to the people who use the resources those institutions offer.
Greene, Mark A., and Dennis Meissner. “More Product, Less Process: Revamping Traditional Archival Processing.” The American Archivist 68, no. 2 (2005): 208-63. Accessed July 10, 2015. http://www.archivists.org/prof-education/pre-readings/IMPLP/AA68.2.MeissnerGreene.pdf.