Fast Forward: Painting from the 1980s
I had the opportunity to attend a curator-led staff tour of one of the Whitney’s current exhibitions before lunch. It was not directly related to project work, but it was interesting to get some context on the works in the show and the trends of the era.
Standardizing Museum Provenance for the Twenty-First Century
Prof. Pattuelli notified me of a talk livestreamed on 02/27/17 from the Yale Center for British Art about the Carnegie Museum provenance project.
Newbury, D. (2017, February 27). Standardizing Museum Provenance for the Twenty-First Century [Video]. New Haven, CT: Yale Center for British Art. Retrieved from https://youtu.be/YKJqINwZ–o
David Newbury, who gave the talk, is the lead developer of the ArtTracks project at the Carnegie, which has been ongoing for the past 3.5 to 4 years.
The project has taken much longer than Newbury had anticipated. Coming from an animation and data visualization background, he guessed the project would take only a matter of weeks or months, not realizing the complexity of provenance data.
The Carnegie’s Use and Rationale for Linked Data
The Carnegie Museum’s linked data was not necessarily envisioned as linked open data.
The Carnegie does, however, recognize the need for sharing data across institutions.
What is the value of museums sharing their data with the public?
Are museums:
- Publishers? (Yes)
- Researchers? (Yes)
Fundamentally, however, museums are collectors.
Despite researchers’ earlier hopes, linked data has not been successful in enabling web-scale AI (i.e., it has not made the internet machine-readable).
Nor does it enable interoperability/easier collaboration.
Nor does it automate reconciliation/reduce workload.
Linked data is one of multiple ways of potentially representing data, each with its own pros and cons.
The aim of the Carnegie project was to standardize provenance data so it could be presented as:
- Linked data
- JSON
- Text
- End goal: shared online scholarship
Need to preserve the nuances of provenance data contained within text
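To make that goal concrete, here is a minimal sketch of my own (not the ArtTracks data model; the field names and the example record are invented) of a single provenance period held as structured data and then emitted in two of the target forms, JSON and provenance-style text:

```ruby
require "json"

# One provenance period as a plain Ruby hash (hypothetical example;
# the field names are my own, not the ArtTracks schema).
period = {
  party:              "Gertrude Vanderbilt Whitney",
  acquisition_method: "purchase",
  location:           "New York, NY",
  begin_date:         "1931"
}

# The same record serialized as JSON, one of the target formats.
json = JSON.pretty_generate(period)
puts json

# And rendered as conventional human-readable provenance prose.
text = "Purchased by #{period[:party]}, #{period[:location]}, #{period[:begin_date]}."
puts text
# → Purchased by Gertrude Vanderbilt Whitney, New York, NY, 1931.
```

The point of a single canonical record feeding multiple serializations is that the text version stays in sync with the machine-readable versions instead of being maintained by hand.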
Four Advantages of Linked Data
Advantage One: Allows for linking to other authorities
Linking to outside authorities lends the museum itself authority (i.e., it certifies that the museum is providing accurate information to patrons).
Name authorities allow museums to provide authoritative info without having to expend money on research
Name authorities = Museum saves money!
A museum is best suited to asserted authority: being the authority on the objects in its own collection, its own exhibitions, and other events that have taken place over the course of the museum’s existence.
Being an authority on things outside the sphere of an individual institution and its collection is better delegated to other sources. This delegated authority can be covered by various name authority sources (VIAF, ULAN, etc.).
Reluctant authority – when you have to be the authority on a subject for which there are no authority records. For example, obscure constituents that only the curatorial department knows about, i.e. random art galleries from the 1930s.
As a reluctant authority, you are taking the reins as an authority until someone else publishes more definitive information.
“Temporary authority held out of necessity, not desire”
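Newbury’s delegated/reluctant distinction can be sketched in code: prefer an external name authority URI when one exists, and mint a local URI otherwise. This is my own illustration; the ULAN ID and the example.museum base URI below are placeholders, not real records:

```ruby
# Constituents already covered by an external name authority
# (delegated authority). The ULAN ID here is a placeholder.
DELEGATED = {
  "Edward Hopper" => "http://vocab.getty.edu/ulan/500000000"
}

# Return an authority URI for a constituent: delegated if available,
# otherwise a locally minted "reluctant authority" URI, held out of
# necessity until someone publishes something more definitive.
def authority_uri(name, delegated = DELEGATED)
  delegated.fetch(name) do
    slug = name.downcase.gsub(/[^a-z0-9]+/, "-")
    "https://example.museum/constituents/#{slug}"
  end
end

puts authority_uri("Edward Hopper")
# → http://vocab.getty.edu/ulan/500000000
puts authority_uri("Obscure 1930s Gallery")
# → https://example.museum/constituents/obscure-1930s-gallery
```

Keeping the locally minted URIs in a distinct namespace makes it easy to swap them out later if a real authority record turns up.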
Carnegie Tools
museum_provenance (https://github.com/arttracks/museum_provenance):
“The museum_provenance library is the core technology developed for this project. It takes provenance records and converts them into structured, well-formatted data.”- http://www.museumprovenance.org/
Tool for parsing provenance relationships from text fields (such as TMS Provenance field). For my purposes, probably more useful at later stage of project.
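As a rough illustration of what such parsing involves (this is a toy sketch of my own, not the museum_provenance API, and the record text is invented), a TMS-style Provenance field can be split into discrete ownership periods:

```ruby
# Hypothetical TMS-style provenance text: periods separated by semicolons.
record = "Gertrude Vanderbilt Whitney, New York, by 1929; " \
         "Whitney Museum of American Art, New York, 1931."

# Naive parse: one hash per ownership period. Real provenance text is far
# messier (uncertainty markers, footnotes, sale annotations), which is
# exactly the complexity the museum_provenance library addresses.
periods = record.split(/;\s*/).map do |chunk|
  party, place, date = chunk.chomp(".").split(", ")
  { party: party, place: place, date: date }
end

periods.each { |p| puts p.inspect }
```

Even this naive version shows why the punctuation conventions of provenance text matter: the semicolons and commas carry the structure that a parser has to recover.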
Elysa (https://github.com/arttracks/elysa):
“The Elysa tool is a user interface designed for museum professionals. It assists with verifying, cleaning, and modifying provenance records.”
It’s a GUI for extracting provenance information from text fields
MicroAuthority (https://github.com/arttracks/microauthority): Linked open data publication tool developed for smaller institutions looking to create URIs for constituents who don’t have records on any name authority sites. Given how obscure many of the Acquisition-Related constituents in the Founding Collection are, creating URIs for these people could be a goal of my project.
provenance-interactive (https://github.com/arttracks/provenance-interactive): Tool for creating visualizations of provenance information. Probably not too exciting for the Whitney’s purposes, since everything in the Founding Collection is American and made within the last couple of centuries.
baring_art_sales (https://github.com/arttracks/baring_art_sales): Data on the purchases of the Baring family, in CSV and JSON form. It doesn’t really overlap with the Whitney Founding Collection era (its latest dates are 1917), but it is interesting because it employs the Carnegie’s Acquisition Method Vocabulary, an ontology created for the project.
whosonfirst-data (https://github.com/whosonfirst-data/whosonfirst-data): Who’s On First, an openly licensed gazetteer of place data; presumably useful as a source of identifiers for the locations that appear in provenance records.
The Carnegie’s SKOS file for their ontology (http://www.museumprovenance.org/acquisition_methods.ttl) seems mostly focused on very specific acquisition methods. It seems like it would make more sense as a supplement to CIDOC CRM than as a stand-alone conceptual model.
Ruby on Rails
All of the Carnegie’s apps are built on Ruby. To install it, I followed these instructions: http://railsapps.github.io/installrubyonrails-mac.html
Gems in Ruby: http://guides.rubygems.org/what-is-a-gem/
Ruby is…time-consuming to install.
I tried to install the Carnegie’s Elysa app, but the installer kept picking up an older, incompatible Ruby version.
I eventually uninstalled and reinstalled Ruby, but was still unable to run the Foreman server.
OpenRefine
The localhost URL for OpenRefine, for future reference: http://127.0.0.1:3333/