Perma got off the ground in 2013 as one piece of the digital preservation puzzle: helping prevent authors’ work from succumbing to the ephemeral nature of the internet by creating durable records of the web sources they use and reference. iPres, the longest-running conference dedicated to digital preservation, was founded 15 years ago and brings together pieces of the same puzzle from around the globe. The field is without doubt evolving quickly, so we were very excited this year to attend iPres 2018, which was held in our hometown of Boston. We shared our progress with others and participated in some exciting workshops along the way.

Some big takeaways from the week:

As this year marked the 15th iPres conference, there was a great deal of reflection going on. Maureen Pennock, Barbara Sierman, and Sheila Morrisey sat on a “looking back” panel. In their eyes, digital preservation is an ongoing, iterative process in which good communication, careful planning, and buy-in from the right people are essential. For example, they see outside funding as an amazing catalyst in the digital preservation world. They also emphasized the merits of having every person in an institution who works with digital materials be aware of, and participate in, preservation. Going forward, they hope to see machine learning and AI take on some of the entity and metadata extraction that is now a big drain on time. Finally, they hope that organizations will strive for collaboration and mutual problem solving by contributing to things like COPTR.

What’s COPTR, you ask? It stands for Community Owned (digital) Preservation Tool Registry, and it acts primarily as a finding and evaluation tool to help practitioners locate the tools they need to preserve digital data. Anyone can contribute to and edit the registry. Cool stuff. Perma has an entry now!

Finally, our friends over at Rhizome have some exciting stuff going on at their project, Webrecorder. Lead developer Ilya Kreymer is a friend of LIL and a former summer fellow here at the lab. Webrecorder, much like Perma, does not rely on the traditional crawlers that other archiving services use to capture the web. Instead, we both create what we call “high fidelity” captures, recorded through a headless browser that preserves aspects of a page such as embedded media, JavaScript, and other interactive content that crawlers miss.
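To make that distinction concrete, here is a minimal, hypothetical sketch (not Perma’s or Webrecorder’s actual code) of why a plain crawler misses this kind of content: a crawler reads only the raw HTML it downloads, so text that JavaScript inserts at load time simply isn’t there to be saved.

```python
from html.parser import HTMLParser

# A toy page: the visible quote is injected by JavaScript at load time,
# so it never appears as literal text in the HTML source.
PAGE = """
<html><body>
  <div id="quote"></div>
  <script>
    document.getElementById("quote").textContent = "Hello from JS";
  </script>
</body></html>
"""

class TextCollector(HTMLParser):
    """Collects page text the way a naive crawler would:
    by reading only what is literally present in the HTML source."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        # Skip script source code; keep only visible text.
        if not self.in_script and data.strip():
            self.text.append(data.strip())

collector = TextCollector()
collector.feed(PAGE)

# The crawler's view of the page text is empty: the quote only exists
# after the script runs, which a static fetch never does.
print(collector.text)  # -> []
```

A headless-browser capture, by contrast, executes the script first and records the resulting page, which is why it preserves exactly the kind of content this static parse never sees.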

This process creates a much higher quality capture, but it has limitations in scale since each capture is user-triggered. That said, we were excited to see that Ilya is also experimenting with capturing entire sites and platforms in this high fidelity way. You can check out his work with Scalar now, although you must request access since it is not yet open to the public. Check out Ilya’s own words here.

We’ve always known here at Perma.cc that our effort to save the web’s citations from link rot was only one piece of the puzzle, and we loved hearing about how the other puzzle pieces are iterating, growing, and collaborating. Looking forward to next year!