Where does the born- and reborn-digital material take the Digital Humanities?
Today's guest post is by Chris Penfold, UCL Press Commissioning Editor.
On 18 May 2017, Niels Brügger, Professor of Internet Studies and Digital Humanities at Aarhus University in Denmark, and co-editor of The Web as History, delivered the third lecture in the UCL Centre for Digital Humanities' annual Susan Hockey lecture series. With a focus on archiving, the lecture investigated the different types of digital media and explored how each can be used for scholarly purposes.
Understanding the web’s function as an archive requires a grasp of its scale, yet the amount of data added to the web on any given day is difficult to fathom. Google processes over 20 petabytes of digitised data, born-digital data and reborn-digital data every 24 hours – that’s over 20 million gigabytes. But how do we archive this volume of information? How can we preserve the contents of news websites that have a shelf life of a day, or even an hour?
The web is where, and how, future researchers will learn about the 21st century, and so the importance of archiving – deciding which parts of the web should be preserved, how often, and by whom – increases with every petabyte of new data. As with any collection of documents, the ways in which they are collected and curated determine how they can be used by future researchers, across the Digital Humanities and beyond. The web is the equivalent of the letters, novels and artworks of the past, yet it offers a place in history not only for the artists and writers of our time but for everyone who uses it.
Anyone interested in the topic should read The Web as History, available to download for free from UCL Press.