This resource is an introduction to Digitisation Methods for Material Culture. It explores basic topics in the study of material culture, the types of media used to communicate and share information about it, and the digitisation methods used to capture material culture data.
This resource provides guidance on how to use digital storytelling, deploy 3D data and annotations, and combine media to enable users to access and explore information about digital heritage assets over the web.
The conference aimed to examine the possibilities of connecting information science and computer science with the performing arts, focusing on three thematic blocks: archiving, artistic practices, and scholarly research. This international scientific and professional conference is part of the project of the same name by the DARIAH-EU Working Group Theatralia, which is dedicated to research on digital technology in the performing arts and the digitisation of theatralia, and is financed from DARIAH-EU funds.
A partnership between Kazerne Dossin and EHRI was established to enable the sharing of metadata with a broader audience. This partnership changed how archival materials are catalogued within Kazerne Dossin. Using the example of the Lewkowicz family collection, this article focuses on the transformation Kazerne Dossin went through while standardising descriptions, and on the tools EHRI provided to optimise the workflow for collection-holding institutes.
Many Galleries, Libraries, Archives, and Museums (GLAMs) face difficulties sharing their collections metadata in standardised and sustainable ways, so staff rely on more familiar general-purpose office programs such as spreadsheets. However, while these tools offer a simple approach to data registration and digitisation, they do not allow for more advanced uses. This blogpost from EHRI explains a procedure for producing EAD (Encoded Archival Description) files from an Excel spreadsheet using OpenRefine.
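The blogpost itself works with OpenRefine's templating export, which is not reproduced here. As a rough, hedged illustration of the same spreadsheet-to-EAD transformation, the Python sketch below converts a CSV export of a spreadsheet into a minimal EAD file; the file name and column names ("collections.csv", "Title", "Date") are hypothetical, not taken from the blogpost.

```python
# Illustrative sketch only: the EHRI blogpost uses OpenRefine's templating
# export, not Python. File and column names below are hypothetical.
import csv
from xml.sax.saxutils import escape

with open("collections.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# One <c01> component per spreadsheet row.
components = "\n".join(
    "      <c01><did>"
    f"<unittitle>{escape(r['Title'])}</unittitle>"
    f"<unitdate>{escape(r['Date'])}</unitdate>"
    "</did></c01>"
    for r in rows
)

ead = f"""<?xml version="1.0" encoding="UTF-8"?>
<ead xmlns="urn:isbn:1-931666-22-9">
  <eadheader><eadid>example-ead</eadid></eadheader>
  <archdesc level="collection">
    <did><unittitle>Example collection</unittitle></did>
    <dsc>
{components}
    </dsc>
  </archdesc>
</ead>"""

with open("collection.ead.xml", "w", encoding="utf-8") as out:
    out.write(ead)
```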
The Fortunoff Visual Search is a tool for both data visualisation and collection discovery in the Fortunoff Video Archive for Holocaust Testimonies. This blogpost demonstrates the Visual Search tool, including the search and filtering interface, and shows how to interpret the resulting visualisations.
This blog discusses the applicability of services such as automatic metadata generation and semantic annotation for extracting person names and locations from large datasets. This is demonstrated using oral history transcripts provided by the United States Holocaust Memorial Museum (USHMM).
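The specific services evaluated in the blog are not shown here; as a generic stand-in, the sketch below uses spaCy's named entity recognition to pull person and place names from a short invented transcript fragment.

```python
# Generic illustration of the kind of extraction the blog discusses,
# using spaCy as a stand-in for the services evaluated there.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Invented transcript fragment, not USHMM data.
transcript = (
    "We left Warsaw in 1940 and travelled through Krakow "
    "before the family was separated."
)

doc = nlp(transcript)
for ent in doc.ents:
    if ent.label_ in ("PERSON", "GPE", "LOC"):
        print(ent.text, ent.label_)
```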
In the late 1930s, just before war broke out in Europe, a series of chaotic deportations expelled thousands of Jews from what is now Slovakia. As part of his research, Michal Frankl investigates the backgrounds of the deported people and the trajectory of the journey they were taken on. This practical blog describes the tools and processes of analysis, and shows how a spatially enabled database can help answer similar questions in the humanities, and in Holocaust Studies in particular.
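To give a flavour of what "spatially enabled" means in practice, here is a toy sketch under loose assumptions: real projects of this kind typically use a spatial database such as PostGIS, while this example approximates a spatial query with plain SQLite and a bounding box. All names, dates, and coordinates are invented, not data from the research.

```python
# Toy illustration of a spatial query over deportation records.
# A real spatially enabled database (e.g. PostGIS) would provide proper
# geometry types and spatial indexes; this sketch uses a bounding box.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE deportations (
    person TEXT, origin TEXT, lon REAL, lat REAL, date TEXT)""")
con.executemany(
    "INSERT INTO deportations VALUES (?, ?, ?, ?, ?)",
    [("Person A", "Bratislava", 17.11, 48.15, "1938-11-04"),
     ("Person B", "Kosice", 21.26, 48.72, "1938-11-05")])

# Which recorded origins fall inside a rough bounding box around Slovakia?
for row in con.execute(
        """SELECT person, origin FROM deportations
           WHERE lon BETWEEN 16.8 AND 22.6 AND lat BETWEEN 47.7 AND 49.6"""):
    print(row)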
This blog post from EHRI introduces 'quod' (querying OCRed documents), a prototype Python-based command-line tool for OCRing and querying digitised historical documents, which can be used to organise large collections and improve provenance information. To demonstrate its use in context, the blog takes the reader through a case study of the International Tracing Service, showing the workflows and steps taken from start to finish.
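quod's actual command-line interface is documented in the blog post and is not reproduced here. Purely as a conceptual sketch of the OCR-then-query pattern it implements, the following combines pytesseract with a naive substring search; the folder name "scans" and the search term are hypothetical.

```python
# Conceptual sketch of the OCR-then-query pattern; NOT quod's interface.
# Requires: pip install pytesseract pillow, plus a Tesseract installation.
from pathlib import Path

import pytesseract
from PIL import Image

def ocr_directory(folder: str) -> dict[str, str]:
    """OCR every scanned page image in a folder, keyed by file name."""
    return {p.name: pytesseract.image_to_string(Image.open(p))
            for p in Path(folder).glob("*.png")}

def query(texts: dict[str, str], term: str) -> list[str]:
    """Return the pages whose OCRed text mentions the search term."""
    return [name for name, text in texts.items()
            if term.lower() in text.lower()]

pages = ocr_directory("scans")
print(query(pages, "Arolsen"))
```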
This blog examines TEITOK, a corpus framework that can be used as an alternative to Omeka. TEITOK is centred around texts, and its interface is similar to Omeka's – both allow you to search through documents and display their transcriptions. The main difference is that Omeka treats a transcription as an object description, whereas TEITOK not only shows that a word appears in a document, but also where it appears and how it is used.
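The distinction can be made concrete with a small sketch: instead of storing each transcription as one descriptive blob (the Omeka approach described above), a token-level index records the document and position of every word, which is the kind of information TEITOK's tokenised texts make available. The documents below are invented examples, not TEITOK's internal data model.

```python
# Minimal token-position index: records not just *that* a word occurs,
# but *where* it occurs in each document. Documents are invented examples.
docs = {
    "doc1": "the letter was sent from the camp",
    "doc2": "the camp records mention the letter",
}

index: dict[str, list[tuple[str, int]]] = {}
for doc_id, text in docs.items():
    for pos, token in enumerate(text.split()):
        index.setdefault(token, []).append((doc_id, pos))

print(index["camp"])   # [('doc1', 6), ('doc2', 1)]
```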
This video tutorial provides a step-by-step guide to the DARIAH-DE Publikator, a tool that enables its users to upload datasets to the DARIAH-DE Repository and index them with metadata. The tool is part of the larger DARIAH-DE Data Federation Architecture, which aims to support the FAIRification of research data across the research data life cycle.
The SSHOC-DARIAH Train-the-Trainer Research Data Management Bootcamp ('Research Data Management Bootcamp' for short) took place over two half-day workshops that gave access to experts in the field and allowed for real-time activities between the sessions. It was co-organised by the SSHOC project and the DARIAH 'Research Data Management' Working Group.