(Foto: Northeastern University)
03.03.2020, 7:00 p.m.
Northeastern University, USA
Julia Flanders is a Professor of the Practice in the Department of English and the Director of the Digital Scholarship Group in the Northeastern University Library. She also directs the Women Writers Project and serves as editor in chief of Digital Humanities Quarterly, an open-access, peer-reviewed online journal of digital humanities. She has served as chair of the TEI Consortium and as President of the Association for Computers and the Humanities. She is the co-editor, with Fotis Jannidis, of The Shape of Data in Digital Humanities, and also the co-editor, with Neil Fraistat, of the Cambridge Companion to Textual Scholarship. Her research interests focus on data modeling, textual scholarship, humanities data curation, and the politics of digital scholarly work.
"From Modeling to Interpretation"
Modeling and interpretation suggest profoundly different social geometries of meaning in digital humanities. How do different forms of power animate and authorize these forms of scholarly agency and their outcomes? How are human and machine agents situated, together and separately, in relation to modeling and interpretation? And why might this matter for the cultural politics of digital humanities?
UC Santa Barbara, USA
Alan Liu is Professor in the English Department at the University of California, Santa Barbara, and an affiliated faculty member of UCSB’s Media Arts & Technology graduate program. Previously, he was on the faculty of Yale University’s English Department and British Studies Program.
His research began in the field of British romantic literature and art. A first book, Wordsworth: The Sense of History (Stanford UP, 1989), explored the relation between the imaginative experiences of literature and history. Theoretical essays in the 1990s then explored cultural criticism, the "new historicism," and postmodernism in contemporary literary studies. In 1994, when he started his Voice of the Shuttle Web site for humanities research, he began to study information culture as a way to close the circuit between the literary or historical imagination and the technological imagination. Books published since then include The Laws of Cool: Knowledge Work and the Culture of Information (U. Chicago Press, 2004), Local Transcendence: Essays on Postmodern Historicism and the Database (U. Chicago Press, 2008), and Friending the Past: The Sense of History in the Digital Age (U. Chicago Press, 2018).
(Foto: Priscilla Leung, 2017)
06.03.2020, 2:00 p.m.
"Humans in the Loop: Humanities Hermeneutics and Machine Learning"
As indicated by the emergent research fields of computational “interpretability” and “explainability,” machine learning creates fundamental hermeneutical problems. One of the least understood aspects of machine learning is how humans learn from machine learning. How does an individual, team, organization, or society “read” computational “distant reading” when it is performed by complex algorithms on immense datasets? Can methods of interpretation familiar to the humanities (e.g., traditional or poststructuralist ways of relating the general and the specific, the abstract and the concrete, the structure and the event, or the same and the different) be applied to machine learning? Further, can such traditions be applied with the explicitness, standardization, and reproducibility needed to engage meaningfully with the different Spielraum – scope for “play” (as in the “play of a rope,” “wiggle room”, or machine-part “tolerance”) – of computation? If so, how might that change the hermeneutics of the humanities themselves?
In his keynote lecture, Alan Liu uses the example of the formalized "interpretation protocol" for topic models he is developing for the Mellon Foundation-funded WhatEvery1Says project (which text-analyzes millions of newspaper articles mentioning the humanities) to reflect on how humanistic traditions of interpretation can contribute to machine learning. But he also suggests how machine learning changes humanistic interpretation through fresh ideas about wholes and parts, mimetic representation and probabilistic modeling, and similarity and difference (or identity and culture).