Hello all! I’m Liz Shayne, a 3rd year graduate student in the English Department at UC Santa Barbara. I specialize, broadly speaking, in the transformation of 19th century texts at the hands of contemporary technology (accepting, for now, the dictum that our technology is an extension of our hand). I am also the Graduate Fellow for the Transcriptions Center at UCSB, which is itself devoted to exploring the culture of literature and information technology as they go hand in hand.
Both of those interests dovetail nicely with my current project on this blog. This blog, as our About Page says, was originally created as a platform to post and discuss experimental forms of visualization. It has become, at least for me, a repository of my experiments in different forms of analysis and a record of the tools that I have learned, am learning, and mean to learn to use. As the Transcriptions Fellow, I believe I need a working, though not comprehensive, understanding of a variety of the digital tools currently available. The goal is not to become a jack-of-all-trades overnight (though I confess that would be delightful), but to ensure that one of the resources Transcriptions provides is someone who can use the software installed on the computers. Part of our department’s pedagogical mission is to foster an environment of interdisciplinary exploration and collaboration, and I want Transcriptions – and, by extension, myself – to be a part of making the Digital Humanities accessible as well as interesting.
As a staging ground for my current experimentation, my posts also have the chance to develop into a larger collection of approaches to one text. Nearly all of my experiments are performed on George Eliot’s Daniel Deronda, and I am hoping, as this research continues to be fruitful and multiply, to pull them together into a larger document that can examine the pedagogical value of these different approaches. Given the intersecting constraints of how much time a visualization requires to create, how much training in a piece of software is needed to produce the analysis, and how interesting the results are, where is the place of data-viz in the classroom? And what are the implications of these software choices beyond simply “well, they’re free and (sometimes) open source”? When we choose to approach a text using a specific tool, what ideological framework and hitherto unquestioned assumptions come along with it?
So please join me in my digital excursions and comment as you see fit. I have deliberately written these posts as informal conversations, and I hope that you will join in.
If you are interested in the original version of this page, written in April of 2012, it can be found here.