Playful Visualizations at Work, Working Visualizations at Play

Two weeks’ worth of struggling with R and putting in my own texts (feel free to guess which one I used) has left me feeling less accomplished than I would have liked, but also less filled with encroaching terror. I am capable of following instructions and getting results, so while the art of doing new things (and really understanding the R help files) is still beyond me, I think I have enough material to start talking about Daniel Deronda again.

Daniel Deronda is a text that seems split into two halves. One of the things I discovered when rereading this book is that there are many more chapters than I remembered with both Deronda and Gwendolen “on screen” together. So are these two separate stories, or two utterly intertwined texts?

In order to test how separate the two storylines are, I looked at the word frequencies of “Deronda” and “Gwendolen” in each chapter to see whether they were correlated. In this case, a positive correlation means that Deronda showing up in a chapter increases the likelihood of Gwendolen showing up, while a negative correlation means the opposite.
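Here is a minimal sketch of how that setup might look in R – assuming a plain-text deronda.txt whose chapter headings begin with “CHAPTER” on their own lines; Jockers’ book walks through this far more carefully:

```r
# Hedged sketch: per-chapter relative frequencies of two names, then their
# correlation. Assumes "deronda.txt" exists with "CHAPTER ..." heading lines.
lines <- scan("deronda.txt", what = "character", sep = "\n")
starts <- grep("^CHAPTER", lines)
ends <- c(starts[-1] - 1, length(lines))
rel.freq <- function(word, chunk) {
  words <- unlist(strsplit(tolower(chunk), "[^a-z]+"))
  words <- words[words != ""]
  sum(words == word) / length(words)  # the name's share of the chapter's tokens
}
deronda <- mapply(function(s, e) rel.freq("deronda", lines[s:e]), starts, ends)
gwendolen <- mapply(function(s, e) rel.freq("gwendolen", lines[s:e]), starts, ends)
cor(deronda, gwendolen)  # Pearson correlation across chapters
```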

The correlation between Deronda and Gwendolen is -0.465. (As a reminder, correlations run from -1 to 1.) That is actually a fairly strong negative relationship, given that book chapters are complex objects and I know the two characters interact a fair amount over the course of the book. But there’s a better way to test for significance: we can look at the likelihood of this correlation having occurred by chance. Again drawing on Text Analysis with R, by Matthew Jockers, I had R shuffle the per-chapter frequencies 10,000 times and then plot the correlations that resulted. Unsurprisingly, the result looks like a normal curve:

Deronda_Gwendolen_Histogram

So if the frequency of each name per chapter were distributed randomly, you would be statistically likely to see little correlation between them. For those interested in more specific numbers, the mean is -0.001858045 and the standard deviation is 0.1200705, which puts our result more than 3 standard deviations away from the mean. The little blue arrow marks -0.465.
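For the curious, the shuffling step is only a few lines of R – a hedged sketch reusing the frequency vectors from above, with made-up coordinates for the arrow:

```r
# Shuffle one name's chapter frequencies 10,000 times; the correlations that
# result show what chance alone produces.
random.cors <- replicate(10000, cor(sample(deronda), gwendolen))
mean(random.cors)  # should hover near zero
sd(random.cors)
hist(random.cors, breaks = 100, main = "Correlations of shuffled frequencies")
arrows(-0.465, 150, -0.465, 10, col = "blue")  # mark the observed correlation
```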

All that says, of course, is that it’s highly unlikely that these results occurred by chance and that they are, in some sense, significant.* Which, to be fair, no kidding. My initial, subjective reading told me they were negatively correlated as well. And there has to be a better reason to do this kind of work than just to prove one’s subjective reading was right.

Which is where our next graph comes in. Now that I know that the two are negatively correlated, I can turn to the actual word frequency per chapter and see what the novel looks like if you study character appearance.

And, for fun, I threw in two other characters who I see as central to the plot to see how they relate.

Final Bar Graph of Name Frequencies

 

I highly recommend clicking on the graph to see a larger view.

Here’s where things get interesting to the human involved. The beginning of the novel happened exactly as expected – Eliot starts the story in medias res and then goes back to first tell us Gwendolen’s history and then Deronda’s. And then the name game gets more complicated about halfway through when Mirah and Mordecai** enter the picture. By the last few chapters, there is very little Gwendolen and the story has settled firmly around Deronda, Mirah and Mordecai. All of this, again, makes sense. But it is nice to see the focus of the book plotted out in such a useful manner and it invites two kinds of questions.

The first is based on the results: going to chapters with a surprisingly high mention of a certain character, like Deronda’s last few chapters, and attempting to figure out what might be causing such results. Why, after all, is Daniel the only one to venture up into the 1.2% frequency range? Is there something significant about the low results around chapters 50 and 51? What’s going on there?

The second kind of question this graph invites is about me. Why did I choose these four characters? I think of them as the four main characters in the story, and yet there’s certainly a good argument to be made for at least one other character to be considered “main”.

If you’ve read the book, feel free to guess who.

Why did I leave out the frequency data for Henleigh Mallinger Grandcourt?

Honestly, I completely forgot he was important. It’s not that I don’t remember that the Earl of Grantham had an evil streak in his youth, it’s simply that I don’t think of Grandcourt as a main character in the book. That might be because one doesn’t usually think of the villain as “the main character” or it might be because I am more interested in the story of Deronda and 19th century English Jewry.

As it happens, I noticed Grandcourt’s absence because of that odd little gap in Chapter 12 where absolutely no one is mentioned. What was going on there?

I went on Project Gutenberg, checked the chapter and said “Oh. Oops.” This is the only chapter entirely (and possibly at all) from Grandcourt’s perspective, hence no mention of any other character. So why didn’t I redo the graph with Grandcourt included, given that he’s important enough to have his own chapter?

Okay, yes, sheer laziness is part of the answer, but there is another reason. Chapter 12 is the chapter in which Grandcourt announces his intention to marry Gwendolen. And notice whose name entirely fails to appear in the chapter…

This data doesn’t exactly tell us anything new – we have ample proof from Eliot that Grandcourt is one of the nastiest husbands in the British canon. But this detail suggests a way of looking at characters’ interactions in terms of whether they acknowledge one another through the simple act of naming, which makes this the second time that randomly playing around with visualizations has led me towards the question of interpersonal interpellation as related to empathy.

So what do you all think? What does the graph say to you? Do you think this is a valuable way of approaching a text? And am I getting kinda hung up on this question of simply naming as a measure of empathy?

Comment below!

* With the obvious caveat that this was a book written by a woman rather than a random letter generator, so of course its results did not occur by chance. What this graph really lets us see is whether the negative correlation between the two characters allows for meaningful critical discourse. A correlation weaker than -0.5 is not usually considered significant in scientific terms, primarily because it is not useful for predictive validity. But we’re not interested in predictive validity; we’re interested in the possibilities of storyline division, and so the graph validates the hunch that there’s some kind of distinction.

**SPOILER ALERT – Mordecai is actually the combined occurrence of the names Mordecai and Ezra, for reasons obvious to anyone who has read the book.

 

There’s not much to report on the visualization front this week. I have created a couple of elementary (actually, closer to Kindergarten) graphs in R by following the instructions in Matthew Jockers’ excellent book, Text Analysis with R for Students of Literature, which is currently in draft form but an excellent resource nonetheless. So I have learned some things about the relative frequencies of the words “whale” and “Ahab” and, more importantly, I’m gaining some insight into what else I could do with my newfound knowledge of statistical programming.

But my studies in R are still very much at the learning stage and I have yet to reach a point where I can imagine using it in a more playful, exploratory sense. While this is not true of every tool, R is one of the ones that must be mastered before it can be misused in an interesting manner. Which is not to say that it cannot be used badly – I am getting good at that – but the difference between using a tool badly and using it playfully is a critical distinction. A playful framework is one that eschews the tool’s obvious purpose in order to see what else it can produce; a framework that validates a kind of “What the hell, why not?” approach to analysis. Playfulness exists when we search for new ways to analyze old data and disconcerting methods for presenting it. It can be found in the choice to showcase one’s work as a large-scale, three-dimensional art project and in the decision to bring the history of two Wikipedia articles about the author into one’s examination of the text. It is not, more’s the pity, found in code that fails to execute.*
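For the curious, here is roughly what those Kindergarten-level steps look like – a minimal sketch in the spirit of the book’s opening exercises, assuming a plain-text moby_dick.txt, and one that I can at least get to execute:

```r
# Relative frequencies of "whale" and "ahab" across the whole novel.
words <- tolower(scan("moby_dick.txt", what = "character"))
words <- gsub("[^a-z]", "", words)  # strip punctuation
words <- words[words != ""]
100 * sum(words == "whale") / length(words)  # "whale" as a percent of all words
100 * sum(words == "ahab") / length(words)
```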

All this adds up to an apology: I have no intriguing word clouds for you this week. I don’t even have any less-than-intriguing word clouds this week. But I do have some thoughts about the nature of this blogging endeavor, nearly a year and a half after it was started.

This blog began as a way to record our visualization experiments in a forum where we could treat them as part of a larger group and where we would be forced to engage with them publicly. It was a way to hold ourselves accountable to our professor, to each other and to ourselves. At the same time, it was a way to provide all our visualizations (even the ones that did not make it into our final seminar papers) with a home and a life beyond our hard drives and walls.

The class has ended and the blog lives on. Last year, it was a place for me to think through a social-network graph of William Faulkner’s Light in August; a project that grew out of the work I did on Daniel Deronda. This year, it’s serving as a repository for experiments that I perform as part of my work in UCSB’s Transcriptions Center.

And throughout those different iterations, one element of common purpose stands out to me. The blog is a place for scholarly work-in-progress. It’s where projects that need an audience, but are not meant for traditional publication can go. It’s where projects that have reached a dead end in my mind and require a new perspective can be aired for public consumption. It is, at its most basic level, a way of saying “This work that I am in the process of doing is meaningful”.

And that, I think, is the real key to why I find maintaining this blog – despite my sporadic updating during my exam year – so valuable. Blogging about my work gives me a reason to do it. This might sound absurd, if not simplistic, but bear with me for a moment. Academia is a goal-oriented endeavor. We begin with the understanding that we finish our readings on time in order to have meaningful conversations about them in order to do well in a course. We do our own research in order to write a paper about it in order, once again, to do well in a course or to present it at a conference. (Obviously, I’m not arguing that the only reason anyone reads anything is for a grade, but the fact that graduate students turn said research into a well-argued paper within a reasonable time-frame is tied to the power of the grade.) The books we read, the programs we learn, the courses we teach are oriented towards the dual goals of spreading knowledge in the classroom and publishing knowledge in the form of an article or monograph.

So where does practical knowledge within the digital humanities fit in? In the goal-oriented culture of academia, where is the value in learning a program before you have a concrete idea of what you will use it for? Why learn R without a specific project in mind? Why topic model a collection of books if you’re not really interested in producing knowledge from that form of macroanalysis? My experience with academia has not really encouraged a “for the hell of it” attitude and yet a number of the tools used specifically within the digital humanities require one to invest time and practice before discovering the ways in which they might be useful.

There are several answers to the above question. One that is used to great effect in this department, and that is becoming more popular at other universities as well, is the digital humanities course. I am thinking in particular of Alan Liu’s Literature+ course, the seminar for which this blog was originally created. By placing digital training within the framework of a quasi-traditional class, we as students are introduced to and taught to deploy digital forms of scholarship in the same manner that we learn other forms of scholarly practice. If we master close reading in the classroom, we should master distant reading in it as well.

And yet, what does one do when the class is over? Styles of human reading are consistently welcome in graduate seminars in a way that machinic readings are not. And there are only so many times one can take the same class over and over again, assuming that one’s institution even offers a class like Literature+.

The alternative is to take advantage of blogging as a content-production platform. The blog takes over as the goal towards which digital training is oriented. Which is a very long way of saying that I blog so that I have something to do with my digital experiments and I perform digital experiments so that I have something to blog about. Which seems like circular logic (because it is), but the decision to make blogging an achievement like, albeit not on the same level as, producing a conference paper is one that allows me, once again, to hold myself accountable for producing work and results.

This year, “Ludic Analytics” will be my own little Literature+ class, a place where I record my experiments in order to invest them with a kind of intellectual meaning and sense of achievement. Learning to count incidences of “Ahab” and “Whale” in Moby Dick may not be much, but just wait until next week when I start counting mentions of “Gwendolen” and “Deronda”…

*I apologize for the slight bitterness; I spent half an hour today combing through some really simple code trying to find the one mistake. There was a “1” instead of an “i” near the top.

MALLET redux

I considered many alternative titles for this post:

“I Think We’re Gonna Need a Bigger Corpus”

“Long Book is Long”

“The Nail is Bigger, but the MALLET Remains the Same”

“Corpo-reality: The Truth About Large Data Sets”

(I reserve the right to use that last at some later date). But there is something to be said for brevity (thank you, Twitter) and, after all, the real point of this experiment is to see what needed to be done to generate better results using MALLET. The biggest issue with the previous run – as is inevitably the case with tools designed for large-scale analysis – was that I was using a corpus that consisted of one text. So my goal, this time around, is to see what happens when I scale up. I copied the largest 150 novels out of a collection of 19th and early 20th century texts that I happened to have sitting on my hard drive and split them into 500-word chunks. (Many, many thanks to David Hoover at NYU, who provided me with those 300 texts several years ago as part of his graduate seminar on digital humanities. As they were already stripped of their metadata, I elected to use them.) Then I ran the topic modeling command in MALLET and discovered the first big difference between working with one large book and with 150: Daniel Deronda took 20 seconds to model. My 19th century corpus took 49 minutes. (In retrospect, I probably shouldn’t have used my MacBook Air to run MALLET this time.)
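For anyone following along at home, the two MALLET commands involved are roughly these – the file and directory names are mine, so treat this as a sketch of the workflow rather than a recipe:

```
bin/mallet import-dir --input chunks/ --output corpus.mallet \
    --keep-sequence --remove-stopwords
bin/mallet train-topics --input corpus.mallet --num-topics 50 \
    --output-topic-keys topic-keys.txt --output-doc-topics doc-topics.txt
```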

Results were…mixed. Which is to say that the good results were miles ahead of last time and the bad results were…well, uninformative. I set the number of topics to 50 and, out of those 50 topics, 21 were not made up of a collection of people’s names from the books involved.*  I was fairly strict with the count, so any topic with more than three or so names in the top 50 words was relegated to my mental “less than successful” pile. But the topics that did work worked nicely.

So here are two examples. The first is of a topic that, to my mind, works quite well and is easily interpretable. The second example is of a topic that is the opposite of what I want though it too is fairly easy to interpret.

Topic #1

First

So, as a topic, this one seems to be about the role of people in the world. And by people, of course, we mean MEN.

Topic #2:

Second

Now, this one requires some familiarity with 19th century literature. This topic is “Some Novels by Anthony Trollope”. While technically accurate, that is not very informative, especially not compared to the giant man above. The problem is that, while it’s a fairly trivial endeavor to put the cast of one novel into a stop list, it’s rather more difficult to find every first and last name mentioned in 150 Victorian novels and take them out. In an even larger corpus (one with over 1,000 books, say), these names might not be as noticeable simply because there are so many books. But in a corpus this size, a long book like “He Knew He Was Right” can dominate a topic.

There is a solution to this problem, of course. It’s called learning how to quickly and painlessly (for a given value of both of those terms) remove proper nouns from a text. I doubt I will have mastered that by next week, but it is on my to-do list (under “Learn R”, which is, as with most things, easier said than done).
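For what it’s worth, here is the kind of crude heuristic I have in mind – a hedged R sketch, nothing like real named-entity recognition, that counts words appearing capitalized mid-sentence and writes the frequent ones out as a supplementary stoplist (corpus.text and the threshold are stand-ins):

```r
# Crude proper-noun guesser: a word capitalized when it is NOT starting a
# sentence is probably a name. Assumes corpus.text holds the corpus as one string.
words <- unlist(strsplit(corpus.text, "\\s+"))
prev <- head(words, -1)
curr <- words[-1]
mid <- curr[!grepl("[.!?\"]$", prev)]  # keep words that don't open a sentence
mid <- gsub("[^A-Za-z]", "", mid)      # strip stray punctuation
caps <- mid[grepl("^[A-Z][a-z]+$", mid)]
tab <- sort(table(tolower(caps)), decreasing = TRUE)
writeLines(names(tab[tab > 25]), "extra-stopwords.txt")  # threshold is a guess
```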

In the meantime, here are six more word clouds culled from my fifty. Five of these are from the “good” set and one is from the “bad”.

Topic #3:

Third

Topic #4:

Fourth

(I should note, by the way, that “party” appears in another topic as well. In that one, it means party as a celebration. So MALLET did distinguish between the two parties.)

Topic #5:

Fifth

Topic #6:

Sixth

Topic #7

Seventh

Topic #8:

Eighth

There are 42 more topics, but since I’m formatting these word clouds individually in Many Eyes, I think these 8 are enough to start with.

So the question now on everyone’s mind (or certainly on mine) is: what do I do with these topic models? I could (and may, in some future post) take some of the better topics and look for the novels in which they are most prevalent. I could see where in the different novels reading is the dominant topic, for example. I could also see which topics, overall, are the most popular in my corpus. On another note, I could use these topics to analyze Daniel Deronda and see what kinds of results I get.

Of course, I could also just stare up at the word clouds and think. What is going on with the “man” cloud up in topic #1? (Will it ever start raining men?) Might there be some relationship between that and evolving ideas of masculinity in the Victorian era? Why is “money” so much bigger than anything else in topic #6? What does topic #7 have to say about family dynamics?

And, perhaps the most important question to me, how do you bring the information in these word clouds back into the texts in a meaningful fashion? Perhaps that will be next week’s post.

*MALLET allows you to add a stopwords list – a list of words automatically removed from the text. I did include one, but it is by no means a full list of every common last name in England. And even if it were, the works of Charles Dickens included in this corpus would leave it utterly stymied.

Seeing Liz’s post yesterday reminded me that it has been a while since I posted here on LuAn. Surely, I thought, there must be something I could include about my current work, as I’ve missed frequenting this friendly space. And while I could dedicate a post to some of my students’ projects from the course I taught this summer or to a few of the new visualizations I’ve created about La tumba (if you recall, Agustín’s La tumba is to me what Daniel Deronda is to Liz – our default textual subjects), something else came to mind:

My good friend just moved to Portland, Oregon.

A random thought, indeed, but I plan to show how it is connected to this world of Ludic Analytics (or perhaps, how this world has taken over my mind and now affects how I relate to any and all texts I come across). Since my friend was new to the Portland area, she was looking for a place to live and would send me craigslist ads to get my opinion on possible new roommates. She would also send me some of the ads she found crazy or ridiculous (of which there were shockingly many…it’s probably more a “craigslist thing” than a “Portland thing”). Then, to help out, I began to search the Portland craigslist ads for her, in an effort to find her the perfect place to live in her new city.

It’s been a few years since I have looked for housing, so I was not up to date on my local craigslist ad situation, but it seemed to me that the Portland posters had some common themes that kept popping up and were distinctly “Portland” compared to the “Santa Barbara” ads with which I was more familiar. Primarily, the Portland posters needed evidence that you were employed or had a steady job – which is definitely a good quality in a roommate. It seemed to me, however, that this statement was disproportionately included in the Portland ads. The other commonalities that I perceived from reading the ads were that there were more vegetarians and self-identified “420 friendly” posters in Portland than in Santa Barbara. However, I wondered: is my sense about this correct? I decided to investigate by creating some visualizations of the ads and comparing the results. (Thank you, Many Eyes.)

Keep in mind that this is not the most scientific of experiments; I was just curious and had the tools at the ready (focus more on the ludic here than the analytic). I compared text from the first 11 posts from each city – Portland, Oregon and Santa Barbara, California – in which people were looking for roommates to fill their houses. Someday it might be fun to do a more formal analysis (with a bigger sample set and more rigorous methodology), but until then, consider these word clouds:

Portland:

portland word cloud

Santa Barbara:

santa barbara word cloud

“Room” and “House” are (logically) prominent in both clouds. “Kitchen” is more evident in Santa Barbara, while “work” or “working” does seem to have a higher prevalence in the Portland cloud, as I suspected. However, “420” is actually bigger in the Santa Barbara cloud. School-related terms are also more present in the Santa Barbara cloud, perhaps reflecting the large population of students in our much-smaller-than-Portland town.

The clouds did not yield as much information as I had hoped (despite looking cool), so I decided to check out some more visualizations:

Portland 2-word tag cloud:

portland 2 word tag

Santa Barbara 2 word tag cloud (with phone numbers removed):

tag cloud numbers removed santa barbara

Some observations from these visualizations: 1) it’s cheaper to live in Portland ($600 vs. $800); 2) people in Portland do in fact “cook meat” and tend to name their dog “Roxy” (or one person with a dog named Roxy mentions said dog numerous times in the same ad); 3) my perception that self-identified “420” posters were more prevalent in Portland appears to be wrong. Of course, one of the caveats of this type of visualization is that it can be misleading. A phrase might have a “no” right before it that changes the meaning entirely, as in the following example of a Santa Barbara phrase net diagram:

bring the party sb craigslist

Hmm. Interesting. It’s important to Santa Barbara Craigslist posters that you both “Share THE bathroom” and “Bring THE party”. However, upon closer investigation, it’s actually “DON’T bring the party”:

dont bring the party screen shot

So, there you go.  I guess sometimes data can be misleading (which we already knew).

And just so you know, in Portland it’s important to:

share the costs portland

Share THE costs.

Did these visualizations help my friend find a house?  No.  But they were fun to make and she definitely appreciated the effort.  It also solidified in my mind the fact that the process can be just as important as the results, and that it has come to the point where I make visualizations for the amusement of myself and my friends (a good thing?  I hope so).

Said friend eventually found a room in a nice house with an amicable roommate, although, unfortunately, her new place does not actually come with a dog named Roxy.

This time, we are using a MALLET!

(I apologize for the pun, but it does not seem to get old).

MALLET stands for MAchine Learning for LanguagE Toolkit and is proof that, among other things, there is no such thing as an impossible acronym. MALLET is a Java-based package designed for multiple kinds of natural language processing and machine learning, including the one I used it for: topic modeling.

So what is topic modeling? Well, let’s say that texts are made up of a number of topics. How many? That depends on the text. Every word in a text (with the exception of common words like “an”) should be related to one of those topics. What MALLET does in topic modeling mode is divide a set of texts into X topics (where X is your best guesstimate of how many there should be) and output all the words belonging to each topic, along with a shorter list of top words for each. Your job, as the human, is to guess what those topics are.
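Concretely, the human-facing piece of the output is a “topic keys” list: one line per topic with a topic number, a weight, and the top words. An invented example (not results from any actual run) looks something like this:

```
0   0.192   man world life work day place people hand
1   0.115   miss lady love heart marriage felt dear
```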

For more on the idea behind topic modeling, check out Matthew Jockers’ Topic Modeling Fable for the decidedly non-technical version or Clay Templeton’s Overview of Topic Modeling in the Humanities.

Now for the second question: why am I doing it? Beyond “well, it’s cool!” and “because I can,” that is, both of which are valid reasons, especially in DH. My third reason is a subset of the second, in a way: I want to test the feasibility of topic modeling so that, as this year’s Transcriptions Fellow*, I can help others use it in their own work. But in order to help others, I need to first help myself.

So, for the past two weeks or so, I’ve been playing around with MALLET, which is fairly easy to run and, as I inevitably discovered, fairly easy to run badly. Because of the nature of topic modeling, which is less interested in tracking traditional co-occurrences of words (i.e. how often two specific words are found within 10 words of each other) and more interested in seeing text segments as larger grab-bags of words where every word is equidistant from every other**, you get the best topic models when working with chunks of 500-1000 words. So after a few less-than-useful results when I had divided the text by chapters, I realized that I needed a quick way to turn a 300,000+ word text file into 300+ 1,000-word text files. Why so long a text? Well, George Eliot’s Daniel Deronda is in fact a really long text. Why Daniel Deronda? Because, as the rest of this blog demonstrates, DD has become my go-to text for experimenting with text analysis (and, well, any other form of analysis). So I have MALLET, I have Daniel Deronda, I now also have a method for splitting the text thanks to my CS friends on Facebook and, finally, I have IBM’s “Many Eyes” visualization website for turning the results into human-readable graphics. All that’s missing is a place to post the results and discuss them.
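(As an aside, the splitting step itself turns out to be only a few lines of R – a minimal sketch, with file names of my own invention:)

```r
# Split one long plain-text file into numbered 1,000-word chunk files.
words <- scan("deronda.txt", what = "character")  # splits on whitespace
chunks <- split(words, ceiling(seq_along(words) / 1000))
dir.create("dd_chunks", showWarnings = FALSE)
for (i in seq_along(chunks)) {
  writeLines(paste(chunks[[i]], collapse = " "),
             sprintf("dd_chunks/deronda_%03d.txt", i))
}
```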

I knew Ludic Analytics would not let me down. So, without further ado, I present the 6 topics of Daniel Deronda, organized into word clouds where size, as always, represents the word’s frequency within the topic:

Topic 1:

Topic1

Topic 2:

Topic2

Topic 3:

Topic3

Topic 4:

TOPIC4

Topic 5:

Topic5

Topic 6:

Topic6

 

You will notice that the topics themselves do not yet have titles, only identifying numbers. Which brings us to the problem with topic modeling small text sets – too few examples to really get high-quality results that identify what we would think of as topics. (Also, topic modeling is apparently better when one uses a part-of-speech tagger and even gets rid of everything that isn’t a noun. Or so I have heard.)

Which is not to say that I will not take a stab at identifying them, not as topics, but as people. (If you’ve never read Daniel Deronda, this may make less sense to you…)

  1. Daniel
  2. Mordecai
  3. Society
  4. Mirah
  5. Mirah/Gwendolen
  6. Gwendolen

I will leave you all with two questions:

Given the caveat that one needs a good-sized textual corpus to REALLY take advantage of topic modeling as it is meant to be used, in what interesting ways might we play with MALLET by using it on smaller corpora or single texts like this? Do the 6 word clouds above suggest anything interesting to you?

And, as a follow-up, what do you make of my Daniel Deronda word clouds? If you’ve never read the text, what would you name each topic? And, if you have read the text, what do you make of my categorizations?

*Oh, yes. I’m the new Graduate Fellow at the Transcriptions Center for Literature & the Culture of Information. Check us out online and tune in again over the course of the next few weeks to see some of the exciting recent developments at the Center. Just because I haven’t gotten them up onto the site yet doesn’t mean they don’t exist!

**This is a feature, not a bug. Take, for example, a series of conversations between friends in which they always reference the same 10 movies, although not always in the same order. MALLET would be able to identify that set of references as one topic – one that the human would probably call “movies” – while collocation wouldn’t be able to tell that the first movie and the last were part of the same group. By breaking a long text up into 500-1000 word chunks, we are approximating how long something stays on the same topic.

Though we have mostly gone our separate ways over the past year, I find that I am attached to the idea of the LuAn collective and want to keep it going just a bit longer. After all, you never know when you might need a data viz blog that you co-run.

As a second-year student in the English department at UCSB, I am gearing up to take (i.e. reading madly for) my qualifying exams this June. As luck would have it, I am also finishing up my course requirements this quarter, so I find myself in the…unenviable position of writing a paper on a topic that would ordinarily lie far outside my interests in the 19th century English novel: William Faulkner. So I did what any digital humanist with an unhealthy interest in visualization would do in my situation – I made a graph.

I wanted to write a final paper for this course that would reflect my theoretical interests and allow me to continue developing a subset of my digital skills. Of course, trying to get all of my interests to move in more or less the same direction is like herding kittens, but I had been seeking another opportunity to think through a novel using a social network graph and, well, I wouldn’t have to start from scratch this time. I knew how my graphing software, yEd, worked and I knew how long it takes to turn a book into a collection of Excel cells denoting conversations (20% longer than you think it will, for those of you wondering). So why not create a social network graph of one story in Yoknapatawpha?

Don’t answer that question.

Light in August is widely considered to be the most novel-like of Faulkner’s novels, which made it a good choice for my project. After all, I had experience turning a novel-like novel into a social network graph and no experience whatsoever with a text like The Sound and the Fury. Much as I was intrigued by and even enjoyed The Sound and the Fury and Absalom, Absalom!, the prospect of figuring out the rules for graphing them was…intimidating to say the least.

For all its novelistic tendencies, Light in August is still decidedly Faulknerian and, in order to work with it, I found myself either revising some of my previous rules or inventing new ones. When I worked on George Eliot’s Daniel Deronda, I had used a fairly simple set of two rules: “A bidirectional interaction occurs when one named character speaks aloud (that is, with quotation marks) to another named character. A unidirectional interaction occurs when a named character speaks aloud about another named character.”

Here are the Faulkner rules:

  1. When one character speaks to another, that interaction is marked with a thicker, dark grey arrow.
  2. When one character speaks about another, that interaction is marked with a thin, dark blue arrow.
  3. When one character speaks to another within another character’s narration (i.e. X is telling a story and, in it, Y talks to Z), that interaction is marked with a thicker, light grey arrow
  4. When one character speaks about another within another character’s narration, that interaction is marked with a thin, green arrow.

There are several changes of note here. First, I learned more about yEd and figured out how to put properties like line size and color in the spreadsheet itself so that the software would automatically map color and line weight as appropriate. This meant I could make finer and clearer distinctions than last time, at least in terms of showing kinds of communication. Second, I changed the rule about quotation marks because quotation marks don’t necessarily connote audible speech in Faulkner, nor does their absence connote internal monologue. I relied entirely on the dialogue tags in the text to decide whether a sentence was spoken aloud or not. Finally, I changed the rule about named characters. All speaking characters are represented in the graph, regardless of whether or not we are ever told their names. Had I not changed this rule, the number of characters of color represented in this graph would have fallen from 15 to 3. There are 103 distinct nodes in this graph, which means 103 characters speak in this text.
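To give a sense of what that spreadsheet looks like, here is a hedged sketch in R of the kind of edge list I mean – the rows are illustrative rather than actual interactions tallied from the novel, and the column names are mine, not a schema yEd requires:

```r
# One row per interaction; color and width columns get mapped onto edges in yEd.
edges <- data.frame(
  source = c("Byron Bunch", "Lena Grove",   "Gail Hightower"),
  target = c("Lena Grove",  "Joe Brown",    "Byron Bunch"),
  type   = c("speaks to",   "speaks about", "speaks to"),
  color  = c("#555555",     "#00008B",      "#555555"),  # dark grey / dark blue
  width  = c(3, 1, 3)                                    # thick speech, thin mention
)
write.csv(edges, "light_in_august_edges.csv", row.names = FALSE)
```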

Jeffrey Stayton, in an article entitled “Southern Expressionism: Apocalyptic Hillscapes, Racial Panoramas, and Lustmord in William Faulkner’s Light in August” (which, in the interest of full disclosure, I am still in the middle of reading), discusses how Faulkner figures racial landscapes in Light in August as a kind of Southern Expressionism. It is fitting, of course, that one of Willem de Kooning’s expressionist paintings is based on and entitled “Light in August”. But this graph highlights the relationship between fading into the background and remaining unnamed; it shows how easily racial landscapes can become racial backgrounds and how easy it is to elide the unnamed. In the Victorian novel, a certain charactorial parsimony seems to ensure that everyone who speaks is named. Daniel Deronda is 800 pages long and contains 62 character nodes. Light in August is 500 pages long and contains 103. If you remove all the unnamed characters, there are 44 character nodes. (For those of you counting, that’s 38/88, close to half of the white characters, and 12/15, or four fifths, of the black characters. The other 8 nodes are groups of people, who seem to speak and are spoken to fairly often in this text.)

There are several ways to interpret this difference and I am loathe to embrace any of them without, frankly, having done more work both with Faulkner and with the Victorian novels. One of the things I find striking, though, is that Light in August seems to be making visible (though only just) things that are either not visible or entirely not-present in Daniel Deronda. Light in August is told from different characters’ viewpoints and the narration always locates itself in their perspective and confines itself to what they know. So the graph becomes a record not only of what they have seen, but also of how they have seen it.

I can hear some of you grumbling “What graph? You haven’t shown us a graph yet!”

My apologies. For that, I will give you three. Anything worth doing is worth overdoing.

1) The first graph.

Light in August Social Network Organic Disk

Click to see it in full size.

In this graph, color corresponds to importance, as determined by number of interactions. The darker the color, the more interactions that character has had. That dark red mark in the middle is Joe Christmas.

2) The graph without the unnamed characters

Light in August Social Network Organic Disk Sans Unnamed

Click for full size.

Colors mean the same here as they did in the previous graph.

There are several differences between the two graphs. Obviously, the second is legible in a way that the first one is not, which is not entirely a virtue. When it comes to graphing, legibility and completeness tend not to walk hand in hand. The more you leave out, the more you can see; so, contrapositively, the less you can see, the less you have left out. The best-of-both-worlds solution is to use both images.

Interestingly enough, there are no unconnected nodes in the second image, even though I deleted half of the nodes in the graph. That surprised me. I expected to find at least one person who was only connected to the network through one of the unnamed characters, but there’s no such person. And many of the people who remain are not characters I would consider important to the story. (Why has the entire history of the Burden family remained more or less intact? Who is Halliday, anyway?)

These are questions to be solved, or at least pondered. They are, at any rate, questions worth asking. If the network remains intact without these characters, what does their presence signify? What has changed between the first graph and the second?

After all, I do have a paper to write from all of this.

I promised you a third graph, did I not? This one moves in a rather different direction. As part of its ability to organize and rearrange your graph, yEd has a grouping functionality and will divide your graph into groups based on the criteria you choose. I had it use natural clustering.

A grouping into natural clusters should fulfill the following properties:

  • each node is a member of exactly one group,
  • each node should have many edges to other members of its group, and
  • each node should have few or even no edges to nodes of other groups.

yEd gave me 8 distinct groups, two of which had only two nodes in them.

Light in August Social Network Grouped

As always, click for full-size.

I assume that when yEd said that the groups would have few or no edges to nodes in other groups, it was doing the best it could with the material I gave it. I then had yEd rearrange the positions of the nodes so that the centrality of a node’s position within a group indicates how many connections it has.

What I love about this graph is how it divides Light in August into a set of six interconnected but distinct narratives. Each group larger than two centers around a specific character or group of characters involved in one thread of narrative. Joe Christmas, who is arguably the main character, has one section (along with a plurality of the other characters of color); Lena Grove, Byron Bunch and Joe Brown are all grouped together in another and, while they talk about the characters in Joe Christmas’s section quite often, they have only three conversations with the characters in that group. Those are the two largest groups. Percy Grimm, for all that he only appears in one chapter, manages to collect 7 other nodes around himself and does seem, in his own way, to be the protagonist of his own story who just walked into this one for one chapter and then left again. He is also the only named character in his section.

Social network graphs are, for me, a way of re-encountering a text. They strip away most of the novel and model only a small portion of what is present in the text, but that portion becomes both visible and analytically available in a new way. (I think seeing and visibility will become a theme in this paper, once I write it.) The title of this course is “Experimental Faulkner”. I like to think that this qualifies.

Last week, I posted a “quilt” of sorts, made by digitally stitching together images of both the English and Spanish History Flow results for “José Agustín.” The inspiration for that visualization was two-part: 1) an opportunity to play around with the History Flow tool more (which I have mentioned before and with which I have enjoyed creating beautiful and colorful designs) and 2) a chance to create some kind of graphic representation of the bicultural influences Agustín shows in his work. In some of Agustín’s early novels, he incorporates elements of American popular culture from the early 60s, and in some interviews he has cited the Beat Generation as an influence on his work. Thus, I wanted to demonstrate not only Agustín’s perspective as a young Mexican writer in the early 1960s witnessing the border crossing of different cultural elements, but also my perspective as an American student in the 21st century looking at Agustín’s perspective as a young Mexican writer looking at… (you get my point). Wikipedia seemed like a good starting point to find a collective “definition” of Agustín, and there are search options in both English and Spanish. It is important to point out that I recognize that “English” Wikipedia does NOT equate to “American” and, similarly, that “Spanish” Wikipedia does NOT equate to “Mexican.” So yes, this may not be the most scientific study in terms of my initial goals (i.e. creating a visualization examining different cultural elements), but I believe it introduces interesting questions about the differences and similarities between the two History Flow designs. Here is a refresher of the History Flow results for Agustín:

What do these results mean? I can’t say for sure. I could explain the different edits that have taken place, the arguments and deletions, and the creation of new secondary pages. For example, in the English version, the page appears to lose a large amount of content about halfway along the x-axis. This is because information about selected novels was deleted and re-posted on a newly created, linked page dedicated to the novels. The black vertical gap in the Spanish result? That is most likely vandalism, where the page was fully deleted by one editor. Could we say that, because there is evidence of vandalism in the Spanish result and not in the English, Agustín has a more polemic presence in the Spanish-speaking world? We could say that, but it would be a big assumption, and there are many other factors at hand. In fact, I would urge against any “conclusions” and instead look for trends or patterns. Comparing the Agustín pages piqued my curiosity about the English and Spanish results of other Wikipedia pages. I decided to look at the results for other writers of the Onda (Gustavo Sainz and Parménides García Saldaña), the literature of the Onda (searched by “La Onda” in English and “Literatura de la onda” in Spanish) and, just for fun, Mexican literature (or, in the Spanish search, “Literatura de México”). Here are the results presented in a grid (x-axis L->R: English, Spanish; y-axis: Wikipedia search):

To me, what is of interest is: 1) the lack of an entry for Parménides García Saldaña in English and 2) the English result for Mexican literature: very few editors and changes, and then a big deletion. I did not see any major trends or patterns save that the Spanish entries are more detailed. This makes sense: I searched Mexican authors and topics. This spurred my next comparison, American authors and novels. I settled on Jack Kerouac (of the famed, Agustín-inspiring Beat Generation) and “The Catcher in the Rye” (because I recently picked it up again, and comparisons have been drawn between some of Agustín’s first novels and this Salinger work):

Both the English and Spanish results for Kerouac and “The Catcher in the Rye” are similarly detailed and color-rich.  The global recognition of this particular author and this particular work is greater than that of the Onda writers, and it is important to take this fact into consideration.

Looking at all of the images, the English and Spanish versions of “The Catcher in the Rye” are, to me, the most similar. They both seem to follow a similar pattern and have a similar amount of zig-zagging. In contrast, the “Mexican Literature” results are the most different (excluding the García Saldaña page, for obvious reasons).

Earlier I advised against making conclusions, and I stand by that statement.  I think this particular exploratory exercise dwells more in the ludic, and less in the analytic, and I’m okay with that.