In the first blog post of this semester, I began with this diagram mapping research questions and methods:

In some ways, my ideas have remained the same. I’ve stuck with mapping, although I am looking into options other than networks to represent affect. I’ve also narrowed the scope of my project–I’m focusing on Stratford-upon-Avon rather than England as a whole. However, the level of preparedness I feel now versus in that first week of the semester is nearly indescribable.

In both our first and last classes, we were asked what anxieties we had going into this course. Mine was that I was unsure whether I’d be able to come up with an idea of my own detailed enough to support the NEH Digital Humanities Advancement grant proposal. Even though I had previously worked in a DH center as a project manager–which included being involved with grant writing–I was always just shaping other people’s work into a cleaner format. Coming up with my own idea was difficult, but by the end of the course I felt much clearer on the process of formulating a digital project.

For me, the most useful and interesting part of the class was experimenting with the various digital tools during the asynchronous course and reading the case studies of projects that have used them successfully. I had used a few of them before, or experimented without direction. Actually having data to work through made my exploration more practical and gave me a better understanding of what the different software could do. I’m hoping at some point to take a deeper dive into Python, and I applied to a Python for Humanists summer school that I am still waiting to hear back on. Of all of these lessons, I probably enjoyed the mapping unit the most. Not only did it help me form the basis of my own project, but I found Pamela Fletcher and Anne Helmreich’s article “Local/Global: Mapping Nineteenth-Century London’s Art Market,” which describes their own project combining mapping and networking, enlightening. While it helped me decide that this wasn’t what I wanted to do for my own project, it also helped me better clarify my research questions and determine my own methodologies.

In terms of next steps, I found out today I was accepted to the Immersive Visualization Institute (IVI) for this summer. I will be working in the 360 Degree Panoramic Room in the Library’s Digital Scholarship Lab. I’m hoping to use this time to begin thinking through the ways I can represent affect in my visualization, and because of its pedagogy element I can also start thinking through what that proposed element of my project might look like. I am also in the process of completing my application for the Cultural Heritage Informatics (CHI) Fellowship. If accepted, I hope to turn the wireframe I have into a fully-fledged prototype of the Stratford-upon-Avon portion of my digital dissertation project. Although I don’t need it for the graduate certificate and I am technically done with coursework, I am hoping to talk to my advisor about possibly taking the Digital Humanities Pedagogy class offered next spring. I’d be interested to see how I can work some of the tools and methods we learned this semester into my own classroom.

One of the things I was most surprised by was how much my grant proposal project changed in the last month of the course. In addition to meeting with both Dr. Leon and Dr. Fitzpatrick to discuss ways to improve the project, the feedback from my classmates after they watched my pitch video was incredibly valuable. Originally, I had proposed a Level I project collecting data to eventually create a multimedia map. By the final draft, I was proposing the creation of a layered deep map that showed the town of Stratford-upon-Avon at significant points in history and included pins at locations that link to archival material related to the performances that occurred there. Additionally, I was better able to articulate what I was looking for in representing affect in the visualization (after a quick re-read of Data Feminism). Although I do not have enough data to make a prototype per se, I did determine that I wanted to create the map in Omeka S and include a project blog along with lesson plans.

It seems incredible that at the beginning of the semester I only had a vague idea of how I wanted to incorporate a digital project into my dissertation research, and now I not only have a fully fleshed-out idea but one with an actionable plan in place. Overall, I really enjoyed the course. I appreciated the nice balance of reading through the theoretical approaches and case studies alongside actually trying out the tools. I look forward to seeing how I can build on this class during the rest of my time at MSU and beyond.

Global DH

After nearly a year of work, it’s hard to believe the Global Digital Humanities Symposium is finally here! I first became involved with the Symposium as a presenter in 2018, and I joined the planning committee in 2019. This year, I’m also a member of the communications and volunteers subcommittees. During the Symposium this week, I’m handling the conference’s Twitter account, which has been a new experience; it has been really interesting to see all of the conversations about the presentations taking place on social media. Anyhow, on to this week’s readings…

In Alex Gil and Elika Ortega’s chapter “Global Outlooks in Digital Humanities: Multilingual Practices and Minimal Computing,” they discuss the challenges of translation in DH spaces and suggest possible solutions through their work with Global Outlook::Digital Humanities (GO::DH). They describe how GO::DH developed out of a need to de-center the Global North in DH communities. Primarily, the organization works to facilitate relationships between DH communities across the world. Some of this work involves finding ways to circulate scholarship across language barriers, and Gil and Ortega cite some of their objectives as being “focused on valuing the character of the community, showcasing its multilingual and multicultural wealth, raising awareness about the different academic cultures and norms, and processing ways to ensure local-global work is reviewed in a culture- and language-sensitive manner” (27). Through its Translation Toolkit, GO::DH addresses some of these goals and demonstrates that the DH community spans many languages and that many volunteers are willing to help with translations. They advocate for minimal computing as a way of increasing access.

Roopika Risam’s chapter “Decolonizing the Digital Humanities in Theory and Practice” positions decolonization as a process, one that can be facilitated through digital technology. Like Gil and Ortega, Risam points to the marginalization of the Global South and Indigenous cultures in DH communities and recognizes the colonial violence that continues without conscious efforts to destabilize current hierarchies. She suggests that …

when invoking the relationship between decolonization and the digital humanities–the central question is not how digital humanities itself could be decolonized but how digital humanities has contributed to the epistemic violence of colonialism and neo-colonialism. This is evident in both its implication in colonial forms of knowledge production and the ways digital humanities has contributed to historical processes of decolonization. Its further possibilities lie in resisting neo-colonialism in projects and tools. (Risam 80).

Risam offers a variety of potential steps to decolonization through a combination of theory and practice. She emphasizes the importance of understanding DH in local contexts, which stems from postcolonial theories. The DH community also needs to reckon with the fact that methods of knowledge production/transmission that are developed in the Global North are not the only way of doing things.

The interview with Chao Tayiana Maina, “Digital History, Grit and Passion…” posted to the People’s Stories Project blog provided excellent context for the keynote she gave at the opening of the Global Digital Humanities Symposium on April 12. I was particularly interested in Maina’s response to the question: “How would you define the term ‘digital humanities’ in line with your development of ADH?” She replied:

I’m trying to stay away from the more academic descriptions, but I see African Digital Heritage as a way to rethink, to re-animate, to respond. At the centre of everything is the culture, and technology revolves around that. I’ve always been very clear about centering the history and then the technology as opposed to saying, ‘wow, I have this cool new gadget, what history do I apply to it.’ When I think about digital humanities, I think about humanities as being the core part, and digital being an interface, or a supplement that works to strengthen the humanities as opposed to the other way around.

-Chao Tayiana Maina

This definitely came through during her keynote, in which she discussed the ways in which context is key when dealing with archives. In particular, Maina noted that working within colonial archives involves unpacking oppressive frameworks and working to develop empathy. She warned listeners of the risk of replicating colonial violence through digitization practices, and emphasized that digital humanities practitioners must be extremely deliberate when working with these materials and developing projects from them.

I’m very much looking forward to the rest of the Global Digital Humanities Symposium this week, and plan to start exploring the presented scholarship through a decolonial lens.

Algorithms, Surveillance, and Data Ethics

This week’s unit on algorithms, surveillance, and data ethics took an intriguing look at the various ways people are monitored and represented in the digital sphere. I had previously been aware of the prevalence of these issues in DH in a somewhat abstract way, but they started to come into focus when I read Ruha Benjamin’s edited collection Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life as part of my ENG802 Race, Gender and the Human class in Spring 2020. It provided a vast number of examples of how surveillance operates in society and particularly targets marginalized communities, alongside generative ways of responding. The book felt particularly relevant at the time due to the massive shift online during the COVID-19 pandemic, and I’ve been wanting to learn more ever since.

Sharon Block’s article “Erasure, Misrepresentation and Confusion: Investigating JSTOR Topics on Women’s and Race Histories” for Digital Humanities Quarterly describes the various ways the academic journal database JSTOR’s algorithm mislabels scholarship relating to women and BIPOC issues, and how JSTOR’s attempts to rectify the issue are passive, relying on scholars to do the extra work of reporting incorrect classifications. Block points to several searches that pulled up essays about women’s history where the top keyword was “Men.” Rather than acknowledge there might be a problem in need of fixing, JSTOR responded that the term simply happened to be used more often–a claim Block disproved by searching the texts, which in fact used female-associated words much more frequently. Similarly, Block found that JSTOR often problematically conflates terms associated with Black women, skewing the perceptions of articles in its database and misrepresenting Black women’s histories. Block acknowledges that while faculty might be able to deduce these problems and work around keywords, scholarship is ultimately being misrepresented and is at risk of being misused by students or non-academic researchers attempting to navigate JSTOR’s search.

In her essay “Finding Fault with Foucault: Teaching Surveillance in the Digital Humanities” for The Journal of Interactive Technology & Pedagogy, Christina Boyles outlines the importance of separating theoretical conversations about surveillance from Foucault, particularly when teaching students who are fairly new to the topic. Although Boyles herself initially used Foucault when helping students conceptualize what surveillance means in society, she has realized that this approach is not fully representative of today’s surveillance state and how it disproportionately impacts Black and Brown bodies. Boyles advocates for adopting a decolonial approach to understanding surveillance and developing ethical communal values for dealing with issues of surveillance. She suggests that this can happen by implementing lessons that incorporate assessments of non-digital as well as digital modes of surveillance, and by practicing an ethics of care while maintaining awareness of positionality and levels of risk.

Safiya Umoja Noble’s highly influential book Algorithms of Oppression: How Search Engines Reinforce Racism has played a big role in shaping scholars’ understandings of problematic algorithms (and was in fact cited by both Block and Boyles in their work). Noble demonstrates the ways in which algorithms display bias and the danger of separating this technology from the people who program it. In particular, Noble highlights how Google searches demonstrate racism by generating results that are harmful to BIPOC groups–for instance, her search of the term “black girls” initially produced primarily pornographic sites, although Google has since fixed this. She complicates the notion that search engines are apolitical, and points to the way search results are largely controlled by paid advertisers even though very few users are aware of this fact. Noble also addresses the ethical issues of portrayals of identity on the web, along with the right to be forgotten. She problematizes terms such as the digital divide, and discusses the issue of placing the responsibility on younger generations to fix the situation when there are plenty of female and BIPOC coders, social scientists, and humanists who could help improve the field. Yet Google continues to brush off responsibility, claiming it cannot control the algorithm even though it has demonstrated that, given enough pushback (or laws against certain search results), it can influence the algorithm–though this raises further questions about what gets recognized as worth excluding. Ultimately, Noble hopes that “this book can open up a dialogue about radical interventions on socio-technical systems in a more thoughtful way that does not further marginalize people who are already in the margins. Algorithms are, and will continue to be, contextually relevant and loaded with power” (171).

Open Access and Scholarly Communication

I was excited to finally get to the unit on open access, as it’s something I’ve been somewhat aware of but haven’t fully had the opportunity to explore. When I worked at SIUE’s library, I was able to sit in on some meetings and learn about the university’s green open access option: the SPARK institutional repository. I was also able to learn a bit about the open access publication requirements of certain funders, but that was a very specific context.

In Open Access and the Humanities: Contexts, Controversies and the Future, Martin Paul Eve breaks down current trends in open access in the humanities and addresses both pro-OA and anti-OA stances on a variety of issues. Eve provides a helpful introduction that breaks down commonly used terms around open access (including green vs. gold and subject vs. institutional repositories, among others) and traces the history of open access from its inception, through its more prominent presence in the sciences, to the ways it has slowly been incorporated into humanities publishing. Personally, I found the chapter “Digital economics” the most useful, as it explores the benefits and risks of open access publishing. It discusses the commercialized aspects of the academic publishing business and the costs of different models on both the publisher and user sides of journals and monographs. The most eye-opening element of this chapter for me was the discussion of economic capital versus social capital:

…systems of economics and value in scholarly communication/publishing are determined not solely in financial terms but also in the exchange of symbolic capital…although interdependent, these systems can be broken down into questions of quality and value as socially ascribed and questions of finance in terms of labour value and capital (even if the latter are, also, social at their core). (43)

Ideas of symbolic value, specifically in terms of perceived prestige, permeate the rest of the book’s discussion of open licensing, monographs, and innovations in open access. The book closes by looking toward the future of open access and different feasible models for the humanities; however, it was published in 2014, so it would be interesting to explore the progress that has been made since then.

Kathleen Fitzpatrick’s “Giving It Away: Sharing and the Future of Scholarly Communication” explores open access within the humanities with a call for openness and generosity. The essay points to the complex nature of open access in scholarly communication: while many publications are written for a small audience of fellow scholars, the work is necessary to continue research within the academic community. Fitzpatrick acknowledges the limitations of cost in scholarly communication and addresses some of the issues of the exclusivity of information in the ivory tower. Thus, she proposes an ethical way of participating in scholarly communication:

We teach, as we were taught; we publish, as we learned from the publications of others. We cannot pay back those who came before us, but we can only give to those who come after. Our participation in an ethical, voluntary scholarly community is grounded in the obligation we owe one another, an obligation that derives from what we have received. (355)

This pushes against the capital of perceived prestige that Eve pointed to as one of the biggest barriers to open access, and suggests that the spirit of “giving it away” is the only way to take profit out of the equation. Fitzpatrick proposes multiple possible models for moving in this direction, and closes with a series of questions designed to spur readers into action to be part of the solution.

The chapter “Crowdsourcing in the Digital Humanities” by Melissa Terras goes in a slightly different direction from the other readings this week by exploring the ways the public is engaged in humanistic research through crowdsourcing projects. Terras breaks down the major issues of crowdsourcing, identifying two key problems: information management and ideation. To explore the issues and benefits of crowdsourcing in the humanities, Terras investigates several heritage projects that rely on this method of gathering or sorting information. While institutions fear the risks of bad or mismanaged information (possibly stemming from volunteers’ personal beliefs), Terras suggests they should instead view crowdsourcing as a way of fulfilling their missions of creating and maintaining digital collections. Ultimately, “crowdsourcing in the humanities is about engagement, and encouraging a wide, and different, audience to engage in processes of humanistic inquiry, rather than merely being a cheap way to encourage people to get a necessary job done” (430). While issues of ethics and sustainability do arise, Terras believes these can be prevented within the digital humanities through mindful efforts to build bridges between communities and scholars in the humanities.

Since I’m still in the early stages of my PhD program, I have not given much thought to publications. However, this week made me consider the choices I will need to make as I begin to submit to journals. Issues of prestige have definitely been on my mind as I warily watch the job market and see where my peers publish their work, and it makes me wonder how I will navigate open access. I was given hope by the observation in Eve’s book that early career scholars were already more aware of and thinking about open access options in 2014, and now that I have more information I can start making a plan for how to ethically publish my work when the time comes.


When I started working through this week’s readings and activities, the last thing I anticipated was gaining a new appreciation for bar charts. I’ve always been frustrated by all of the options for representing data, and when working with programs such as Excel I’ve always been under the impression that the flashier the graph the better. However, this week I developed a better understanding of the importance of weighing the clarity of data over more aesthetic choices, as well as how those aesthetic choices influence an audience’s interpretation of the data.

In her essay “Humanities Approaches to Graphical Display,” Johanna Drucker outlines the various ways digital humanists can make use of data to create visualizations while maintaining awareness of the problems or implicit biases built into these tools. Drucker calls attention to the idea of data as capta, a concept that appears repeatedly throughout this unit:

Capta is ‘taken’ actively while data is assumed to be a ‘given’ able to be recorded and observed. From this distinction, a world of differences arises. Humanistic inquiry acknowledges the situated, partial, and constitutive character of knowledge production, the recognition that knowledge is constructed, taken, and not simply given as a natural representation of pre-existing fact.

Drucker walks readers through the way data is constructed and the various assumptions made when visualizing it. She uses examples of time and temporality to contrast how humanists might perceive these topics and need more freedom in visualizations versus the way scientists and social scientists may view them. Drucker poses a potential model for humanists creating visualizations, and invites them to embrace the ambiguity of humanities data rather than hiding it in existing representational models.

Steven Braun’s article “Critically Engaging with Data Visualization through an Information Literacy Framework” picks up this idea, suggesting that the ACRL Framework for Information Literacy used by librarians can guide critical engagement with visualization. The framework consists of six different “frames,” but in the case of digital humanities visualization Braun argues that “authority is constructed and contextual” and “information creation as a process” are the most crucial for using data as humanists. He also breaks down a series of design dichotomies of visualization to better assess meaning. Braun describes a “Choose Your Own Adventure” book that he uses with students to encourage them to consider data visualizations as “forms of dialogue rather than statements of fact,” and I hope I might have the opportunity to incorporate this kind of activity into my own pedagogy at some point.

In “Racism in the Machine: Visualization Ethics in Digital Humanities Projects,” Katherine Hepworth and Christopher Church address the biases built into many digital tools. They give the example of TayTweets, discussing how an algorithm quickly learned racism and hatred from the internet, before pointing to the fact that all data visualizations are algorithmic. By comparing two digital mapping projects that focus on lynchings in America, Hepworth and Church point to the ways projects can communicate similar information differently–in the selection of data used, the explanations provided for the choices made in creating the project, and the aesthetic elements of the visualization.

The section on visualizations in Exploring Big Historical Data: The Historian’s Macroscope provides an introduction to different kinds of visualizations, the different kinds of data that can be visualized, how certain elements of a visualization can influence audience interpretation, and tips on making a visualization as impactful as it can be. The case studies on the “Six Degrees of Francis Bacon” networking project and Michelle DiMeo and A. R. Ruis’s work with epistemic network analysis (ENA) show the nuances of data visualization in practice. I’ve wanted to try out a network visualization project ever since I did a Gephi workshop, so I was interested in examining the pros and cons and learning the differences between kinds of networks.

I end with the bar graph I created with the same data from the maps I used last week in my attempt at making a choropleth map; however, I took the advice of several of the articles and narrowed down the scope of the data when representing it visually. I have a new appreciation for the simplicity and clarity of bar charts, and how they make data more digestible for a wider audience.


The following screenshots are from a map that I created with StoryMaps JS as part of my ENG 818 course in Spring 2020, and it was the last project I started before the COVID-19 pandemic shutdowns hit. In fact, I was in East Lansing’s Espresso Royale with Dr. Scott Michaelsen on March 11 discussing the various directions I could take this project when the university sent out an email announcing that a case had been discovered on campus and everyone would be going home by noon that day. Funnily enough, this project was for a Climate Fiction course, which of course addressed the potential of a world-changing plague. This short proposal for a map was all that ever came of the project, so I was excited to explore mapping again this week.

The introduction to my map of the Little Ice Age in Early Modern Europe.
Example 1: Paul Gerhardt’s “Occasioned by Great and Unseasonable Rain”
Example 2: William Shakespeare’s King Lear

What has drawn me to mapping projects in the digital humanities is the dynamism of visuals, along with the bending of spatial and temporal logic to portray a more humanistic version of these kinds of narratives. As Todd Presner and David Shepard wrote in “Mapping the Geospatial Turn,” “Maps and models are never static representations or accurate reflections of a past reality; instead, they function as arguments or propositions that betray a state of knowledge. Each of these projects is a snapshot of a state of knowledge, a propositional argument in the form of dynamic geo-visualizations” (207). The humanistic reconceptualizations of space and time are incredibly interesting to me, and I’m hoping to eventually include some sort of mapping element in my dissertation project.

I was probably most interested in the case studies we read for this week. I enjoyed the various examples shared by Tim Hitchcock in his blog post “Place and the Politics of the Past,” particularly his discussion of how he recreated London maps to more clearly represent his project data, as this is something I’m hoping to do with early modern theatres in England. Fletcher and Helmreich’s project, discussed in their essay “Local/Global: Mapping Nineteenth-Century London’s Art Market,” provided an example of how maps can be effectively combined with other forms of visualization (in this case, networks) to better represent the layers of people and places key to their research. Cameron Blevins’s “Space, Nation, and the Triumph of Region: A View of the World from Houston” (and its web companion) detailed the ways space can be produced, combining textual analysis of newspapers with mapping visualizations to represent how a specific publication portrayed the country to its readers. I’ve been engaging with Henri Lefebvre’s The Production of Space as part of my research, so seeing it appear in multiple readings this week, in terms of how it can apply to a digital humanities project, was incredibly useful.

Since I had previous experience with both StoryMaps JS and the Geolocation module in Omeka, I dedicated most of my time this week to experimenting with map creation in Flourish. Below, you can see my attempts at creating a data map with pins and a choropleth map. Flourish is pretty user friendly, and after watching Alberto Cairo’s tutorials I felt fairly confident in my ability to create the map with pins using the data from the Alabama Slave Narratives CSV. I struggled more with creating the choropleth map, as for some reason I had a difficult time getting the columns in the JSON and CSV files to match. However, I was eventually able to get it to run, and it resulted in the second of the two maps below. I ended up focusing on the portion of the workbook dedicated to religious organizations by denomination.
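For anyone who hits the same wall: a choropleth tool essentially joins a value column in your CSV to a name field in the geometry file’s feature properties, so the join silently fails whenever those key strings don’t match exactly (capitalization, stray whitespace, etc.). Here is a minimal sketch of that matching step in plain Python–the inline data, field names, and normalization are my own hypothetical stand-ins, not Flourish’s actual internals:

```python
# Hypothetical stand-ins for the GeoJSON features and CSV rows a
# choropleth tool would join. Field names are illustrative only.
geojson = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature", "properties": {"NAME": "Autauga"}, "geometry": None},
        {"type": "Feature", "properties": {"NAME": "Baldwin"}, "geometry": None},
    ],
}
csv_rows = [
    {"county": "Autauga", "value": "12"},
    {"county": "baldwin", "value": "7"},  # case mismatch: breaks a naive join
]

# Normalize the join keys on both sides before matching.
values = {row["county"].strip().lower(): float(row["value"]) for row in csv_rows}

matched, unmatched = [], []
for feature in geojson["features"]:
    key = feature["properties"]["NAME"].strip().lower()
    if key in values:
        # Attach the CSV value to the feature, as the map join does.
        feature["properties"]["value"] = values[key]
        matched.append(key)
    else:
        unmatched.append(key)

print("matched:", matched, "unmatched:", unmatched)
```

Printing the unmatched list is the quickest way to see which region names in the geometry have no counterpart in the data, which in my case turned out to be the whole problem.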

Overall, I think this week was very influential in helping me think through how I might use mapping visualizations in my dissertation project. It also made me rethink what I might want to do for the grant writing project for this course. I’m looking forward to next week’s unit on visualizations to see how I might make connections between these different elements!

Text Analysis II

This week’s approach to text analysis felt more complex than last week’s. I was familiar with Voyant and Tableau, although the dive into topic modeling was deeper than what I’d encountered before. My only experience using Python was at an SSDA Intro to Python workshop last year, which was (as would be expected) very social sciences based, so I struggled to apply it in ways that would serve my own research; therefore, I was very excited to dig into this week’s focus on text analysis with Python.

Nguyen et al.’s article “How We Do Things With Words: Analyzing Text as Social and Cultural Data” provided a comprehensive overview of the steps someone might need to take in creating a text analysis research project. The specificity of this article was helpful in imagining how I might formulate my own text analysis project by working through developing research questions, conceptualization, data, operationalization, and analysis. The example of using Reddit to examine hate speech was very compelling, particularly with everything that has happened in the last year. The section I found most useful was probably the one on operationalization–the sub-section on modeling considerations alone demonstrated how much I was leaving out of what it takes to develop this kind of project.

The Natural Language Processing with Python book was probably my most frustrating experience in this class so far, as I tried to work through the exercises and the provided tutorials as I went along. After installing Anaconda, I had a lot of trouble getting Jupyter Notebook to run the command to download the suggested corpora (in total it took 3 hours–about 2.5 hours more than I’d care to admit). Still, this hands-on approach to text analysis was enlightening, and I felt more comfortable with these exercises than with the ones at the SSDA workshop I mentioned above. I was able to run commands to compare texts, determine word frequencies, and even reorder or combine sentences, among many other things. I also must share that I was very entertained by running commands that compared Monty Python and the Holy Grail to the Book of Genesis. The later chapters, as the Preface acknowledges, are much more in-depth and rely on more specialized Python and linguistic knowledge, but if I have time over the summer I might try to work through the entire book rather than just read it.
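To give a flavor of the word-frequency commands the book introduces (NLTK’s FreqDist, and lexical diversity computed as distinct words over total words), here is a stand-in using only the standard library–the crude tokenizer and the sample sentence are my own simplifications, not NLTK’s actual tokenization:

```python
from collections import Counter

def tokenize(text):
    # Crude whitespace/punctuation tokenizer; NLTK's word_tokenize is far smarter.
    words = (w.strip(".,;:!?'\"()").lower() for w in text.split())
    return [w for w in words if w]

def lexical_diversity(tokens):
    # Ratio of distinct words to total words, as in the NLTK book's early chapters.
    return len(set(tokens)) / len(tokens)

sample = "The quick brown fox jumps over the lazy dog. The dog barks."
tokens = tokenize(sample)

# Counter plays the role of NLTK's FreqDist here.
fd = Counter(tokens)
print(fd.most_common(2))          # most frequent words with counts
print(lexical_diversity(tokens))  # 9 distinct words / 12 tokens
```

Swapping the sample string for a real corpus (once the NLTK downloads finally cooperate) is all it takes to reproduce the kinds of comparisons the book walks through.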

Sandeep Soni et al.’s article, “Abolitionist Networks: Modeling Language Change in Nineteenth-Century Activist Newspapers,” was very helpful in that it provided an example of a specific project using this kind of methodology. The data and results sections included several ways of visualizing the numerical outputs produced by the tests. For instance, figures on pages 29 and 31 represent leader-follower pairs, while Figure 6 on page 32 visualizes page-rank scores. Ultimately, the research team was able to trace shifts in the meanings of words and how these changes were diffused across newspapers.

Honestly, I feel that I could spend many more weeks working on this unit. Much of the mathematical representation used across all of these works was over my head, and I know that two days of working through Python tutorials is nowhere near enough for me to even begin to understand all of the ins and outs that make it useful in these kinds of projects. However, I definitely think I’ve found my summer project…

Text Analysis I

As an Early Modernist, I have encountered text analysis quite frequently in my studies. Ted Underwood argues in “A Genealogy of Distant Reading” that distant reading is not some recent creation fueled by computers: scholars were counting rhyme endings, tracking specific images, and manually modeling topics for decades before Franco Moretti (who also made use of Shakespeare’s works in his study of distant reading). I think what is most interesting about Underwood’s essay is the contention he highlights between the digital humanities and distant reading–they’re not the same thing, but they are for some reason often conflated. Additionally, Underwood points to the issue of distant reading often being aligned with a more social-scientific approach to text, whereas the digital humanities constantly push back against the assumption that the increase of technology in scholarly work inherently means that threads of humanistic inquiry are moving toward the sciences.

In this essay, Underwood acknowledges that authorship analysis is perhaps the main area in which computers have deeply impacted the landscape of distant reading. Probably the most obvious and significant example of these developments in Shakespeare studies is the New Oxford Shakespeare, which undertakes a new and updated computational analysis of authorship in the canon. In addition to modern-spelling and critical reference editions of the complete works, the editors included an Authorship Companion. As Underwood states in the other essay of his we read this week, “Seven ways humanists are using computers to understand text,” “Part of the reason statistical models are becoming more useful in the humanities is that new methods make it possible to use hundreds of thousands of variables,” which seems to be what the editors of the New Oxford Shakespeare are attempting in finding every possible instance of a new hand being introduced into each play.
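The basic intuition behind this kind of attribution work can be shown in miniature: represent each text as rates of common function words (which authors use habitually rather than deliberately) and compare the resulting profiles. This toy sketch is nowhere near the New Oxford Shakespeare’s actual methods, and the word list and sample passages are invented:

```python
import math
from collections import Counter

# A tiny invented function-word list; real studies use hundreds of features.
FUNCTION_WORDS = ["the", "and", "of", "to", "in", "a"]

def profile(text):
    """Normalized function-word frequencies for one text."""
    counts = Counter(text.lower().split())
    total = sum(counts[w] for w in FUNCTION_WORDS) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency profiles."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms if norms else 0.0

# Invented passages: a "disputed" text compared to two candidate samples.
candidate_a = "the king and the queen of the realm and the court"
candidate_b = "to be in a dream to live in a shadow"
disputed = "the crown and the sword of the land"

score_a = cosine(profile(disputed), profile(candidate_a))
score_b = cosine(profile(disputed), profile(candidate_b))
```

Scaled up to hundreds of thousands of variables and many candidate authors, this is the family of techniques Underwood describes.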

The 2016 New Oxford Shakespeare seems to realize the fears of many early modern scholars: the plays and poems have been run through a computer and become a series of graphical models with accompanying essays on methods rather than critical readings of the language of the text. Admittedly, it goes far beyond the example Froehlich mentions regarding misunderstandings of textual analysis–“Shakespeare’s plays are about kings and queens”–but there is the risk of seeing all of the tables and graphs and fearing that the humanistic element has been removed from the plays. Oxford University Press describes the collective edition as “an entirely new consideration of all of Shakespeare’s works, edited from first principles from the base-texts themselves, and drawing on the latest textual and theatrical scholarship.” This “entirely new consideration” might begin to explain why 18 works in the table of contents of the Modern Critical Edition include additional authors–twelve in total, not counting various credits to Anonymous–whose names are sometimes accompanied by a question mark or merely credited as an adapter of or contributor to the text.

The Authorship Companion builds on an extensive history of authorship studies in Shakespeare scholarship, but claims that this edition is set apart from the rest by advancements in technology that allow a more thorough examination of writing patterns. In the introduction, editor Gabriel Egan describes the intrigue of attempting to edit works with questionable authorship: “Shakespeare’s case raises particular difficulties: there are no manuscripts of any of his undisputed works in his own undisputed handwriting, and no completely reliable early definition of his canon” (v). The editors have divided the book into two sections: the first contains essays describing their methodology for determining authorship, while the second includes a variety of case studies detailing their justifications for attributing specific works to certain authors. Additionally, a section of datasets accompanies the traditional works cited and index at the end, as the editors hope that it will “enable and inspire future research” (Egan vi).

The New Oxford Shakespeare and its Authorship Companion together form one of the more recent (and prominent) examples of a massive text analysis process that has made an important impact on the way we consider authorship and canon in early modern literature. I’m curious whether, in my exploration of text analysis for this class, I will be able to use their datasets to participate in some of the “future research” imagined by the editors.

Content Management

Selecting a content management system to fit your DH project is crucial and, as pointed out in the “Choosing a platform” essay from this week’s reading, depends on several things: functionality, familiarity, community, support, and cost. Personally, I have the most experience with WordPress (which I’m using for this blog) and Omeka (which I’m using for a digital collections grant I’m working on)–although I do have some experience with Scalar, which I feel works nicely for more free-form projects and is easy to use with students who are new to developing digital projects.

I had not heard of Mukurtu before, and I was surprised that was the case. With Mukurtu’s ethical considerations regarding how to treat collections containing sensitive material taken from Indigenous communities, it seems like a good choice for collections that include material from communities that are not part of the university or library structure. Kimberly Christen et al. developed Mukurtu when, following their exploration of other content management systems, they “discovered a set of unmet needs, including: cultural protocol driven metadata fields, differential user access based on cultural and social relationships, and functionality to include layered narratives at the item level.” As such, this CMS attempts to address these issues by providing space for knowledge from the community to be included alongside more “traditional” metadata, as well as distinctions as to who can access certain material based on their position in the community (or as an outsider).

Additionally, Christen discusses the creation of Traditional Knowledge licenses and labels in “Tribal Archives, Traditional Knowledge, and Local Contexts.” Since copyright law in many instances works against Indigenous communities or defaults material to the public domain, these licenses help contributors and users be more mindful in the way they produce and consume products of digital archives using this material. While they do not provide legal protection, I feel they do important work–especially considering the level of detail included in helping users determine what they need when creating them.

Lauren G. Kilroy-Ewbank’s essay “Doing Digital Art History in a Pre-Columbian Art Survey Class” was extraordinarily helpful in envisioning how to incorporate content management systems into the undergraduate classroom in a way that scaffolds toward a final project while accepting the various limitations of this kind of assignment. In the past, I have seen undergraduate literature classes try to use both Omeka and Scalar for final projects, but without proper context students and professors alike felt frustrated at the lack of progress. While I am not a historian–or art historian for that matter–I’m hoping that I can use some of Kilroy-Ewbank’s strategies in the summer online class I will be teaching, which requires students to curate a kind of digital anthology that includes a variety of multimedia materials.

Shapes of Data

On October 4, 2018 at about 10 am, I joined my colleagues in an event space at the University of Kansas to participate in the Digital Frontiers conference. That morning, Lauren Klein was giving the keynote lecture “Data Feminism: Community, Allyship, and Action in the Digital Humanities,” and she started the talk with a look at Periscopic’s data visualization of U.S. gun deaths in 2018. I remember the emotional tension in the room as we watched each point fall too soon along the x-axis, and this sensation returned this week as I read Chapter 3, “On Rational, Scientific, Objective Viewpoints from Mythical, Imaginary, Impossible Standpoints,” of Klein and Catherine D’Ignazio’s book Data Feminism. Last semester, I participated in an independent study supervised by Dr. Fitzpatrick focusing on Cultural Heritage, Digital Humanities, and Affect, which naturally had some overlap with feminist digital humanities work; and I’ve been particularly intrigued by the question of how emotion fits into digital humanities work.

With the affective turn, scholars have taken much of the work feminist scholars have done on emotion and, in some ways, made it more “acceptable.” Historically, emotion has been considered to have no place in the academy, but feminist scholars and then affect theorists have begun to make the case for emotion as a kind of knowledge. In the digital humanities specifically, there is already a tension between humanities work and the perceived clean, cold products of technology, so what happens when we invite emotion into these projects and spaces? To quote Klein and D’Ignazio’s rephrasing of Alberto Cairo, for example, “Should a visualization be designed to evoke emotion” (4)?

In some ways, I feel like this question could engage more specifically with the work of postcolonial digital humanities to become more fully intersectional. As D’Ignazio and Klein discuss, visualizations are traditionally structured in ways that subscribe to hierarchies of power. As such, they often (albeit unintentionally) contribute to the oppression of certain groups of people–and no matter what, “when visualizing data, the only certifiable fact is that it’s impossible to avoid interpretation” (7). When we consider this in the context of the digital humanities, where the interpretation is often more important than the data itself (as pointed out by the majority of the articles and essays we read this week), it seems reductive to claim that emotion gets in the way simply because of certain design choices.

Going forward, I want to think more about the possibility of representing uncertainty. I remember the conflict surrounding the representation of 2016 election data via the visualization of “meters” with a moving hand, and I am curious how that argument might look after the 2020 election. Although Biden did scrape through with the win, there was a lot of discussion about how this was another data failure, because Biden was predicted to win by a landslide rather than the slim margin that actually occurred. The whole process was uncertain for weeks, and people were uncomfortable with the fact that the data and visualizations could not be considered complete during that time. The need to represent uncertainty is crucial–we must “leverage emotion and affect so that people experience uncertainty perceptually” (19), or, visceralize the data.