On October 4, 2018, at about 10 a.m., I joined my colleagues in an event space at the University of Kansas to participate in the Digital Frontiers conference. That morning, Lauren Klein gave the keynote lecture “Data Feminism: Community, Allyship, and Action in the Digital Humanities,” and she opened the talk with Periscopic’s data visualization of U.S. gun deaths in 2018. I remember the emotional tension in the room as we watched each point fall too soon along the x-axis, and that sensation returned this week as I read Chapter 3, “On Rational, Scientific, Objective Viewpoints from Mythical, Imaginary, Impossible Standpoints,” of Klein and Catherine D’Ignazio’s book Data Feminism. Last semester, I participated in an independent study supervised by Dr. Fitzpatrick focusing on cultural heritage, digital humanities, and affect, which naturally overlapped with feminist digital humanities work, and I’ve been particularly intrigued by the question of how emotion fits into digital humanities scholarship.
With the affective turn, scholars have taken up much of the work feminist scholars have done on emotion and, in some ways, made it more “acceptable.” Historically, emotion has been considered to have no place in the academy, but feminist scholars, and then affect theorists, have made the case for emotion as a kind of knowledge. In digital humanities specifically, there is already a tension between humanistic inquiry and the perceived clean, cold products of technology, so what happens when we invite emotion into these projects and spaces? To quote Klein and D’Ignazio’s rephrasing of Alberto Cairo, for example: “Should a visualization be designed to evoke emotion” (4)?
In some ways, I feel this question could engage more specifically with the work of postcolonial digital humanities to become more fully intersectional. As D’Ignazio and Klein discuss, visualizations are traditionally structured in ways that subscribe to hierarchies of power. As such, they often (albeit unintentionally) contribute to the oppression of certain groups of people, and no matter what, “when visualizing data, the only certifiable fact is that it’s impossible to avoid interpretation” (7). When considering this in the context of the digital humanities, where the interpretation is often more important than the data itself (as the majority of the articles and essays we read this week point out), it seems reductive to claim that certain design choices let emotion get in the way.
Going forward, I want to think more about the possibility of representing uncertainty. I remember the controversy surrounding the visualization of 2016 election data via “meters” with a moving needle, and I am curious how that argument might look after the 2020 election. Although Biden did scrape through with the win, there was much discussion about how this was another data failure, because Biden had been predicted to win by a landslide rather than by the slim margin that actually materialized. The whole process was uncertain for weeks, and people were uncomfortable with the fact that the data and visualizations could not be considered complete during that time. The need to represent uncertainty is crucial: we must “leverage emotion and affect so that people experience uncertainty perceptually” (19), or, in other words, visceralize the data.