The set of readings for this week was, for me, the perfect way to wrap up the semester, as it seemed to provide some answers to several topics that have reappeared continuously during our class, as well as some insight into the process of working on our final projects. For instance, Ratto’s article on critical making summed up the importance of making, which is hugely important in DH, especially for me as I work on “making” my final project. The notions of making and building, while not always a part of what constitutes DH, are for Ratto a critical part of creating meaning. This is highlighted by Ratto’s emphasis on the process of “messing about,” and his insistence that the final prototype is not the site of analysis or the end goal – it is the making itself. I know that I’m metaphorically nodding in agreement with this as I work on my final project, and I’m probably not the only one doing so.
My main takeaway from the Chapman & Sawchuk article on research creation and “family resemblances” was the focus on issues of peer review when it comes to more creative presentations of research. If DH research is published on an open forum on the internet, does that take away some merit that it could have retained had it been published as a journal article? Not only have we been discussing this throughout the semester, but it’s also something we’re all taking into consideration as we work on our projects. While we don’t have a clear solution to this yet in the larger scheme of things, I think the way we are handling this in our own class, by turning in a rubric or guide for evaluation with our projects, is a good start.
I also thought Sayers’ article, “The MLab: An Infrastructural Disposition” provided the detailed description of a functioning DH lab that we have been searching for throughout the semester. Sayers writes that
“the development and maintenance of humanities labs must be informed by precedent, anchored in relations (e.g., with existing models), and understood as cultural practices (e.g., lab spaces are value-laden, and they persist through habits).”
This statement hearkens back to our previous class discussions on how DH labs are so often based on science labs, which doesn’t always work. Sayers’ decision to share the details of his DH lab on an open forum is, I think, a form of activism, and can help to make DH more accessible.
To include an example of “doing” in DH this week, it only seemed appropriate to take a look at the projects happening at Sayers’ Maker Lab at the University of Victoria. One of the projects that caught my eye is called “The Long Now of Ulysses,” which examines “how interpretations of literature change in the digital age” (Sayers). Divided into 18 panels to mirror the 18 episodes of Ulysses, the exhibit combines physical objects, 3D replications, and digital projects to bring Ulysses into a modern context. This project intrigued me in particular because it was fascinating to imagine it being made within the lab infrastructure that Sayers describes. While I’ll admit that I’ve never read Ulysses, I think this project is a model that could be adapted to many different texts.
Going into this week’s reading, I had no idea what to expect. Before cracking open Rosi Braidotti’s The Posthuman, the concept of posthumanism for me brought about images of robots, Google Glass, Siri, and of course that horrifying idea of downloading your brain and abandoning your body. Braidotti has very little time for these futuristic trifles in her book, and instead focuses on something much larger and more pressing – the current state of humanism and how it is essentially “behind the times.” Little did I know that Braidotti would shake the bedrock of my life with her arguments for anti-humanism.
I’ve considered myself a humanist for as long as I can remember knowing what humanism was. As an English major, I’ve consistently embraced and defended the humanities, maintaining a sense of pride in my position as a humanist. Maybe I’m naïve or just not well-read, but Braidotti is the first person to show me the dark side of humanism. She argues that humanism is in opposition to many of the theoretical frameworks that I am passionate about, such as feminist theory, queer theory, and post-colonial theory. By setting ‘Man’ as the ideological ideal, humanism actually creates and perpetuates the notion of the “other.” Braidotti writes:
“Humanism is neither an idea nor an objective statistical average or middle ground. It rather spells out a systematized standard of recognizability – of Sameness – by which all others can be assessed . . . This standard is posited as categorically and qualitatively distinct from the sexualized, racialized, naturalized others and also in opposition to the technological artifact.” (26)
While I’m wary of completely hopping on the Braidotti train after simply reading her book, this argument makes total sense to me. But I have to admit that I’m still a little stuck on humanism; I’m not ready to abandon it completely. Can I still be a humanist and just pick and choose the things I like about humanism while leaving the negatives behind? Braidotti also makes several compelling points about post-secularism, but I still consider myself to be a secular thinker.
But enough about me. What impact does this have on the digital humanities? We’ve focused so much on the shift from the traditional humanities to digital humanities. Should we instead be discussing the digital posthumanities? I was surprised to see only a couple of mentions of DH in The Posthuman, both made in passing towards the end of the book. This suggests to me that the schools of thought surrounding the posthuman and the digital humanities are very separate, which is a bit surprising (kind of like when we found out that media archaeologists and DH-ers want nothing to do with each other).
To bring an aspect of “doing DH” into my post today, I took a look at the Mediated Matter section of projects on the MIT Media Lab website, since this seemed to mesh well with Braidotti’s idea of human relation with self-organizing matter. One of these projects, called Living Mushtari, seems to have the same mission:
“How can we design relationships between the most primitive and sophisticated life forms? Can we design wearables embedded with synthetic microorganisms that can enhance and augment biological functionality, and generate consumable energy when exposed to the sun? We explored these questions through the creation of Mushtari, a 3D-printed wearable with 58 meters of internal fluid channels. Designed to function as a microbial factory, Mushtari uses synthetic microorganisms to convert sunlight into useful products for the wearer, engineering a symbiotic relationship between two bacteria.”
While the connection between DH and the posthuman may not be explicit, it’s clear that both schools of thought are exploring similar concepts. I think Braidotti would absolutely be supportive of this joining up of the human and a more primitive life form such as bacteria. Recognizing that there can be a symbiotic relationship between human and bacteria goes against the basic tenets of humanism. Sorry humans, but you are not unique snowflakes.
Braidotti, Rosi. The Posthuman. Cambridge: Polity, 2013. Print.
M.I.T. Media Lab. “Research Groups and Projects.” https://www.media.mit.edu/research/groups-projects. Accessed 6 November 2015.
Georgie and I will be mostly focusing on chapters 1 and 4 for our presentation on The Posthuman, so if you find yourself short on time, these will be the most important chapters for next Monday’s class.
This week I found myself feeling somewhat resistant to Wolfgang Ernst’s “Media Archaeography: Method and Machine Versus History and Narrative of Media.” While this might just be the humanist within me talking, I am still a bit uncomfortable with the idea of completely separating the technological from the human. Ernst is immediately critical of what he calls “media stories,” writing:
“The cultural inclination to give sense to data through narrative structure is not easy for human subjectivity to overcome. It takes machines to temporarily liberate us from such limitations. Technology, according to Martin Heidegger, is more than instrumental; it transcends the human.” (Ernst 56)
While it may simply be a matter of personal opinion, I find these “media stories” to be one of the most fascinating aspects of technology. When we went to the Media Archaeology Lab as a class a few weeks ago, I continually found myself pondering the narratives of the machines in the hands of their previous owners. What physical properties could Kirschenbaum find on the micro-level of each computer and what could we learn about its story? As technology becomes a more and more dominating aspect of the human experience, I think these “media stories” become more and more important.
Personal opinions aside, I have to wonder how this separation of the technological from the human fits into the definition of Digital Humanities (a definition that we have yet to nail down but that I continue to struggle with). If we want to remove the human, does Ernst’s theoretical framework still count as Digital Humanities? It seems to me that we might have to remove the H from DH in this instance.
Ernst uses the example of the Milman Parry Collection of Oral Literature, writing that “Parry . . . went to Serbia and Montenegro to conduct a study in experimental philology, recording epic songs to discover how epics as long as Homer’s Iliad and Odyssey have been transmitted in culture without writing” (60). I found the collection online and was able to listen to a few recordings. Despite the fact that I was unable to understand what was being said and sung in the recordings, I still felt resistance to Ernst’s theory. Ernst writes that “the media-archaeological ear listens to radio in an extreme way: listening to the noise of the transmitting system itself” (68). Even when listening to recordings in a foreign language, I don’t find myself interested in the static created by the recording device. I listen to these recordings searching for information about the people of Serbia and Montenegro; I can’t shake my interest in the human here.
Ernst, Wolfgang. “Media Archaeography: Method and Machine Versus History and Narrative of Media.” 2011. Web.
I was really interested in the feminist ideologies of the Donna Haraway reading for this week, “Situated Knowledges,” especially after last week’s class conversation on the sexism present in Stewart Brand’s book. Coincidentally, I was also assigned Haraway’s “A Manifesto for Cyborgs” for my critical theory class this week, so I couldn’t help but put that essay in conversation with the Digital Humanities, and specifically with Pickering’s The Mangle of Practice. At one point in Haraway’s manifesto, she seems to get at the root of Pickering’s argument regarding the relation of machines to humans and human intervention, as she states:
“The second leaky distinction is between animal-human (organism) and machine . . . But basically machines were not self-moving, self-designing, autonomous. They could not achieve man’s dream, only mock it. They were not man, an author to himself, but only a caricature of that masculinist reproductive dream. To think they were otherwise was paranoid.” (Haraway 2193)
Pickering says something eerily similar when he writes:
“Think of the field of machines that constitute the established material performativity of science at any given time. This machinic field does not exist in a human vacuum. Though the machines and instruments of science often display superhuman capacities, their performativity is nevertheless enveloped by the human realm. It is enveloped by human practices . . . by the gestures, skills, and whatever required to set machines in motion and to channel and exploit their power.” (Pickering 16)
Both passages point out the power that exists within machines, but highlight the necessity of human intervention in order to harness that power. It’s incredibly interesting how Haraway interprets this as “mocking,” which brings up the element of frustration that this has the potential to bring out in humans. We can create an advanced machine, but not the perfect machine, since machines still require the “mangle” of a human touch in order to produce results.
This discussion of the distinction between human and machine by Haraway and Pickering harkens back to the notion of technological neutrality that we’ve discussed several times in class. To me, it appears that Pickering would argue that a machine cannot remain totally neutral due to its reliance upon human intervention. Not only do humans create machines, but they participate in the dance of “resistance and accommodation” with the machine that instills a human bias upon that machine.
Haraway, Donna. “A Manifesto for Cyborgs.” The Norton Anthology of Theory & Criticism. Ed. Vincent B. Leitch. New York: Norton, 2001. 2190-2220. Print.
Pickering, Andrew. The Mangle of Practice: Time, Agency, and Science. Chicago: University of Chicago Press, 1995. Print.
I’m sure I’m not the only one who gets some weird enjoyment from looking back at people’s predictions for the future and judging whether they were right. Like when kids in the 1950s predicted what the world would be like in the year 2000. It’s fun to feel like you know something that they don’t know. You’ve lived through “the new millennium”, so you’re allowed to either laugh at how silly the majority of the predictions sound or marvel at how accurate a few of them are. As I delved into Stewart Brand’s The Media Lab: Inventing the Future at M.I.T., I found myself experiencing that same weird enjoyment. It quickly became apparent that MIT’s Media Lab circa 1987 was much better than 1950s children at predicting the future.
This first hit me when Brand described Negroponte’s workspace. He writes, “two personal computers glow expectantly on a corner table – all computers at M.I.T. are left on permanently; I don’t know why” (9-10). Upon reading this, I chuckled to myself, thinking, people used to turn their computers off?!? Think about it – when was the last time you turned off your laptop, your smartphone, or your tablet? For the majority of us, our devices don’t power down unless their batteries die or someone forces us to turn them off on an airplane (unless you are savvy enough to avoid this by using airplane mode). And I think the reason for never powering off our devices is directly related to Negroponte’s “teething rings” diagram on page 10.
Granted, we are 15 years past the “year 2000” side of this diagram, but Negroponte’s predictions here have certainly come true. The overlapping of broadcasting and publishing with the computer industry means that we use the aforementioned computing devices to consume the media that is produced by the broadcasting and publishing industries. How do I watch Netflix? On my laptop. How do I read the news? On my smartphone. How do I read the latest popular novel? On my kindle. We can no longer separate these spheres as easily as in 1987. And with society’s overwhelming desire to consume as much broadcast and published media as possible, there really is no need to ever power off your devices. In fact, you probably carry an extra phone charger in case your phone dies so that you don’t have to suffer through any time without access to these forms of media.
After learning about the state of M.I.T.’s Media Lab in 1987, it was a logical next step to take a look at how the lab is doing now. It only takes a few clicks on their website to see that they have many interesting projects going on. One that grabbed my attention fell under the “Camera Culture” category, which promises to “build new tools to better capture and share visual information.” The project, entitled “Eyeglasses-Free Displays,” has created screen display technology that corrects the image so that it can be seen by the viewer without the need for corrective lenses. As a glasses-wearer, how cool is that? The site promises that this technology is low-cost, and can even correct some vision issues that are difficult to correct with glasses.
Brand, Stewart. The Media Lab: Inventing the Future at M.I.T. New York: Viking Penguin Inc, 1987. Web.
M.I.T. Media Lab. “Research Groups and Projects.” https://www.media.mit.edu/research/groups-projects. Accessed 11 October 2015.
While reading Latour & Woolgar’s Laboratory Life: The Construction of Scientific Facts, I experienced a few epiphanies regarding the humanities, science, and the position of the digital humanities – something that I did not at all expect going into this reading. Latour & Woolgar point out that although our culture tends to view the scientific process as very straightforward, this is often not the case: “We argue that both scientists and observers are routinely confronted by a seething mass of alternative interpretations. Despite participants’ well-ordered reconstructions and rationalizations, actual scientific practice entails the confrontations and negotiations of utter confusion” (36). It is a common societal belief that while the humanities generally deal with interpretations that can easily be countered, science fields deal strictly with data and the straightforward conclusions that are subsequently drawn from that data. Is anyone else having flashbacks to C.P. Snow’s The Two Cultures? Latour & Woolgar suggest that the line that society draws between science and humanities may not be so easily drawn.
Additionally, Latour & Woolgar point out the “currently widespread acceptance of the methods and achievements of science in the culture of which we are part” (39). While this book was originally written in the 1970s, this statement still rings true nearly forty years later. It’s totally true; society loves science, while the humanities don’t receive nearly the same level of respect, which is probably why I am met with blank stares and confusion when I tell people I am pursuing a master’s in English. So that got me thinking – digital humanities seems to be the next big thing. Doing DH seems to garner more respect than just the plain old humanities in this day and age. Why is that? Could this be because DH takes the humanities and turns it scientific? Then I had a horrifying realization: is DH just the annoying, wannabe, little sister of science?
I may just be extra sensitive because I was a biology major for my first year of undergrad and that clearly didn’t pan out. But it seems to me that the reason we love DH so much may be because it takes the humanities and brings it several steps closer to science, the discipline that society loves and respects. Of course, I’m playing devil’s advocate here and I know that DH carries much more value than just being a wannabe science field. But I think Latour & Woolgar bring up some points about our culture’s view of science and the humanities that have important implications for the place of DH.
Latour, Bruno and Steve Woolgar. Laboratory Life: The Construction of Scientific Facts. Princeton, NJ: Princeton UP, 1986. Web.
This week’s readings by Patrik Svensson and Amy Earhart were mainly concerned with the issues that surround creating a digital humanities infrastructure or lab. Svensson advocates for the notion of the “humanistiscope” as a way to envision the potential for humanities infrastructure, as the current state of humanities infrastructure points to an issue with self-advocacy and “situated imagination.” Earhart advocates for a DH lab as a neutral space for collaborative digital scholarship, but recognizes the issues with basing such a lab on a science model. Both articles bring to the forefront a big-picture issue that DH currently faces (especially since these articles were both published earlier this year): how can DH envision infrastructure that effectively combines the digital with the human without simply following in the footsteps of other purely scientific disciplines? This made me curious to see how current DH labs navigate this question.
The first lab I looked at was the Collaboratory for Research in Computing for Humanities (RCH) at the University of Kentucky. This lab has several facilities which, according to their website, are “ideal for concentrated workshop sessions, as well as extended project work.” They have two separate labs – the “Digital Research Incubator” and the “Projects Office,” both of which are equipped with numerous workstations and multimedia resources. True to its name, the focus of the ‘collaboratory’ seems to be on group work – especially work that combines individuals from several disciplines. Additionally, this lab seems to have a focused imagination when it comes to its infrastructure, as it claims to “provide physical and computational infrastructure, technical support, and grant writing assistance to university faculty who wish to undertake humanities computing projects.” Whether this infrastructure comes from the imagination of the “humanistiscope” is unclear.
I also took a look at the Hyperstudio Laboratory for Digital Humanities at the Massachusetts Institute of Technology (MIT). According to its website, Hyperstudio “focuses on questions about the integration of technology into humanities curricula within the broader context of scholarly inquiry and educational practice” and is associated with the School of Humanities, Arts, and Social Sciences. From this information, it seems clear that Hyperstudio strives to come at its projects from a humanities-based perspective, but per its impressive list of software, it has an arsenal of digital infrastructure that supports the ‘digital’ end of the lab’s projects. Additionally, the “Process” section of the lab’s website describes the detailed workflow of each project that comes through the lab, including securing grant money, project rollout, evaluation, and project maintenance. Hyperstudio seems to have a good handle on each step of this process and appears to have the infrastructure to back it up.
Earhart, Amy E. “The Digital Humanities as a Laboratory.” 337-53. Web. 2015.
Svensson, Patrik. “The Humanistiscope – Exploring the Situatedness of Humanities Infrastructure.” 391-400. Web. 2015.
This week’s reading selection brought up several issues within the Digital Humanities that have not yet been discussed at length. Matthew K. Gold’s introduction included a pretty inclusive list of issues: “a lack of attention to issues of race, class, gender, and sexuality; a preference for research-driven projects over pedagogical ones; an absence of political commitment; an inadequate level of diversity among its practitioners; an inability to address texts under copyright; and an institutional concentration in well-funded research universities” (Gold). These all appear to be valid and problematic concerns for the DH community, but as I continued reading Debates in the Digital Humanities, I got the sense that there was an additional issue that should have been included, as different authors in this text repeatedly brought it up as a point of contention.
The issue I’m referring to is the definition of DH (something that seems to come up in nearly every article and class discussion we’ve encountered thus far). Specifically, there seems to be a division between the DH-ers who privilege MAKING and those who privilege INTERPRETING in their definition of DH. We’ve seen that there are many, many definitions for DH floating around, and they are generally open-ended enough to include both aspects of DH. But Debates in the Digital Humanities includes quotes from people like Stephen Ramsay, who certainly privileges MAKING over INTERPRETING. Ramsay stated, “Digital Humanities is not some airy Lyceum . . . Do you have to know how to code [to be a digital humanist]? I’m a tenured professor of digital humanities and I say ‘yes.’ . . . Personally, I think Digital Humanities is about building things” (Gold). While Ramsay did later take a step back from this divisive stance, he is not alone in this way of thinking, which excludes DH-ers or wannabe DH-ers who (like me) do not know how to code.
I may be biased as a non-coder, but I take issue with this close-minded view of DH. That is not to say that I don’t see value in the MAKING part of DH – I think some of the most interesting work being done in labs and hackerspaces fits under the umbrella of MAKING. But I do think that in addition to ignoring the important work that comes from the INTERPRETING side of DH, this view fails to consider the privilege that goes along with knowing how to code. Except under unique circumstances, coding is only taught as part of a post-secondary education, which means that individuals who do not have access to a college education likely don’t have the opportunity to learn to code. While “coding boot camps” are popping up all over the world for those who want to learn to code quickly and without attending a university, these are still expensive and inaccessible for many. I’m optimistic that coding will eventually (someday) become a standard part of the curriculum in middle schools and high schools, but at this point in time, excluding non-coders from DH makes DH a very privileged field.
Many of our readings and discussions seem to keep coming back to the issues that DH faces in terms of being legitimized in the eyes of traditional academia. I have to imagine that this divide between MAKERS and INTERPRETERS hinders the push for that legitimacy. It is certainly easy for me to say as an outsider (wannabe) of the DH community, but I would love to see these two sides of DH come together in order to better serve the community as a whole. While I agree with Kathleen Fitzpatrick that DH does not necessarily include “every medievalist with a website,” moving past this division and coming together through a common methodological outlook would likely serve the community much better.
Fitzpatrick, Kathleen (2012). “The Humanities, Done Digitally.” In Debates in the Digital Humanities, edited by M. Gold. Minneapolis: University of Minnesota Press. http://dhdebates.gc.cuny.edu/debates/text/30
Gold, Matthew K. (2012). “Introduction: The Digital Humanities Moment.” In Debates in the Digital Humanities, edited by M. Gold. Minneapolis: University of Minnesota Press. http://dhdebates.gc.cuny.edu/debates/text/2
This week’s readings on evaluating digital scholarship were an interesting follow-up to our recent class discussion wherein we asked the question “what is the digital humanist’s equivalent to the monograph?” While the digital scholar is certainly capable of producing high-quality scholarly digital work, the difficulty comes in having this work recognized as an academic equivalent to the traditional monograph. This point was driven home for me in the introduction of “Evaluating Digital Scholarship,” which stated that the Association of American University Presses (AAUP) had recently equated the process of peer reviewing materials available on the internet to “’social networking’ and ‘popularity’ contests that can too easily ‘be gamed’” (Schreibman et al. 130). This clear lack of regard for the scholarly nature of new media work is what leads to the “double standard” pointed out by Anderson and McPherson, wherein the digital humanist is forced to “produce traditional print work . . . in addition to their digital work in order to be taken seriously for tenure” (137).
While it’s extremely frustrating to read about the difficulties that digital humanists face in academia, I began to ask myself why it is that society (and especially the portion of society that is involved in evaluating academic work) is so hesitant to give credit to digital scholarship. Having grown up with access to the internet, I tend to take the things that I read online with a grain of salt unless I see a persuasive reason not to (e.g., a reliable source or credible evidence). The availability of the internet essentially means that anyone who wants to publish something online has the option to do so. While I can pick up almost any book in the library and safely assume that it is a well-researched and reliable source, it would be ridiculous to assume the same about any website, blog, or other project that can be found on the internet. While it’s clear that scholarly work does exist on the internet, have we trained ourselves to assume the worst about the nature of the work that is available in a public domain?
Anderson, Steve and Tara McPherson. “Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship.” Profession 2011.1 (2011): 136-51. Web.
Schreibman, Susan, Laura Mandell, and Stephen Olsen. “Evaluating Digital Scholarship: Introduction.” Profession 2011.1 (2011): 123-35. Web.
The assigned article by Cynthia Selfe immediately piqued my interest this week due to my previous experience working in a college IT department, where a large part of my role involved preparing and supplying computers for faculty, administrators, and students. I thought this experience might provide me with some personal insight into the topic of the article — until I realized that the article was published before I was born. This realization made Selfe’s first statement seem a bit ridiculous: “People who say that the last battles of the computer revolution in English departments have been fought and won don’t know what they’re talking about” (63). Of course these people don’t know what they’re talking about; this article was published before the “world wide web” was invented! Hindsight is 20/20, but it’s clear today that the “computer revolution” certainly wasn’t over in 1988.
Once I realized that I would have no personal insight into the happenings in technology and academia in 1988, I found it interesting to be reading this article from the position of a postinternet millennial. For example, by the time I went to college as a student, personal computers were provided to all administrators and faculty, and while most students purchased their own computers, numerous labs were available to all students as well. I’m so enmeshed in contemporary digital media culture that I find it difficult to imagine how an English department would operate in a time before computers or even during this liminal period that Selfe describes where computers are only accessible to some individuals.
Despite the fact that computers are much more available now than in the context of this article, Selfe still offers several relevant points. She identifies the importance of being deliberate with an English department’s use of computers. With all department employees now having access to a computer, it’s still necessary to put careful consideration into how the computers are used within the department, i.e. who has access to what information, which applications/software are supplied by the department, what type of hardware is purchased, etc. The issue of privacy is much more of a hot topic now than it was in 1988, but Selfe seems to have anticipated this issue, stating: “The concept of linking departmental members with an electronic network raises as many problems as it solves . . . preserving individuals’ rights to privacy must be a top priority” (66). Additionally, Selfe brings up the role of computers in power hierarchies, which is still relevant today. For instance, there is still a question of which software should be paid for by an English department. Consider, for example, a photo editing program (such as Adobe Photoshop) versus a citation/bibliography management program (such as EndNote). While Adobe Photoshop may be most useful for administrative employees in the department for marketing and communications, this type of program would be much less useful for faculty, who may prefer a program like EndNote for managing their research. Since both applications require paid licenses, power hierarchies would likely come into play when deciding which software would be provided by the department.
Selfe, Cynthia. “Computers in English Departments: The Rhetoric of Technopower.” ADE Bulletin. 90 (Fall 1988): 63-7. Web.