Author Archives: jilliangilmer

Cousins and Gilmer: Final Project

Hi DHers! We’re excited to share our progress with you thus far on the MAL Virtual tour. Follow the links below to investigate the various aspects of this collaborative project:

Check out a beta-version of the tour produced with professional software:

Examine our process in the making with MIT’s “Build in Progress” site, which we’ve used to document the project’s successes and failures. This blog will continue to be updated as the project progresses next semester:

Check out the successfully funded Kickstarter campaign:
See what inexpensive technology is capable of producing by becoming “immersed” in the Occipital test-run photo-bubbles:
Questions? Suggestions for the future? Feel free to share. We can’t wait to see where this project takes us in the Spring.

Gilmer Post 10: Striking a Balance with Sayers

This week, I had fun trying to piece together my own methodologies from Chapman and Sawchuk’s “family resemblance” paradigm: I’m still having difficulty finding my own place within the four categories, which is a testament to their extensive overlap. They also insist that these projects “must be assessed with a rigorous flexibility” (C&S 22). I love this oxymoronic diction, which effectively communicates the necessity of finding a balance between rigor (structure) and flexibility (non-structure) in the evaluation of digital humanities scholarship. Ironically, the essay itself fails to strike this balance, tipping entirely towards the spectrum’s traditional end. They acknowledge this as a “deliberate” choice: “we are also aware of an irony at the very core of this article, which has taken a traditional academic tone and style” (C&S 22). Unfortunately, I don’t accept their apology. Even if the article was conceived in traditional form, they could have focused on its process of production, revealing invisible layers of work that currently go unmentioned. And if DH is all about accessibility, community, and outreach, then this particular medium–one which reeks of canon–seems ineffective.

This is why I somewhat prefer the Sayers readings, which occur in an open online medium. I especially enjoyed what Sayers had to say about failure: “through what measures [is a project] deemed a success?” (Sayers, “Relevance of Remaking”). For me, this question becomes more relevant as the end of the semester approaches: what does “success” look like? Using Sayers’s logic, my project’s success or failure will depend on its evaluative context–which I am responsible for creating. This acts as a comfort, since true “failure” seems impossible in such a setting. And perhaps that’s what he’s getting at–there is no failure in a self-reflexive process of learning.

Sayers also gives voice to my attachment to physical space, remarking that central space allows unscheduled “Dialogue [to] emerg[e] in the margins. […] Without these conversations, the technical work we do would lack necessary conceptualization and contextualization. It would also feel a bit detached from the present, or a touch depersonalized and abstract.” Sayers emphasizes the importance of spontaneous dialogue, but remains moderately positioned: he argues that “the MLab is irreducible to its physical location and composition, even if infrastructure is entangled in practice.” Clearly, the laboratory’s intellectual capacity extends beyond its physical walls. But physical space fosters the development of community, so to speak. And this is my favorite epiphany of the week: perhaps DH work can be completed outside of a centralized space, but that doesn’t necessarily mean it should.

Gilmer Post 9: Hark, A Cultural Critic Sings!

Braidotti’s The Posthuman (2013) serves the post-humanism dish in a way that miraculously makes it appetizing. Perhaps that’s because she uses humanism as a seasoning in appropriate places, arguing that “a focus on subjectivity is necessary because this notion enables us to string together issues […] scattered across a number of domains” (42). I found her critiques of the humanistic hierarchy both moderate and compelling, if a bit Yoda-esque: “individualism breeds egotism and self-centeredness; self-determination can turn to arrogance and domination; and science is not free from its own dogmatic tendencies” (30).

Humanism, as an intellectual paradigm, functions as a double-edged sword, exercising a “fatal attraction” to the Vitruvian ideal. This is where the humanities, and digital humanities in particular, get ugly. Citing Irigaray, Braidotti asserts that the “symbol of classical Humanity is very much a male of the species; it is a he. Moreover, he is white, European, handsome, and able-bodied” (24). I think we can all agree that the humanist realm continues to be dominated by the idealized white male figure, a trend that was indicated in both Matt Jockers’ and Brian Kane’s presentations. While each speaker engaged with digital cultures, artistic creation, and machinic data, they also glossed over significant aspects of cultural criticism, including the invisibility of graduate work, the gendered tech bubble, and the machine’s complicity with postcolonialism. As digital scholars struggle to find a balance between humanist and post-humanist paradigms, crucial voices are inadvertently silenced. Therefore, it was extraordinarily satisfying to see Braidotti identify the ethical gap in digital media scholarship, arguing that “the reduction to sub-human status of non-Western others is a constitutive source of ignorance, falsity, and bad consciousness for the dominant subject who is responsible for their epistemic as well as social de-humanization” (28). Hurrah! Alan Liu’s call for cultural perspectives in digital humanities is finally being answered!

But where Braidotti’s eye pays careful attention to ethics and infrastructure, she neglects the machine object and its past. Her call for a nuanced understanding of knowledge production in the sciences is contradicted when she “emphasize[s] the normatively neutral structure of contemporary technologies: they are not endowed with intrinsic humanistic agency” (45). I’m a fan of neutral spaces, but I’m still fairly certain I haven’t encountered one. How can the computer, or my iPad, or even my air conditioning unit, exist outside of human influence? How do these technologies and their interfaces constitute “neutral spaces” given a human source code, human creator, and human user? I also took issue with her quick dismissal of retroactive analysis, eloquently summarized when she declares that “this is no time for nostalgic longings!” (45). I see what she’s getting at, I think–we shouldn’t get hung up on old-white-guy criticism–but her view seems to occlude subdisciplines like media archaeology. Maybe if Braidotti toured the MAL, she wouldn’t be so hasty to sweep the past and weak “nostalgia” underneath the rug.

That said, I really was in love with this week’s readings. I’ve had my fist in the air pumping out victory for a solid two days (shoulder getting sore). To carry on with the trend of postcolonial-savvy digital work, I’d like to share a project currently underway at the University of Maryland: MITH’s “Transforming the Afro-Caribbean World” (TAW). This is a work-in-progress, which makes the site valuable for its clear layout of the team’s efforts and hurdles as they assemble the final project. I love the portion about anticipated problems, technological equipment, and data processing. And the project itself promises to be incredibly wide-ranging, documenting the migratory movements of Afro-Caribbean laborers in the early 20th century. I can’t wait to see this evolving piece in its final form.

Gilmer Post 8: Souls, Sirens, and Sound

The New Materialism approach, while useful in its ability to push against human-centric scholarship, excludes the human entirely. I’m on board with the momentary suspension of the human bias, but I struggle to see how “human culture does not lose but rather wins” by silencing human voices in our approaches to media archaeology (72). I’m reminded (and like Erin, I’ll warn everyone that I’m basically going to slip back into narrative) of a moment in AMC’s Breaking Bad: Walter White, famed chemist and aspiring meth-dealer, breaks down the chemical composition of the human body on a chalkboard:

Hydrogen – 63%
Oxygen – 26%
Carbon – 9%
Nitrogen – 1.25%
Calcium – .25%
Iron – .00004%
Sodium – .04%
Phosphorus – .19%
This brings the total to 99.888042%, leaving .111958% unaccounted for. When his partner suggests the missing element must be the “soul,” Walt laughs uproariously and insists, “There’s nothing but chemistry here.”

This, for me, illustrates the fundamental one-sidedness of Ernst’s paradigm. Both Parikka and Ernst utilize Homer’s Sirens as an example of human bias—technology assists us in resisting their charms. But this metaphor fails to account for the human element embedded within our media technologies. Like Kittler, Ernst argues that “the act of communication in its physical distributing and effective channeling of signals stands at the core of media” (Parikka 69). I can agree with this summary, but I’m thinking back to Jockers’ “big data” project: any slight manipulation of code affected the project’s representation on a large scale. Moreover, the machine’s ability to analyze data is wholly informed by human programming, human language, and in many cases, human error. How can we view such projects as purely data-oriented? How can we say this signal “channeling” is a one-way street? I can’t help but feel that discussions of purity in the academy are dangerous at their core. Ernst’s desire to write out the human pollutant, i.e. Walter White’s soul, in favor of “pure data navigation” (68) is reminiscent of the search for the ideal or purely authoritative text (which I’m studying in Thora’s course): pointless and violent.

To illustrate the effects of the human bias, Ernst provides the following thought experiment: “imagine an early phonographic recording. Surely we acoustically hallucinate the scratching, the noise of the recording apparatus” (Ernst 69). The point he’s making is this: our ear is intrinsically biased before we even perceive sound due to our subjective preconception of said sound. Along these lines, he argues that sound recorded by a synthesizer represents liberty from the bias of the human ear. I wonder if anyone who’s ever seen a live concert would agree. I’m curious as to how this freedom from bias aligns with Ernst’s claim that “the event of the voice itself [comprises] the materiality of culture” (71). Does a recording function in the same way? (I’m thinking about Plato’s critique of writing as disembodiment—does Ernst’s prioritization of aural culture, i.e. the spoken word or ‘sound,’ strengthen Foucauldian hierarchies of descent?) This is a very relevant question for me, as I spent the last week or so recording various “sounds” of the MAL to recreate a functional soundscape. If that effort is meaningless, please—someone let me know before I spend another 3 hours doing it.

Since we’re focusing on media archaeology this week, I’d like to set up a quick thought experiment for our own beloved MAL. I attended the Media Archaeology Lab’s homecoming tour last week and just so happened to record the Edison phonograph as it was “scratching,” to use Ernst’s phrase. They call that fate, ladies and gentlemen. So I thought it would be fun (or cheesy, you know, whichever) to complete Ernst’s challenge. I’d like everyone to take approximately 5-10 seconds to imagine this sound—how loud is it? How quickly is the mechanism rotating? Is the record skipping?

Now let’s listen to the actual recording:

What were the differences? What preconceptions were deflated?

At this point, I’m sure you’re all confused. I just proved Ernst’s point after hating on him for three paragraphs. But the element of human error is, once again, inarguably present. The simple fact is that I couldn’t get the phonograph working! I didn’t wind it up long enough, fast enough, or move the needle appropriately. It took the magical hands of Aaron Angello and another very sweet lab worker to get the thing running properly. The point I’m making is this: there is no sound freed from human influence. Listen to the recording again—note that it gets slower and slower. This was because Aaron’s hands were physically on the machine, changing its pace.

Whether Ernst’s posthumanism represents freedom or imprisonment, it simply isn’t possible to cleanly separate the human from the machine at this stage. Nor do I anticipate such a clean-cut separation in the future. Ernst’s characterization of “human performativity” and “technological algorithmical operations” positions them as “two different regimes clash[ing]” (59), but doesn’t this reify Snow’s ‘two cultures’ on some level? New Materialism, as formulated by Ernst and Kittler, fails to acknowledge the human’s mutual role in orchestrating and ‘programming’ the machinic realm.

Gilmer Post 7: Implementing Infrastructure in Corporate America


In this post, I’d like to talk about General Motors–a story similar to the one Pickering presents on the GE-Lynn plant from the mid-1960s to the early 1970s. This period coincides with the height of General Motors’ productivity: “in the 1960s, GM sold over half the cars in the United States” (NPR). Obviously, this success didn’t last. I’m sure we’re all familiar with the massive 2009 bailouts, estimated at approximately $50 billion for GM alone (Cook, Reuters). So this post will focus on the interim period (the mid-80s to the present day), when GM sent the American auto industry down in flames and nearly took the entire US economy with it.

My approach to Pickering will largely focus on Chapter 5, “Technology: Numerically Controlled Machine Tools,” with structural supplement from Chapter 1. I’m also bringing in a podcast (old habits die hard) from NPR’s This American Life, which focuses on the GM-Toyota NUMMI plant–a miraculous site of collaborative production. But before I jump to the 80s, let’s quickly review GE’s Pilot Program, the focus of Pickering’s study, which was initiated in late 1968:

“GE management now expected the pilots effectively to act like traditional management consultants and, moreover, to implement their own recommendations, blurring their roles into the traditional roles of foremen, planners, programmers, quality controllers, and so on. […] Management, as it were, laid down the traditional reins of control.” (Pickering 163)

After mangling union relations, management decided to demolish traditional Taylorite hierarchy in the factory, allowing workers to define their own processes and functions. The lack of structure, which proved stimulating for some, eventually led to a leadership crisis. Management refused to give any direction, resulting in the dissolution of “definitional boundaries […] obtained before and after the Pilot Program” (173). In his introduction, Pickering correctly identifies that “the performative idiom has a phenomenological warrant” (8), and in this sense, the Pilot Program transformed the factory into a site of identity politics. Workers no longer had job titles or specified hours, leading to massive confusion and problems with morale. More importantly, workers and N/Cs came together to “constitute a composite human/nonhuman agent, a cyborg”–or a “sociocyborg,” if you’re a fan of Pickering’s neologisms (158-59). These dissolving definitions proved too much for the factory workers, and after years of struggle, GE formally repealed the program in 1975.

Now let’s fast-forward. It’s 1984, and Toyota offers GM an incredible deal**: a fully inclusive “training” session. At this point, GM was developing a reputation for putting out shoddy cars, while Toyota had a near-perfect production line. And GM jumped at the opportunity. Thus, the NUMMI (New United Motor Manufacturing Incorporated) plant was founded. Japanese workers (fifty of them!) flew to the US to teach American car-makers how to make cars.

The results of this experiment are both inspiring and troubling. Frank Langfitt, This American Life‘s auto correspondent, remarked that “the numbers coming out of the NUMMI plant were astonishing” (NPR). So in a way, the effort worked marvelously. The Toyota employees shared Pickering’s sentiments regarding “real-time understanding of practice” (3). Unlike American managers, the Toyota newcomers seemed eager to communicate with practitioners on the floor. The Japanese would “make suggestions for a different kind of tool that would be better for the job, or a different place for bolts and parts to sit that would be easier to reach,” illustrating the necessity of communication and ground-up teamwork (NPR). The workers were thrilled by this change in managerial attitude.

But in other ways, the methodology was difficult to enact, and the plant ran into unexpected issues. Physically, “the Americans [were] so much larger than the Japanese […] they waste a second or two more each time they get in and out of the vehicles they’re building, making them 10-15% less productive than their Asian counterparts” (NPR). The height disparity (an average of 5 inches, I’m told) represents only one of several uncontrollable factors: “Workers could only build cars as good as the parts they were given. At NUMMI, many of the parts came from Japan and were really good. At [other locations], it was totally different” (NPR). As GM attempted to implement the NUMMI system in other plants, communications between managers and unionized workers fell apart, leading to a myriad of strikes. Like the Pilot Program, NUMMI showed immense promise for restructuring an iron-wrought infrastructure. And like the Pilot Program, NUMMI was doomed to be swept under a rug. The plant formally closed in 2010, extinguishing a host of innovations and communicatory breakthroughs.

So here we are again, seemingly at a dead end. Both GE and GM failed to maintain the bridge between the social and technological realms, leading to a disheartening repeated narrative. Pickering’s call for action is bold, but I’m not sure I buy what he’s selling: he claims “it is both possible and necessary to escape from the representational idiom [in which] people and things tend to appear as shadows of themselves” (6). Is he suggesting that technology will allow us to access the Real? While we have witnessed this in individualized circumstances, the performative has failed to infiltrate the corporate world. Has Brand’s dreamworld fallen victim to corporate infrastructure at its heart?

In the face of adversity, I’m determined to remain optimistic: although the NUMMI project didn’t last, it points to the fact that we can achieve when we collaborate beyond corporate picket fences. Pickering is right when he says that “this is by no means the end of the story” (165): so let’s hope the future of this particular story is a happy one.

Check out the podcast for the full NUMMI story:

**There were reasons for this perplexing decision, which the podcast explains in detail.

Gilmer Post 6: All Work and No Play Makes Jill A Dull Girl

Reading The Media Lab: Inventing the Future at MIT (1987) was like walking through Willy Wonka’s chocolate factory, revealing a new dream-turned-reality at every turn of the page. Televisions that summarize the news for you when you get home? Typing in a Shakespeare play “and hav[ing] the computer act it out […] automatically” (112)? When scientists dream, they dream big. And I was amused by the accuracy of Brand’s predictions: he anticipates TiVo and its market adversaries, relating the concept of “narrowcasting” and its individualized, decommercialized viewing experience. He also argues that the future creation of DVDs (“high-definition feature-length film on a compact disk”) will lead to the “exit of the film-rental business” (80). Ding ding ding. Cue the rise of Netflix and the fall of Blockbuster.

But I’ve found myself unable to get past Brand’s disintegrating notion of “place.” In his discussions with Negroponte, Brand asks,

“When you’re on the road, who runs the Lab?” Three hours later I logged into the MIT system and found his reply: “The fact that I’m replying to you from Japan two hours after your question from California somewhat begs the question. The Lab doesn’t know I’m gone.” I later confirmed the fact when I worked there. Administratively, he isn’t gone when he travels, any more than most of the Lab people are gone when they log in from home at odd hours. E-mail evaporates the tyranny of place, and to a considerable degree, of time.” (24)

Boom! Our ability to communicate technologically has “evaporate[d] the tyranny of place [and] time,” allowing us to occupy numerous spaces and roles at once. But simultaneously, doesn’t this remove the necessity of a designated work site?

Melissa also brought up this issue in class last week. How do we justify digital humanities labs as necessary to our work? How do we justify the material when “power [is] shifting from the material to the immaterial world” (42)? At first, I resisted this idea due to the implication that DH is somehow “unworthy” relative to the science disciplines. Why should a media lab, as opposed to a physics or biochem lab, have to prove that its space is useful, productive, and foundational to the work conducted there? But as the week went by, I became more and more convinced that Melissa is right. I think I speak for everyone when I say that the MAL tour was incredibly fun and informative, but how do we translate that magic into the language of a grant proposal? The MAL could certainly make an argument for its value as an archival space, a museum of retrograde electronics–but does an interactive museum qualify as a work space? Several of us commented on the lack of actual work tables available in the lab (a necessary side-effect of its wide-ranging collection). So my question, echoing Melissa’s, is this: if we’re capable of completing DH projects at home on our laptops, how can we claim that the laboratory is a site of necessary work rather than superfluous play?

My answer to this question is that play itself is valuable and foundational to what we do–that play and work are woven together in a process of inseparable creative production. When I walked into the MAL, I experienced a sensation similar to what Brand experienced when he visited the Hennigan School: “I found the place so gleeful to be around that I went back a couple of more times later just for the pleasure of it” (121). This gleefulness is not only engaging on a personal and academic level, but leads to new discoveries in the materials and processes themselves. After struggling to play Pac-Man on an Atari, I complained to Erin that the controls were inverted. “Damn, I can’t play with inverted controls. I wish I could just go into Options and change it, like on an Xbox.” Erin silently reached over, picked up the joystick, and rotated it 180 degrees. Perhaps this is what Brand means when he says that laboratories are like “playpens of another level of understanding” (113).

In the spirit of play, I’ve attached a link to MIT’s Game Lab, which focuses on play as a productive methodology, arguing that gameplay “is the most positive response of the human spirit to a universe of uncertainty.” Check out their free beta game, “A Slower Speed of Light,” which allows players to interact with the laws of special relativity in a three-dimensional arena. As someone who has tried (and largely failed) to study relativity and spacetime, I couldn’t imagine a more creative way to make this abstract field hands-on.

Here’s a trailer for those of you who don’t have the time or inclination to download a video game:

Gilmer Post 5: “Going Native” with Latour

In the last class meeting, I mentioned that I do work in data entry/analysis for Faculty Affairs. Part of the work I’m conducting right now requires me to familiarize myself with the work of CU’s renowned scientists (essentially, I’m curating their CVs electronically). For the first time in my life, I’m sifting through hundreds of scientific publications a day from such disciplines as physics, biochem, medicine, meteorology, etc. Usually, this leaves me attempting to remember vocab terms from my high school biology class. What’s the difference between tRNA and mRNA? Doesn’t nephritis have something to do with kidneys…? To help inspire this alienating feeling in you fine folks, I’ll provide a link to a Khachatryan article (I typically review hundreds of publications by this man in a day’s work):

This article, thrillingly titled “Search for the standard model Higgs boson decaying to bottom quarks in pp collisions at √s = 7 TeV,” certainly qualifies as work outside my realm of expertise. (To quote Kel Mitchell from Good Burger, “I know some of these words!”) What’s further baffling is that the article lists something like one thousand authors. I can’t be precise about that number since I don’t have the time to sit down and count, but the authors and their 206 affiliations take up more space than the article itself.


The first time I read through one of these massive authorial lists, I tried to imagine the conditions of the article’s production. Laboratory Life claims that the lab’s “material environment very rarely receives mention” in the construction of published articles (69), and like Latour and Woolgar, I’m left wondering about how many “rats had been bled and beheaded, frogs had been flayed, chemicals consumed, time spent, careers had been made or broken” (88) in the production of such articles. As observers of the laboratory space and function, they describe “going native” to capture “daily encounters, working discussions, gestures, and a variety of unguarded behavior” (153)–almost imagining themselves as humanistic Jane Goodalls watching chimps in their natural habitat. And while their observations speak to many of my questions, others go unanswered. For instance, Latour’s studies prioritize a single lab and its inhabitants, many of whom are in constant daily communication. If you peruse my linked list of author affiliations, you’ll note a variety of laboratories and countries across the world. Do these people all work together, in one space? One thousand like-minded scientists, flying in from points all over the globe to discuss this not-so-groundbreaking article? Seems unlikely. For me, this raised a myriad of other questions. How many of these researchers have met? Communicated? Which of them are actually putting the article together? As Jaime mentioned to me, many scientists aren’t even notified when they are cited in publications like these. Would it be unreasonable to ask, therefore, which of the cited authors have even read the final product?

Latour and Woolgar describe the laboratory as a “system of literary inscription” solely focused on “the continual generation of a variety of documents” (105, 151), supporting this argument with a breakdown of laboratory finances. They calculate that “the cost of producing a paper was $60,000 in 1975 and $30,000 in 1976. Clearly, papers were an expensive commodity” (73). Their notion of “papers” as financial commodities/objects of production certainly aligns with my own understanding of scholarly articles, and provides an explanation for scientists’ speedy rate of publication. But I feel that scientific authorship may have shifted since the release of Laboratory Life (1979) in ways that subvert the notion of a laboratory as a shared site of creation.

Gilmer Post 4: Neutral Space (And Other Figments of My Imagination)

Since their conception, the digital humanities have had a fickle relationship with the sciences. Scholarship ranges from wholehearted endorsement to scathing denunciation of the scientific model, fluctuating in a pendulumic rhythm: while our first week of readings asserted the importance of bridging the “two cultures,” this week’s scholars seem very hesitant to jump on the scientific bandwagon. In “The Digital Humanities as a Laboratory,” Amy Earhart explains that “the digital humanities lab is primarily imagined as science lab-like,” but rarely functions as such. Quoting Unsworth, Earhart argues that “our emulation may not actually bear that much resemblance to the reality of what goes on in science” (393). DH has essentially transformed into a knock-off handbag. This holds true for Stephen Ramsay, who professes an “obsession with building and making,” but ironically asserts that he hasn’t “really built or made anything” in his time as a digital humanist. What he does instead “is philosophize […] about digital humanities” (Ramsay, italics added); depressingly, any active roles typically relegate DHers to technical coding positions. And what an awful word he uses: he isn’t a doer, but a philosophizer. Digital humanities functions not as a frontier of action, but as another great fat void of sitting around and thinking. Way to go, Ramsay: undermine the foundation of our entire class.

In light of these accusations of charlatanism, I find Kirschenbaum’s article very convincing: “When a federal funding agency flies the flag of the digital humanities, one is incentivized to brand their work as digital humanities” (10). Academic trends tend to go where the money goes, and at this current historical moment, the money is in the tech industry. As Earhart points out, there “remains a deep suspicion of bringing a science model to humanities work” (394). Are we, as DHers, simply wolves in sheep’s clothing? (Alpacas in sheep’s clothing? You get the point.)

Svensson’s “humanistiscope” emerges as a potentially useful paradigm, but I hesitate to conceive of DH studies on a humanistic foundation. Ramsay declares that DH should “break with the past”–a very different sentiment from Svensson, who advocates for a balance between the technical and humanist disciplines. I agree with Svensson that a “multiplex” methodology is necessary, but I question whether or not his model can meet the requirements of a “neutral space.” DH currently lacks the tools to articulate and construct an interdisciplinary infrastructure. Every time I try to mentally build this space, I get a very clear image of the border between North and South Korea: a highly guarded, minuscule strip of land that no one can access.

Not such a pleasant thought, is it? Maybe I’m being too negative about the model–but if the point of DH is to create scholarship that is unrecognizable as traditional scholarship, then don’t we need an infrastructure that is equally unrecognizable as infrastructure?

To continue this week’s theme of constructing (and tearing down) disciplinary walls, I’ve attached a link to the Duke BorderWork(s) Lab. This lab focused on national, communicative, and historical border-making, providing an amalgamation of border-work projects in a collaborative digital atmosphere. However, as you’ve probably noticed, I used the past tense–these guys shut down in 2014, it seems (thankfully past projects are still available on the site). I’m very interested in everyone’s opinions of this project, as I personally question whether or not this work qualifies as progressive. I’m a huge fan of anything digital, don’t get me wrong, but within the parameters of this specific lab, the digital element is fairly minimized. That’s not to say that the lab as a whole is conservative, and some of you may disagree with me after you take a look at the lab’s other projects. But when a project is advertised as being within the DH, I don’t expect to find a printed monograph among them. Is this a conservative approach to DH? Or is this what a “neutral space” looks like–an equal inclusion of text and digital? Personally, I doubt whether such a space is conceivable. Hopefully labs will prove me wrong!

Gilmer Post 3: “Alt-Ac”: Concretizing Plan B

Although this week’s readings cover a wide array of material, I’d like to focus on Julia Flanders’ “Time, Labor, and ‘Alternate Careers’ in Digital Humanities Knowledge Work”–an essay which tackles some of the questions I’ve been encountering personally this semester. I was struck by an epiphany of hers that mirrored my own:

“[F]aculty positions make up only about 30 percent of all full-time employees at Brown, whereas 45 percent are some other kind of professional: technical, administrative, legal, executive, and managerial. Thus on the basis of pure statistics (and even allowing for my apparent level of education and socioeconomic positioning), I am much more likely to be anything but a faculty member.”

It’s all summed up in that last sentence: I could be a genius, or the hardest-working person in the world, but the statistics simply aren’t on my side. And should my studies continue, I can expect more discouraging figures to appear on the horizon. As a graduate teaching fellow, Flanders reports, her “pretax income for the academic year was $12,500.” Granted, that was in 1991, so let’s account for inflation: in 2015 dollars, the number rises to $21,871.88. Embarrassingly, my first thought upon seeing this figure was “Hey, that’s pretty good!” This brings up another question, one raised often by academics: how willing are we to undercut ourselves to fulfill institutional expectations? Flanders notes that a common side effect of academic life is an “erosion of [the] boundary between the professional and personal space,” a symptom I’m certain we’ve all experienced. So where do we draw the line? How much time, money, and personal sacrifice can we invest before the balance tips?
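(For the curious: the inflation math above is just a ratio of consumer price indices. Here’s a quick sketch of it in Python, using approximate CPI-U annual averages of my own choosing–1991 ≈ 136.2, 2015 ≈ 237.0–so the result lands near, but not exactly on, the $21,871.88 figure, which presumably came from a specific online calculator.)

```python
# Inflation adjustment as a CPI ratio.
# The CPI values below are approximate annual averages (my assumption),
# not authoritative figures.
CPI_1991 = 136.2
CPI_2015 = 237.0

def adjust_for_inflation(amount: float, cpi_then: float, cpi_now: float) -> float:
    """Convert a historical dollar amount into later-year dollars."""
    return amount * (cpi_now / cpi_then)

stipend_1991 = 12_500
stipend_2015 = adjust_for_inflation(stipend_1991, CPI_1991, CPI_2015)
print(f"${stipend_2015:,.2f}")  # roughly $21,750 in 2015 dollars
```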

For me, these questions are not defeating, but inspiring. I need to refocus, broaden my research, and rethink the term “Plan B.” Flanders’ career trajectory, while not traditional by any means, “mediat[es] usefully between purely technical information on the one hand (which did not address her conceptual questions) and purely philosophical information on the other (which failed to address the practicalities of typesetting and work flow).” Ideally, I’d love to do the same–it sounds so nice in writing–but I’m thinking back to the obsession with pragmatics that saturates academic scholarship. After all, as of 2012, Flanders herself had been an adjunct for seven years. If that’s the idyllic future of interdisciplinary study, then count me out. And really, this is the concept I’m getting at: from a practical standpoint, where are these mythical interdisciplinary jobs, how much do they pay, and what do I need on my CV to land them? Maybe I’ll take Flanders’ advice and embrace “a truly alternative career: becoming a goat farmer.”

(By the way–I shared this on FB, but in case we aren’t friends yet, I’ll repost here. Take a look at the Alt-Ac careers article below, courtesy of Dr. Emerson.)


Flanders, Julia. “Time, Labor, and ‘Alternate Careers’ in Digital Humanities Knowledge Work.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: U of Minnesota P, 2012. Print.

Gilmer Post 2: Humanities 3.0

When Kathleen Fitzpatrick casually described her “all-digital dossier” (196), I was completely arrested. For those of you who haven’t had the glorious opportunity to build a dossier, it is an arduous, old-fashioned process—at least at CU. I work on the English department staff here, which means I’ve been in charge of putting together faculty dossiers for tenure evaluations. The process is extraordinarily frustrating: not only was I required to print every document, single-sided, but I had to do it twice (naturally, we needed a backup copy). The scanning took days, and the end result was numerous binders per person, each with more than 500 pages of material. Faculty were required to read the material but, for security purposes, were not allowed to remove the binders from the main office. Imagine being trapped in a dusty, poorly lit room for hours as you flip through a colleague’s academic career. The binder is heavy. The print is small. Also, there are five of them. Now imagine trying to assess digital materials in this environment. Fitzpatrick correctly identifies the resulting “career anxiety” felt by junior media scholars, who are “reluctant to challenge long-standing systems” for fear of rejection in a dying market (170). These words hit home for me as I waver between traditional scholarship and a new media project this semester. After all, if evaluation takes place in an environment like the one I’ve just described, it’s safe to assume that the project will be totally castrated by the time it ends up in a printed dossier.

I can’t help but agree with Nowviskie when she says “we come at these conversations backward” (169). In each of this week’s readings, critics use words like “practical” and “pragmatic,” insisting that we lay a concrete foundation for the evaluation of digital materials. But equally, each critic acknowledges the impossibility of such a task: Svensson argues that the disciplining of digital humanities results in a pathetically “patchy map”: borders that constrain and definitions that fail to satisfy. Rather than becoming attached to an imaginary end product, he goes on to explain, scholars should embrace “the mapping activity itself” (181). Our attempts to define the digital humanities ironically undermine the creation of a bounded discipline. As Kirschenbaum notes, the process of exploring the ephemeral boundaries of this field “underscore[s] the limited and arbitrary nature of any medial ideology” (58). However, in our failed attempts to establish the borders of digital academia, we have discovered new methods of knowledge transfer. The invention of hypertext necessitates the creation of a new model of reading, one that breaks away from the individualization of today’s internalized reading process. As reading becomes more interactive, collaborative, and sensory, we push further away from the dead text Plato originally reviled.

I’ve been looking into alternative reading platforms for some time, and this week, I’m particularly excited about UT’s Digital Writing and Research Lab (DWRL), which recently released its “Excitable Media” initiative. This interface focuses on the interactions of social media and academic scholarship. It also helped me answer a question we posed in our last class (“Will Twitter ever be on our CVs?”). Rather than positioning social media as “antithetical to critical thinking,” the DWRL attempts to explore “mainstream discourse” as a method of rhetorical reflection. I’ve attached my favorite essay from the series, Allie Thayer’s “The Q[WERTY] Question,” which features videos, screenshots, photographs, and all sorts of old social media posts from the author herself. I’d encourage everybody to check out the numerous other authors and essays currently posted on this interface, as many seem to be in conversation with each other!

“Excitable Media” main page:

Thayer’s essay:



Fitzpatrick, Kathleen. “Peer Review, Judgment, and Reading.” Profession 2011.1 (2011). Web.

Kirschenbaum, Matthew G. Mechanisms: New Media and the Forensic Imagination. Cambridge, MA: MIT P, 2008. Print.

Nowviskie, Bethany. “Where Credit is Due: Preconditions for the Evaluation of Collaborative Digital Scholarship.” Profession 2011.1 (2011). Web.

Svensson, Patrik. “The Landscape of Digital Humanities.” Digital Humanities Quarterly 4.1 (2010): 1-36. Web.

Thayer, Allie. “The Q[WERTY] Question.” Excitable Media. University of Texas Digital Writing and Research Lab, 2015. Web. 12 Sept. 2015.

Gilmer Post 1: “Applied” Humanities

While reading C.P. Snow’s “The Two Cultures” (1959), I found myself laughing along: as someone who eschews Kindles for paperbound books, I certainly qualify as one of Snow’s “natural Luddites” (23). I also winced when he identified the source of disgruntlement felt by humanities scholars: “young scientists know […] they’ll get a comfortable job, while their contemporaries and counterparts in English and History will be lucky to earn 60 per cent as much.” Ouch! Snow’s got me pinned. I thought back to a recent conversation I’d had with my friend, Davis, a medical engineering graduate student at CU. Davis laughs when I talk about maintaining a 4.0, claiming that he “scrapes by with a B average.” Imagine my indignation when he graduated last winter, landing a high-paying job in his field within days.

At some point, I mentioned to a (non-grad) friend that this disparity was extremely unfair. Why should a B-average scientist earn more, and enjoy more job security, than a humanities scholar with a near-perfect record? The response was quick and defeating: “Well, Jill, Davis designs artificial heart valves. You read books.” Ahh, yes. I’d heard this before. The old debate, and one that Snow also identifies: applied versus pure science. What, after all, do the humanities produce? The sciences are members of the machine, active participants in capitalist production, but the humanities hold themselves staunchly apart: “We [literary scholars] prided ourselves that the science we were doing could not […] have any practical use. The more firmly one could make that claim, the more superior one felt” (34). Snow has, in my opinion, recognized the noose strangling literary scholarship. Our insistence on preserving Art for Art’s Sake, our determination to rage against the Machine, have prevented our field from progressing effectively into the modern world.

I’m sure we’re all familiar with articles like this one, which warn incoming college students that English degrees are entirely useless: “As a major, this is the road more traveled by, with not nearly enough writing, teaching, publishing or journalism jobs for all the students who graduate with a yen for the written word. It doesn’t help that many media fields have been upended by the digital revolution.” There it is! Instead of embracing the digital revolution, we have been “upended,” thrown totally for a loop. We’re dismissed as a field for those with a mere “yen,” and herein lies the crux of the problem: notions of artistic purity only suffocate an already struggling academic study. To keep this field alive and breathing, we must find a way to bridge the digital gulf and supersede assumptions about our outdated intellectualism.


Newman, Rick. “The 10 Worst Majors for Finding a Good Job.” Yahoo Finance. Yahoo!, 18 June 2013. Web. 30 Aug. 2015.

Snow, C. P. The Two Cultures and the Scientific Revolution. Cambridge: Cambridge UP, 1959. Print.