
Archibald Post 10: Sustainability Practices in Maker/Hacker Culture

Garnet Hertz and Jussi Parikka’s article on “Zombie Media” discusses a host of significant issues relevant to our course concerns: for example, adhering to the notion of media as Foucauldian archive, the twin roles of ‘artist-archaeologist’ and ‘t(h)inker,’ and the metaphor of media archaeology as circuit bending—an inaccurate remaking that is a process and a becoming in itself. Most significantly, though, I was struck by their claim that the field of media archaeology now requires “a more thorough non-human view” due to the contemporary global ecological crisis (429); this is another of our readings that echoes Andrew Pickering’s “mangle” of human and machinic agents. Hertz and Parikka claim “that media never dies: it decays, rots, reforms, remixes and gets historicized, reinterpreted and collected,” a cycle that drives their research regarding the reuse of electronic materials. Their investigation into planned obsolescence—the argument that many products are designed with an intentionally limited useful life, the merits of which remain contentious even when leveled at corporate giants such as Apple—makes me question how often we reuse ‘dead media’ materials, particularly in media labs. Jentery Sayers asks helpful questions with regard to the University of Victoria’s Maker Lab: “what materials [should] we use for fabrication, where [do] those materials ultimately go, and how should we think proactively about waste and repurposing?” (http://maker.uvic.ca/remaking/). Although Sayers interrogates the notion of material waste in the specific setting of a maker lab, the principles apply across the board for maker and hacker culture – whether a makerspace, hackerspace, or media lab, all seem connected by an underlying objective to empower communities to learn how things work and reflect on the processes and purposes of media (old and new) in contemporary society.

I began trawling the web for maker and hacker cultures employing sustainable practices and came across Access Space, a UK community media lab that was built by volunteers from locally recycled computers that run free, open source software (http://access-space.org/). Its web page asserts that

At Access Space, people interested in art, design, computers, recycling, music, electronics, photography and more meet like-minded people, share and develop skills and work on creative, enterprising and technical projects. […] Access Space is an inclusive environment. As well as working with artists, academics, creative technologists, programmers, other professionals and students, 50% of  the participation in Access Space’s activities are from people in danger of exclusion and on the margins of society, including: people with disabilities, homeless people, ex-offenders, asylum seekers, refugees and people with mental health issues. Through Refab Space, Access Space engages with self starters and entrepreneurs as well. One of the strengths of Access Space is that it brings people from different backgrounds together. (“about”)

As an organization, Access Space is therefore extremely self-aware about the communities it embraces and the values that it actively fosters (incidentally, Access Space and the University of Sheffield have just jointly published a great study on “Barriers to women’s involvement in hackspaces and makerspaces” found at http://access-space.org/wp-content/uploads/2015/10/Barriers-to-womens-involvement-in-hackspaces-and-makerspaces.pdf). In 2012 Access Space also expanded into a larger lab called ReFab Space which is intended to act as a more localized and better-equipped space for crafting and building. Concerning sustainability, Access Space and ReFab Space seem to have two major goals in mind: to enable communities to use their electronic media for longer by fortifying and repairing technologies, and to reuse old electronic materials to build new things.


Access Space, with the recycled computers built by volunteers visible in the background


ReFab Space

Matt Ratto’s article neatly ties together the relationship between the practical, social, and theoretical aspects of the type of work that can take place in environments such as the Access and ReFab spaces. He promotes ‘critical making’ as a blending of the hobbyist and scholar, a way to connect tinkering and theorizing in multiple processes; for example, material prototyping can extend individual reflection and serve as a focus for community discussion and problem-solving. In my own work analyzing digital literature, I’ve been concerned with various discursive approaches to reconnecting the material and conceptual, but I’m not too flash on the actual tinkering and making side of things. I’m looking forward to applying to join CU Boulder’s hackerspace, the Blow Things Up Lab (http://www.btulab.com/), to start implementing more material processes in future scholarly projects. For a non-techie like me, it’s the perfect chance to explore the affordances of different media that I tend to use with my blinkers on in everyday life, as well as tinker around with the small cardboard box of “dead media” that currently sits in one of my bedroom drawers. Explosions away!


Works Cited:

Hertz, Garnet and Jussi Parikka. “Zombie Media: Circuit Bending Media Archaeology into an Art Method.” Leonardo 45.5 (2012): 424-30. Print.

Ratto, Matt. “Critical Making: Conceptual and Material Studies in Technology and Social Life.” The Information Society: An International Journal 27.4 (2011): 252-60. Print.


Archibald Post 9: The Posthuman Schizophrenic

Braidotti’s The Posthuman is the bee’s knees; I absolutely loved it! She interrogates multiple forms of posthuman subjectivity, with her four chapters examining ‘life beyond the self,’ ‘life beyond the species,’ ‘life beyond death,’ and ‘life beyond theory.’ The impetus behind her book is her conviction that the capitalistic tendencies of our global and technologically mediated society are wholly unsustainable. I’m particularly impressed by her use of the posthuman as an affirmative political tool; she not only analyzes different elements of the posthuman, but offers a way forward for humanity at large. Braidotti is careful to point out that the posthuman doesn’t mean disconnecting from humanity, and intriguingly admits that some humanist ideals fit into her notion of a productive posthuman subjectivity.

I want to connect her discussion and methodology to theoretical material from Deleuze and Guattari. Braidotti explicitly attributes her take on the posthuman to Deleuzian and Spinozian influences, and the former’s impact is evidenced through her continued look to the future of humanity, the ‘becoming’ of something other. A fierce anti-humanist, Braidotti wants to forgo the inherent self-centeredness of humanism’s Vitruvian man in order to celebrate rather than denigrate all forms of otherness. She discusses different theories of ‘becoming-other,’ ‘becoming-animal,’ ‘becoming-earth,’ ‘becoming-machine,’ and ‘becoming-imperceptible;’ all of these are opportunities for ‘becoming-posthuman’ in an affirmative and multifarious manner.

This is reminiscent of Deleuze and Guattari’s Anti-Oedipus: Capitalism and Schizophrenia, a text in which they celebrate the idea of the schizophrenic (in anti-institutional rather than clinical terms) as a revolutionary figure who embraces a politics of desire to free humanity from its current belief systems. The ideal schizophrenic is described as “a body without organs” that dissolves the dichotomous distinctions between production and consumption, and man and nature (6). This is the process of ‘becoming-machine’ that Braidotti focuses on in her second chapter. Deleuze and Guattari assert that everything in the world is a machine, and the schizophrenic is able to connect into each and every one in a type of transcendental self-consciousness. This totality of connections is brought about by the continuity of flowing connections and the simultaneous fragmentation of objects. Deleuze and Guattari label the schizophrenic as “Homo natura,” the natural mind that is freed from social hierarchies through pure desiring-production (5). On an even larger scale, they also define the schizophrenic as “Homo historia,” a nebulous being without a body that is forever de-centered and identifies with the entirety of all past and present things (21). This connective synthesis and subjectivity seem to match Braidotti’s encouragement of zoe, an all-inclusive world view whereby life embraces the human and non-human; Pickering’s ‘mangle’ is hereby championed as not only integral to knowledge construction, but as the nature of being itself. For Braidotti, Deleuze, and Guattari, human subjectivity becomes a privileged form of connection with others.

I was also struck throughout The Posthuman by Deleuze and Guattari’s notion of the rhizome, a metaphor based on the root networks of rhizomatous organisms such as the aspen tree, water lilies, and even the humble potato (compared to the hierarchical growth of a ‘standard’ tree).

Cities, trees, and rhizomes (Kevin Murray, 2013)

Deleuze and Guattari describe this deterritorialized structure in their book A Thousand Plateaus: “a rhizome has no beginning or end; it is always in the middle, between things, interbeing, intermezzo” (25). They undermine traditional dichotomies of subject and object, and signifier and signified, suggesting that “the rhizome is altogether different, a map and not a tracing […] The map is open and connectable in all of its dimensions […] Any point of a rhizome can be connected to anything other, and must be” (7, 12, 15). Again, the mapping of the productive connections of the rhizome reminds me of the zoe; this is the world that Braidotti’s and Deleuze and Guattari’s posthuman schizophrenic might connect into. Similarly, Braidotti’s approach to the posthuman as a concept can be described as rhizomatous; she weaves together a myriad of critical material from countless theorists, and more importantly she offers multiple channels of accessing and embracing the posthuman on differing epistemological and ontological levels. Her text becomes an assemblage of interconnected visions whereby she makes a call to arms for social change. Both the schizophrenic and the rhizome are therefore anti-institutional concepts that suggest new codes of signification founded on decentralization and interconnection. I’m going to briefly bring up these ideas tomorrow in the presentation Laurel and I are giving, and I’d be interested in hearing whether they are helpful tools in understanding Braidotti’s approach to and vision of the posthuman.

This week looking for lab spaces that are explicitly dealing with themes of the posthuman proved a little unfruitful, but University College London’s Interactive Architecture Lab undertook a project that fits the bill. The Lab “is a multi-disciplinary studio interested in the Behaviour and Interaction of Things, Environments and their Inhabitants. We design, build and experiment with Responsive Environments, Robotics and Kinetic Structures, Multi-Sensory Interfaces, Wearable Computing and Prosthetics, the Internet of Things, Performance and Choreography” (http://www.interactivearchitecture.org/aboutus). Their Polymelia Project looks at enhancing the human body through extreme forms of prosthesis (http://www.interactivearchitecture.org/lab-projects/polymelia-the-body-as-an-evolutionary-machine). The project

considers the human body as an assemblage; a collection of heterogeneous components, a material-informational entity whose boundaries undergo continuous construction and reconstruction. We think of the body as the original prosthesis we all learn to manipulate, so that any replacement or extension becomes part of a continuing process of upgrading the human entity. The Polymelia Suit (PolyEyes, PolyLimbs, Exoskeleton, Sensing Suit) suggests a new communication language for the future of prosthesis and of humanity.

This is the type of posthuman technological advancement that Braidotti doesn’t really take a stand on. She acknowledges that society tends to villainize or worship technological advancements, but for her, it is all about the positive uses we can harness from the manifold of technologies constantly in development. The only stand she makes on this issue is her loathing of contemporary death technologies. And although the Polymelia Suit looks something like a futuristic version of Halo assault armor, its function is to communicate stimuli and aid those with disabilities, rather than weaponize. Phew.


Polymelia suit design


Polymelia suit prototype

Works Cited:

Braidotti, Rosi. The Posthuman. Cambridge: Polity Press, 2013. Print.

Deleuze, Gilles and Felix Guattari. Anti-Oedipus: Capitalism and Schizophrenia (1st French edition, 1972). Trans. Robert Hurley, Mark Seem, and Helen R. Lane. Minneapolis: University of Minnesota Press, 1983. Print.

Deleuze, Gilles and Félix Guattari. A Thousand Plateaus. Trans. Brian Massumi. Minneapolis: University of Minnesota Press, 1987.  Print.

Archibald Post 8: Kittler and posthumanism

In his chapter “Media Theory and New Materialism” Jussi Parikka undertakes an interrogation of German and Anglo-American media studies, and I was especially convinced by his readings of Friedrich Kittler’s work. Kittler falls under the broad (and sometimes incorrectly labelled) group of German media theorists who merge close material readings of technologies with critical theory. Kittler particularly draws on Lacan, Foucault, and McLuhan in his focus on media hardware. His approach privileges the processes of media storage and transmission over the social and representative, in a tightly interwoven framework of arts, science, and technology. Like many modern media theorists, he recognizes the increasing imagined immateriality of media in the recent period of digital innovation.

I’m particularly interested in Kittler’s reading of the posthuman: Parikka writes of Kittler’s theoretical model that

We do not speak language, but language speaks us, and we have to participate in such systems of language, which are not of our own making. But language in the age of technical media is not just natural language; it is the new technological and physical regimes introduced by media, such as the typewriter, and later computer software languages, which should methodologically be seen in a similar way – they impose new regimes of sensation and use to which we have to accommodate ourselves in order to be functioning subjects. We are secondary to such systems.  (70)

He then goes on to discuss the type of power that is now inscribed in such systems and therefore over our bodies. Here Kittler advances a world view by which humans are absorbed into information systems, who are “secondary to such systems.” The human psychologically and physiologically becomes a conceptual reflection of the media systems themselves; he is describing a type of machinic agency whereby the human is programmed into specific kinds of behaviors. He always refers to the human as the ‘so-called human being,’ which again makes me question where the posthuman begins if we read the posthuman as being unable to separate the human and machine. During our last class, I suggested that posthumanism could be evidenced all the way back to humanity’s first moment of bipedalism. Sure, this is a controversial and perhaps totally inaccurate claim, but it plays into the way that I see the technological and human as inherently intertwined: technology has always acted as a kind of social prosthesis, and the human thinks through and alongside media. Our class proposed that the advent of the digital has merely highlighted this reciprocal relationship in a heightened manner, and this is reflected by the new currency of fields such as game studies, platform studies, and software studies. Kittler himself demonstrates that the posthuman can be applied much earlier than digital media’s explosion by analyzing painting and the typewriter, and Parikka also discusses Bernhard Siegert’s analysis on the postal system as a media network that manifests the posthuman.

Kittler additionally quotes a line from Nietzsche—that “Our writing tools are also working on our thoughts”—which plays into my view of the posthuman (72). Media drives epistemic changes, as it impacts cognition and corporeality—in this case the human body becomes an inscription surface for the act of writing. At the same time, it is human thought that has created this writing tool, and although the tool shapes human thought through its material affordances, we can still use the writing tool for a host of differentiated purposes. Therefore although we live in a seemingly cyborg society, technology can be regarded as having always driven our social organization, and our individual psyches and bodies.

As others have noted in their posts for this week, Kittler’s methodology seems too extreme in taking the human out of media studies, but Parikka doesn’t seem to endorse this; he acknowledges the hardware focus of new media theories “in addition to social contexts,” and at the conclusion of his chapter he recognizes that “a variety of media studies methodologies are now insisting that we should not only engage in textual analyses of media culture, but be prepared to tackle what goes on inside the machine as well” (65, 89). For Parikka, the ultimate question seems to boil down to “how to rethink familiar media technologies in new material constellations and in ways that lead to new modes of using, consuming and institutionalizing media,” a formation that readily applies to the nature of DH as a process or methodology (64).

Connecting Kittler’s theoretical work to lab spaces,[1] CU Boulder’s Media Archaeology Lab (the MAL) seems the perfect space to explore such relationships between the human and machine (http://mediaarchaeologylab.com/). The MAL houses a host of dated inscription technologies, ranging from projectors and typewriters to personal computers and gaming consoles (and even a few ‘cutting edge’ devices such as its 3D printer). Unlike other media labs such as MIT’s “The Trope Tank” and Washington State University Vancouver’s “Electronic Literature Lab,” the MAL has open hours and doesn’t require any type of supervision or training to access and use its collection, even allowing students to dismantle the hundreds of old technological parts and pieces to examine their material workings. During our class visit, we started to interrogate the implications of machinic agency on notions of space and embodiment. Furthermore, the MAL’s motto that “the past must be lived so that the present can be seen” directly alludes to the notions of the pre-digital and the imagined invisibility/immateriality that Kittler finds so important in his reading of postmodernism.

Works Cited:

Parikka, Jussi. “Media Theory and New Materialism.” What Is Media Archaeology? Cambridge: Polity Press, 2012. 63-89. Print.

[1] On the topic of lab spaces, I wanted to briefly note a great line from Parikka where he writes (of Hugo Münsterberg’s work) that “Cinema is a laboratory of sorts for manipulations of states of mind and brain” (73). The same can certainly be said of writing as a laboratory of the mind, which supports our claim that DH should not be limited in scope to mere ‘doing,’ but include the ‘thinking’ of traditional humanities scholarship.

Archibald Post 7: The Performance of Knowledge Production, and Speculative Computing

In The Mangle of Practice: Time, Agency, and Science, Andrew Pickering’s approach is largely aligned with Latour and Woolgar’s from Laboratory Life: he builds on the notion that science is a construction of knowledge, with multiple factors influencing the crafting of each fact. Pickering argues that scientific knowledge results from the ‘mangling’ of many factors—conceptual, material, technical, and social—so that scientific knowledge production is again highlighted as a culture. Facts don’t exist “out there,” waiting for us to discover them; we actively create and craft scientific knowledge. However, Pickering’s approach is unique because he highlights the performative nature of scientific knowledge production. More strongly than Latour and Woolgar, he speaks to a posthuman view of science in which the human and the machine can no longer be separated. Pickering recognizes the

intertwining and reciprocal interdefinition of human and material agency. The performative idiom that [he seeks] to develop thus subverts the black-and-white distinctions of humanism/antihumanism and moves into a posthumanist space, a space in which the human actors are still there but now inextricably entangled with the nonhuman, no longer at the center of the action and calling the shots.  (26)

Claiming that both human and machinic agency drive knowledge production, Pickering only differentiates the two through the idea of intention that he attributes to the former. This “performative image of science, in which science is regarded as a field of powers, capacities, and performances, situated in machinic captures of material agency” is reminiscent of Kirschenbaum’s intent to interrogate the raw material processes that govern knowledge inscription (7).

Ontologically, then, Pickering seems to offer a Kantian world view of the human whereby our reality is just one representation of space and time; a violent yet wholly necessary singular perspective of space and time that allows the human experience. For Pickering, the human and machinic are agents of potentialities, and science is not the evidence of truth, but always a representation of one possible reality. In his chapter on “Facts,” he suggests

that we should see empirical scientific knowledge as constituted and brought into relation with theory via representational chains linking multiple layers of conceptual culture, terminating in the heterogeneous realm of captures and framings of material agency, and sustaining and sustained by another heterogeneous realm, that of disciplined human practices and performances.  (111)

The phrase “representational chains linking multiple layers of conceptual cultures” particularly resonates, as it speaks to the way that science is constantly building upon previously contested and culturally informed data that becomes accepted as scientific fact. Pickering also claims that knowledge is historically located, due to the time-specific constraints and local idiosyncrasies of each moment of knowledge production.

I want to link Pickering’s work to the University of Virginia’s SpecLab (the Speculative Computing Laboratory, cofounded by Johanna Drucker and Bethany Nowviskie), which during its run from 2000 to 2008 focused on experimental projects with uncertain outcomes. Nowviskie perceives speculative computing as deliberately acknowledging and adopting inefficiency (which we could equate with Pickering’s “mangle”) into its methodology. She writes that

It involves spinning up lots of processes, licensing the machine to perform lots of calculations, pretty much “on spec”—calculations that, importantly, have not been specifically requested by the user nor directly, precisely implied as a need, by the conditions of the system she’s operating […] speculative computing, if taken as a basic spirit or an ethos and a kind of practice, sets up the conditions for actionable creativity, and for responsive engagement with multiple possible futures.  (“Speculative Computing”)

The SpecLab engaged in a host of projects, one of which was titled “Subjective Meteorology: A System of Mapping Personal Weather.” The system used graphics to represent subjective experience and was “created entirely as an act of aesthetic provocation and a work of imagination” (Drucker, 99). Drucker helmed the project and began with ten old drawings of ‘weather maps’ that she had created in the 1970s to represent temporally-specific emotional states. The intuitive nature of her drawings drove subsequent project direction, and digital versions of the system were created, although there is not an accessible working model yet.


Drucker ‘personal weather map’ drawing

When discussing the entirety of the Lab’s projects, Drucker finds that “the ultimate lesson of SpecLab is that all forms of interpretation and scholarship are design problems premised on models of knowledge that make assumptions about what their object of study is” and adds that “in our current working lives, we are all digital humanists, and the task of modeling knowledge is part of our daily business” (36). Success! This feeds into our class recognition that it is very likely that the ‘digital’ will be dropped from DH, for the humanities as a whole would benefit from the explicit recognition that digital scholarship and methodologies have become an inherent part of academia.

Works Cited:

Drucker, Johanna. SpecLab: Digital Aesthetics and Projects in Speculative Computing. Chicago: University of Chicago Press, 2009. Print.

Nowviskie, Bethany. “Speculative Computing & the Centers to Come.” http://nowviskie.org/2014/speculative-computing/#more-2557. 15 November 2014. Web.

Pickering, Andrew. The Mangle of Practice: Time, Agency, and Science. Chicago: University of Chicago Press, 1995. Print.

Archibald Post 6: Inventing the Future through Anti-Disciplinary Research

The Media Lab’s epigraph recognizes the First Amendment to the U.S. Constitution as “Elegant code by witty programmers.” Yes! From the outset, Stewart Brand affirms legislators as coders. Take that SR! In fact, despite its 1987 publication, The Media Lab: Inventing the Future at MIT is an incredibly relevant text that speaks to many of the issues we are discussing in class. I’m particularly interested in the Lab’s establishing mission, and want to compare this to its present day philosophies.

Inaugurated in 1985, the MIT Media Lab was the result of years of proposals and funding acquisition by Nicholas Negroponte and Jerome Wiesner. During his time as a visiting scientist in 1986, Brand notices that the Media Lab isn’t concerned with the “publish or perish” mantra that guides academics in traditional science laboratories; instead, the Lab’s motto is “demo or die” (4). This emphasis on making and experimenting certainly feeds into the digital humanities’ ‘building’ and ‘doing’ concerns. Furthermore, the formal policy at the Lab is to “invent the future” through its work with electronic communication technologies; Brand expands on this by claiming that “the binding principle at the Media Lab, the primary theme, is conversation, with computers and through computers” (7). From its establishing moments, then, the Lab was specifically aware of the increasingly symbiotic relationship between the human and the digital. Interestingly, Brand identifies that his book is really examining two labs that shape one another—the physical and the networked—which plays into our conversations about DH labs’ negotiations of material and virtual spaces.

Brand details a host of projects and products that the Media Lab was working on, and I found two concepts arising from these discussions especially helpful. First is Seymour Papert’s idea of “bug appreciation,” whereby an individual can personally isolate and fix technological problems that they are confronted with (126). Second is his recognition that society prefers planning over tinkering, but that tinkerers are incredibly valuable due to an ability to negotiate and adapt learning experiences (129). With regard to DH pedagogical strategies, these are exactly the types of productive skills that I think students at high school and tertiary levels should be learning: not to ‘know’ everything, but to have firm processes and experiences behind them so that they are prepared to jump into unfamiliar situations with confidence in both failures and successes. Experimenting is the key word in this type of learning, and this plays into the Media Lab’s focus at large.

The present-day Media Lab has a list of guiding principles that are elaborated on in an interview with MIT’s current director, Joi Ito: http://www.wired.com/2012/06/resiliency-risk-and-a-good-compass-how-to-survive-the-coming-chaos/. Essentially, the Lab prefers:

Resilience > Strength, Pull > Push, Risk > Safety, Systems > Objects, Compasses > Maps, Practice > Theory, Disobedience > Compliance, Crowd > Experts, Learning > Education

All of these principles can be summarized as a type of anti-disciplinary research that eschews traditional structure (embodied by Ito himself, who started but never completed two academic degrees). Furthermore, the Lab has changed somewhat in research direction; its motto is still “inventing the future,” and it primarily investigates and designs in the world of electronic communication technologies, but it is especially concerned with projects aimed at “human adaptability” (MIT Media Lab). It therefore hosts research groups in areas such as affective computing (bridging the gap between human emotion and computational technology, for example using Google Glass to identify individual stress factors http://bioglass.media.mit.edu/) and biomechatronics (enhancing human physical capabilities, as seen in this project building better communications between amputees’ residual limbs and their powered prostheses – http://biomech.media.mit.edu/#/portfolio_page/neural-interface-technology-for-advanced-prosthetic-limbs/).

Reading about the Media Lab’s establishing and contemporary research goals is exciting and inspiring; the Lab has pioneered incredible innovations, such as BiOM bionic lower-leg systems, 3D digital holographic printing, and the electronic ink technology used by Kindle etc. However, I am wary of the capitalist structures governing its work: the Lab receives immense amounts of funding from corporations such as Apple and Twitter, who then directly benefit from Lab research. On the other hand, the Lab tends to make its research public, for example its programming is usually open source. Does this mitigate the potentially negative consequences of corporate investment?

Works Cited:

Brand, Stewart. The Media Lab: Inventing the Future at M.I.T. New York: Viking Penguin Inc, 1987. Print.

MIT Media Lab. “Missions and Principles.” https://www.media.mit.edu/about/mission-history. Accessed 11 October 2015. Web.

Wired. “Resiliency, Risk, and a Good Compass: Tools for Surviving the Coming Chaos (an interview with Joi Ito).” http://www.wired.com/2012/06/resiliency-risk-and-a-good-compass-how-to-survive-the-coming-chaos/. 6 November 2012. Web.

Archibald Post 5: The Inherent Creativity of Scientific Fact Construction

“In sum, then, our discussion is informed by the conviction that a body of practices widely regarded by outsiders as well-organized, logical, and coherent, in fact consists of a disordered array of observations with which scientists struggle to produce order.” (Laboratory Life, 36)

As discussed in class, we humanists often think of scientific laboratories as being nearly identical in their methodic and sterile nature, but in Laboratory Life Bruno Latour and Steve Woolgar argue for the “idiosyncratic, local, heterogeneous, contextual, and multi-faceted character of scientific practices” (152). In fact, the authors claim that science is a “highly creative activity” in which “knowledge is constructed,” leading to their working methodology to analyze the “social construction of scientific knowledge” (31-2). They begin their account using the process of defamiliarization—making the scientific laboratory new by transforming the (generally) familiar into the strange—before moving on to a historical treatment of a specific scientific fact’s construction, followed by a discussion of the individuals and micro-processes that make up a lab. Ultimately, Latour and Woolgar decide that science is a field of argument or debate, much like law or politics, because “scientific order [is] constructed out of chaos” and “alternate readings are always possible” (33, 35).

In the authors’ case study at the Salk Institute in California, scientists depend on a number of frameworks to construct order and knowledge. Latour and Woolgar highlight the importance of the Institute’s physical layout, equipment, and investment; they investigate individual scientists’ activities, routines, types of conversation, the origins of their ideas, and the unreliability of their memories and accounts; and they recognize the way that scientific evaluation continuously builds on previously contentious facts, closing off alternative interpretations of evidence. Despite the interplay of these frameworks, the Institute’s scientists are adamant that they work to uncover “hard facts,” truths that already exist “out there” (128). Latour acknowledges the pushback received from the scientists regarding his claim for the social construction of scientific fact. Indeed, throughout the authors’ observations in Laboratory Life, I found only one instance where the Institute’s staff acknowledged any of the bias resulting from overarching lab frameworks, in this case the difficulty of factoring human agency into findings (164).

I’m glad, too, that the authors acknowledge the cultural bias of their status as external onlookers; it’s perhaps telling that they use traditional humanist tools such as defamiliarization and literary inscription to focus their study. They appreciate the impossibility of acting as completely neutral observers, as we are always constrained by cultural affinities, and try to position themselves between the impossible desire to enter as a total newcomer and the impractical outcome of becoming a complete participant. Latour is conscious that through his observations at the Institute, he came to view his own field of sociology biologically. It would be interesting to see whether the Institute’s community was affected by Latour’s presence; whether any sociological factors began to influence their thinking and doing. In his introduction, the Institute’s then director, Jonas Salk, comments that his “own style of thought was transformed,” and he wouldn’t be surprised if future science labs use “in-house philosophers or sociologists” (12, 14). But he doesn’t specify any detail about this purported altered perception, and although Salk approves of the authors’ study, he clearly disagrees with their overall finding that facts are crafted through creative construction.

All of these findings play into our class discussion about the impossibility of neutral tools and processes in a lab. We unanimously agreed that there is no such thing as objective knowledge, and that with regards to the technological, we should always consider the sociopolitical factors behind the creation and use of the tech. At the end of the day, the deep commonality between scientific fact-making and humanistic inquiry can be summed up by the authors’ nod to Heidegger mid-way through their text: “thinking is craftwork” (171). All thinking is subjective, which is why we should analyze the cultural capital of any process or study; Latour and Woolgar’s publication was groundbreaking for that very reason.

Works Cited:

Latour, Bruno and Steve Woolgar. Laboratory Life: The Construction of Scientific Facts. Princeton, NJ: Princeton University Press, 1986. Print.

ARCHIBALD POST 4: THE “SITUATED IMAGINATION” THAT ENABLES AND LIMITS DH INFRASTRUCTURE

Patrik Svensson always seems to elegantly and lucidly set out the state of the DH landscape in his scholarship. In “The Humanistiscope—Exploring the Situatedness of Humanities Infrastructure,” Svensson introduces us to the notion of the “humanistiscope” as a new mode of viewing infrastructure that is created for the specific needs of the humanities. I’m interested in the discord that arises due to his identification of the opportunity, indeed the need, for DH as a ‘field’ to creatively and uniquely forge its own infrastructure, whilst adhering to the larger cultural, social, political, and technological frameworks around us. This tension is most obvious when we situate the goal of “unlocking infrastructural making and doing” alongside the practical necessity to “relate to the notion of infrastructure established by the policy makers, funding agencies, and institutions of higher education” (337, 344). A number of DH commentaries have expressed the desire to completely re-imagine processes and tools rather than merely revitalize the old, but realize that they must be grounded in real-world institutional politics. Svensson’s phrase the “situated imagination” helpfully combines these ideas, and he acknowledges that “making a case for [rethinking DH] infrastructure is one of politics and packaging as well as ideas, people, and equipment” (338).

Recent trends that Svensson identifies as embedded in DH’s situated imagination can be seen in real-world spaces and methodologies; for example, they are all encompassed in the package that is the newly established University of Sussex Humanities Lab. Officially opening this month, the SHL is still developing its first projects, but aims to “re-launch the humanities” through the digital. Its mission statement claims to “re-imagine the humanities” without relying on “inherited disciplinary approaches,” and to this end the Lab is directed by an interdisciplinary team with backgrounds in media studies, philosophy, politics, sociology, performing arts, cultural studies, and coding and algorithms (https://humslab.wordpress.com/). But the SHL website also demonstrates the institutional expectation that has shaped the initiative. The £3 million investment demands longevity, and only PhD holders are invited to apply for funded research positions at the Lab, and only in specific fields of study. Svensson considers the digital humanities lab an ideal model that can bring together different humanistiscopes, but the SHL shows that such a space, whether digital and/or physical, is largely formulated under a policy-driven rule of thumb. The policy itself is not necessarily undesirable, but it certainly limits DH’s potential for explosive growth in scope and innovation by setting out expectations based on traditional methods of study and evaluation.

As a side note, I’m very much in support of Svensson’s claim that “The [DH] challenge is also one of moving from critical sensibility to creative, if conditioned, making, which often does not come easy to the humanities” (337). His statement reminds me of the irony that when bridging the “two cultures” in a humanities lab, the humanists are often viewed as bringing the creative element to the more structured workspace traditionally favored by the scientific community. However, except for specific programs such as creative writing degrees, we could argue that the humanities often find it difficult to integrate the creative into the academic. This is of course a gross generalization, but I think it’s worth considering the fact that the humanities still largely consist of disciplines defined by strict practices, making it all the more bizarre that humanists are viewed as the more creative force when undertaking work with STEM collaborators.

Works Cited:

Svensson, Patrik. “The Humanistiscope—Exploring the Situatedness of Humanities Infrastructure.” Between Humanities and the Digital. Eds. Patrik Svensson and David Theo Goldberg. Cambridge, MA: MIT Press, 2015. 337-53. Print.

ARCHIBALD POST 3: DH AS A KEY PEDAGOGICAL STRATEGY RATHER THAN AFTERTHOUGHT

A number of the contributors to Matthew K. Gold’s Debates in the Digital Humanities recognize the field’s ongoing preference for research and tool-making, with Stephen Brier writing that “teaching and learning are something of an afterthought for many digital humanists.” Even when prioritized, DH pedagogical strategies have largely been aimed at preparing staff, faculty, and graduate students for specific work as future digital humanists.

However, one of the most exciting opportunities we are faced with is the fact that DH can aid curriculum development at the undergraduate and high school levels by supporting the general learning outcomes of humanities education. In fact, I don’t think I’m exaggerating when I state that DH should be integrated into all humanities classrooms. We live in a media-intensive culture where students’ primary information and entertainment sources are screens. Despite having crossed over into many educational disciplines, skills from digital learning and recreational activities have not been well integrated into humanities classrooms. Policy makers should be asking questions such as how we can convert increased day-to-day digital reading into increased literacy levels at school. However, Luke Waltz recognizes that even at the tertiary level, “most curricula have not adjusted to the natural realities of the college experience, where the vast majority of students lead lives that are exponentially more digital and networked than they were when those curricula were designed.”

This is not to say that the current outlook is bleak for DH and pedagogy. Melissa has posted about the presentation by visiting lecturer Dr. Jeff McClurken, “Claiming DH for Undergraduates: Learning, Knowledge Production, and Digital Identity,” in which he discussed the fantastic teaching goals of the University of Mary Washington’s Department of History. The Department actively trains undergraduates to be adaptable knowledge producers as well as reflective consumers, with the specific goals of teaching skills in writing, speaking, perspectives on self and society, and digital literacy. In classes such as “History of American Culture and Technology” and “Adventures in Digital History,” McClurken requires students to undertake DH projects that include building websites and creating multimedia such as visual infographics. Furthermore (I was gobsmacked!), UMW as an institution provides domain names and web hosting space to all its members, particularly encouraging students to develop their digital identities and web skills.

Therefore, it is clear that scholars and teachers are thinking creatively about ways to incorporate new learning styles in higher education,[1] although I am unsure about examples of DH practices in high school curricula. Integrating DH into assigned student projects does not need to be a weighty task; the public course blogs championed by Trevor Owens (of which our ‘Doing Digital Humanities’ class WordPress blog is a great example) could happily replace the restricted access that characterizes traditional ‘Blackboard’ and ‘D2L’ discussion boards. Certainly, I was a little wary about the ramifications of posting my academic work on the Web, but this plays into Jeff McClurken’s mantra that humanities students (and staff and faculty!) should be “uncomfortably challenged, but not paralyzed” by DH practices. And the benefits of public scholarly discussion are obvious; students can take ownership of their work, build reputation, invite collaboration, and of course disseminate knowledge and receive feedback to/from a wider community. It’s safe to say, then, that writing, building, and designing are all valuable methods of inquiry for the humanities, as long as the affordances of such knowledge models offer rich and productive learning experiences for our students!

[1] It’s important to acknowledge Katherine Harris’ claim that teachers who engage in such pedagogical practices are not valued as much as DH researchers and tool-makers, again pointing to the imbalance between DH doing and teaching.

Works Cited:

Brier, Stephen. “Where’s the Pedagogy? The Role of Teaching and Learning in the Digital Humanities.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: University of Minnesota Press, 2012. http://dhdebates.gc.cuny.edu/debates/text/8.

Harris, Katherine. “Failure? DHC 2011 Kerfuffle.” Triproftri, 2 March 2011. http://triproftri.wordpress.com/2011/03/02/failure-dhc-2011-kerfuffle.

McClurken, Jeffrey. “Claiming DH for Undergraduates: Learning, Knowledge Production, and Digital Identity.” Exploring Digital Humanities Speaker Series, University of Colorado at Boulder. 17 September 2015.

Owens, Trevor. “The Public Course Blog: The Required Reading We Write Ourselves for the Course that Never Ends.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: University of Minnesota Press, 2012. http://dhdebates.gc.cuny.edu/debates/text/6.

Waltz, Luke. “Digital Humanities and the ‘Ugly Stepchildren’ of American Higher Education.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: University of Minnesota Press, 2012. http://dhdebates.gc.cuny.edu/debates/text/33.

ARCHIBALD POST 2: FORENSIC AND FORMAL MATERIALITY – FEASIBLE RESEARCH CONCERNS?

I found Matthew Kirschenbaum’s Mechanisms: New Media and the Forensic Imagination (2008) both illuminating and difficult. I’m certainly convinced by his call for digital humanists—or anyone looking at aspects of electronic reading and writing—to pay attention to the many facets of digital textuality, as his recognition of scholarship’s tendency to focus on the conceptual rather than the logical/physical generally rings true. I’m also incredibly impressed by his ability to render precise technical detail so lucidly for technological pedestrians such as myself. However, Kirschenbaum’s methods of forensic materiality and (to a lesser extent) formal materiality require an incredibly specialized skill set, one that alarmingly few humanities scholars currently possess, and one that Kirschenbaum admits he acquired only through his obsessive desire to understand how things work even at the most microscopic level.

His demand for academics to examine the forensic and formal materiality of electronic texts, including their ‘invisible’ processes of storage, therefore seems impractical. I view Mechanisms as a wonderful demonstration of DH’s potential scope of scholarship, but the fact remains that we would be hard-pressed to find academics able to engage in such analysis. Although I’ve never read a literary article that examines medial properties to the extent of Kirschenbaum’s nanoscopic scrutiny, it is important to note that a host of scholars engaging with digital literature are looking beyond the screen to draw attention to the narrative impact of the medial affordances offered by specific electronic systems, hardware, and software; for example, Jessica Pressman looks at the underlying binary code driving works by collaborative duo Young-Hae Chang Heavy Industries (2008), Nick Fortugno examines the affordances of video game controllers to construct thematic tensions between desire and outcome in Shadow of the Colossus (2009), and N. Katherine Hayles’ Writing Machines analyzes the textual instantiation of web fiction and poetry (2002). Therefore I do not believe that the situation is as dire as Kirschenbaum argues (of course, the seven-year lapse since the book’s publication has also helped remedy this fact).

I’m wondering, too, if Kirschenbaum’s melding of conceptual and technical methodologies implicitly ties back to the disciplinary argument that DH should largely be a collaborative undertaking. Individual scholars cannot feasibly keep on top of the many rapidly occurring technological advances, suggesting that cooperation between researchers with specialized knowledge will increase. Similarly, if we look at successful DH tools, they are also being built through collaboration. For example, the recently released ACLS Workbench is the result of the efforts of Jessica Pressman, Mark Marino, Jeremy Douglass, Lucas Miller, Craig Dietrich, and Erik Loyer, a group consisting of scholars, artists, and programmers. The ACLS Workbench “is a new platform for collaborative research, which enables scholars to create, join, or clone online arguments enhanced with multimedia content” (Marino, 2015). That the Workbench and its accompanying print publication Reading Project were the result of six years of collaboration demonstrates the immense range of knowledge and skills required for such a project. Therefore, I’m certainly on board with Kirschenbaum’s general argument that DH scholars should look to design, discuss, and evaluate electronic texts at the literary, technical, and social levels, but I’m not sure that Mechanisms itself is an attainable model for such research.

Works Cited:

Fortugno, Nick. “Losing Your Grip: Futility and Dramatic Necessity in Shadow of the Colossus.” Well Played 1.0: Video Games, Value, and Meaning. Ed. Drew Davidson. Pittsburgh: ETC Press, 2009. Web.

Hayles, N. Katherine. Writing Machines. Cambridge, MA: The MIT Press, 2002. Print.

Kirschenbaum, Matthew G. Mechanisms: New Media and the Forensic Imagination. Cambridge, MA: The MIT Press, 2008. Print.

Marino, Mark. “Announcing a New Platform for Collaborative Scholarship of E-Lit.” Electronic Literature Organization, 2015. Web.

Pressman, Jessica. Digital Modernism: Making It New in New Media. Oxford: Oxford University Press, 2014. Print.

Archibald Post 1: Social Influence and Cognitive Implications of Digital Media

In 1988, Cynthia Selfe demonstrated remarkable foresight as to burgeoning digital technologies’ momentous academic impact in her ADE Bulletin article “Computers in English Departments: The Rhetoric of Technopower.” Speaking in an age when personal computers could undertake basic database and word-processing functions, and with the Internet’s public release fast approaching, she acknowledges that

“As a profession, [academics] are just learning how to live with computers, just beginning to integrate these machines effectively into writing- and reading-intensive courses, just starting to consider the implications of the multilayered literacy associated with computers. In our departments, we are just beginning to see possibilities for using computers to encourage collaboration and communication among colleagues, to ease secretarial burdens, to support research and publication projects, to make scholarship accessible.” (63)

Whilst Selfe’s focus on issues of physical access to computers within academia has now become largely redundant,[1] her recognition that unequal access to digital information can create hierarchies of “technopower” is still pertinent; for example, specific administrators might be the only department individuals with access to information about faculty salaries or student funding opportunities, both important sources of technopower in contemporary institutions where research and study opportunities are governed by university budgets. Selfe’s commentary is also particularly appealing due to her call to acknowledge the many facets of the computer revolution’s social impact. Since her article was published, the analysis of digital technologies’ role in academia and society at large has become just as important as the study of their scholarly capabilities, but the social influence of digital technologies in the areas of research methods, publication avenues, pedagogical strategies, and further forms of academic communication could still benefit from increased scrutiny.

I’d like to pick up on an aspect of the digital and the humanities that we couldn’t expect Selfe to highlight in that earlier era of computing. The extraordinary technical advances that we have witnessed since the turn of the millennium demand that we pay attention to the cognitive implications of new media; we now think alongside and through digital technologies, and there is an incredible amount of research focused on this symbiotic relationship. I’m particularly interested in the cognitive effects of digital media usage with regard to student literacy levels. Having just learned what a digital humanities lab is, I’m wondering if such a lab would be an excellent space to explore this issue. Michigan State University’s Digital Humanities & Literary Cognition Lab (DHLC Lab, http://dhlc.cal.msu.edu/) would seem a perfect fit. The Lab focuses on tracking the history of cognition and media, and interrogating contemporary knowledge production. It also uses neuroscience technologies such as fMRI and EEG to investigate the literary. Although its projects haven’t yet compared student reading habits when engaging with print and digital texts, a future study directly falling into this category is titled “Distraction and Digital Reading: Cognitive Patterns of Attention in Fiction Reading for iPad, Kindle, and Traditional Book.” I’m looking forward to reading the results, which I’m sure will be of interest to a wide audience—literary scholars, high school teachers, tertiary lecturers, and those in the fields of education policy and cognitive sciences. It seems as though the DHLC Lab is rare in its fusion of the cognitive and literary fields,[2] and it would be interesting to learn more about its lab culture, specifically the synthesis (and discordances) between its humanities and science concerns – perhaps I’ll be able to conduct an interview with a DHLC Lab member for our later project!

[1] It is now the norm for all tertiary staff and faculty to have a computer in their office (although the latter may be required to provide their own), and all students, staff, and faculty can access campus computer labs.

[2] After a quick Google search, the University of Paris’ Laboratoire Paragraphe is the only other research laboratory that I could find with a similar interdisciplinary focus.

Works Cited:

Selfe, Cynthia. “Computers in English Departments: The Rhetoric of Technopower.” ADE Bulletin 90 (Fall 1988): 63-7. Web.