Thursday, March 28, 2013

The Digital Turn: Reflecting on the Digital Capacity for Self Reflexivity

I want to pick up, very quickly, from where I left off with McPherson's suggestion that racial concerns permeate DH covertly, sneaking in perhaps through the back door of UNIX modularization and the logic of fragmented autonomous processes. McPherson likens this "lenticular" logic to lenticular postcards, which hold two distinct images in a unitary construct, keeping them separate even as the construct covers over their interconnection. The idea here is that UNIX modularization arose from the same milieu as neo-liberal covert racism. Gone are the days of the lynch mob and the segregated water fountains, replaced by the quietly managed structural racism that made cities like Detroit and Chicago even more racially segregated in the 80s than in the 50s. The logic of modularization, lenticular logic, allows multiple social "processes" to operate under the illusion of autonomy. It covers up the interconnectedness of each module, privileging smooth, uninterrupted, and straightforward articulations of what might actually be rather complex systems. While this may make for elegant computer design, it also describes a rather insidious social situation in which the concerns of the margins have no chance of making their plights known to more mainstream groups. The screens, in other words, are hidden.

How to begin investigating, let alone solving, this problem? If it is true that the very logics upon which DH is predicated begin with the principle of concealment for the sake of cleanliness, how can we turn these digital discourses back on themselves in a critically productive fashion? I believe Johanna Drucker's essay "Humanistic Theory and Digital Scholarship" begins to get at some of the ways the digital can work against its own cultural logic, though perhaps not in the way that she intends. The basic thrust of Drucker's essay consists of a call for humanistic criticism to play a more central role in what Ramsay would call the practice of "building." Drucker closes her essay thus:
Our challenge is to take up these theoretical principles and engage them in the production of methods, ways of doing our work on an appropriate foundation. The question is not, Does digital humanities need theory? but rather, How will digital scholarship be humanistic without it? (94)
This seems to be basically the same question McPherson asks. It's clear that digital scholarship is effective as a mode of knowledge production, but what in particular makes it useful to the theoretically committed humanist? Both McPherson and Drucker seem to suggest that only by integrating the more traditional modes of humanistic criticism at the level of "building" can we really think about DH as an actually humanistic endeavor.

The trouble is, it doesn't take very long before we run into some imaginative difficulties on this trajectory. What, after all, could Foucauldian code look like? Is it possible to write a computer language that inscribes Marxist critique at the level of Python syntax? The answer right now looks to be unequivocally "no." The computer knows only binary, and criticism necessarily undoes the polarizing work of the either/or construct. Drucker begins to get at this apparent absurdity in her examination of a digital reconstruction of a Roman forum:
If we embark on a project to study inscriptions in the Roman forum, for instance, should we simply take a virtual simulation, a fly-through model made on a standard 3-D software platform and place the writing on the variously appropriate surfaces? Or should the ways signage works as an articulation of space, the species of "spaces" described by Georges Perec, be merged in a mash-up of Christopher Alexander's "pattern language" with Gaston Bachelard's poetics of space and the Foucauldian analysis of spatialized regimes of disciplinary control structured into and structuring architectural and social relations? A ridiculous proposition emerges from that impossible sentence. (92)
This is exactly the trouble with trying to, as McPherson says in relation to race issues in the digital humanities, "hold race and computation together in a systemic manner" (153). There is something about the way computers work - not only internally, but on our own social heuristics - that simply makes sustaining cultural criticism difficult. It is as if computation necessarily imposes the lenticular modularity that prevents academic cross-pollination and therefore insulates praxis from critique.

At the end of Drucker's frustration, however, she asks whether the ridiculous proposition is really ridiculous, whether the impossible sentence must really be thought of as impossible. Drucker's own analysis takes her into a critique of visualization as a way of representing data: it always distorts, representing incorrectly or ideologically. Though she seems essentially right about this, I would submit, briefly, that this distortion is not a problem as long as it is self-reflexively implemented. Indeed, it seems to me that distortion is the deformative act that McGann wants from the beginning. Perhaps distortion is even what the critical act has always been. However, I want to move past Drucker's critique and entertain for a moment what a digital construction that uses its own tools against itself might look like. How, in other words, might a program "show the screen," not only of itself but of the cultural logic in which its lenticular modularity originated?

I want to suggest that a certain kind of video game is one way of thinking about this possibility. The simulation, it seems to me, would make room for dynamically representing the many theoretical considerations Drucker describes in her Roman forum project. Indeed, video games can simulate quite complex ideological structures that make clear the network, not the node, as McPherson might say. Next time I hope to get into some of the ludic possibilities for DH criticism by way of Paradox Interactive's game Victoria II. If anyone has other ideas of games that might solve some of Drucker's problems, I'd love to hear what you think.

Wednesday, March 27, 2013

The Conversation Ramsay Gets Us

I left off last week with the argument that perhaps the rather productive threat of DH to the rest of the humanities lies in its allegiance to the TOPLAP creed, “down with obscurantism!” I still take that to be one of the more provocative ideas that Stephen Ramsay gave us in his live casting experiment with Andrew Sorensen. It seems to me to have obvious political ramifications, the likes of which arise out of a fairly heady mixture of art-collective polemic, critical theorizing, and artisanal craftsmanship. It is also why, in part, I feel compelled to defend Ramsay from some of Golumbia's and perhaps even Natalia Cecire's criticisms of his essay "On Building."

To my eyes, "On Building" is a relatively straightforward essay that does two things for Ramsay. First, Ramsay further articulates his position as a "builder" in DH - one who participates in acts of digital "construction." Second, the essay fends off hostility from the broader humanities by re-situating earlier claims he made about his commitment to the actual, ground-level engagement with code that he believes is necessary to the work of the digital humanist. The essay's virtues are certainly limited in scope, but still relatively clear. It articulates a clear ethos for Ramsay's conception of the digital humanities (and I think it's important, by the way, that we begin discussing whose digital humanities we are talking about when we use the term). I like his inclusion of the term "procedural literacy." It's clear that what Ramsay thinks is (or wants to be) unique about the digital humanities is its position actually to craft the tools of study that we humanities folk use to make our arguments. Perhaps, even, if we're lucky, Ramsay thinks we might write some code that in and of itself enacts a theoretical perspective.

Certainly the essay has its limits, but I was surprised at some of the arguments that the ideologically opposed Golumbia and the somewhat more sympathetic Cecire launch at Ramsay here. Their complaints seem largely predicated on what I find to be a rather strange reading of Ramsay's text - though this particular reaction to the more optimistic DH works that we've read this semester seems fairly common. Both Golumbia (in his comment on Ramsay's blog as well as in the article circulated through our seminar) and Cecire (in her introductory essay for the Winter 2011 edition of The Journal of Digital Humanities) take Ramsay's entire argument to be predicated on a displacement of traditional, close-reading-based theoretical work by this new focus on "building." Golumbia seems to believe that Ramsay's version of DH is divorced from ideological moorings and that this divorce somehow absolves the craftsman from the burden of self-reflexive critique. In the essay we read, Golumbia relies on personal anecdote to ground his criticism of Ramsay:
In my experience, truly committed DHers are often just those who resist being taught to read, especially being taught to read politically, and do not "love" to read this [in the mode of politicized interpretation] way at all. (9)
Though she is a good deal more gracious in her critique, Cecire seems to harbor a similar concern. She sees in Ramsay a dangerous propensity to privilege methodology over ideology in the kind of scholarship he promotes. She worries that the "happy fault" of Ramsay's methodologizing signals a divestment of the power of "doing" to signify. The concern, unless I misread her, is that Ramsay's way of including the constructive along with the deconstructive act leads not to a broader inclusion of scholarship but rather to the inevitable decline of criticism as the privileged scholarly act. If I felt convinced that the relationship between the deconstruction in Theory and the construction of "building" was an essentially inverse one, I would sympathize with both Golumbia and Cecire a bit more.

However, in both Reading Machines and "On Building," I fail to see any divorce between theoretical self-reflexivity and Ramsay's acts of digital construction. Likewise, it seems bizarre to accuse someone with such an obvious commitment to criticism (I'm thinking here of his relatively conventional blending of the algorithmic and the theoretical in Reading Machines) of having lost his ideological grounding. He seems quite happy to admit that "building" has ideological commitments, and equally happy to entertain objections to that end. Moreover, he implicitly legitimates the practice of critical self-reflection throughout and after a DH project in his various, again, rather conventional publications. Indeed, DH for Ramsay is a way of "showing the hidden screens" of the broader assumptions that undergird humanities praxis. This seems to me a fundamentally critical act.

I do, however, see Ramsay's piece (as well as Moretti's and even, to a lesser extent, Wilkens's) as open to significant and provocative criticism in Tara McPherson's article in Debates in the Digital Humanities. McPherson's piece, "Why Are the Digital Humanities So White?" asks a crucial question that Ramsay virtually invites with his pervasive reliance on blue-collar, male metaphors of economic production in his romantic depiction of the work of the digital humanist. Working from the rather obvious omission of race in much of the work that DH has generated, McPherson suggests that there might be something inherent to the language of computers as it developed through UNIX out of the sixties that keeps racial concerns out of sight and out of mind. She wonders,
Might we ask whether there is not something particular to the very forms of electronic culture that seems to encourage just such a movement, a movement that partitions race off from the specificity of media forms? Put differently, might we argue that the very structures of digital computation develop at least in part to cordon off race and contain it? Further, might we come to understand that our own critical methodologies are the heirs to this epistemological shift? (143)
These questions get at, I think, an actual critique of Ramsay's underlying ideology rather than a straightforward rejection of his methodology. And, from the language of Ramsay's article "On Building," it seems like a worthy investigation. Return to Cecire, who asks why the digital humanist must build rather than weave, foregrounding the gender skew that seems implicit in DH work. I'm not quite sure yet whether I buy all of McPherson's argument about the relation between racial unrest in the sixties and the sudden shift toward modular computer languages, but it at the very least seems like the right kind of conversation to be having.

Wednesday, March 20, 2013

Show Them Our Screens: Live Coding, Digital Humanities, and the Obscurantism of Close Reading

After watching Stephen Ramsay's video essay on musical "live coding," I had what I can only describe as a small ecstatic moment. This, I felt, was truly interesting, truly hybrid. If many aspects of Digital Humanities that I have commented on in this blog are strained with competing interests from the sciences, the humanities, and the many, many populations outside the academy, the work of Andrew Sorensen, the "live coder" whose performance Stephen Ramsay annotates in the video to which I've linked, offers a refreshing antidote to that ideological strife. Sorensen's performance is wholly at peace (what other language to use? at one? comfortable? self-aware?) with its own hybridity. To my eyes, the practice of live coding holds together at its core the threads of performance, computation, language, sensual aesthetic experience, and critical awareness, all in a beautifully layered lump of sound. This is no McGannian deformance, nor is it a Morettian reduction or abstraction. This is composition.

In reading the blogs of some of my fellow DH students over at Northeastern University I came across a discussion of Alan Liu's essay "Where is Cultural Criticism in the Digital Humanities?" He, like many other scholars laboring in the (sub)field of Digital Humanities, takes as his subject the disconcerting relationship between DH and the broader field of humanities (whatever that is). Some of this essay reads like a temporally dislocated continuation of C.P. Snow's old treatise on the two academic cultures, but on the whole, Liu's take remains fresh to my eyes. My sense of his argument is that what DH has lacked since the very beginning is a strong critical voice. The few exceptions - Liu points to Moretti as one example, I would also suggest Wilkens and McGann as two others - preserve their cultural-critical perspective by remaining firmly rooted in what is in the end a rather traditional mode of textual analysis. Liu connects the dearth of cultural criticism in DH to the growing cultural irrelevance of many university humanities programs. Though again, the "humanities" remains vague here, Liu's argument seems clear enough: DH can lead the humanities back to a STEM-like relevance again if only it will recover its voice for cultural criticism.

I, like fellow grad blogger dherdoyle, feel somewhat dubious about reorienting our perception of the humanities around the rather instrumental, cultural-relevance bellwether that Liu's essay seems to suggest. By the same token, what good is cultural criticism that no one outside of the academy pays attention to? I'm reminded here of TOPLAP's manifesto as it is cited in Ramsay's video. The second demand made by TOPLAP (an organization for the support and promotion of live coding artists) in its manifesto is thus:
Obscurantism is dangerous. Show us your screens.
The motion of the live coder is to lay bare the language, the syntax, of electronic composition. It is the quintessentially self-critical work. Its agenda? Commitment to clarity and computation. In his commentary on Sorensen's live coding, Ramsay necessarily resorts to mixed metaphors and hastily constructed portmanteaus to describe the performance. He uses phrases like "playing the comments" to describe the bizarrely naked collusion between raw LISP (the coding language that Sorensen uses) and music theory. It is this very nakedness that, to my mind, suggests the kind of contribution that the so-called Digital Humanities can make not only to the academy, but to the socio-intellectual discourse field (ha) at large.

To translate an aesthetic work into computation is, to the humanist, the ultimate abstraction. When we look at a document like The House of the Seven Gables that has been rendered newly byzantine by TEI markup, it is easy to feel as though the soul has been stripped. With the advent of data, the age of intuition and close reading walks out the door. And yet it may be the very nakedness of computational languages that, upon translating the aesthetic work, lays bare the many intimate interfaces between artist and art, performer and performance, player and game. When TOPLAP demands we show them our screens, they are demanding that we no longer pretend to know what we do not.
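For a sense of what that byzantine rendering looks like, here is an invented fragment in the style of TEI markup. The element names (`div`, `head`, `p`, `emph`) are standard TEI; the tagging choices are my own illustration, and the sentence approximates the novel's famous opening rather than reproducing any actual encoded edition:

```xml
<!-- Illustrative only: structural and semantic tags engulf the prose. -->
<div type="chapter" n="1">
  <head>The Old Pyncheon Family</head>
  <p>Halfway down a by-street of one of our New England towns stands a
    <emph>rusty</emph> wooden house, with seven acutely peaked gables,
    facing towards various points of the compass.</p>
</div>
```

Even this toy example shows the trade: the markup makes the chapter's structure machine-legible at the cost of burying the prose inside an apparatus of angle brackets.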

This is the threat of DH to the rest of the humanities. Committing to DH is the humanities' concession that the broad claims it has made from the staggeringly small canon that it studies are, in some very important way, quite inadequate. We have been hiding our screens from view, and our stature in the university, let alone in public opinion, has suffered because of it. If Liu is right - and he is at the very least not at all alone - in suggesting that STEM fields have far outpaced us in terms of cultural relevance and that that is in fact a bad thing, then it seems quite natural to look to the STEM fields and find out what it is that they are doing right. To that end, Manovich's work on visualization, Moretti's graphs, maps, and trees, McGann's acts of deformance, and Wilkens's cartography all adopt some aspect of data-based investigation. These are, I'm becoming more and more convinced, valuable in their own right. However, perhaps they are each barking up, if not the wrong tree, then not quite the right one. The live coders show us what our lesson from STEM ought to be: down with obscurantism.

Wednesday, March 6, 2013

Critical Spatial Scholarship

There was a brief moment as I read Matthew Wilkens's contribution to Debates in the Digital Humanities where I felt swept along in the dream of a data-driven English department. I imagined undecided freshmen filing into their undergraduate education. They were confronted with an English department split into two tracks, one traditional, the other digital, "data driven." The two tracks shared, of course, a core curriculum. Everyone read a bit of Shakespeare, everyone took at least a basic literary analysis course, and everyone at least heard the name "Foucault" in the same sentence as "sexuality" at some point in their education. Where the tracks differed, however, was in their commitment to the tradition of close reading or, conversely, to "distant" reading, perhaps even literary mapping (though Moretti's work gives us reason to question the validity of the word "map" to describe his kind of scholarship). The two tracks shared some classes and spoke a similar scholarly language, but they also had the opportunity to specialize in the particularities of their chosen reading commitment. I imagined something similar to a university's multiplicitous engineering offerings: computer, chemical, mechanical, electrical.

My reverie came upon me as I read Wilkens's concluding remarks on the necessary displacement of scholarship that must accompany a move by the humanities into broader textual analysis. He says: "If we do that--shift more of our critical capacity to such projects--there will be a couple of important consequences. For one thing, we'll almost certainly become worse close readers." (256) For Wilkens, scholarship is a constant question of opportunity cost. It is this principle of opportunity cost which has kept the canon firmly in place, even after decades of poststructuralist fashionability. We are only so many eyes, Wilkens seems to suggest, and we must consider the cost of directing those eyes elsewhere, away from the two hundred or so sacral texts which anchor the discourse fields of literary scholarship. I will return to this idea in a moment.

Jo Guldi, in her series of blog posts on the "spatial turn," seems to suggest a historically traceable trend in myriad fields of scholarship. She likens this "turn" to the linguistic turn with which English departments are far more familiar. Guldi explains that a "turn" is concerned with a retrospective reevaluation. It is the movement of a discourse field in the process of rearticulation according to a newly (re)introduced logic. Thus the spatial turn is the incorporation of spatial logics into seemingly disparate modes of knowledge production. This is not a particularly new phenomenon. Guldi locates the beginnings of the spatial turn in literature before the Civil War. However, in much the same way that digital technologies reinvigorated debates about the history of the book and the canon (even as they effaced the longstanding tradition of exploring both), Guldi sees new tools like GIS as a refreshing force in the application of spatial logic to other fields.

Franco Moretti certainly seems to agree by way of his experimentation with spatial logic in his "Maps" of Graphs Maps Trees. Though he admits that his spatial representations aren't really maps (a map, he says, would find value in a location "as such"), they are certainly a spatialization of literary texts. While I like the approach, and I think that his diagrams of various village narratives from the nineteenth century present an otherwise hard-to-see set of information about the genre, it also feels strangely like a kind of close reading. The conclusions Moretti draws are, more than most of his claims throughout the text, founded in the question he starts with, and it is never quite clear why he chooses to arrange the texts according to the concentric-circle logic that he selects. If this is a kind of shift in textual analysis toward scientific modeling, then it is a very strange shift. The methodology rests on thoroughly uninterrogated assumptions about the "best" way to represent the interior spaces of a given series of texts. It is also unclear how this particular methodology could generalize out to other genres of texts, and whether it would even be worth doing.

The diagrams in Wilkens's text seem to me to have much more utility. His net is cast broad enough and his methodology is distant enough from close reading that his conclusions are, I think, genuinely insightful. Sometimes I am even surprised by what he finds. Most important for me, however, are the very specific revisions that his data suggest. This, to me, is the sign of productive scholarly work: surprising conclusions that, when properly reckoned with, bring about change in the body of knowledge to which they belong. In the case of Wilkens, the revision is to "American regionalism," a critical historical construct with implications across a whole spectrum of humanities disciplines. I wonder if the difference between Wilkens's work and Moretti's isn't ideological motivation. Moretti's work, while interesting, remains inscrutable to me. It's lively, experimental, and evocative, but only sporadically does it have a political locus. Wilkens, on the other hand, begins with the clearly enunciated assumption that canons are in some way damaging to understanding the human activity we call "literature." This, I feel, is quite different from what Moretti does when his question already contains the answer, as it seems to in "Maps." Wilkens's question has less to do with a hunch about the nature of the texts and more to do with the conviction that these other texts that lie outside the purview of canonization are indeed worth investigating.

Perhaps what the digital humanities have been lacking is neither good ideas nor clever experimentation nor academic rigor. Perhaps it was merely lacking the spine of critique.

Wednesday, February 20, 2013

Convergent Divergence (Divergent Convergence): Doodling Moretti's Trees

In his thorough and quite civil response to Franco Moretti's Graphs Maps Trees: Abstract Models for Literary History, Christopher Prendergast returns (ironically?) to a critique of Moretti's circularity. Though this critique appears in a number of different locations throughout his essay, I take its most concrete manifestation to be in Prendergast's discussion of the reciprocal relationship between convergence and divergence among the branches of Moretti's trees. In Moretti's text, convergence is the precondition for divergence, and divergence provides ever more opportunities for convergence as well. Prendergast seems to take this as a "chicken-egg" problem. He says,
These are however strictly reversible propositions: if convergence presupposes divergence, then divergence presupposes convergence. We are back in the chicken-and-egg world, yet again opening onto the infinite regress that leads back to a hypothetical First Cause. It is best to avoid this morass. (57)
Though Tsuda counters many of Prendergast's arguments against Moretti more effectively than I could, I want to take a bit of time to examine this particular hang-up, this concern with circularity and convergence as it relates to divergence and vice versa.

I object, first, to this notion of circularity, and want instead to reaffirm what I think Moretti would call reciprocality. There is nothing to my eyes in Graphs Maps Trees that suggests a search for whichever came first. Moretti's goal in outlining the give and take of divergence and convergence is conjoined to his attempt to combine the wave with the tree. The point of that combination is not the perpetual preexistence of the one over the other. The effect of Moretti's divergent convergence (convergent divergence) is one of entanglement, not linear causation. He simply has the good sense not to append "quantum" as an adjective. Prendergast is sensing in this entanglement a question which isn't there. Moretti has no need to answer which came first, divergence or convergence. Neil deGrasse Tyson once said in response to the chicken-egg question: "The egg, but it wasn't laid by a chicken." In much the same way, Moretti's divergence and convergence morph and change, creating a heritage with as many conjunctions as disjunctions.

In Moretti's discussion I was reminded once again of the similarities between the models that DH theorists generate in an attempt to chart literary history and the decision-making trees that form the backbone of simulated experiences. Moretti's tree of clues is certainly of the same structural family as the reader's progress through a "Choose Your Own Adventure" book. This is evolution written as OHCO. Here I wonder at the possibilities of McGann's deformance and Drucker's speculative computing as they apply to a simulated, computer generated representation of divergence and convergence. How might someone play through literary history, utilizing data from the other 99.5%, the unread, and as a result generate new divergences and convergences that would otherwise be invisible?

Last year I became briefly addicted to a small mobile game called Doodle God, developed by JoyBits. It is, in a few important ways, both a possible resolution to Prendergast's concerns about circularity and a possible imaginative platform for playing in Moretti's trees. In Doodle God, the player starts with four basic elements: Earth, Water, Air, and Fire. Players combine and recombine these elements in different arrangements, generating new elements along the way. As each combination yields yet more elements, the game builds a tree of equations, tracking the heritage of each new convergence. Here again we encounter the fecundity of a text. Doodle God is a small, simulated example of numerous discourses in DH which have come to a head in Moretti's text. It conjoins the concepts of convergence and divergence to Latour's fecundity as well as to McGann's and Drucker's speculations. The results of these combinations are often quite unpredictable, though in hindsight always explicable. This seems to me to be the logic of divergent convergence (convergent divergence). It is progressive yet reciprocal. The question I would like to ask is this: How can we doodle literary history?
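The mechanic is simple enough to sketch in a few lines of Python. The recipe table below is hypothetical (invented for illustration, not JoyBits's actual rules), but the loop captures the structure I'm describing: every convergence of two elements produces a new element, which in turn opens new possibilities for convergence, and the heritage of each product is recorded as a tree.

```python
from itertools import combinations

# Hypothetical recipe table standing in for the game's actual rules:
# each unordered pair of elements may converge into a new element.
RECIPES = {
    frozenset(["earth", "water"]): "mud",
    frozenset(["air", "fire"]): "energy",
    frozenset(["mud", "energy"]): "life",
}

def doodle(starting_elements):
    """Repeatedly combine known elements until no recipe fires,
    recording each new element's heritage (the pair that produced it)."""
    known = set(starting_elements)
    heritage = {}  # product -> the (sorted) pair it converged from
    changed = True
    while changed:
        changed = False
        for a, b in combinations(sorted(known), 2):
            product = RECIPES.get(frozenset([a, b]))
            if product and product not in known:
                known.add(product)
                heritage[product] = (a, b)
                changed = True
    return known, heritage

elements, tree = doodle(["earth", "water", "air", "fire"])
# "life" only becomes reachable after two prior convergences ("mud" and
# "energy"), so each divergence in the tree feeds later convergences.
```

The point of the sketch is the loop's shape, not the recipes: convergence (the pair) and divergence (the growing set of products) presuppose one another without any "first cause" needing to be named, which is, I think, exactly the entanglement Prendergast mistakes for a vicious circle.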

Wednesday, February 6, 2013

Deformity, Praxis, and the Digital Aesthetic Work

Finding a single thing to respond to in Jerome McGann's Radiant Textuality feels like a Sisyphean task - as soon as I think of one topic to criticize (the wanton abuse of the word "quantum" in chapter 7, for instance) or an idea to pick up and push farther (McGann's interesting approach to the development of new critical tools comes to mind) another tangentially related boulder tumbles back down the hill, taking me with it back to square one. On some level, Radiant Textuality's sprawl is a performative product of the subject matter. There's something wry about McGann's shifty text given its conclusions about the limits of the book and the potential of "deformance" to unlock new critical perspectives. Perhaps, in the same sense that it has already been "marked up," Radiant Textuality has already begun to deform under my readerly eye. McGann's multigeneric approach to this text is in and of itself a form of the deformance that he suggests in chapter 4. Radiant Textuality is rife with addenda, appendices, notes, digressions, introductions, prefaces, close readings, dramas, and criticism. It slips with startling fluidity from one form to another, loosening the reader's demands for linear argumentation. It suggests, rather, an internal logic which it describes as it deploys. In other words, Radiant Textuality is the performative enactment of what I take to be one of McGann's central arguments. The lesson we learn from digital humanities projects is not that computational logic displaces the book. Rather, computational logic shows us what was true all along about the books we thought we knew: they were already algorithmic to the core.

Though this implication is potentially interesting, I'm doubtful that it's terribly revolutionary. It seems the two primary insights to garner from McGann's wandering tour through his quasi-biographical research journal are rather less sexy but still, I think, quite important. On the one hand, McGann's preference for digital tools for literary criticism seems to rest on his belief that books are in some way an informational subset of computational archiving. The book, argues McGann, is a specific kind of informational engine which inscribes its own instructions for consumption in its corpus. Computers are much, much better at encoding information at the indexical level, at least when it comes to the marking up of structural and semantic information (a process which McGann seems to equate with criticism), and therefore offer us a new critical vantage point on the books we've always loved. The critic who criticizes using a set of tools made of the same stuff as his critical subject, argues McGann, necessarily starts at his task with a significant impairment. Computerization grants the critic the new tools he needs to adequately account for the structural and semantic complexity of the bookish algorithm.

The second insight which I take to be particularly interesting is McGann's view of deformance as the specific mode of criticism which computational logic unlocks. Beginning from Dickinson, McGann looks to the efficacy of techniques of erasure, isolation, and arbitrary but physical reworkings of texts themselves as textual, typographical objects. Rhymes which were otherwise hidden (think here of Keats) suddenly spring out at the digital humanist. Photoshop yields a newly chromatic Blessed Damozel. Herein I find the potential for genuinely illuminating self-reflexivity. It seems that despite all the apparent scientific rigor of digital experimentation, there is something delightful, almost Proustian, about the kinds of accidental aesthetic discoveries that newly mechanized recombinations of analog works, whether literary or otherwise, might enable. There is something refreshingly serendipitous about the whole affair.

I also find refreshing McGann's focus on generating a praxis out of his long-suffering sojourn with The Rossetti Archive and then the Ivanhoe Game. In his calls for critical praxis he reminds me of his compatriots in OOO who also lament the lack of what Ian Bogost calls philosophical "carpentry" in his book Alien Phenomenology. If some of McGann's formulations are lacking in precision (I'm still not sure why TEI or the OHCO thesis are regarded as important to the digital humanist) or reserve (say quantum one more time...), they make up for it in their self-consciousness. McGann seems genuinely invested in a constant reciprocation between his ideas, their implementations, their results, and the new ideas those results generate. This kind of scholarship seems to be particularly needed in a field which all too often falls into screechy optimism.

Having finished Radiant Textuality, I couldn't help but feel, however, that McGann had left it unfinished. Perhaps that is part of the book's own deformity, but I'd like to push for a moment at what I take to be an enormous oversight in McGann's conceptualization of digital-logic-as-critical-tool. If indeed editing is a critical activity (a proposition I find wholly agreeable), and if the power of the computer to illumine the book lies in its greater efficacy in processing the algorithmic logic which already inheres in its pages, what happens when we turn these critical apparatuses back onto the digital object? Nowhere in Radiant Textuality does McGann seem willing to admit the existence, or even the possibility, of a rich, wholly digital aesthetic text. I'm left wondering if his particular view of digital humanities could accommodate the relatively straightforward e-reader, let alone the meticulously crafted fictions of Heavy Rain and Spec Ops: The Line, or the vibrant simulated societies and economies of EVE Online. Indeed, the decidedly un-fun structure and implementation of the Ivanhoe Game lead me to believe that McGann, at least at the time of this book's publication, had very little experience with the rich world of video games. Given that these objects (aesthetic works that I take to be thoroughly textual) exist on the same logical plane as the computerized critical tools which McGann advocates deploying upon the analog book, I wonder what kind of critical apparatus McGann would imagine for them if he were given the opportunity. If one of the worthwhile products of the many projects in Radiant Textuality was an uncovering of computerization's critical power as a deforming agent, how might we begin to imagine a deformed video game? A deformed Kindle? A deformed app? Would McGann even consider these things texts? At the very least McGann gives us some ground on which to start asking these questions, or as McGann might say, to imagine what we don't know.

Wednesday, January 23, 2013

There Is No Aura: Distinguishing Reinterpretation from Replication

I wonder if my youthful disregard for the sacred cows of the academy has deadened me to the sort of materially rooted awe that Walter Benjamin ascribed to great works of art. Though I suppose it's possible to think of Benjamin's aura as nothing more than the complex congregation of social discourses which sacralize the notion of the unique original artwork, the whole idea still smacks too much of mysticism for my liking. Perhaps I'm too philistine to appreciate it properly, but when I stood in front of Da Vinci's Ginevra at the National Gallery of Art I felt none of the intense aesthetic attachment that seems to afflict Benjamin in his most famous text. Certainly, I appreciate the painting's historical uniqueness - it's kind of fun to be in the same room as something Da Vinci himself touched - but that sense barely approaches trivial curiosity, much the same way I enjoy looking at baby capybaras at the zoo. In the end I wonder: in our time of copies upon indistinguishable copies, why should there be an aura at all? Why not dispense with the notion of an aura entirely?

In their article "The Migration of the Aura," Bruno Latour and Adam Lowe seem to have some sense that Benjamin's aura feels awkwardly out of place in a world of reddits, Pinterests, and Tumblrs (look, I'm hip!). They challenge the geographical and physical rootedness of originality that Benjamin's aura requires, arguing that the artwork's historical facticity has little to do with its originality. Rather, the artwork's status as an original depends upon its "fecundity," its ability to produce many new copies of itself, which is to say that without the copies the original is inevitably lost. This leads Latour and Lowe to paradoxically describe artworks as more or less original. Like a cornucopia, works of art exist along a continuum, gradually opening up as the copies become more dispersed.

Though I appreciate their negotiation and adaptation of Benjamin's aura in the context of digital replication, their commitment to the very existence of the aura leads them to make some curious claims. In order to illustrate the sense in which copies of artworks both prove and mobilize the aura of originality, Latour and Lowe attempt to equate the myriad variations of King Lear to the intensive material reproduction of Le Nozze di Cana. They argue that what makes poor reproductions of art lesser than the original is their lack of imagination, and that in approaching the practice of artistic reproduction we should reserve for ourselves the more generous expectations that we use for stage productions. We do not, Latour and Lowe rightly suggest, hope for an exact replica of King Lear every time we see it. In fact, much of the pleasure of seeing a production of King Lear comes from the way it diverges from the expectations we may otherwise have had from previous renditions of the text. While I wholeheartedly agree that our expectations for a play function in just the way Latour and Lowe describe them, I reject the notion that a new interpretation of Shakespeare's text (or whoever's text King Lear is) is tantamount to even the most liberal conception of replication. Interpretation != replication. Though I am sympathetic to the argument that replication does in fact produce greater desire (and greater sanctity) for the original, and I am even willing to admit that repeated reinterpretations perform a similar social function, I refuse to equate them as cavalierly as Latour and Lowe. In foreclosing on the difference between interpretation and replication, they collapse the role of the conservator with that of the critic, the copycat with the analyst. They ignore, therein, the crucial ideological distinctions which underlie and motivate the essentially conservative project of art preservation and the essentially progressive undertakings of the art theorist.
Their conflation of reinterpretation and replication is symptomatic of their commitment to the aura. Without the aura, the art conservator is an archeologist, a historian, not an aesthetic authority.

Dispense with the aura, I say.