Electronic Text

I assume there’s no lack of discussion among book historians on the topic of electronic texts. What are they saying?

Lots. It seems to me that since the 1990s, scholarship on electronic texts has centered on a few key topics: the notion that electronic texts demonstrate postmodern ideologies and deconstructive strategies; the theoretical and pedagogical value of hypertextual archives; the efficacy of the digital text as a surrogate form; and the imperative need to understand deep encoding as it structures future academic engagement with literary texts.

Let's deal with these one at a time. The first one connects book history with theory.

It does. George Landow, Richard Lanham, and Richard Grusin are the most outspoken proponents of the position that the electronic text in some way fulfills the promise of postmodernism. Grusin states that electronic texts “realize or instantiate the theoretical assertions of poststructuralism, postmodernism, or deconstruction.” He claims that hypertext embodies the work of Derrida who, “more than almost any other contemporary theorist…uses the terms link, web, network, matrix, and interweaving.” Is it true that electronic texts embody Derrida’s work, or that such texts realize the project of deconstruction?

If electronic text is underwritten by code, then the programmers of that code are working in the capacity of translators. Benjamin writes of a "pure language" only approachable in the space between languages, as a by-product of translation. He could not have foreseen the particular way in which computer programming has developed languages of its own, but how do his conceptions of translation and of the figure of the translator themselves translate into the scenario we now encounter: a radical gulf between specialized computer languages and the existing languages they make representable digitally?
