[Image from the work]

The Future of -Writing -Vilém Flusser +Language +John Cayley

  • with
    Sally Qianxun Chen

  • HTML & JavaScript
  • Desktop & Mobile
  • doi:
  •  BEGIN 

In the rewrite, move your pointer over any sentence. Sequences that are shared by my rewrite and a corresponding Flusser sentence will remain in the fully opaque blue-black of my rewrite, with my added or changed words faded slightly. If your pointer is over a shared sequence, after a short delay, the sentence or unit will crossfade to the red-black of Flusser's version. If your pointer is over my own, less opaque text, nothing will happen – and you can pause to see what I have changed or added – until, that is, you do move your pointer over a shared sequence. Once Flusser's red-black words are crossfaded in, then, after a short delay, their less opaque but distinct text will take on full red-black opacity. Move the pointer off a sentence or unit to crossfade back to the rewrite. The section number in the upper left is also a drop-down menu giving access to any section, with previous and next buttons enclosing it.

“The Future of Writing” from Writings by Vilém Flusser, edited by Andreas Ströhl, translated by Erik Eisel, is reproduced here by permission of the University of Minnesota Press. English translation copyright 2002 by the Regents of the University of Minnesota.


Work toward the current dynamic form of this essay-as-rewriting was prompted in 2019 by an invitation from Erika Fulop to give a keynote presentation at the conference ‘Language INTER Networks.’ (Footnote: Lancaster University, UK, June 20-21, 2019, (accessed May 9, 2020). As conference proceedings, the rewrite will also be made available, with a translation into French, in a forthcoming issue of the online journal Hybrid. Thanks to Erika Fulop and Hybrid’s editors for permission to publish the webapp version here.) As I reworked Vilém Flusser’s essay, my collaboration with Daniel C. Howe on The Readers Project and also with Penny Florence on our “inextrinsic readers” led me to imagine the development of a related algorithmic reader that moved through my rewriting, briefly removing words on the surface of the screen’s page to reveal the words, phrases, and sentences by Vilém Flusser which underpin the rewrite, the thoughts to which this reconfiguration conforms its own. (Footnote: (accessed May 9, 2020); (accessed May 9, 2020).) The inaugural issue of The Digital Review then offered me the perfect platform to present a reimagined version which allows the same visualization of the rewrite and its underpinnings but now gives temporal control to its human (re)readers and introduces a generalizable framework with a reading instrument able to provide comparable visualization for any pair of texts (or versions of a single text) which are similarly structured and share phrases or sequences of language in tokenized orthographies. (Footnote: These shared sequences recall the Longest Common Phrases (LCPs) deployed by The Readers Project and How It Is in Common Tongues, (accessed May 9, 2020).) I conceived and initiated a webapp, but coding for this framework and engine was done, chiefly, by Sally Qianxun Chen, with my lesser programmatological powers in tow.
There follows, here, a brief introduction, giving my rewrite some context, and then a mildly technical “after-reading” which provides an explanation of the framework and the visualization engine, with an eye to their potential use by others.

+Language: an Introduction to +Cayley’s Rewrite

If the future of something promises significant change – in anything from sea level, say, to the speed or modalities of networked communication – then language-as-such does not have a future of which we can conceive. Language is an evolved faculty and, by all accounts, unique to our species, the language animal. If this faculty were to change significantly, these changes would take place at the inhumanly slow pace of evolution. Any future of language can only refer to a speculative account of the vicissitudes and, perhaps, the fate of linguistic practices in the face of actual or speculative cultural reconfiguration. Despite these circumstances, some of our best thinkers, including Vilém Flusser, have speculated on ‘The Future of Writing’ as if writing were a metonymic stand-in for language-as-such, or as if changes in writing could drive changes in culture as momentous as the end of history or generate some paradigmatic shift in the ways of human thought. (Footnote: Vilém Flusser, ‘The Future of Writing,’ in Writings, ed. Andreas Ströhl (Minneapolis: University of Minnesota Press, 2002).) What follows is a quasi-creative, critical adaptation of Flusser’s 1983–4 essay in the course of which – with the minimum possible amendment, from my point of view – language is treated as if it were actually the subject of such challenging, generative thinking. This introduction provides a few hints and suggestions with respect to my own thinking and impetus.

Flusser wrote his essay in English, according to Andreas Ströhl, the editor of the original publication and of the 2002 collection of his translated Writings. In Ströhl’s editions “1983-84” is given as the date of the essay’s composition, and it was first published in the Yale Journal of Criticism, Fall 1993. (Footnote: ‘The Future of Writing,’ The Yale Journal of Criticism 6, no. 2 (1993). In this issue of the journal ‘The Future of Writing’ is one of three essays published with a four-page introduction to Flusser by Elizabeth Wilson and Andreas Ströhl. The other pieces are ‘Change of Paradigms’ (Flusser’s last lecture) and ‘Orders of Magnitude and Humanism,’ both composed in German (translated by Wilson and Ströhl) and both collected in Writings.) I chose to adapt this essay 1) because it was written in English, 2) for the wording of its title, and 3) because of its more or less immediate priority with respect to one of those monographic collections of Flusser’s writings that were composed in German and for which he became best known in the fields of media theory and, dare I say it, digital language art: Into the Universe of Technical Images, 1985. (Footnote: Into the Universe of Technical Images [Ins Universum der technischen Bilder], trans. Nancy Ann Roth, Electronic Mediations (Minneapolis: University of Minnesota Press, 2011).)

Flusser is a philosopher of the apparatus, epitomized by an abstracted camera. He is, thus, also the philosopher of photography as what had already become, by the time he became its philosopher, an institution. Critical media studies, at an historical moment subsequent to that when Flusser had ceased to write, adopted his then still maverick thought as, potentially, the abstraction that would help us all to understand computation and digitalization. Some of his most influential later writing was on the verge of comprehending the regime of computation, and his inclinations with respect to the image, the imagination, and the imaginary resonated with the hyper-rationalist, somewhat anti- or post-literary, countercultural, techno-utopian “visions” of even the most astute early critics of “cyberculture.” But Flusser’s technical image was not computational, despite his gestures toward scripting and programming. Flusser’s technical images are produced by artifactual apparatuses manufactured by humans. Writing was also, for him, an apparatus, a human “invention,” a medial paradigm determinative of culture and of thought. (Footnote: Historians generally, and historians of writing in particular, often point to supposed “origins” of practices of writing as if these were instances of an all-but-singular “invention” of writing itself, whereas this could not possibly be the case, precisely because there are, historically, multiple instances of this supposed “invention.” We may say that schemes for the transcription of particular natural language practices have been “invented,” but this has to be contextualized. My own view is that what we call writing is best understood as a distinct – support media distinct – way of engaging with the constitutive human faculty for language. What we call writing can be seen as a grapholect (Walter Ong’s term) of any natural language with which it is integrated. “Grapholect” shadows the fairly well understood term, dialect.
If you agree with the implied correspondence, then you may also agree that it is misleading to suggest that grapholects are “invented,” since dialects are not. My own position is a little stronger than Ong’s. I see writing practices as corresponding with those of distinct natural languages. For a nuanced treatment of these issues, with a position less radical than my own and better informed by modern and contemporary linguistics, see Roy Harris, Rethinking Writing (London: Athlone, 2000). Harris usefully describes his own linguistic theory as “integrationist.” For example, written English can be characterized, in these terms, as a grapholect of global English highly integrated with spoken dialects of the language. See below and also section 2 of the rewrite.)

A surface reading of Flusser can suggest that if you can understand, control, relativize, analyze, instrumentalize, and finally compute the lesser apparatus then you will grasp its, writing’s, linearity, the font of all its suspect cultural formations. What you have grasped will, now, allow you to enhance productive cultural practice with the technical facility of apparently newer but eternally-returning apparatuses, those of the image, now the technical image, and thence imagination itself. The technical image may well supersede writing. So the argument may run.

Flusser enjoined linear, rational, historical thought to maintain its relationship with the technical image. My own, related problem with this philosophical and, to an extent, futurist project is at least twofold: computation is not apparatus; and writing is a practice of language-as-such, something that subsumes writing without impugning either entity’s integrity. In this writing-through of Flusser I have made changes indicative of my understanding that computation is not apparatus and that Flusser’s thinking is occasionally misdirected by this misapprehension. But in undertaking my adaptation I was not chiefly concerned to unravel the specifics or consequences of the misdirection, amongst which I take to be an inability to account for the absolutist, totalizing tendency of computation’s fundamental abstraction, a supposed universal applicability that is beyond the facility of any conceivable apparatus. There is also the point that, for the same reason of purportedly perfect abstraction, computation credits itself with the ability to produce or simulate any apparatus, including many that are beyond the horizons of human perception or humanly appreciable scale and speed.

Writing is a practice of language in the same sense, I believe, that a particular “natural” language is a practice of language. No one will dispute that a language is language, with privileged iconic resonance and commensurate interrelations between instance and class. But most linguists and philosophers are hesitant to say that [a] writing [system] is language without prescriptive hierarchical implications. They can’t accept that a subordinate, “transliterated” instance is an instance of the class. The denigration, and the logic, of the supplement rears its now ugly, now anti-originary head. In linguistics since Saussure, the subsidiary, supplementary practice of writing is denigrated in that it cannot be the object of linguistic science. This object is presumed to be the empirical data of spoken language, which is taken to be language as such, regardless of the Saussurean distinction between langue and parole. In Of Grammatology Derrida helps us to understand that this circumstance, which he enfolds in a “logic of the supplement,” derives from the presumption or assertion of a fundamental, anti-originary supplementarity with respect to our faculty of language itself. (The human faculty of language has no origin; if anything, it evolved. It presents itself to us and to our culture as a supplement of “life” or “nature.”) It is misapprehension of the logic of the supplement that allows writing to subserve the metaphysics of logocentrism (metaphysics itself) and, since its Saussurean advent, causes the science of linguistics to mistake its object. And so now, with the help of this logic and by contrast with those philosophers of language who have, in my opinion, failed to follow Derrida, I affirm: as sociocultural human practice, [a] writing [system] partakes of the constitutive faculty for language just as fully as any particular natural language does.

Paradoxically, thinkers like Flusser have attributed humanities-constitutive and culturally formative power to what they also think of as a lesser instance of language. They attribute constitutive, formative power to writing as apparatus, one that is not only, for them, a human invention and a caricature of language-as-such but one that seems singularly ill-suited to the appreciation and generation of multi-dimensional images. If the imagination is to flourish, authoritative history must fall, says Flusser, and writing must adapt or die with it. Perhaps a misconceived “writing” can serve as Flusser’s straw man, but if writing-as-such is a practice of language-as-such then should we not at least begin to substitute our challenging inquiry into the future of writing with a challenging inquiry into the future of language? To do so places the relationship between language and networked computation in, I believe, a different light, revealing distinct images, and casting other, thought-evoking shadows.

[ Please follow this link and read the essay-as-rewriting. ]

… reading through (writing or word-diff) through reading …

Given that The Readers Project can be viewed as a visualization of algorithmic reading strategies, when I came to rewrite Flusser’s essay, I was primed for the thought that there should be some way to visualize my rewriting strategies: what kind of reading this rewriting came from and how it differed, specifically, from the composition that inspired it. Also in my mind were the inextrinsic readers that I developed with Penny Florence. These were word[s]-for-word (or token[s]-for-token) readers that allowed other words – intrinsic to a reading or a translation that had been done by the same or another writer – to appear in turn as replacements for corresponding words that were extrinsic to (on the surface of) a particular composition. Word-for-word “translation” was an obvious “use-case” for the inextrinsic readers, algorithmically visualizing the words of a translation as “intrinsic” to its “extrinsic” original. But, we thought, Penny and I, that more than one word – a phrase or sentence of commentary – might also be revealed as intrinsic to the words of a text.
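The word[s]-for-word mechanism described above can be sketched as a token-indexed substitution. What follows is a hypothetical reconstruction for illustration only, not code from The Readers Project; the function names and the French example are my own:

```javascript
// An "extrinsic" surface text and an "intrinsic" text (here, a
// word-for-word French rendering) are tokenized in parallel; the reader
// replaces the surface token at a given position with its intrinsic
// counterpart, revealing the hidden text one word at a time.
function makeInextrinsicReader(extrinsic, intrinsic) {
  const surface = extrinsic.split(/\s+/);
  const hidden = intrinsic.split(/\s+/);
  return function readAt(i) {
    const tokens = surface.slice(); // copy, so the surface is restored on the next call
    if (i >= 0 && i < tokens.length && hidden[i] !== undefined) {
      tokens[i] = hidden[i]; // reveal the intrinsic word in place
    }
    return tokens.join(' ');
  };
}

const readAt = makeInextrinsicReader(
  "the future of writing",
  "le futur de l’écriture"
);
console.log(readAt(3)); // → "the future of l’écriture"
```

Stepping the index through the text, token by token, animates the intrinsic reading beneath the extrinsic surface.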

Not only was Flusser’s essay intrinsically associated with my rewriting-through-rereading-through-rewriting: because of the way I composed it – due, that is, to my reading strategies – my final rewriting shared whole sequences of words and phrases with Flusser’s. In fact, this was the case as often as I could make it so. Broadly, I approved of a good deal of Flusser’s thinking and its impetus. For any visualization I wanted to show, just as clearly, what had been retained as what had been changed or, chiefly, added.

And then, inevitably, in the world of writing for networked and programmable media, there are the writer-coder’s memories of encounters with diff, the Unix diff command and its wide-ranging, powerful applicability to version control in code development. Typically, diff shows coders what has changed in the program they are (co-)writing: what has been modified, deleted, added at the level of the line, a formally integral unit of programming languages. Coders, at the level of the line, can see what’s what, and also often who’s done what. And this makes the creation of software better all round, especially for productively critical collaboration. diff is more rigorous and effective for coding than “Track Changes” is for scholarship, but it does not have the granularity and nuance that writers, or even rewriters, require. Nonetheless, digital language artists like myself fantasized about using GitHub for tracking versions of their fictional and poetic and unclassifiable masterworks, frustrated by the complete absence of any credible crosswalk from, say, Microsoft Word docs to the files of a diff-able git repository (with clones and branches and reflogs and pull requests and issues and everything).

And yet – both theoretically and in actual fact – diff can be configured to operate at the level of the character, showing, by way of procedural formal analysis, differences between two files or texts at the level of the character and how, character-by-character, one file can be reinscribed as the other. When operating at the level of the character, on natural language texts, diff’s outputs and visualizations make all-but-no sense from the writer or scholar’s point of view. But what about word-diff? Surely that would be interesting for self-identified word(-)smiths?

The standard form of the Unix diff command does not have a word-level option. But wait, it’s built into git, which any self-respecting code monkey is using daily. So, let’s try:

git diff --no-index --word-diff=porcelain language.txt writing.txt

This produces output that automatically finds many of the two t(e)xts’ shared sequences of language, while parsing out what’s modified, deleted, and added, more or less at the level of the potentially shared word – or above if sequences are shared in roughly corresponding positions – and more or less in a manner that is of semantically-implicated interest to the wordsmith and scholar.
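In porcelain mode, git prints each run of words on its own line, prefixed with a space (shared), '-' (deleted), or '+' (added), with a '~' line standing in for newlines in the source texts. A minimal parser for this output might look like the following sketch (illustrative only, not the rt(w||w)tr engine's actual code):

```javascript
// Parse `git diff --word-diff=porcelain` output into runs of shared,
// removed, and added language, skipping the header and hunk markers.
function parseWordDiffPorcelain(output) {
  const runs = [];
  let inHunk = false;
  for (const line of output.split('\n')) {
    if (line.startsWith('@@')) { inHunk = true; continue; } // hunk header
    if (!inHunk) continue;      // skip "diff --git", "index", "---", "+++"
    if (line === '~') continue; // '~' marks a newline in the source texts
    const mark = line[0];
    const text = line.slice(1);
    if (mark === ' ') runs.push({ type: 'shared', text });
    else if (mark === '-') runs.push({ type: 'removed', text });
    else if (mark === '+') runs.push({ type: 'added', text });
  }
  return runs;
}

// A toy fragment of porcelain output for two one-line files:
const sample = [
  '@@ -1 +1 @@',
  ' The future of ',
  '-writing',
  '+language',
  '~',
].join('\n');

const runs = parseWordDiffPorcelain(sample);
console.log(runs);
// → shared "The future of ", removed "writing", added "language"
```

The resulting runs are exactly the material a visualization needs: shared sequences to anchor on, and removed/added sequences to fade between.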

Thus, as it deploys and massages the above command, the rt(w||w)tr webapp engine used for the dynamic presentation of my essay-as-rewriting generates, basically, a transactable visualization of formal word-diff, configured for literary critical purposes. As published here, my text is on the surface but output from the git diff command has been parsed to structure the underlying HTML and to allow the regular association of this structure with that of Flusser’s essay. By default, the engine’s parser treats conventionally punctuated sentences as “units” of text which are to be considered as, in some pertinent respects, “equivalent.” But I have, on occasion, added sentences, and I might have declined to include or recompose others. For this eventuality the engine’s parser anticipates a degree of manual markup such that the number of sentences and/or explicitly marked “units” is equal in the two texts. These units can then cross-fade back and forth between one another, with actual shared sequences of language notable and highlighted as actual or potential “anchor points” for visual, that is, typographic correspondence. Once a small amount of always-optional manual markup is done, the operations are all formal (and thus in principle generalizable for arbitrary pairs of texts), but they have been configured – along with the compositional procedures of my all-too-human rewriting – to draw out critical, semantic, readerly and writerly insights and resonances.
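The default sentence-unit behaviour just described can be sketched as follows. Both the naive splitter and the index-for-index pairing are a simplified, hypothetical reconstruction (real sentence segmentation, and the engine's own parser, are considerably more involved):

```javascript
// Split a text into conventionally punctuated sentence "units".
// Naive: breaks after runs of . ? ! followed by whitespace or end of text.
function toUnits(text) {
  const matches = text.match(/[^.?!]+[.?!]+(\s+|$)/g);
  return matches ? matches.map(s => s.trim()) : [text.trim()];
}

// Pair the rewrite's units with the supply text's units, index for index.
// Where counts differ, the engine expects manual markup to equalize them;
// here we simply signal the mismatch.
function pairUnits(rewrite, supply) {
  const a = toUnits(rewrite);
  const b = toUnits(supply);
  if (a.length !== b.length) {
    throw new Error(`unit counts differ: ${a.length} vs ${b.length}`);
  }
  return a.map((unit, i) => ({ rewrite: unit, supply: b[i] }));
}

const pairs = pairUnits(
  'Language has no future. Writing might.',
  'Writing has no future. Language might.'
);
console.log(pairs.length); // → 2
```

Each pair is then a candidate for cross-fading, with the word-diff runs inside it supplying the shared-sequence anchor points.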

As already highlighted, “heavy-lifting” on the coding for this webapp engine – augmented by many sharp conceptual insights and corrections to my own misconceptions – was done by Sally Qianxun Chen, a digital language artist in her own right. Sally has the mastery of DOM, CSS, and JavaScript that made this reader readable, particularly in terms of interface, animation, and style. We agreed, as we developed, that we did not want to produce a reading instrument for which a human author or authors needed to compose a “hidden score.” I could have spent hours marking up my essay-as-rewriting and Flusser’s supply text with machine-readable metadata tags that, themselves, would entail prefigured prescriptive instructions on how they should be parsed. We were both intrigued by the production of an instrument that visualized a reading experience playing itself out somewhere between human and machine reading. It was more exciting and interesting for both of us to incorporate an existing formal tool for analyzing and editing code – the diff command – and, as I say, reconfigure it for human comparative, critical and language-aesthetic reading. At the moment, this is bespoke code, crafted to animate a particular essay-as-rewriting. We believe, however, that it’s a fairly short step from this implementation of the rt(w||w)tr engine to a tool for more general use, visually exploring differences between literally associated texts, versions of the same text, and even – I’ve thought of an algorithmic way to do this in principle – a text and its translation. Watch this space.
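To give a concrete sense of the interface logic, the delayed crossfade described in the instructions at the head of this page reduces to a small timing decision. This is a hypothetical sketch; the function name and the delay value are illustrative assumptions, not taken from the published webapp:

```javascript
// Decide which version a sentence-unit should display, given whether the
// pointer rests on a shared sequence and for how long (in milliseconds).
// 'rewrite' is the blue-black surface text; 'flusser' the red-black supply.
const CROSSFADE_DELAY_MS = 600; // assumed value, for illustration only

function unitDisplay(overSharedSequence, restMs) {
  if (!overSharedSequence) return 'rewrite'; // hovering added/changed words: no change
  return restMs >= CROSSFADE_DELAY_MS ? 'flusser' : 'rewrite';
}

console.log(unitDisplay(true, 1000)); // → "flusser"
```

In the webapp this decision would be wired to pointer events and CSS transitions; the point here is only that the hover-with-delay behaviour is a simple, explicit state function.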


Notes are not referenced in the dynamic rewrite, but thanks are due to Charles Bernstein for his tweet, silently quoted in section one: ‘Literature is $50bn behind art.’



Flusser, Vilém. ‘The Future of Writing.’ In Writings, edited by Andreas Ströhl, 63-69. Minneapolis: University of Minnesota Press, 2002.

Flusser, Vilém. ‘The Future of Writing.’ The Yale Journal of Criticism 6, no. 2 (1993): 299-305.

Flusser, Vilém. Into the Universe of Technical Images [Ins Universum der technischen Bilder]. Translated by Nancy Ann Roth. Electronic Mediations. Minneapolis: University of Minnesota Press, 2011. First published Berlin: European Photography, Andreas Müller-Pohle, 1985.

Harris, Roy. Rethinking Writing. London: Athlone, 2000.

John Cayley is a writer, theorist, and pioneering maker of language art in programmable media. Apart from more or less conventional poetry and translation (Ink Bamboo, Agenda, 1996 and Image Generation, Veer, 2015), he has explored dynamic and ambient poetics, text generation, transliteral morphing, aestheticized vectors of reading, and transactive synthetic language. One of his recent works is a skill, The Listeners, for a well-known digital assistant. He now composes as much for reading in aurality as in visuality and investigates the ontology of language in the context of philosophically informed practice-based research. Professor of Literary Arts at Brown University, Cayley directs a graduate MFA track in Digital and Cross-Disciplinary Language Arts. Selected essays are published in Grammalepsy (Bloomsbury, 2018).

Sally Qianxun Chen is a media artist, programmer, and researcher. She works at the intersection of language, art, and digital technology, with a focus on digital textuality, generative poetics, and the aesthetics of algorithm. Her work has been published in Drunken Boat, Cura, ZeTMaG, and the Electronic Literature Collection. She holds an MFA in Digital Language Arts from Brown University.