Fillip 18 — Spring 2013

The Question of Interface
Alexander R. Galloway and Mohammad Salemy

Mohammad Salemy – Your new book, The Interface Effect, is a dense text that, on first read, presents a significant number of original insights, in straightforward language, about the aesthetics of new media as well as the possibilities and limits of interactivity within information technology. On second read, however, a more complicated set of questions arises about these subjects. If not to outline larger propositions, why was it necessary to pack so much into one book? Is this the beginning of a much larger project in which a new set of crucial questions will gradually be answered in subsequent books?

Alexander R. Galloway – On the one hand, this book is the final installment of a series of three texts devoted to the theme “allegories of control.” So it represents the culmination of about ten years’ thinking regarding how aesthetics and politics operate within new media. I’m not sure I’ll be writing more about new media in the immediate future, at least not in this precise way.

But, on the other hand, the book is a bridge to some new work currently under way. And you are correct to note that it does not exactly answer questions so much as gesture toward new areas of inquiry. The ethical is one such area, which I touch on here: Why do I think that the computer is the ethical machine par excellence? The answer has to do with the notion of an ethos or a practical orientation within a world; I see the computer as fundamentally designed to emulate practices, and thus as “ethical” in the strict sense of having an ethos. I’m also relying on a distinction that thinkers such as Alain Badiou and Jacques Rancière have made between the ethical and the political. I’ll be pursuing that question more in future projects. A second area is the question of digitality itself. I’m not sure anyone has really posed the question adequately: What is digitality? Yes, we talk about computers. We even talk about binary encoding and symbolic capture, but is the digital something entirely different? That’s the subject of a new book I’m writing on François Laruelle and digitality. As we know, Laruelle’s work deals with the One. My challenge is to take his “one” very literally, as one half of the binary pairing of zero and one. Because he so adamantly refuses such binary distinctions, I see Laruelle as an anti-digital thinker, perhaps the most rigorously anti-digital thinker we have.

Salemy – In the introduction to The Interface Effect you propose that interfaces are about the thresholds of interaction that are situated between different realities. You say that with this project you would like not just to define but rather to interpret interfaces. Throughout the book, instead of referring to interfaces as objects, you speak about interface effects and why interfaces function the way they do. Is this insistence on the mediating character of interfaces, instead of their objecthood, an indirect way of acknowledging their heightened temporal dimension compared to the mere spatiality of non-media objects? Isn’t it that the durational basis of the way interfaces organize human attention allows them to be more than just objects? What I mean by temporality here is not the classic sense in which people like Edmund Husserl, Henri Bergson, Maurice Merleau-Ponty, or even Alfred Schutz have spoken about it, as an inherently human property, but something along the lines of Gilles Deleuze’s concept of pure perception without the perceiver.1 Don’t media and art play their mediating role by employing a pure temporality outside of an embodied human consciousness?

Galloway – The late twentieth century is characterized by the paradigm of space, not time, at least in intellectual terms. Just think of all the language that was popular in the 1970s and ’80s: autonomous zones, Michel Foucault’s “heterotopias,” Deleuze’s “lines of flight,” Guy Debord’s “situations” and “psychogeographies,” Henri Lefebvre’s landmark book The Production of Space (1974), and Fredric Jameson defining postmodernism as fundamentally a question of architecture. This is a spatial era. There are those who view the computer in terms of temporality (Mark Hansen, for example), but that’s not at all my interest.2 For me there is convincing evidence—in fact going back to the nineteenth century and even beyond—that computation is ultimately a spatial process. It’s cellular, grid-bound, focused into diagrams and structures.

An article I wrote recently on the mathematician Nils Aall Barricelli confirms this spatiality of computation.3 Time is just a variable like any other for the computer. Time is no longer the active, vital infrastructure of the medium, in the way that we say cinema is a “time-based medium.” It’s difficult to say something similar about the computer. If anything, the computer is a space-based medium, which, by the way, is why Deleuze couldn’t imagine anything like digital cinema in his Cinema books, and why we still need a third volume on the “space-image”! Yet I think what you say is also important: if we follow Deleuze—for example, what he says about the crystal—then indeed we can start to think about a pure, autonomous temporality outside of human perception. That would indeed be a way to bring time back into the conversation. The crystal is really a kind of vitalism for Deleuze, and the cybernetic, cellular universe re-enchants dead matter with a kind of vivaciousness. To that extent, yes, indeed, it has become “temporal” in the Deleuzian sense.

Salemy – Precisely, I was thinking along the lines of Deleuze’s discussion of the crystal in Cinema 2.4 A temporality that is external yet operates in the same manner as the Bergsonian non-spatialized, inner time of consciousness; something more like a phenomenology of the object. I say this because mediation always brings up the question of temporality.

Galloway – Yes, but that’s a fundamentally different way of looking at the world, rooted in both phenomenology and cinema. It’s what Friedrich Kittler would call the 1900 media, not the 2000 media.5 We need to make a much harder break with both phenomenology and cinema if we wish to understand the computer. So while I agree that temporality is and will remain a crucial category for analysis, I stand firm on my formula “computer = space,” even if it sounds a bit stubborn or polemical.

Salemy – Your book moves back and forth between ontological discussions of interfaces and of computers as both hardware and software. It seems that you are looking at the computer as a set of mediations and therefore as itself a form of interface. For you, the interface does not exclusively reside in the computer but the reverse: computers belong to the ontology of the interface, which is completely understandable. The question is, are there lines that separate interfaces from raw data? Or rather, can everything from the graphical user interface all the way down to binary code be considered a form of interface, since even a computer program is nothing but a mediation between the programmer and the machine?

Galloway – Yes, it’s “interfaces all the way down.” Yet, there is indeed a difference between structured and unstructured data. Unstructured data are data that have no interface; they have no structure. When data gain structure they gain interfaces. And, at that point, the data is called something else—it is called information. The word “information” combines both halves of data: “-form” refers to a relationship (here a relationship of identity as same), while “in-” refers to the entering into existence of said form, the actual givenness of “abstract” form into real concrete formation. The lines that separate them are more or less arbitrary, which is to say they are philosophical. It’s nothing more than the classical distinction made between object and relation (or between body and mind, or what have you). Data is the object and information is the relation. The interface appears when data becomes philosophical. I like what you say: the interface is not in the computer, but the computer belongs to a larger kingdom of the interface.

Salemy – I guess, then, going back to the discussions of the first wave of cybernetics at the Macy conferences, when the concepts of data, information, and signal were first formulated: data can only be pure data before it is separated from its worldly materiality, and the minute data is detached, collected, measured, and recorded in any shape or form, it starts to become information?

Galloway – Yes, that’s right. So “data” is in some basic way a fictitious concept. Although, in being fictitious, it also becomes utopian.

Salemy – Can we say one man’s data is another man’s information and vice versa; that the line shifts with your knowledge of computers?

Galloway – Yes. Although there is a more elemental question: Do we ever see data at all, or is it always already pre-structured in some way? Data may be merely an abstract concept, much like givenness or facticity. But yes, in withdrawing from information, in the Laruellean sense, it might indeed be possible to enter into a universe of immanent data.

Salemy – The loose frame through which you define your concept of interface opens up your project to larger concerns that poke their heads in at crucial points throughout the text. For you, an interface is not just what one encounters on the computer screen; it can include things like windows, kiosks, doorways, channels, sockets, and holes, as well as paintings, photographs, TV shows, and movies. The last few instances of interface are where the discourse of media studies meets, or rather collides, with that of art. How does it feel to delve, with this latest book, into a territory marked out by art historians, curators, and art critics? Why is it that you can state things as you see them without demonstrating fidelity to any particular school of thought on the subject of visuality?

Galloway – I’m not sure what I despise more, disciplinarity or interdisciplinarity! The former leaves us to languish in a dull repetition of the old ways of thinking, but the latter forces us into a kind of unskilled promiscuity where we stumble around aimlessly in places where we shouldn’t be—which is really nothing more than a new kind of proletarianization of the mind. Nevertheless, I am drawn to art and aesthetics because they open up a vast number of possibilities. They allow us to talk about form and sensation, relationships and complexity. If anything, there may be a common domain that both media theory and art criticism can claim. We might call it simply aesthetics, or perhaps something like a mode of mediation. All of these things flow together in my mind, and in fact, as you know, it’s impossible to think about metaphysics without conjuring the vocabulary of aesthetics, since from Plato onward the true and the beautiful have been forever intertwined. In other words, aesthetics is always a question of philosophy.

Perhaps this is why I want to expand the conversation in The Interface Effect to include these larger discussions around aesthetics and politics. And it also comes from a certain interest in the “monumentality” of the computer. People assume that these machines, these video games, these websites and viral videos are all a kind of trash culture, a miniature culture for miniature people. But for me there is a monumentality to the computer. It’s not just the novel or the epic poem or the cinema that we need to focus our attention on. The computer also tells us something monumental.

Salemy – If Alois Riegl were alive, he would rebuild the house of art history he describes in Historical Grammar of the Visual Arts6 to include a whole section on computers. Of all the Vienna-school art historians, he is the one who understood the overlap between art and monumentality, particularly in the short piece he was commissioned to write for the Austrian government, “The Modern Cult of Monuments” (1903). For him, the process of ruin had a lot to do with how art becomes a monument and vice versa. Don’t you think the problem with computers and their products is that they are too immune to the process of ruin, and that perhaps this immunity contributes to their being deemed unworthy in the minds of those concerned with so-called “high culture”?

Galloway – The computer has two basic forces: ruination and victory. On the one hand, the computer, and particularly the computer network, is a force of ruination, laying waste to everything it touches. The logic of the computer net is identical to what Karl Marx and Friedrich Engels described in the Communist Manifesto: “All that is solid melts into air, all that is holy is profaned.” Networks are profoundly destructive forces, most notably when they come up against entrenched centres of power (like the King, the Father, the Firm). This kind of network is what we might call the web of ruin. But the computer is not simply an erosive force, since it responds to ruination by instituting a new victory regime. And the new regime is, in fact, much more powerful than the old regime. (Such is the irony of protocol, for example.)

What I want to refuse, however, is the old argument, common during the 1990s, that distributed networks only mean ruination, anarchy, or chaos, which is the classic “chaos fallacy.” It goes like this: hierarchy is synonymous with power; thus, since both centralized and decentralized networks display hierarchy, any elimination of these forms will necessarily be an elimination of power. However, as I and countless others, from Manuel Castells to Yochai Benkler to Michael Hardt and Antonio Negri, have shown, the advent of distributed networks has practically nothing to do with anarchy. On the contrary, distributed networks are, in some ways, much more highly organized than their centralized or decentralized counterparts. In my view, it is difficult to speak about things like “late capital,” “empire,” “cybernetic society,” “protocol,” or “the age of networks” without endorsing the position that distributed networks have a unique, potent, extensive mechanism for organization and control.

Incidentally, Quentin Meillassoux makes the same mistake about chaos—and he is thus a sort of dot-com philosopher par excellence. If we undo the principle of sufficient reason, there is no particular reason why chaos or radical contingency will follow. Take the case of Laruelle for example. Laruelle shows quite convincingly that if you undo the principle of sufficient reason it’s in fact highly likely that there will be determinism, unilateralism, and destiny, not to mention utopia. Laruelle’s rebuttal to Meillassoux is very powerful, yet I don’t see people discussing it much.7

Salemy – You propose that interfaces in general create autonomous zones in which interactions, the most important of which is the political, are simultaneously facilitated and prohibited. At the same time, you are not afraid of privileging their unworkability over their assumed practical function. You claim that computers are principles of mediation like a code of conduct or, to paraphrase, they are what you call “machines of ethics.”

Building on Wendy Hui Kyong Chun’s propositions about the ideological function of software, you stress that software is not a container for ideology but that it functions to enact and resolve the ideological contradictions involved in the conversion and abstraction of data into information.8 In other words, by taking advantage of our dependency on visual knowledge, software programs manage to hide more than they are able to show. As strange as it may seem, these propositions do not necessarily refute the idea that we live in a post-ideological paradigm. That is, since ideology has been materialized via computers, it is no longer the prime medium for social control. Am I getting this right?

Galloway – The black box is a key technology of our time. It is the central trope that unpacks a whole series of crucial questions, everything from how technology is organized to shifts in the landscape of critical theory. With the black box, things are recast as functions containing a visible interface (with inputs and outputs) and an invisible core. My aim in writing about software and ideology is to show that ideology is a technical operation, not a shortcoming of culture or thought. In other words, ideology is not a kind of misrecognition or delusion that happens in human minds. Ideology is built into the actual material technicity of things. It’s identical to how Marx defined the commodity in Capital (commodity fetishism is a material reality, not a human shortcoming): “If, however, we bear in mind that the value of commodities has a purely social reality, and that they acquire this reality only in so far as they are expressions or embodiments of one identical social substance, viz., human labour, it follows as a matter of course, that value can only manifest itself in the social relation of commodity to commodity.”9

Salemy – In your talk in Germany titled “10 Theses on the Digital,” you associate digitality with the basic process of bifurcation, meaning going from one to two and so on. Is this new work related somehow to your interest in Laruelle? Are you working on an ontology of the digital?

Galloway – Exactly. Laruelle allows us to think about digitality in a whole new way, simply because he is so militantly anti-digital. In fact, he might be one of the very few thinkers who does not imagine the world in a digital way.

Salemy – Deleuze identifies the immediate effects of societies of control as the gradual waning and opening up of autonomous and closed institutions of social power such as family, school, factory, hospital, and prison that Foucault had previously identified within disciplinary society. Picking up where Foucault had left off, Deleuze, in “Postscript on the Societies of Control” (1990), foresaw the emergence of a new form of free-floating control with possibly tighter grips.10 This new regime of power simultaneously opened up and connected together the institutions of the disciplinary society through their subjection to the logic of corporations, finance capitalism, and marketing. While asserting the capability of machines to express “those social forms capable of generating them and using them,” Deleuze identified computers as the emblematic machines of the societies of control.

This very brief text by Deleuze has been an area of interest to you in the past, more specifically during your graduate work at Duke and the writing of your first book, Protocol (2004), in which you fleshed out the specificities of the Deleuzian concept of control as it related to distributed networks and their governing software and protocols. It seems almost natural that you, with your background in coding and your academic work in literature while studying with Fredric Jameson and Michael Hardt, would be the person who would follow up on Deleuze’s identification of computers as the quintessential machine of the societies of control. In The Interface Effect, you return several times to Deleuze’s concept of control society in connection with:
— the move from the catoptrics (i.e., the branch of optics dealing with reflection) of the spectacle to the dioptrics (i.e., the branch of optics dealing with refraction) of control (p. 25);
— the emergence of data visualization (p. 80), or what you have precisely identified as the move from singular machines producing many images to a multitude of machines creating a single image (p. 91);
— the self-exploitation of informatic systems (p. 106); and, finally,
— the rhizomatic structure of networked narrative cinema heralded by directors such as Robert Altman, Quentin Tarantino, and Krzysztof Kieslowski.

A lot has happened with both information technology and the mechanisms of social control just in the last few years. Among these changes we can mention, on the one hand, the widespread use of mobile computing and the emergence of big data and rich interfaces within communication technology and, on the other hand, the heightened ease with which these technologies have directly and indirectly facilitated the open flow of control. How has the concept of control society evolved for you throughout the eight years that separate the publication of your first book and your most recent one?

Galloway – We are all Deleuzians today—that much is clear. But the vital question is, which Deleuze? Two basic factions have emerged: first, those who think Deleuze describes resistance and flight from power, and second, those who think Deleuze describes power itself, the very structure of organization and control. The first are today’s post-Web liberals, ahistorical but enlightened (“everything is a rhizomatic system”); the second are the historical materialists, a label no less gauche for being accurate (“let us historicize and critique these systems, because they proliferate injustice”). In short, the line-of-flighters and the society-of-controllers. The Deleuze of 1972 and the Deleuze of 1990.

As you note, I have been particularly struck by the shift from one Deleuze to the next. Who is able to make that shift, and who is not? If earlier I was more evenhanded about it, today it’s not possible to equivocate: rhizomatic structures are at the very heart of organization and control. So if earlier I saw more of a liberating potential in Deleuze, today I am slightly more pessimistic. The reason is that Deleuze is an “unaligned” writer.11 There is no necessary political dogma that one must stomach today when reading Deleuze (of course that might not have been true in 1968 or 1972). Today, the US Special Forces can be Deleuzian and the Occupy movement can be Deleuzian. There’s nothing inherent in Deleuze that aligns him with a specific political movement. That’s why I call him “unaligned” (similar to the old Cold War parlance of the non-aligned countries). It’s up to us to align Deleuze along certain specific routes. Although I think the window is closing quickly. We are currently living through the last weeks and months in which it is still possible to read Deleuze.

Salemy – Deleuze is more politically aligned in his shorter writings, for example in the editorials he wrote for Le Monde and other publications in the 1970s. “Postscript” also has a very Marxian flavour to it and situates Deleuze more on the side of negation and struggle, no? A particular question that comes to my mind is why, despite the fact that “Postscript” was published in the 1990s, no one took its dire warnings seriously. Was it technological euphoria, network optimism, or simply the spirit of Hegelian progress that delayed its important message?

Galloway – Yes. Deleuze himself was an aligned thinker—the writings with Guattari for example are quite explicitly anti-patriarchy and anti-fascist. I’m thinking more of the reception of Deleuze and his use today. Unfortunately, Deleuze has been turned into a kind of TED Talk, where people use him to talk about how awesome networks are. That’s what I mean by his newfound “unalignment”: although Deleuze himself had explicit political beliefs and desires, some of his work has fallen prey to instrumentalization, especially by line-of-flighters, which evacuates his work of political significance.

“Postscript” is therefore a very important Deleuzian text in my mind. Perhaps years from now it will be viewed as a “classic” Deleuzian text, even though it is only five pages long and written at the very end of Deleuze’s life.

Salemy – In today’s world where we constantly face multiplicities and diversities, we only speak of and work with a single Internet and not a multitude of Internets. Even within distributed networks, while there are an almost infinite number of web pages and websites, servers, data farms, and information modules, there exists only one Internet. Shouldn’t this be used as an excuse for us to drop the plural “s” from the societies of control and start using the term, the way you often do in the book, as the control society? What would be the philosophical and practical implications, if they could ever be separated, of going from this particular “many” to this very single “one”?

Galloway – This is a question of cheating and forcing. The ethical position is the one, the common, the generic, the univocal—different thinkers use different terms, but as you say, it’s always a question of the one. But of course not all ones are the same. There’s a forced one and a cheated one. Alain Badiou defines forcing as the transformation of a given situation by virtue of a hypothetical principle.12 With forcing, a hidden impossibility within the situation becomes visible and possible. So what we need is a forced one. We need to virtualize ourselves into a common unity of the one. Unfortunately instead of a forced one we have a cheated one. The singular unity of the Internet, or the singularity of the society of control—these are not the ones we’re looking for! We’ve been cheated. So there’s always an irony or tension in how universalism is discussed today. Yes the Internet is a universal, and I’ve written in my book on protocol how the universal is achieved technically. But this kind of universal has almost nothing to do with the universal of the common. So today we have universality without collectivity. We are given universal claims like “there is no alternative.” But those universals aren’t very appealing because they are cheated universals. Instead we need a unity that follows Badiou’s notion of forcing the one. A different kind of slogan—something like “no one is illegal”—captures the spirit of it nicely.

Salemy – In your book there is a chapter titled “Are Some Things Unrepresentable?” that relates more directly to the field of visual culture. There you propose two distinct but related theses regarding representation. In the first you argue that digital aesthetics represent “nothing,” and in the second you argue that all digital aesthetics arrive at “one” single image, summarizing the two in a dilemma you call the unrepresentability of information aesthetics. Does this dilemma mark the end of what Jacques Rancière calls the sensible regime of aesthetics, which, in his opinion, itself replaced the two previous regimes of straightforward and sublime representation? How do you see these three categories offered by Rancière as relational models between politics and aesthetics in comparison to your own categories in the book, which arise out of combining the four concepts of coherent and incoherent politics and coherent and incoherent aesthetics? Is the world getting ready to arrive at the fourth and final stage, in which incoherent politics coexists with incoherent aesthetics?

If the problem with Rancière’s theory of the unrepresentability of violence is its exclusion of the affective response by the viewers, how is this lack of affect, precisely in the way you remind us in the book—hardly anyone ever cries out in front of a website like they do in front of a movie—accounted for in your four different configurations? Isn’t the affect deficit the sign of the world of incoherent politics and incoherent aesthetics?

Galloway – When incoherence in aesthetics combines with incoherence in politics, the result is truly radical. At the end of the book I connect this dual incoherence to the generic (what some call “the whatever”). In fact this is the way to escape the ethical mode entirely. The kinds of processes that lead to the generic have sometimes been associated with the ethical mode—thus we speak of “withdrawing” from the world into a kind of suspension of antagonism. Yet there is another realm beyond this, I think, which is the realm of incoherence. And here I don’t mean anything like chaos or anarchy, not at all. Incoherence means simply that there is no centre of gravity; there is no organizing principle. The principle has been suspended, or withheld. (Although of course once it’s suspended a whole series of weird new logics might present themselves. This is where Laruelle comes in, particularly his notions of “unilateral duality” and “determination-in-the-last-instance.”13)

Salemy – Can computers be repurposed to assist us in the process of arriving at this fourth stage, or no? Can the machine that Deleuze identified with the control society be used effectively against itself, or is it that the incoherent politics and aesthetics require a new set of machines not even imagined today?

Galloway – Yes, it’s possible, although it would require a machine quite unlike any computer we currently know. Eugene Thacker and I have speculated on what this might look like in our “Liberated Computer Language.”14 The key is to withdraw from digitality into the generic. I don’t see current machines helping in that way—although there are specific exploits that achieve the generic, things like encryption. The hacker group Anonymous, therefore, is a kind of Laruellean phenomenon, because the group dissolves difference into a crypto-ontology of identity.

Salemy – Do you think that the introduction and use of these new exploits requires a particular kind of political consciousness, or do you think the acceleration and intensification of the control society sooner or later will force us to go beyond the digital?

Galloway – The digital is the realm of the political. This is where distinctions are made: one or zero, self or other, progressive or reactionary, and so on. It’s summarized well by Badiou’s “theory of points” in which the world is collapsed into a single binarism of two points. You must choose: this point or that point, yes or no.15 That’s what digitality ultimately means. But next to the digital lies the realm of analogicity, and this is the ethical realm. Here digitality plays no part. There are no more points, no more distinctions, no more choices. In fact it’s the opposite: to be ethical is to withdraw from distinction; to be, as Laruelle would put it, “insufficient.” In this sense the ethical, the analogue, and the insufficient are synonymous. This is why Laruelle talks about things like destiny and determination.16

Salemy – In the book you speak of your intention to migrate Jameson’s methodology of cognitive mapping toward new media and to read the truth of social life through new structures of information and material resources. For Jameson, cognitive maps are situational representations on the part of the subject about the larger and properly unrepresentable totality, which is the ensemble of social structures as a whole.17 It seems clear at first that understanding interfaces, due to their aesthetic nature, can be achieved through cognitive mapping. After all, Jameson, despite objections from people like Nancy Fraser, who prefers to assign the task of cognitive mapping to critical theory, believes that this mapping is the work of aesthetics.

What I am not able to reconcile is the fact that cognitive mapping, at least in what it used to represent before becoming the subject of Jameson’s thought, shares similar aesthetics with—to use your words—the candy-coated way in which various data sets are visualized today. Somewhere in your text you make the humorous point that all maps of the Internet resemble a cauliflower. How can you explain this contradiction? To provide you with an example from contemporary art, is Christian Marclay’s The Clock (2010) a new cognitive map of cinema or yet another cauliflower-like visualization of cinematic data?

Galloway – The Clock is a virtualization of cinema, in the classic Deleuzian sense of the term “virtual.” Which is to say it gathers all of cinema, applies a certain abstract rule (“all the minutes of the day”), and produces a parallel space of all possible configurations. A lot of Marclay’s work is like this. He is more map than cauliflower. The cauliflower forces a “visualization” while still requiring that it say nothing. The one and the nothing. The Clock doesn’t do that exactly. Marclay’s terrain is ultimately the concept, not the algorithm. He is making conceptual art, not algorithmic art. In that respect his work is quite classical. Perhaps the cognitive map is itself insufficient, and what we really need are cognitive machines. Yet there arises a certain kind of danger, since our world today is filled with more and more cognitive machines with each passing moment—Google algorithms, Predator drones, etc. Thus, the crucial issue for Jameson is not so much the map or the cognitive aspect, but establishing a relationship to the totality. That’s the political moment for Jameson, the moment in which an incommensurable reality (we might say “unrepresentable” reality) is forced into consciousness. So The Clock is quite interesting as concept or virtuality, but I’m not sure it seizes on that necessary next step, the moment of forced commonality.

Salemy – About The Clock, I keep going from thinking it is the first great art of the twenty-first century to thinking of it as the last great art of the twentieth century that arrived ten years late. I think on the level of lists and names, The Clock is the visualization of a series of movies, but I have to agree that it is also conceptual, particularly in the moments in which Marclay chooses to return to the same movie instead of limiting the use of each movie to a single instance. These are the instances in which one can go beyond machinic data visualization and arrive at a form of conscious Dada collage.

I am saying all this about The Clock in reference to a bold statement in your book in which you declare that “we do not yet have a critical or poetic language in which to represent the control society.”18 Do you think these new forms, these new languages, will arise from the field of critical theory/continental philosophy, or more from practices within literature, music, and visual arts? Where should one look for clues for the emergence of these new forms of representation?

Galloway – Remember that there is a very particular reason why we don’t have such a language. The reason is that computational language (code) was specifically designed to break with everything we know language to be: it’s not poetic, it’s not hermeneutic, and it’s not narrative. Sure there’s “Perl poetry”—but you see my point. Even then, the central problem is not that we’ve over-evolved into a kind of hyperdigitality. Language has had its digital aspect from the beginning, with the alphabet being the first killer app of digitality. The problem is that we don’t yet have a suitable analogue language; what Deleuze called a language of “expressive movements, paralinguistic signs, breaths and screams.”19 This is why I’m so obsessed with the non-diegetic aspects of media. We need to become more phatic, not less. More analogue, not less—but, again, only in a very precise definition of “analogue,” a definition that has absolutely nothing to do with nostalgia for the past or some kind of “old media.”

Salemy – In line with your interest in the philosophies of immanence, one gets the sense that, in your opinion, information networks do not represent, copy, or recreate a world, but instead create new and autonomous inhabitable environments. Here one sees a shift away from the ontology of representation into that of presentation, from replication to simulation. How do you reconcile your interest in the Deleuzian concept of immanence with your commitment to Jameson, if not with Marx and Sigmund Freud, whose techniques of thought rest on metaphysical bifurcation and the age-old ideas of the real and the represented?

Galloway – Indeed, these two traditions are very different, at least at the most elemental level. Jameson and Marx are dialectical thinkers, thinkers of antagonism, negation, contradiction, development, and history. And Deleuze is more or less against all that; he throws out the dialectic entirely, and he thinks instead in terms of affirmation, multiplicity, immanence, horizontality, and the nonhuman. I’m not pretending that these two traditions magically combine into some harmonious super organism! Yet they do share something important, for they are both materialist.

For Marxists the material conditions of existence are paramount, for they shed light on the socio-historical situation. Materiality, thus, drives history, in the most literal sense. Likewise for Deleuzians, all spiritual or ideal realities (mind, soul, essence, cause, divinity, etc.) are collapsed into a single, immanent, material plane. In other words, for Deleuze, there is no such thing as “essence”; he will have to invent a whole different vocabulary to fill in the void that essence once occupied—vocabulary that will include words like “fold,” “refrain,” and “repetition.” So from the perspective of matter, Marx and Deleuze fit well together. (In fact, late in life Deleuze admitted what everyone already knew: that he was a Marxist, despite having never written at length about Marx or adopted the common Marxist discourses of his peers in the 1960s and ’70s. And as Deleuzians are keen to point out, the aged Deleuze spoke of an upcoming book project devoted to “the grandeur of Marx,” a book that unfortunately was never realized during his lifetime.) But, as you hint, Marx and Deleuze do not fit well together from the perspective of immanence or the dialectic.

Here’s where it is important to follow some strains of heretical Marxism, particularly those postwar French thinkers who, following Spinoza, try to recast Marx together with a more purely immanent materialism. The most famous of these names today is probably Antonio Negri—although Deleuze should be given a lot of credit for this trend, along with others like Louis Althusser and Pierre Macherey. These are the postwar followers of Benedict de Spinoza who were also Marxists. In Althusser or Macherey the basic choice is not so much Marx or Georg Wilhelm Friedrich Hegel (the classic choice between materialist practice and bourgeois philosophy) but another choice entirely. The true alternative is Hegelian-Marxism or Spinoza. In such an alternative Spinoza is held up as a materialist, like Marx, but a materialist within whom the divisions and distinctions of the dialectic give way in favour of a pervasive immanence of nature. So that is probably the best path to take, in my view, and luckily it’s a quite common path today. Yet another path exists, a darker and stranger path. This is the path of Laruelle, who is also a Marxist and is also totally committed to immanence. I predict that slowly people will shift from the Spinozian/Deleuzian Marxism common during the last couple of decades to a new form of Laruellean Marxism.

Salemy – What about Freud, since you are explicit in the book about not throwing out Freudian psychoanalysis as a methodological technique?

Galloway – I’m relying on a specific way in which Freud was used by postwar French thinkers, and then later by Jameson in a book like The Political Unconscious (1981). This is a tradition that more or less superimposes the Freudian unconscious onto the realm of the superstructure. So there can be an unconscious for society at large, or for culture at large. This strikes me as tremendously useful today, because it allows us to think in terms of trauma, repression, ideology, and mystification. Certainly I admit that these are not the most fashionable concepts anymore, after the turn of the millennium. Today everything is a question of surface and rhizomatic transformations. So I like Freud precisely because he is so anachronistic. My friend Ben Kafka has said that he wants to write a book called Anti-Anti-Oedipus. I can’t think of a more interesting project that should be undertaken today.

Salemy – Is it more or less safe to assume that we are living in the age of big data and interfaces? One can argue that the shift from painting to the printing press, then to photography, and from there to cinema, television, cable television, and the Internet, and from there to our very current regime of signification or worldview, which I’d like to call post-Internet, has been gradual. However, can we claim that the introduction of computers has finally facilitated the dispersion and unification of various historically autonomous systems for the accumulation of data and access to information into a single open, universal, and global medium?

Galloway – For me “big data” means one thing and one thing only: the exploitation of labour. We have to be very militant on this point, and repeat it over and over. Industry trumpets this buzzword, and now academia has picked it up. But no one is quite sure what it means. It means one thing: the extraction of surplus value from unpaid micro-labour. And this can be revealed by the most rudimentary Marxist analysis. Google gains access to untapped data reservoirs—the “big data” of Web search, Gmail, or Google Books—performs complex vector transformations, clusters trends, and extracts the “shape” of these networks. But the fact remains: Google takes value created by other people and doesn’t pay for it. The practice is formally identical to what Marx described one hundred and fifty years ago. In other words, what Google does with big data is formally identical to the colonial and imperial phases of capitalist accumulation, only now the value is being skimmed from immaterial labour rather than physical labour in the classical sense. So in that sense I’m quite cynical about this buzzword “big data.” It cuts right to the heart of the new economy, but it covers over the reality of the situation with a whole complex of ideological mystification.

Salemy – Has the prevalence of databases and interfaces in all facets of life, in addition to causing the emergence of the society of control, had epistemological and ontological ramifications for humanity? For Bernard Stiegler, who, if I am not wrong, is recalled in the book as an interlocutor of media theorists such as Marshall McLuhan and Friedrich Kittler, these ramifications consist of the industrialization of memory and the proletarianization of subjectivity, which he, in my opinion, considers a malaise. Because you are equally invested in these questions, I would like to know your opinion about these ramifications and, if possible, ask you to respond to Stiegler’s propositions.20 Are we dealing today with a technological pathology or simply a major shift in the mode of production? Are distributed networks creating a fundamentally different type of human or new ways of mediation between humans and machines? Is the category of posthuman analogous to the idea of a distributed subject, or what Deleuze has called the society of dividuals?21

Galloway – Stiegler has achieved something profound for phenomenology: completing the work that others only began. He has tried to eliminate the nostalgia and technophobia that linger in Martin Heidegger, which is to say, the distinction that Heidegger makes between the authentic life and the inauthentic life. Instead, Stiegler is willing to take the ultimate step and position technology together with being itself. And this produces what he calls a “technological phenomenology.” This is crucial, since the authentic-inauthentic distinction must be overcome if ever one wishes to surpass the law, the commandment, or the moral structure in general. In fact, it’s another way to think about the so-called ethical turn in philosophy today: morality is over, we are “beyond good and evil.” So Stiegler wants to say that we are always already technological; there was no Garden of Eden in which technology did not yet exist; we are not the fallen children living in some new world profaned by technology (and thus destined to clamour for a return to the past, or die trying).

In this sense Stiegler is very liberating. Yet at the same time a new problem emerges, which is the problem of judgment. In the absence of moral code, judgment is no longer automatic. This is why a whole new armature for practice emerges within the ethical regime: self-sacrifice, forcing, withdrawal into the generic, or recognition of the Other. We might need to think about local intensities, therefore, rather than global laws. Practice, enacted behaviour, performance, fiction and fantasy, a “form-of-life” as Tiqqun puts it.22 The language of pathology is seductive—and certainly a moral instinct lingers in Stiegler, as it ultimately does in all of us—but the important thing is to leverage the language of pathology in certain cases (within the political, for example, the language of pathology is totally legitimate), and to resist it in other cases (within the ethical, for example, pathology has no place whatsoever). I’m not one of those silly surrender monkeys—the kind who criticizes Slavoj Žižek or Badiou because they are “too militant” or because they do not shy away from violence—yet at the same time we must come to terms with a basic fact: that we are living today in a fundamentally ethical universe, not a political one. So the practical task ahead is to come to terms with what the ethical is, and how it may or may not allow for things like peace, truth, love, or death—in spite of a world that denies the people all of those things.

  1. In the phenomenological method developed and used by several philosophers, temporality has been mainly theorized as a sole property of the human subject, a medium for the operation of our embodied consciousness. Deleuze could be argued to be the first to break from this tradition and locate a temporality outside of the human subject. See Henri Bergson, The Creative Mind (Westport, CT: Greenwood, 1946) and Time and Free Will (London: George Allen and Unwin, 1927); Edmund Husserl, Logical Investigations, vol. 2 (New York: Routledge and Kegan Paul, 1970); Alfred Schütz, The Phenomenology of the Social World (Chicago: Northwestern University Press, 1967); Maurice Merleau-Ponty, Phenomenology of Perception (London: Routledge, 2005); Gilles Deleuze, Cinema 1: The Movement Image (Minneapolis: University of Minnesota Press, 1986), 71–86.
  2. Mark B. N. Hansen, New Philosophy for New Media (Cambridge, MA: MIT Press, 2004), chapter 3.
  3. Alexander R. Galloway, “The Computational Image of Organization: Nils Aall Barricelli,” Grey Room, no. 46 (Winter 2012), 26–45.
  4. “What constitutes the crystal-image is the most fundamental operation of time: since the past is constituted not after the present that it was but at the same time, time has to split itself in two at each moment as present and past, which differ from each other in nature, or, what amounts to the same thing, it has to split the present in two heterogeneous directions, one of which is launched toward the future while the other falls into the past…. In fact the crystal constantly exchanges the two distinct images which constitute it, the actual image of the present which passes and the virtual image of the past which is preserved: distinct and yet indiscernible, and all the more indiscernible because distinct, because we do not know which is one and which is the other. This is unequal exchange, or the point of indiscernibility, the mutual image.” Gilles Deleuze, Cinema 2: The Time-Image, trans. Hugh Tomlinson and Robert Galeta (Minneapolis: University of Minnesota Press, 1989), 81.
  5. Friedrich Kittler, Discourse Networks, 1800/1900, trans. Michael Metteer and Chris Cullens (Stanford: Stanford University Press, 1990).
  6. Published after his death and containing two radically different versions of the same manuscript, Historical Grammar of the Visual Arts reflects Riegl’s interest in the intersection of aesthetic and stylistic developments with that of cultural history in general. Pursuing a supra-genre agenda in the second part, Riegl recognizes the significance of media-based art historical divisions like architecture, painting, sculpture, and design, comparing them to different departments of a building, while stressing the need for horizontal, vertical, and diagonal corridors that, based on the cultural history, reconnect these segregated art forms. See Alois Riegl, Historical Grammar of the Visual Arts (New York: Zone Books, 2004).
  7. After discrediting the Kantian notion of correlation between object and subject, Meillassoux goes on to identify Chaos as the absolute base for the mathematics of a world in which nothing has ever been impossible: “Our absolute, in effect, is nothing other than an extreme form of chaos, a hyper-Chaos, for which nothing is, or would seem to be, impossible, not even the unthinkable. This absolute lies at the furthest remove from the absolutization we sought: the one that would allow mathematical science to describe the in-itself. We claimed that our absolutization of mathematics would conform to the Cartesian model and would proceed by identifying a primary absolute (the analogue of God), from which we would derive a secondary absolute, which is to say, a mathematical absolute (the analogue of extended substance). We have succeeded in identifying a primary absolute (Chaos), but contrary to the veracious God, the former would seem to be incapable of guaranteeing the absoluteness of scientific discourse, since, far from guaranteeing order, it guarantees only the possible destruction of every order.” Quentin Meillassoux, After Finitude: An Essay on the Necessity of Contingency (New York: Continuum, 2008), 103.
  8. Wendy Hui Kyong Chun, “On Software, or the Persistence of Visual Knowledge,” Grey Room, no. 18 (Winter 2005), 26–51.
  9. Karl Marx, Capital, vol. 1 (1887).
  10. Gilles Deleuze, “Postscript on the Societies of Control,” October, no. 59 (Winter 1992), 3–7.
  11. For more on the distinction between aligned and unaligned, see Alexander R. Galloway, “The Poverty of Philosophy: Realism and Post-Fordism,” Critical Inquiry, no. 39 (Winter 2013), 347–66.
  12. Alain Badiou, The Rational Kernel of the Hegelian Dialectic (Melbourne: re.press, 2011).
  13. François Laruelle, Principes de la non-philosophie (Paris: PUF, 1996).
  14. Alexander R. Galloway and Eugene Thacker, “Notes for a Liberated Computer Language,” in The Exploit: A Theory of Networks (Minneapolis: University of Minnesota Press, 2007), 159–66.
  15. Alain Badiou, Logics of Worlds (New York: Continuum, 2009), 557.
  16. Laruelle, Principes de la non-philosophie, passim.
  17. Fredric Jameson, “Cognitive Mapping,” in Marxism and the Interpretation of Culture, ed. C. Nelson and L. Grossberg (Urbana: University of Illinois Press, 1988), 347–60.
  18. Alexander R. Galloway, The Interface Effect (Cambridge: Polity, 2012), 98.
  19. Gilles Deleuze, Francis Bacon: The Logic of Sensation (New York: Continuum, 2003), 79.
  20. Bernard Stiegler, Technics and Time 2 (Stanford: Stanford University Press, 2009), 15–20 and 97–187.
  21. Deleuze, “Postscript,” 3–7.
  22. Tiqqun, Introduction to Civil War (Los Angeles: Semiotext(e), 2010).
About the Authors

Alexander R. Galloway is an author and associate professor in the Department of Media, Culture, and Communication at New York University. He obtained his PhD in Literature from Duke University in 2001. Galloway’s research interests include media theory and contemporary philosophy. He is the author of Protocol: How Control Exists after Decentralization (MIT Press, 2004), Gaming: Essays on Algorithmic Culture (University of Minnesota Press, 2006), The Exploit: A Theory of Networks (coauthored with Eugene Thacker, University of Minnesota Press, 2007), and The Interface Effect (Polity Books, 2012).

Mohammad Salemy is an independent NYC/Vancouver-based critic and curator from Iran. He co-curated the Faces exhibition at the Morris and Helen Belkin Art Gallery, University of British Columbia. In 2014, he organized the Incredible Machines conference in Vancouver. Salemy holds a master’s degree in Critical and Curatorial Studies from the University of British Columbia and is an Organizer with the New Centre for Research and Practice, where he oversees the Art and Curatorial Program. He is a regular contributor to The Third Rail.
