
Re-conceiving information space: the fix is in

Author: M. Brock Moorer

Contact Info: M. Brock Moorer, [email protected]

Abstract: This paper explores spatial imaginings of the internet and makes the case for a new imagining of the internet as ‘real’ space (contingent, relational, and empirical). This new understanding of the internet and information space as real space leads to an examination of its vulnerabilities to capitalism, especially in relation to David Harvey’s ‘spatial fix.’

Word Count: 7989


“What we are seeing are new spaces being imagined into being by reworking the spatial technologies that we hold dear, and what is clear is that these acts of imagination are all profoundly political acts; what we often think of as ‘abstract’ conceptions of space are part of the fabric of our being, and transforming how we think those conceptions means transforming ‘ourselves’” (Thrift 2003, page 105).

“Is cyberspace a kind of space?” (Adams and Warf, page 141).

There are many ways of imagining ‘information space,’ the space of electronic communication that includes the internet, mobile phone networks, text messages, emails, and numerous iterations of coded ‘objects’ that are transported/communicated across networks of computers and software and with which they constantly interact. This space doesn’t fit well with our everyday experience of physical space filled with what we think of as material things like cars and houses, highways and forests. Information space and the things it contains are generated inside computers by software/code that is created and articulated by human programmers. This lack of a traditional, Euclidean physical location or position in the ‘real world’ of trees, sidewalks and houses has led to a variety of ways of imagining the internet or information space. They vary wildly, from ‘information superhighways’ to William Gibson’s fictional ‘cyberspace,’ to Castells’ ‘space of flows,’ to Zook and Graham’s ‘Digispace,’ to Dodge and Kitchin’s ‘code/space’ and Batty’s ‘cspace’ and ‘cyberplace’.

These and other scholarly imaginings of cyberspace as a) communication medium, as b) software or code/space (Dodge and Kitchin, 2005) and as c) hardware (computers, routers, telecom cables) are critical in understanding and locating the points at which information space intersects with the real world, creating specific and unique forms of hybrid space or place (Dodge and Kitchin, 2005; Zook and Graham, 2006). As I read these sorts of conceptions of information space, however, I find myself wondering what is this space where software ‘lives’ and from which it interacts with and affects real space (or, as web users tend to refer to it, real life or the real world). As Adams and Warf ask, “is cyberspace a kind of [geographical] space?” (page 141). Does it have boundaries and hierarchies of scale? Does it contain ‘places’? Does it operate by different ‘physical’ rules and laws, different standards of measurement? And most importantly, who benefits, and how, from these imaginings of information space as a space/place (or not)?

The purpose of this paper is to explore information space as a truly empirical ‘real space’ in its own right. Through this investigation of information space I hope to begin to understand its geography, its unique topography, and to protect it from those forces that would conquer this new ‘frontier’ to satisfy the expansive drive of capitalism's ‘spatial fix’. I will build on earlier scholarly conceptions of information space and the work of others investigating this ‘new frontier’ to reconceive information space as space, to imagine ‘sites’ and locations online as places in their own right. Armed with a new understanding of information space, I will explore potential threats to that space and eventually locate potential sites of resistance.

(INFORMATION) SPACE


So how do we define information space as space when there are so many definitions of space? And what exactly is ‘information space’? Information space, for the purposes of this paper, is the space of all electronic communications and transmissions including cell phone calls and text messages, radio, the internet and the world wide web. As many scholars have pointed out, this space can be extended to include television (Adams, 1992), media, books, newspapers and the spaces where these information technologies intersect to form new spaces in the real world (Dodge and Kitchin, 2005; Zook and Graham, 2006).

In order to begin an investigation of information space as space, we need to establish a point of reference, a way to orient this discussion in the midst of so many definitions of space. Space is where things happen, where things/actors (understood as collections of matter and/or energy) reside and interact with each other according to certain empirical and physical laws. It is not a stage upon which actors are located; its existence is contingent upon the things that inhabit and simultaneously compose it. It is constantly created, made by the interactions of things – in other words, the ‘events’ – that inhabit it and which are themselves composed of spacetime.1

Information space is generated at the intersection of three actors: 1) computers (hardware), 2) software (programs/code) and 3) human users.

1 This is based on theoretical physics’ conception of space. Specifically, this is a reference to Einstein’s re-imagining of Newton’s absolute space (a stage upon which events happen) to relative space where space is created by the interaction of objects (matter and energy) that inhabit it. For an excellent overview/summation, I recommend Part I of Lee Smolin’s Three Roads to Quantum Gravity. And of course, there’s always the source material, Einstein’s Relativity.


Human Relational and Rhizomatic Space

Information space is generated from the network of human relations as they are embodied in software, hardware, and the activities of online ‘users’. It is contingent space and its very existence relies upon the interaction of these agents. There is a growing group of scholars who describe the internet and world wide web as rhizomatic (Froehling, 1997; Calleja and Schwager, 2004). According to Deleuze and Guattari, rhizomatic space is “an acentered, nonhierarchical, nonsignifying system without a General and without an organizing memory or central automaton, defined solely by a circulation of states” (page 21). Intuitively, information space, especially the internet that provides its foundation, would seem, by definition, to be rhizomatic. The internet was created to decentralize communication in order to protect it. Any site, any message, any user can connect to any other point (user, site, bulletin board, etc.) in this space and there is no in-built hierarchy (although I believe, and will demonstrate later in this paper, that a hierarchy has begun to emerge, a hierarchy based on capitalist standards of ‘ranking’). In this space no point (no user, no server, or packet of information) is given more importance, more weight, than any other point. There is no central ‘organizing memory,’ but instead a virtual infinity of memories spread across servers and hard drives located throughout ‘real’ and virtual space. This includes the non-machine memory of the humans who interact with and create this space. It would seem that information space is rhizomatic, but is it really?

Some interactions within this relational space, however, could arguably occur between non-human actors. Thrift and French have highlighted the increasing agency of software, even contending that programs are beginning to have agency or ‘intelligence’ themselves as they become involved in the “automatic production of space” (Thrift and French, 2002). Software, embedded in urban systems of control (traffic systems, surveillance) and inaccessible to citizens, has become “part of the taken-for-granted background” that controls and ‘automatically’ generates new spaces (Ibid, page 21).

Even if more sophisticated ‘intelligent’ software systems are developed, their automatic production of space through code ultimately remains entangled with and dependent upon people. Software is written by humans (typically large teams of programmers and engineers) and it lives ‘in’ or ‘on’ hardware until it interacts with humans (or their coded agents).

Software as Space


Thrift and French privilege one component of information space (software) and its interactions with very particular moments of human real space, where “everyday spaces become saturated with computational capacities” (Ibid, page 315). They acknowledge software’s physical ‘spaceness’: “Software takes up little in the way of visible physical space. It generally occupies microspaces” (Ibid, page 311). In the world of computing, space is everything and each microspace has value in a very real-world way, corresponding to expensive (in terms of the limited space of a hard drive and the limits this places on the speed of calculations) hardware and/or software. In other words, those ‘microspaces’ are critically important on the scale of code/software and should be recognized as important spaces.

Fundamentally the microspace of code corresponds to Thrift's (2003) definition of “empirical space,” which is similar to the measurable space of physics and is composed of the things that make up space. We take for granted that space has some physical components – walls, ground, atmosphere, roads, etc. – that give it structure or place. These ‘things’ mark space and distance that can be measured. But things within information space are not only the real objects that comprise it (computers, cell phones, etc.), but include the software objects that exist within it. A code object can be a program or an operating system, a web site or a video file. These things in information space are entities that lose their identity if anything (an important line of code, for example) is subtracted from them, much like a table loses its usefulness if a leg is removed. In other words, a digital music file ceases to be music if part of its program/code (or structure) is damaged or removed. An email ceases to be useful as an object or method of communication if part of its program is damaged or removed because it will be unreadable. It will no longer fit the description of an ‘email’ and its identity changes to that of ‘junk’ or ‘error’ code. Like DNA code, programs are fragile, and minor mistakes in software code, whether caused by human or machine error, can cause enormous changes at the level of the program, even changing the function of the program itself. More importantly, these code objects or programs interact with the network (human, machine, and software), are traded and transacted across the network, and, as I’ve already noted, like their physical counterparts (cars, trees, tables) these information objects take up space.

Thrift (2003) also sees measurability as a necessary quality of empirical space. Space must have distance and standards of measurement that are ‘universal’ to the space. In other words, an inch is an inch anywhere on earth. I would argue that, based on his definition of measurement, the information space of the internet has its own standards, although they do not directly correspond to physical measures (although, as Thrift himself argues, standards of measurement are arbitrary). We are all familiar with file ‘size,’ but we rarely think about it beyond its immediate implications for emailing larger files or downloading music. File size is actually the amount of space (measured in bits of information or ‘microspaces’) that a piece of software takes up on the network (or on computer hardware: a hard drive or server). This is critical to an understanding of information space as empirical ‘space’ in its own right. In other words, information space cannot be limited to abstract notions of ‘cyberspace’ as a conceptual space akin to literary or metaphorical space, or even that nebulous contingent social space that is so difficult (or impossible!) to conceptualize and measure. This is a space with physical laws, where objects have relative size that limits transport and traffic ‘across’ the network of hardware (cables, servers, routers, etc.).

Moving Information Through/Across Hardware Space

The internet’s existence as a distributed system is based upon the transport of ‘packets’ of information across a “dumb network” that doesn’t prioritize or even care what sort of information each packet is carrying. Each time a file is sent across the internet it is broken down into discrete packets of information, which are then sent out onto the network to a particular address (a string of numbers assigned to the destination computer). These packets are then reassembled at the point of download into the original file (Abbate, 1999). It’s important to understand that “[o]n the Internet, all packets are equal. Any one packet hurtling over the pipe to my house is treated more or less the same way as any other packet, regardless of where it comes from or what kind of information -- video, voice or just text -- it represents…bits are bits, and a video is no more important than a Word file” (Manjoo, 2006). These packets are the quanta, the individual bits of spacetime upon which the internet is built.
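The break-apart-and-reassemble process described above can be sketched in a few lines of Python. This is a toy illustration of the principle only, not the actual TCP/IP implementation: the packet size, sequence numbering, and sample message are invented for the example, and real packets carry headers, checksums, and addresses that are omitted here.

```python
# Minimal sketch of packet fragmentation and reassembly on a "dumb network".
# Illustration only -- not real TCP/IP; sizes and headers are simplified.

PACKET_SIZE = 8  # bytes of payload per packet (an assumption; real networks carry ~1500)

def fragment(data: bytes) -> list[tuple[int, bytes]]:
    """Split a file into numbered packets; the sequence number lets the
    destination reassemble them even if they arrive out of order."""
    chunks = (data[i:i + PACKET_SIZE] for i in range(0, len(data), PACKET_SIZE))
    return list(enumerate(chunks))

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Reorder packets by sequence number and rejoin the payloads.
    The network itself never inspects the payload -- bits are bits."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"all packets are equal on a dumb network"
packets = fragment(message)
packets.reverse()  # simulate out-of-order arrival across different routes
assert reassemble(packets) == message
```

The sequence numbers are what allow the network to stay ‘dumb’: routers can send each packet by any available route, in any order, and the destination alone is responsible for restoring the original file.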

The internet was created this way in order to keep information flowing even if some of its nodes (routers and/or servers) crashed. If all packets are equal, it is easier to reroute the flow of traffic to different ‘parts’ of the network in order to avoid bottlenecks. As Manjoo puts it, “Introducing intelligence into the Internet also introduces complexity, and that can reduce how well the network works. Indeed, one of the main reasons scientists first espoused the end-to-end principle [of dumb networks] is to make networks efficient; it seemed obvious that analyzing each packet that passes over the Internet would add major, if not impossibly large, computational demands to the system.”

It is this network stability, along with the illusion of instantaneous transmission of data packets, that gives rise to the apparently frictionless transport of communications that leads to Harvey’s ‘time-space compression.’ But there is friction in information space. Larger files take up more space in the network and take much longer to download or move/translate from information space to human space. Extremely large files accessed by large numbers of users can even create bottlenecks, slowing down and inevitably crashing servers and whole sections of the network. In fact, hackers use the friction of file size to launch one of the most damaging forms of sabotage, ‘denial of service’ attacks, which involve overwhelming servers (mail servers, for example) with an enormous volume of traffic, thereby over-loading the computers and software and crashing the system.

Files do not simply take up space, but spacetime. Like an object’s mass in ‘real spacetime,’ a file’s size is directly related to the amount of time required to transport it from one location to another, as well as the amount of hardware space required to store it. Of course, file transport can be expedited by faster computers with higher bandwidth, much as travel in real space can be sped up by investing in faster vehicles on roads or routes designed for faster traffic. When we talk about a file’s size, then, we are talking about the amount of time that is required to move it across information space and into human space, in exactly the same way that an object’s mass in ‘real space’ has embedded within it the time required to move it. The systems of measurement are different (microseconds rather than days or hours), but we are still talking about actors (software, hardware, and human) moving things with weight across time and space. Even when these objects are at rest (stored and inactive in hardware), they continue to impact and interact with the larger space as software checks and rechecks hardware and other software. Their very existence within the system creates drag on the overall flow of information.
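The relation between a file’s size and the time needed to move it can be made concrete with a back-of-envelope calculation: size in bits divided by the connection’s bandwidth in bits per second gives a lower bound on transfer time. The file sizes and bandwidth below are illustrative assumptions, not measurements, and real transfers also pay latency, protocol overhead, and congestion on top of this floor.

```python
# "Friction" as transfer time: a file's size in bits divided by the
# connection's bandwidth (bits per second) gives the minimum time needed
# to move it across the network.

def transfer_seconds(size_bytes: float, bandwidth_bps: float) -> float:
    """Lower bound on transfer time, ignoring latency, protocol
    overhead, and congestion -- all of which add further friction."""
    return (size_bytes * 8) / bandwidth_bps

song = 5 * 10**6    # a 5 MB music file (assumed size)
movie = 5 * 10**9   # a 5 GB video file (assumed size)
link = 10 * 10**6   # a 10 Mbit/s connection (assumed bandwidth)

print(transfer_seconds(song, link))   # 4.0 seconds
print(transfer_seconds(movie, link))  # 4000.0 seconds -- over an hour
```

The thousand-fold gap between the two results is the ‘mass’ of the larger file made visible: the same network, the same laws, but very different friction.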

File size is only the most obvious form of friction in information space. There are many forms of friction that can slow down the transport of information across information space. Viruses, software conflicts, and even hardware malfunctions can disrupt the distribution of packets and create incredible drag on the network. And these are just the points of ‘friction’ that happen within information space. There are an infinite number of points of friction between the human user and their (in)ability to interact with or access that space, because information space, like space in the ‘real world’ (as internet users tend to refer to space outside), is governed by physical laws that are dictated by the physics of electricity and conductivity. These laws determine how much information can be stored on drives, how much can be transported over cables and wireless, and how fast this information can be transferred across networks. Although the laws for moving about this space may be different from the laws that govern real space (these things move at speeds near the speed of light), there are still laws and limits to movement in this space. Despite many arguments to the contrary (Harvey, 1990), and despite numerous and ongoing technological advances in storage and communication technology, there is distance and friction in information space and it can be exploited.


Information space even generates its own distinct physical features that are only understandable as emergent features of this complex, physical space: ‘internet black holes’2. A project at the University of Washington called Hubble tracks the appearance of these black holes, which are essentially places on the internet where information is ‘mysteriously’ lost. These black holes are not permanent features or even, at this point, predictable. They emerge out of the complexity of the network only to disappear just as suddenly, taking with them any information (packets) that has fallen into them. In other words, the distance between two users connected to this ‘frictionless space of flows’ (Castells, 2001) suddenly and inexplicably becomes insurmountably large when one of these black holes appears between them (University of Washington, Hubble: Monitoring Internet Reachability in Real-Time). No one user, piece of software, or piece of hardware purposefully generates these features; they are produced randomly, emergent features of the complexity of interactions between the human, code, and hardware actors that are information space.

Information Space is 'Real' Space

Each time a (human) user accesses the internet (to get email, text messages, download music, or conduct complex banking transactions), they begin a complex interaction with an intricate network of machines, software, and other humans: humans who are actually present on the network, humans who are present only through their participation in the production of software (or in the production of objects such as blog posts generated by humans and software), and humans whose presence can be felt in the hardware (computers, cables, routers, and servers) they design, create, build and constantly maintain. Despite the apparent dominance of software and hardware in the typical conception of the internet, human interaction with these objects is absolutely essential to the generation and creation of information space3.

2 Actually, these features resemble emergent weather events such as storms more than black holes. I’m assuming the name ‘black holes’ was chosen because of the disappearance of information, much like an astronomical black hole.

Even the physical laws by which information and interaction exist and move through information space are based in human relations. These laws, known as protocols – the standard of which is currently TCP/IP – govern not only how information moves through and across networks, but also how networks communicate with and connect to each other. Although we think of network protocols as ‘natural,’ arising from computer programs and hardware, TCP/IP was only adopted as the standard for the internet after a lengthy struggle between telecommunication companies, computer companies, and state actors including the US military (Abbate, 1999). As Abbate describes it, “[network] standards are a political issue because they represent a form of control over technology” (Ibid, page 147). In short, the physical laws of information space are not unchangeable. Because these standards are created and maintained by humans, they are vulnerable to political attacks from the actors (state, individual, and corporate) who wish to control the movement of information essential to the creation and generation of this space.

3 There is an art piece that illustrates this point beautifully: Cory Arcangel’s installation, “Permanent Vacation,” is comprised of two computer screens, each displaying a separate email account. Both accounts have been set up with ‘autoresponders’ that automatically send a reply to incoming email with the message that the user is on vacation. The two screens show the result of one initial email from one account to the other: an endless feedback loop of useless messages generated by software and hardware that ‘bounce’ back and forth between computers and ultimately reach no one. Without the human component, the message becomes meaningless, making the medium (information space) collapse.

Information space, therefore, is not only contingent and social (and political), composed of bounded and political entities and local communities; it is also empirical space with its own standards and rules, inhabited by humans, software, and hardware.

(INFORMATION) PLACE

In some ways, space is easier to talk about. We can use empirical terms to refer to the physical nature and limits of space. ‘Place’ is less definite. Its definition slips from paper to paper, yet place is essential to (information) space. Places are emergent and constantly mutating pieces of space (composed of the interactions of objects, including humans) that ultimately interact with other places, with objects, and with the larger space in which they reside to generate and reproduce space. Space and place, for the purposes of this paper, are distinguished by differences of scale. Space is composed of places and networks of places that can become places in their own right.

We can deepen the process of re-imagining information space, more specifically the internet, as including information places through Massey’s description of a “progressive concept of place”:

“First of all, it is absolutely not static. If places can be conceptualized in terms of the social interactions which they tie together, then it is also the case that these interactions themselves are not motionless things, frozen in time. They are processes…. Second, places do not have to have boundaries in the sense of divisions which frame simple enclosures. …Third, clearly places do not have single, unique ‘identities’; they are full of internal conflicts. …Fourth, and finally, none of this denies place nor the importance of the uniqueness of place. The specificity of place is continually reproduced, but it is not a specificity which results from some long, internalized history” (Massey, 1994, page 155, emphasis mine).

Information places are present in information space (which is itself generated in the interaction between software, hardware and humans).

Information space is not static

Information space does not reside in one physical place; it is a process that is spread over a vast network of computers and continually reproduced by the information that travels across it and the users that interact with it. Information space continues to exist in its own right beyond these moments of specific agency. Computers and pieces of software are constantly interacting with this space, with each other, and with human agents in the vast information space network, and these interactions are information space and place. A frozen moment of time on the internet or the world wide web is impossible. Information space and place not in motion cease to be the places we imagine.

Thus, information space clearly meets the first criterion of Massey's definition of ‘place’ and also contains a collection of places that interact with each other and themselves to endlessly reproduce space. Social networking sites such as MySpace and Facebook are the most obvious examples of information places, places that ‘can be conceptualized in terms of the social interactions which they tie together’. Technically, MySpace is simply a piece (or multiple pieces) of software residing on a collection of servers with which users interact via software interfaces and hardware (personal computers, cell phones, etc.). But MySpace is also a collection of personal, individual sites created by individual users who interact with each other and/or with communities of other users and places to continually reproduce this place we think of as MySpace. These sites/places are as individual as the users who create them and are connected to a unique network of other users. Nested within these information places are other places or communities bound together by the social interactions and interests of their users. These places shift and change as users change, never settling into a static identity or relationship with other places and with the whole of information space.

Information space and boundaries

Information space has no ‘boundaries in the sense of divisions which frame simple enclosures’; it is, as we’ve already described, a network of interactions that happen on all scales, from the user/computer interaction to the overall collection of interactions we describe as the web. It is by its very nature a porous, hybrid, constantly changing space built of ephemeral information circulating on cables, routers, and servers and interacting with humans.

That is not to say that there are no divisions, no attempts at creating boundaries in information space. The most obvious are the firewalls created by certain states and corporations to control private networks and/or the movement of specific users (citizens of specific states) within information space (Reporters Without Borders, 2008; Fallows, 2008). Although these attempts to create boundaries are typically understood in terms of censorship (in the cases of China and Iran) or corporate security, they are very real attempts to draw boundaries in information space – to limit the movement of certain users and certain information to specific, bounded, and heavily surveilled areas. But these boundaries themselves shift constantly to reflect the changing content of information space and the users they contain. Even within these firewalled areas, different places are constantly created and un-created in response to these shifting boundaries.

Information space and identity

Information space and the places that compose it have no ‘single, unique “identit[y]”’. For business travelers, information space is the invisible mechanics behind the ticketing, trafficking and control of airlines, or the interface they use to trade stocks and bonds; for teenagers on MySpace it is a gathering space like a mall where they can interact with their peers while simultaneously guarding against harassment and/or abuse from peers and predators. Information space is continually re-imagined and re-constituted based upon the needs of the user, the hardware, and the software involved in the moment and place of interaction. Specific infoplaces such as MySpace can act as a communications hub one moment, a community the next, a business/marketing portal (for bands and/or online advertisers) the next. Identity for users and places online is never fixed. The interaction of human, machine and software is messy; its edges are not clear and easily defined.


Massey also states that places “are full of internal conflicts,” which is consistent with community or social networking sites. Conflict is a part of everyday life in these sites, where everything from the boundaries of communities (who will and who will not be allowed citizenship) to the acceptance of new forms of language and overall rules of conduct is constantly contested. These communities (BoingBoing.net, LiveJournal, World of Warcraft, etc.) employ moderators to control these conflicts and ultimately to keep them from destroying the communities they continually reproduce. Even moderated conflicts shape and continuously change and/or reinforce the identities of these places.

Information space is continually reproduced

The specificity of information space and of the places it contains is “continually reproduced” through the network, the intersection of human user, software, and hardware. Each user brings a different set of interactions, thereby changing space, or localizing it to create a unique narrative born of the complex interaction of a specific human with machines and specific pieces of software. This is not specificity determined by ‘some long, internalized history’; these places are determined and created by each user, each interaction, each community, and they change as users, software and hardware change over time. Information space is a place itself (one that is different for every user in every moment), but it can also be simultaneously characterized as a network of specific places (sites) at which humans, hardware, and software interact. More importantly, these interactions between machines, software, and human agents create localized places in information space that have their own autonomous identities without fixed boundaries.


Martin Dodge’s exploration of Alphaworld offers another example of a virtual world inhabited as a ‘place’ in its own right, with its own topography, physical laws, and a social structure that is constantly reproduced by users and programmers (Dodge, 2002).

Thus, these places in information space are bounded in the Masseyan sense:

“Places came to be seen as bounded, with their own internally-generated authenticities, as

defined by their difference from other places which lay outside, beyond their

borders” (Massey, 2005, page 64). Although these boundaries are porous and hybrid,

they enclose localities with distinct identities and/or narratives. This can be seen in the

differences between social networking sites (and the rivalries and loyalties of their

respective members) such as MySpace, Friendster, and Facebook. Although the software

(rules and laws of use) and functionality of these spaces are nearly identical in purpose and

use, each has its own distinct community identity that is not interchangeable. Studies

even indicate that these social sites differ along socio-economic lines (boyd, 2007;

Hargittai, 2007).

A more obvious example of bounded, internally-generated authenticities can be

found in Massively Multiplayer Online Role-Playing Games (MMORPGs), or the ‘synthetic

worlds’ investigated by Castronova. To enter these distinct places, people create

characters or avatars through which they interact with other players and an entire ‘world’

where they live, eat, fight, and die according to the laws and code of a specific gaming

world. Each world is unique in its programming, architecture, and its very nature

expressed through the type of play involved, which can be anything from killing monsters

and collecting gold, to creating simulated cities, etc.


But it is not just the software and hardware that makes these worlds or places

unique, it is the interaction of users as they participate in information space with each

other and the software that resides on different ‘servers’ across the network. Players

interact with each other, form communities and guilds that transform play and the very

places they inhabit inside these virtual worlds. Participants in these places have even

begun to sell objects (swords, currency, even entire characters) in ‘real world’ markets for

real currency (Castronova, 2005), blurring the line between what is real and what is

‘virtual’ or digital. The self that interacts with these places that exist in information space is

then bringing pieces of the virtual world with it to interact with the real world in very real

ways. This has also created tensions between the programmers/creators of these worlds

who control these objects and the players who assume ownership of what is essentially a

piece of code, owned by the company that programs and maintains the MMORPG world.

THE (INFOSPATIAL) FIX IS IN

Information space is then measurable space, governed by physical laws and rules, full of

and constituted by places and things that interact with humans, software, and hardware to

constantly reproduce it. But what does that mean and why is it important? If we begin to

re-conceive information space as ‘real’ space with 'real' places then we are forced to see

its vulnerabilities and imagine new ways of defending this space from the intrusion of the

aggressive forces of capitalism, which are, according to Harvey, always in need of new

spaces to inhabit:


“Spatial displacement entails the absorption of excess capital and labour in geographical expansion. This ‘spatial fix’ (as I have elsewhere called it) to the overaccumulation problem entails the production of new spaces within which capitalist production can proceed (through infrastructural investments, for example), the growth of trade and direct investments, and the exploration of new possibilities for the exploitation of labour power….

If continuous geographical expansion of capitalism were a real possibility, there could be a relatively permanent solution to the overaccumulation problem. But to the degree that the progressive implantation of capitalism across the face of the earth extends the space within which the overaccumulation problem can arise, so geographical expansion can at best be a short-run solution to the overaccumulation problem.” (Harvey, 1990, page 183)

But in the world of information space, which is constantly expanding, evolving, and

virtually limitless, the infoplaces it contains are vulnerable to the geographical expansion

of capitalism and Harvey’s “spatial fix” is no longer a “short-run solution.” In the

virtually infinite geography of information space, capitalism has the makings of a

limitless spacetime fix, i.e., the infospatial fix.

We can see capitalism’s move into information space clearly if we follow a

timeline of the internet’s history (Computer History Museum, Timeline Exhibit). The

internet began as a networked collection of government (mainly military) and

‘foundation’ computers over which academics and programmers communicated with each

other and shared information (primarily through USENET). This network went through a

period of exponential growth (and is still undergoing exponential growth) and several

transformations to the current standard protocols.


The World Wide Web (or the protocols that created it) was officially established in

1993. As business became aware of the potential of the world wide web for advertising

and eventually for profit in the form of web retailers and online stores, growth of the web

exploded as each corporation scrambled to create a ‘web presence.’ The movement of

capitalism into information space in the form of the world wide web created a demand

that caused the web to grow even faster than the internet backbone that

preceded it (Zook, 2005).

The Hardware Fix

The dangers of the infospatial fix run deeper, to the very infrastructure of information

space itself – the packets that carry information across and through hardware. The

backbone of the internet (the hardware, routers, servers, and cables) is owned and

controlled, for the most part, by a small number of companies. Verizon and AT&T alone

own a substantial number of internet ‘pipes’ and AT&T has proven its interest in buying

more and has taken steps to control information space at the smallest, most vulnerable

scale.

Manjoo (2006) outlines Cisco’s plan to create a hierarchy of packets

using technology it calls “deep packet inspection” that would allow Cisco (and ultimately

its customers such as AT&T) to peer into each packet and determine its value based on a

variety of variables including what kind of software thing it is (video, audio, spreadsheet,

email, etc.), where it originated, who or what created it, and how large the original file

was. This will allow Cisco (and its clients) to charge companies (and, in the end,


consumers) more to move certain, more valuable, information packets across its network.

Those who can’t pay will find their packets, their things, relegated to a different kind of

information space with different rules and distances over which their things must travel.
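The packet hierarchy described above can be sketched in miniature. This is a hypothetical illustration only, not Cisco's actual deep-packet-inspection system: the field names, the priority table, and the `classify` function are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    content_type: str   # e.g. "video", "email", "spreadsheet"
    origin: str         # where the packet originated
    size_bytes: int     # size of the original file

# Hypothetical tier table: a carrier could rank traffic types and push
# non-paying senders' packets into a slower delivery class.
PRIORITY = {"video": 1, "audio": 2, "email": 3}

def classify(packet: Packet, sender_pays: bool) -> int:
    """Return a delivery tier: lower numbers travel 'closer'/faster."""
    base = PRIORITY.get(packet.content_type, 4)  # unknown types rank last
    return base if sender_pays else base + 10    # non-payers pushed down

fast = classify(Packet("video", "paying-cdn.example", 5_000_000), True)
slow = classify(Packet("video", "blog.example", 5_000_000), False)
assert fast < slow  # identical content, different 'distance' to the user
```

The point of the sketch is that two identical packets end up at different 'distances' from the user purely because of who paid.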

AT&T is one of the most aggressive companies in this drive to own the internet.

They are already planning to build ‘intelligence’ into their networks, thereby setting limits

on the information packets that travel over their pipes. This scenario is straight out of

Harvey as corporations “pursue profit (or other forms of advantage) by altering the ways

time and space are used and defined” (page 229). Capitalism is simply doing what it does

best, spatially fixing the problems of over-accumulation by moving into and controlling

new geographies, in this case, information space and the microspaces and places that

compose it. And, as AT&T argues, they own the network or a substantial part of it (the

cables, servers, and routers over and through which information must travel) so they

should be able to control the protocols for traffic that travels over it.

The decentralized nature of information space (which is built into the packet

technology at its root) and the structures built upon it (most notably, the web) allow for

the creation of vast, organic communities of users and sites. With the hierarchical

prioritization of packets, new territorialities of spacetime will emerge, as corporations

dictate how this space is to be used and draw lines of ownership around certain territories.

Right now, anyone with the proper tools who speaks the language can build a site or

community in this frontier, but the time is near when the very fundaments of that space

will be controlled by corporations and the interests of capital. Those without money or


voice will be denied or limited in their mobility within and/or through information

space and will be relegated to ‘virtual’ shantytowns.

Of course, there are already vast differences in the accessibility of information

space that depend upon geographical location (access to the network), class, gender, and

citizenship. One of the most well-documented points of friction in information space is

access (Baliamoune-Lutz, 2003), (Main, 2001). Since the breakup of state-owned

telecommunications monopolies, private corporations have taken ownership of most of

the network that underpins the internet and information space. The privatization of

telecommunications in many countries including the U.S. ensures that accessibility will

be driven by market forces. The ‘digital divide’ between those with access and those

without is the subject of a great deal of writing about the internet including U.N.

initiatives to bridge the divide between the so-called developed and undeveloped world.

As one would expect, the limits of access seem to follow the boundaries that have

historically limited access to resources and technology: race, class, poverty, ethnicity,

geographic location, gender. The World Summit on the Information Society is pressing

the U.N. to add access to information space to the list of fundamental human rights, but so

far only two countries (Estonia and Greece) guarantee internet access as a basic human

right. And it is still not clear that more access is inherently better for those it serves if

they simply become users (and not citizens) whose access to information is limited, but

whose access to commodities is not.

The Software Fix


Beyond accessibility and the attempts to change the fundamental rules of information

space, there are other, less visible capitalizations. Many writers have taken Google to task

for its complicity in the jailing of Chinese dissident bloggers, but also (and less visibly)

for its role as the premier online and increasingly offline (in institutional libraries)

archivist (Vaidhyanathan, 2005). Vaidhyanathan specifically outlines the dangers of

contracting a profit-driven company, inevitably beholden to its stockholders, to

digitize and archive the libraries of public institutions and universities. Google will not

only control the rights to the digitized material, but will (and I argue more importantly)

be responsible for establishing the categories (and thereby the search parameters) within

which materials in that archive are placed. It is Google’s power as an archivist that

concerns me the most because the power to create categories is essentially the power to

create boundaries and thereby potentially limit access to certain information and to

certain information places.

We have already established that information space could currently be described

as non-hierarchical, but Google is working diligently to change that. In a very real sense,

search engines are the maps of information space. They draw the lines around and

between sites/places. They direct traffic to certain sites/places and away from others.

They hierarchically rank and direct traffic to those sites they’ve included in their

databases. Most of us think of Google as an unbiased search engine. We enter the

keywords of our search (the kind of sites we want to find, the characteristics of the

infoplace we’d like to visit) and Google immediately responds with lists of sites/places

that fit those keywords. Most don’t think about the esoteric processes involved in ranking


those keywords or the order in which those sites are listed in our search results, an order

which literally and figuratively brings sites closer to the user or separates them.

Google’s ranking system is a carefully guarded secret because there is an

enormous business in hacking Google’s system. As any business with an online presence

knows, the closer you are to the top of the ranking, the more likely you are to receive

click-through visits from consumers. Thus, the closer you are to the top of Google’s

ranking system, the closer you are to consumers/users. Studies have shown that ‘users’

typically don’t click beyond the first page of results when searching sites so it is critical

to the online presence of businesses (and therefore critical to Google’s extremely

lucrative business selling advertising linked to keywords) that their site be ranked in the

top ten. This may sound like an issue of interest only to computer geeks and

businessmen, but there is a much more insidious agenda at work here. What Google is in

fact doing is creating a hierarchy of sites or places online. That is, Google is

territorializing the rhizomatic expanse of information space and ranking sites with an

opaque, black-box system that is inherently capitalistic.
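Google's production ranking is a guarded secret, but its published core, PageRank, shows how hierarchy emerges from the structure of links alone. Below is a minimal sketch of that idea over a hypothetical four-page link graph; it is an illustration of link-analysis ranking in general, not Google's actual system.

```python
def pagerank(links, damping=0.85, iters=50):
    """links: {page: [pages it links to]} -> {page: score}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}  # 'teleport' share
        for p, outs in links.items():
            if not outs:                        # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:                               # pass rank along out-links
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Hypothetical graph: three pages link to 'hub'; 'c' receives no links.
graph = {"hub": ["a", "b"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
scores = pagerank(graph)
assert scores["hub"] == max(scores.values())
```

Even in this toy graph, rank concentrates at the well-linked 'hub': a site's position in the hierarchy is set by the map-maker's algorithm, not by anything the site itself says.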

Much like the corporate owners of the internet backbone who seek to territorialize

information space, Google has begun the process of territorializing the internet and

ranking/labeling sites based on their ‘value.’ These are values, boundaries and categories

determined strictly by the ubiquitous Google and therein lies the problem and potentially

the vulnerability of this system. Once hierarchy is introduced to information space, it is

difficult to undo. The alternative would seem to be obvious: use another search engine,

but most search engines act upon the same principles of hierarchical ranking if they do


not simply use Google as their backend. Even the other search engines (Yahoo!, Ask

Jeeves, etc.) are inherently problematic because they all categorize sites based on

arbitrary keywords they themselves determine, typically for reasons centered on profit

(most ad revenue online is based on serving up advertisements targeted to keywords).

The categories and the keywords and therefore the boundaries of places online are

established by the search engine software, not by the sites, the users, or the creators of

content. If the internet is truly rhizomatic as I and others have argued, Google is working

to change this, to create a center – Google – a gateway through which traffic must pass, a

gateway that determines distance, importance, and the path(s) users and information take

through and around information space.

The Human Relational Defense

Any alternative would therefore have to move beyond these artificial categories and

boundaries to a new way of ‘arranging’ these sites in information space: human-centered

sites/search engines or ‘social bookmarking’ sites like del.icio.us and Ma.gnolia, or the

transparent Wikia Search created by the founder of the open, collaboratively edited Wikipedia.

Del.icio.us and Ma.gnolia allow ‘users’ to create ‘tags’ to describe sites and bookmark

them by meanings/identities assigned by humans. Wikia Search uses open source,

‘transparent’ ranking systems that are created by and open to its participants. When a

person enters a keyword into one of these search engines, the sites that are returned are

determined not by some black box ranking and categorical system, but by the ‘tags’ and

categories created by other participants and communities. Far from hegemonic, this is an


inherently democratic system that allows citizens of information space to create the

places that define it. Citizens determine which sites loom large in the rankings, which

sites are ‘closer’ in terms of their hierarchical ranking and distance to the ‘user’. In other

words, people are responsible for creating the maps that define, form and shape this ever-

expanding space. Of course, even these open systems can be hacked by marketers who

artificially enhance results for their clients, but their efforts are not built-in and can be

combated by moderators and private citizens as they are on Wikipedia.
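The tag-based alternative described above amounts to a simple, open counting scheme. The bookmark data, site names, and `search` function below are hypothetical, invented for illustration; they are not the actual implementation of del.icio.us or any other service.

```python
from collections import Counter

# Hypothetical bookmark store: (user, site, tags that user assigned).
bookmarks = [
    ("ana",  "maps.example", {"geography", "maps"}),
    ("ben",  "maps.example", {"maps"}),
    ("cruz", "blog.example", {"maps", "essays"}),
    ("dee",  "blog.example", {"essays"}),
]

def search(tag: str) -> list[str]:
    """Rank sites by how many participants applied the tag to them."""
    votes = Counter(site for _, site, tags in bookmarks if tag in tags)
    return [site for site, _ in votes.most_common()]

assert search("maps") == ["maps.example", "blog.example"]
```

The ranking logic is fully transparent: a site rises only because more participants tagged it, and anyone can inspect (and contest) the votes, in contrast to a proprietary black-box score.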

CONCLUSION

“[W]hile we celebrate the “inherent” freedom of the net, the architecture of the net is changing from under us. The architecture is shifting from an architecture of freedom to an architecture of control” (Lessig, 2004, page 10).

“To know how and what space internalizes is to learn how to produce something better.... To change life is to change space; to change space is to change life” (Merrifield 2000, page 173).

In conclusion, information space is empirical, ‘real,’ and contingent spacetime with its

own geography, its own places, its own ‘objects’, actors, and physical laws. As such, it is

vulnerable to what Harvey describes as capitalism’s need for a ‘spatial fix’. With its

ever-expanding territories, information space offers capitalism the potential for a limitless

spatial fix. But a re-imagining of information space to understand it as (info)spatial will

help us see that it is not a hegemonic space of frictionless flows; it is a space with


physical laws, its own distinct forms of friction and a vast network of heterogeneous,

non-hierarchical, and bounded ‘places’ each with its own identity. As the space expands,

and more and more distinct places are created within it, capitalism’s power to dominate

and flow into these spaces will only grow. It is only through difference and change,

through alternative narratives made possible by information space’s very

‘spaceness’ (friction, bounded, specific territories or localities, etc.), through shifting

definitions and identities of place that the actors within this space can begin to (re)claim,

politicize, and defend it. If we do not begin to know and understand what information

space internalizes, its expanding territories and topography, it will remain vulnerable to

the infospatial fix and we will never be able to ‘produce something better’.


References

Abbate, Janet, 1999, Inventing the Internet (MIT Press, Cambridge and London).

Adams, Paul C, 1997, “Cyberspace and Virtual Places”, in Geographical Review 87 155-171.

Adams, Paul C, and Warf, Barney, 1997, “Introduction: Cyberspace and Geographical Space” in Geographical Review 87 139-145.

Adams, Paul C, 1992, “Television as Gathering Place”, in Annals of the Association of American Geographers 82 117-135.

Aoyama, Yuko, 2001, “The Information Society, Japanese Style: Corner Stores as Hubs for E-commerce access”, in Worlds of Electronic Commerce, Eds Leinbach, T. and Brunn, S (John Wiley and Sons, New York) pp 109-128.

Baliamoune-Lutz, M, 2003, “An analysis of the determinants and effects of ICT diffusion in developing countries”, in Information Technology for Development 10 151-169.

boyd, danah, 2007, "Viewing American class divisions through Facebook and MySpace", in Apophenia Blog, June 24.

Calleja, Gordon and Schwager, Christian, 2004, “Rhizomatic cyborgs: hypertextual considerations in a posthuman age”, in Technoetic Arts: A Journal of Speculative Research 2.

Castells, Manuel, 2001, The Internet Galaxy (Oxford University Press, Oxford).

Castronova, Edward, 2005, synthetic worlds (University of Chicago Press, Chicago and London).

Dodge, Martin, 2002, “Explorations in AlphaWorld: the geography of 3D virtual worlds on the Internet”, in Virtual Reality in Geography Eds Fisher, P and Unwin, D (Taylor and Francis, London and New York) pp305-331.

Dodge, Martin and Kitchin, Rob, 2005, “Code and the Transduction of Space”, in Annals of the Association of American Geographers 95 162-180.

Dodge, Martin and Kitchin, Rob, 2004, “Flying through code/space: the real virtuality of air travel” in Environment and Planning 36 195-211.


Einstein, Albert, 1961, Relativity: The Special and the General Theory, translated by Robert W. Lawson (Three Rivers Press, New York).

Fallows, James, 2008, “The Connection Has Been Reset” in TheAtlantic.com, http://www.theatlantic.com/doc/200803/chinese-firewall

Froehling, Oliver, 1997, “The Cyberspace "War of Ink and Internet" in Chiapas, Mexico” in The Geographical Review 87.

Graham, Stephen D N, 2005, “Software-sorted geographies”, in Progress in Human Geography 29 562-580.

Gray, Matthew, 1996, “Web Growth Summary”, MIT website, http://www.mit.edu/people/mkgray/net/web-growth-summary.html

Hargittai, E, 2007, “Whose space? Differences among users and non-users of social network sites”, in Journal of Computer-Mediated Communication 13 article 14.

Harvey, David, 1990, The Condition of Postmodernity (Blackwell, UK).

Lessig, Lawrence, 2004, “The Laws of Cyberspace”, in Readings in Cyberethics, 2nd Edition, Eds Richard A Spinello and Herman Tavani (Jones and Bartlett Publishers, Sudbury, MA).

Main, Linda, 2001, “The Global information infrastructure: empowerment or imperialism?” in Third World Quarterly 22 83-97.

Marston, S, Jones, J P III, and Woodward, K, 2005, “Human geography without scale”, in Transactions of the Institute of British Geographers 30 416-432.

Massey, Doreen, 2005, for space (Sage Publications, London).

Massey, Doreen, 2001, “Talking of space-time”, in Transactions of the Institute of British Geographers 26 257-261.

Massey, Doreen, 1999, “Space-time, ‘science’ and the relationship between physical geography and human geography”, in Transactions of the Institute of British Geographers 24 261-276.

Massey, Doreen, 1992, space, place, and gender (University of Minnesota Press, Minneapolis).


Manjoo, Farhad, 2006, “The corporate toll on the Internet”, in Salon.com, http://www.salon.com/tech/feature/2006/04/17/toll/

Merrifield, Andy, 2000, “Henri Lefebvre: a socialist in space”, in thinking space, Eds Thrift, Nigel and Crang, Mike (Routledge, London) pp 167-182.

Reporters Without Borders, 2008, http://www.rsf.org/article.php3?id_article=10749

Sheppard, Eric, 2002, “The Spaces and Times of Globalization: Place, Scale, Networks, and Positionality”, in Economic Geography Volume 78 307-330.

Smolin, Lee, 2001, Three Roads to Quantum Gravity (Basic Books, New York).

Thrift, Nigel, 2003, “Space: The Fundamental Stuff of Human Geography”, in Key Concepts in Geography (Sage Publications, UK) pp 95-108.

Thrift, Nigel and French, Shaun, 2002, "The automatic production of space", Transactions of the Institute of British Geographers 27 309-335.

Thrift, Nigel and Crang, Mike, 2000, “Introduction”, in thinking space, Eds Thrift, Nigel and Crang, Mike (Routledge, London) pp 1-30.

Turkle, Sherry, 1995, Life on the Screen: Identity in the Age of the Internet (Simon and Schuster Paperbacks, New York, London, Toronto, Sydney).

University of Washington Computer Science and Engineering, Hubble: Monitoring Internet Reachability in Real-Time, http://hubble.cs.washington.edu/

Vaidhyanathan, Siva, 2005, “A Risky Gamble With Google”, in Chronicle of Higher Education 52.

Zook, Matthew, 2005, The Geography of the Internet Industry: Venture Capital, Dot-coms, and Local Knowledge (The Information Age) (Blackwell, London).

Zook, Matthew and Graham, Mark, 2006, “The Making of DigiPlace: Merging Soft-ware and Hard-Where via GoogleLocal”, forthcoming.
