Information technology is changing many aspects of human endeavor and existence. This is beyond doubt. What are contested are the social and ethical implications of these changes. The source of these contestations is the multiple ways in which one can conceptualize and interpret the information technology/society interrelationship. Each of these ways of conceptualization and interpretation enables one to see the information technology/society relationship differently and therefore construe its social and ethical implications in a different manner. This entry is concerned with the phenomenological approach to interpreting information technology and its social and ethical implications. However, in order to understand the distinctiveness of the phenomenological approach some other possible ways of interpreting this relationship will also be outlined briefly.
Information technology has become ubiquitous, invading all aspects of human existence. Most everyday technologies such as elevators, automobiles, microwaves, watches, and so forth depend on microprocessors for their ongoing operation. Most organizations and institutions have become reliant on their information technology infrastructure to a large degree. Indeed information technology is seen by many as a cost-efficient way to solve a multitude of problems facing our complex contemporary society. One can almost say that information technology has become construed as the default technology for solving a whole raft of technical and social problems such as health provision, security, governance, etc. One could also argue that it has become synonymous with society's view of modernization and progress. For most it seems obvious that information technology has made it possible for humans to continue to construct increasingly complex systems of coordination and social ordering; systems without which contemporary society would not be able to exist in its present form. The economic and organizational benefit of information technology is not widely disputed. The dispute is more often about the way information technology is changing or transforming the social domain, and in particular, the ethical domain. This dispute is largely centered around different ways of conceptualizing and interpreting the nature of the information technology and society interrelationship.
- 1. Views on the Nature of Information Technology
- 1.1 Information Technology as an Artifact or Tool
- 1.2 Information Technology as Socially Constructed Artifacts and Actors
- 1.3 Information Technology as an Ongoing Horizon of Meaning and Action
- 2. Phenomenological Approaches to Technology
- 2.1 A Fundamental Critique of the Technological Attitude
- 2.2 The Technological Attitude in Contemporary Society and Technologies
- 2.3 A Phenomenology of the Human/Technology Relationship
- 3. Ethics and Information Technology
- 3.1 The Impact of Information Technology and the Application of Ethical Theory
- 3.2 The Politics of Information Technology and a Disclosive Ethics
- 3.3 Information Technology, Ethics and our Human way of Being
- 4. Phenomenology, Ethics and Information Technology: The Case of Virtuality
- 5. Conclusion
- Other Internet Resources
- Related Entries
1. Views on the Nature of Information Technology
It seems obvious that a world with information technology is somehow different from a world without information technology. But what is the difference? Is it a difference of order (faster, closer, clearer, etc) or is it a difference of kind? How can we make sense of these questions? Does technology shape society or society shape technology, or both shape each other? What is the nature of this shaping? Is it in practices, in ways of thinking, or is it more fundamental? The answers to these questions will obviously influence the judgments we make about the social and ethical implications of information technology when we consider the policy and practical concerns of using information technology in a particular domain (such as commerce, education or government).
The answers to these questions are also grounded to a large extent in one's particular, implicit or explicit, ontology of information technology itself; what is the nature—way of being—of information technology as such? Obviously many different ontological positions are possible and have emerged. Nevertheless, it may be useful for the purposes of this entry to discern at least three contrasting and prevailing views.
1.1 Information Technology as an Artifact or Tool
The most common view of information technology is that it is an artifact or tool simply available for humans to achieve their objectives and outcomes. Some of these tools might be useful and others not. When users take up a tool or artifact (word processor, mobile phone, etc) it will tend to have an impact on the way they do things. For example, if I write with a word processor I would tend to have different writing practices than I would with pen and paper. According to this view, we need to understand the impact that information technology has on society as it is taken up and used in everyday practices. For example, how will communication with mobile phones change our social interaction and social relationships? In asking such a question this view does not primarily concern itself with the development of the technology—why and how it came about in the first instance. It mostly assumes that the particular technology—mobile phones in this case—operates in a more or less uniform manner in different social settings. In other words, it assumes that a particular technology has certain determinate effects on, or in, the context of its use. This way of conceptualizing information technology leads to questions such as “what is the impact of the internet on education” or “what is the impact of CCTV on privacy”. This view of technology is often criticized for a greater or lesser degree of technological determinism. Technological determinism is the view that technology more or less causes certain ways of doing or ways of organizing to come about. For example, a technological determinist may argue that the Internet's open and non-hierarchical architecture can more or less cause a society that uses it to become more open and less hierarchical. The work of Postman (1993) is an example of this type of critical evaluation of the impact of technology on society.
1.2 Information Technology as Socially Constructed Artifacts and Actors
Many scholars argue that the ‘impact view’ of information technology outlined above does not give an adequate account of the relationship between information technology and society (Bijker, Pinch, and Hughes 1987, Bijker 1995, Law 1991, Latour 1991). Firstly, it does not take into account that the technology does not simply appear but is the outcome of a complex and socially situated development and design process. In this development and design process many alternative options become excluded in favour of the technology that is now available—obviously with important implications. In other words there are many cultural, political and economic forces that shape the particular options suggested as well as the way the selected options become designed and implemented (Bijker, Pinch, and Hughes 1987). It is not only technology that ‘impacts’ on society; technology itself is already the outcome of complex and subtle social processes—in other words it is socially constructed. Moreover, they argue that when we look at the actual uses of particular technologies we discover that users use them in many diverse and often unexpected ways—leading to many and diverse unintended consequences. Both in its design and in its actual use there is an ongoing reciprocal relationship in which society and technology co-construct each other; they act through and upon each other. It is therefore very difficult to make general statements about the ‘impact’ of a technology. One can, at most, speak of some general trends for which many exceptions will invariably exist. For the proponents of this constructivist view it is important to understand, through detailed descriptive accounts, the particular ways in which technologies emerge and become embedded in particular social practices. Examples of such studies can be found in the work of Bijker (1995), Law (1991) and Latour (1991).
1.3 Information Technology as an Ongoing Horizon of Meaning and Action
For the phenomenologist the ‘impact view’ of technology as well as the constructivist view of the technology/society relationship is valid but not adequate (Heidegger 1977, Borgmann 1984, Winograd and Flores 1987, Ihde 1990, Dreyfus 1992, 2001). They argue that these accounts of technology, and of the technology/society relationship, posit technology and society as if speaking about the one does not immediately and already draw upon the other for its ongoing sense or meaning. For the phenomenologist society and technology co-constitute each other; they are each other's ongoing condition or possibility for being what they are. For them technology is not just the artifact. Rather, the artifact already emerges from a prior ‘technological’ attitude towards the world (Heidegger 1977). For example, as the already technologically oriented human beings that we are, we will tend to conceive of communication as a problem requiring a technological solution. Thus, technology is already the outcome of a technological way of looking at, and relating ourselves to, the world. Once in place, technology allows the world to ‘show up’ in particular ways (Introna and Ilharco 2003). For example, you are a different person to me with a mobile phone than without one. With a mobile phone you become disclosed, or show up, as ‘contactable’, ‘within reach’ as it were. It is this way of thinking about information technology, as a horizon of meaning and action, that we want to elaborate further before considering how these various ways of conceptualizing technology shape our views on the social and ethical implications of information technology.
2. Phenomenological Approaches to Technology
One must start by saying that there is not a unified phenomenological approach to information technology as such. There is not even a unified phenomenological approach as such. The term ‘phenomenology’ is often used to cover a wide family of related approaches that share some common characteristics but not all. Nonetheless, there are various examples of how phenomenological resources have been used to understand technology more generally and information technology more specifically. One might claim that all of these phenomenological studies share at least the underlying view that technology and society co-constitute each other by being each other's reciprocal and ongoing condition or possibility for being what they are. As such they continually draw on each other for their ongoing sense or meaning. For the purposes of this entry it might be best to indicate some of the existing phenomenological studies of technology in three different but intimately connected strands:
- Phenomenology as a fundamental critique of the technological attitude as such (Martin Heidegger 1977);
- The technological attitude as manifested in our contemporary relationship with particular technologies (Hubert Dreyfus 1992 and Albert Borgmann 1984);
- A phenomenology of the human/technology relationship (Don Ihde 1990).
2.1 A Fundamental Critique of the Technological Attitude
Probably the most famous phenomenological analysis of technology—or rather of the technological attitude that gives rise to artifacts—is Martin Heidegger's (1977) essay The Question Concerning Technology. This essay is important as a reference point because it illustrates forcefully the most important and distinctive claim of phenomenology vis-à-vis technology. Technology is not merely an artifact or our relationship with this or that artifact; rather, the artifact—and our relationship with it—is already the outcome of a particular ‘technological’ way of seeing and conducting ourselves in and towards the world. Heidegger (1977) famously claimed that “the essence of technology is nothing technological” (p. 4).
For Heidegger the essence of technology is the way of being of modern humans—a way of conducting themselves towards the world—that sees the world as something to be ordered and shaped in line with projects, intentions and desires—a ‘will to power’ that manifests itself as a ‘will to technology.’ It is in this technological mood that problems show up as already requiring technical solutions. The term ‘mood’ here is used to refer to a collectively held ‘sense’ (or way of grasping) of an event or a situation; we often use it when we refer to the ‘mood of the meeting’ or the ‘mood of our times.’ He calls this technological mood enframing (Gestell in German). With this term he wants to denote that the modern mood takes or approaches the world as always and already framed, i.e., enframed. Heidegger claims that for us, living in the technological age, the world is already framed as a resource available for us, to be made and shaped for our ongoing possibilities to express our particular projects, to be whatever we are, as business people, engineers, consultants, academics, teenagers, etc. In short: technology makes sense because we already live in the technological age or mood where the world (and we as beings that are never ‘out’ of the world) is already framed in this way—as available resources for the ongoing challenging and ordering of the world by us. For him the essence of technology is not the particular artifacts but the technological mood that makes this or that particular artifact show up as meaningful and necessary.
Heidegger claims that there were other times in human history, a pre-modern time, when humans did not orient themselves towards the world in a technological way—simply as resources for our purposes. He suggests that in ancient Greek culture humans’ relationship with the world was one of ‘letting be’; a world approached in an attitude or mood of reciprocal ‘care.’ This must not be taken as some sort of romantic vision of the past in which everybody in ancient Greece ‘cared’ for the world, in contrast to the modern technological mood where everybody takes the world as something to be challenged and ordered. It is rather saying that the mood, the horizon, in which we conduct ourselves tends to dispose us in particular ways. Obviously there were also artifacts in ancient times. However, according to Heidegger this ‘pre-technological’ age (or mood) is one where humans’ relation with the world and artifacts, their way of being disposed, was poetic and aesthetic rather than technological (enframing). The act of making and shaping the artifact was directed by a different attitude or mood. Since the world was not taken as ‘available for ordering’ a certain reciprocal care and intimacy was possible (and cultivated) in which the world was ‘let to be’—to let the world show itself on its own terms. He claims that the pre-modern craftsman makes “the old wooden bridge” that “lets the river run its course.” However, in the horizon of the technological mood this same river is disclosed as a possibility for a “hydroelectric plant” that turns the river into a reservoir, a resource on standby for our projects, challenged and framed as available.
There are many who disagree with Heidegger's account of the modern technological attitude as the ‘enframing’ of the world (Feenberg 1999, Pitt 2000). For example Andrew Feenberg (1999) argues that Heidegger's account of modern technology is not borne out in contemporary everyday encounters with technology. When we look more carefully we see many individual situations in which the technological attitude does not hold sway, where people have intimate relationships with artifacts that cannot merely be dismissed as instances of enframing. Others, such as Hubert Dreyfus and Albert Borgmann, have extended Heidegger's work into more specific critiques of particular technologies and particular contemporary ways of being.
2.2 The Technological Attitude in Contemporary Society and Technologies
In critiquing the artificial intelligence (AI) programme Hubert Dreyfus (1992) argues that the way skill development has been understood in the past is wrong. He argues—drawing on the earlier work of Heidegger in Being and Time—that the classical conception of skill development, going back as far as Plato, assumes that we start with particular cases and then abstract from these to discover and internalize more and more sophisticated and general rules. Indeed, he argues, this is the model that the early artificial intelligence community uncritically adopted. In opposition to this view he argues, with Heidegger, that what we observe when we learn a new skill in everyday practice is in fact the opposite. We most often start with explicit rules or preformulated approaches and then move to a multiplicity of particular cases as we become experts. His argument draws directly on Heidegger's account, in Being and Time, of humans as beings that are always already situated in-the-world. As humans ‘in-the-world’ we are already experts at going about everyday life, at dealing with the subtleties of every particular situation—that is why everyday life seems so obvious. Thus, the intricate expertise of everyday activity is forgotten and taken for granted by AI as an assumed starting point.
Dreyfus proceeds to give an account of five stages of becoming an expert as a way to critique the programme of AI. In his account a novice acts according to conscious and context-free rules and generally lacks a sense of the overall task and situational elements. The advanced beginner adds, through experience, situational aspects to the context-free rules to gain access to a more sophisticated understanding of the situation. The relationship between the situational aspects and rules is learned through carefully chosen examples, as it is difficult to formalize it. The competent person will have learnt to recognize a multiplicity of context-free rules and situational aspects. However, this may lead to being overwhelmed as it becomes difficult to know what to include or exclude. The competent individual learns to take a particular perspective on the situation, thereby reducing the complexity. However, such ‘taking of a stand’ means a certain level of risk taking is involved that requires commitment and personal involvement. For the proficient performer most tasks are performed intuitively. As an involved and situated actor the relevant situational aspects show up as part of the ongoing activity and need not be formalized. Nevertheless, a pause may still be required to think analytically about a relevant response. For the expert, relevant situational aspects as well as appropriate actions emerge as an implicit part of the ongoing activity within which the expert is totally absorbed, involved, and committed. The task is performed intuitively, almost all the time. In the ongoing activity of the expert thousands of special cases are discriminated and dealt with appropriately.
With this phenomenological account of skill development in hand it is easy to see the problem for the development of AI programmes. Computing machines need some form of formal rules (a program) to operate. Any attempt to move from the formal to the particular, as described by Dreyfus above, will be limited by the ability of the programmer to formulate rules for such a shift. Thus, what the computer lacks (and we as human beings have) is an already-there familiarity with the world that it can draw upon as the transcendental horizon of meaning to discern the relevant from the irrelevant in ongoing activity—i.e., the computer is not already a skilled actor (a being-in-the-world in Heidegger's terms). Dreyfus's critique pushed AI researchers towards new ways of thinking about AI. In particular it has led to the embodied cognition programme of the Massachusetts Institute of Technology (MIT) AI Lab.
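Dreyfus's point can be made concrete with a deliberately simple sketch (the rules, names and situations below are hypothetical and purely illustrative, not a real AI system): a rule-driven program can respond only to situations its programmer has anticipated and formalized in advance. A situation that falls outside the rules does not show up for the program as difficult; it does not show up at all.

```python
# Illustrative sketch of a purely rule-based responder, in the spirit of
# Dreyfus's critique: every meaningful situation must be anticipated as an
# explicit, context-free rule. (All rules here are invented for illustration.)

RULES = {
    ("greeting", "morning"): "Good morning!",
    ("greeting", "evening"): "Good evening!",
    ("farewell", "any"): "Goodbye!",
}

def respond(intent, context):
    """Return a canned response, or None when no rule matches."""
    for (rule_intent, rule_context), reply in RULES.items():
        if rule_intent == intent and rule_context in (context, "any"):
            return reply
    # Unlike a human 'in-the-world', the program has no background
    # familiarity to fall back on: an unanticipated situation is simply
    # invisible to it, not merely hard.
    return None

print(respond("greeting", "morning"))       # a rule matches
print(respond("irony", "awkward silence"))  # no rule covers this situation
```

The programmer can of course keep adding rules, but that is precisely Dreyfus's point: the move from formal rule to particular situation is bounded by what the programmer managed to formalize in advance.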
What Dreyfus highlighted in his critique of AI was the fact that technology (AI algorithms) does not make sense by itself. It is the assumed, and forgotten, horizon of everyday practice that makes technological devices and solutions show up as meaningful. If we are to understand technology we need to ‘return’ to the horizon of meaning that made it show up as the artifacts we need, want and desire. We also need to consider how these technologies reveal (or disclose) us. For example the microscope can reveal us as ‘inquisitive’, the gun as ‘aggressive’, or the mobile phone as ‘communicative’, and so forth. The ongoing co-constitution of society and technology, phenomenology's insight, can help us to understand and make sense of information technology, such as AI, but also of more mundane technologies such as word processors (Heim 1999).
In thinking about our relationship with technology in contemporary life Albert Borgmann (1984) takes up the question of the possibility of a ‘free’ relation with modern technology in which everything is not already ‘framed’ (in Heidegger's sense) as resources for our projects. He agrees with Heidegger's analysis that modern technology is a phenomenon that tends to ‘frame’ our relation with things, and ultimately ourselves and others, in a one-dimensional manner—the world as simply available resources for our projects. He argues that modern technology frames the world for us as ‘devices’. By this he means that modern technologies as devices hide the full referentiality (or contextuality) of the world—the worldhood of the world—upon which they depend for their ongoing functioning. Differently stated, they do not disclose the multiplicity of necessary conditions for them to be what they are. In fact, just the opposite: they hide the effort necessary for them to be available for use. A thermostat on the wall that we simply set at a comfortable temperature now replaces the process of chopping wood, building the fire and maintaining it. Our relationship with the environment is now reduced to, and disclosed to us as, a control that we simply set to our liking. In this way devices ‘de-world’ our relationship with things by disconnecting us from the full actuality (or contextuality) of everyday life. By relieving us of the burden—of making and maintaining fires in our example—our relationship with the world becomes disclosed in a new way, as simply there, already available for us. Obviously, this is sometimes necessary, otherwise the burden of everyday life might just be too much. Nevertheless, if the ‘device mood’ becomes the way in which we conduct ourselves towards the world then this will obviously have important moral and ethical implications for others, who may then become disclosed as devices.
Against such a disengaging relationship with things in the world Borgmann argues for the importance of focal practices based on focal things. Focal things solicit our full and engaging presence. We can think of the focal practice of preparing and enjoying a meal with friends or family as opposed to a solitary consumption of a fast food meal. If we take Borgmann's analysis seriously we might conclude that we, as contemporary humans surrounded by devices, are doomed increasingly to relate to the world in a disengaged manner. Such a totalizing conclusion would be inappropriate since the prevailing mood does not determine our relationship with those we encounter. Nevertheless, Borgmann's analysis does point to the possibility of the emergence of a device mood—as we increasingly depend on devices—and our moral obligation not to settle mindlessly into the convenience that devices may offer us. Otherwise we might, as Heidegger (1977) argued, become the devices of our devices.
2.3 A Phenomenology of the Human/Technology Relationship
Phenomenology does not only function as an approach to reveal and critique our relationship with technology, as suggested by Heidegger, Dreyfus and Borgmann above. Don Ihde (1990) has used the resources of phenomenology to give a rich and subtle account of the variety and complexity of our relationship with technology. In thinking about the human/technology relationship Ihde characterizes four different I-technology-world relationships. The first type of relationship he calls embodiment relations. In this case technology is taken up as the very medium of subjective perceptual experience of the world, thus transforming the subject's perceptual and bodily sense. In wearing my eyeglasses I do not only see through them; they also become ‘see through.’ In functioning as that which they are, they already withdraw into my own bodily sense as being a part of the ordinary way I experience my surroundings. He denotes this relationship as having the form [I-glasses]-world. He further argues that this relationship has a necessary ‘magnification/reduction structure’ associated with it. Embodiment relations simultaneously magnify or amplify and reduce or place aside (screen out) what is (and is not) experienced through them. The moon through the telescope is different from the moon in the night sky perceived by the naked eye. The person at the other end of the online chat is made present to me across a great distance at the expense of being reduced to text on the screen.
The second type of human/technology relation is what he calls hermeneutic (p. 80). In this type of relation the technology functions as an immediate referent to something beyond itself. Although I might fix my focus on the text or the map, what I actually see (encounter) is not the map itself but rather immediately and simultaneously the world it already refers to, the landscape already suggested in the symbols. In this case the transparency of the technology is hermeneutic rather than perceptual. As I become skilled at reading maps they withdraw to become for me immediately and already the world itself. He denotes this relationship as having the form I-[map-world].
The third type of human/technology relations Ihde calls ‘alterity’ relations. In these relations technology is experienced as a being that is otherwise, different from me, technology-as-other. Examples include things such as religious icons and intelligent robots (the Sony dog for example). In my interaction with these technologies they seem to exhibit a ‘world of their own.’ As I engage them they tend to disengage me from the world of everyday life and point to the possibility of other worlds, hence their pervasiveness in activities such as play, art and sport. He denotes these as having the form I-technology-[world], indicating that the world withdraws into the background and technology emerges as a focal entity with which I momentarily engage—as I play with my robot dog for example.
Ihde also recognizes a fourth type of human/technology relation in which technology is not directly implicated in a conscious process of engagement on the part of the human actor. Ihde refers to these as background relations. Examples include automatic central heating systems, traffic control systems, and so forth. These systems are ‘black-boxed’ in such a way that we do not attend to them yet we draw on them for our ongoing everyday existence. They withdraw as ongoing background conditions. Although he does not designate it as such one might formalize these relations in the form: I-[technology]-world. These invisible background technologies can be powerful in configuring our world in particular ways yet escape our scrutiny.
Ihde's phenomenological description of the human/technology relation provides a meaningful taxonomy or framework for giving an account of many everyday technology relations in a manner that can facilitate our consideration of the social and ethical implications of information technology. For example, the withdrawal of technology, into my body, into my perception and into the background, has important political and ethical implications for its design and implementation. Especially if one considers that every disclosure of the world ‘through’ technology is also immediately a concealment of other possible disclosive relations. The car discloses possibilities—for getting to places quickly—but also conceals, in its withdrawal, the resources (roads, fuel, clean air, etc.) necessary for it to be what it is—they act as devices in Borgmann's terminology. Indeed we often lose sight of the magnification/reduction structure as we simply use these technologies. As these technologies become more and more pervasive—almost a necessary condition of everyday life—it becomes more and more difficult to see that which has become concealed in their withdrawal. With Ihde's typology of I-technology-world relations we might be able to bring what has become concealed back to the foreground for our critical attention and ethical reflection. Let us now consider how these different ways of approaching the information technology/society relationship might dispose us in thinking about the social and ethical implications of information technology.
3. Ethics and Information Technology
3.1 The Impact of Information Technology and the Application of Ethical Theory
Much of the ethical debate about computers, and information technology more generally, has been informed by the ‘impact view’ of information technology. Within this tradition a number of issues have emerged as important. For example, whether computers generate new types of ethical problems that require new or different ethical theories, or whether it is just more of the same (Gorniak 1996). These debates are often expressed in the language of the impact of information technology on particular values and rights (Johnson 1985, 1994). Thus, we have discussions on the impact of CCTV or web cookies on the right to privacy, the impact of the digital divide on the right of access to information, the impact of the piracy of software on property rights, and so forth. In these debates Jim Moor (1985) has argued that computers show up policy vacuums that require new thinking and the establishment of new policies. Others have argued that the resources provided by classical ethical theory such as utilitarianism, consequentialism and deontological ethics are more than enough to deal with all the ethical issues emerging from our design and use of information technology (Gert 1999). Irrespective of whether information technology creates new types of ethical problems that require new ethical theory, or whether established ethical theory is sufficient, one tends to find the debate centered on questions of policy intended to regulate or justify conduct. These policies are seen, and presented, as ways to regulate or balance competing rights or competing values. For example, what sort of policies do we need to protect our children when they go on the internet? How would these policies affect the right to free speech? Or, what sort of policies do we need to secure the rights of producers of digital products? How would these policies affect the right of society to reasonable access to these products?
Furthermore, these debates are most often directed at an institutional level of discourse—i.e., with the intention to justify the policies or conduct for governments, organizations and individuals. In these debates, on the impact of technology, ethics and ethicists are primarily conceived as presenting arguments for justifying a particular balance, of values or rights, over and against other possibilities.
3.2 The Politics of Information Technology and a Disclosive Ethics
The constructivist view of the information technology/society relationship tends to lead to a different kind of reflection on the ethical significance of information technology. Social constructivists tend to argue that technology, as socially constructed, is already political as such and therefore already suggests an ethical concern. By this they mean that technology, by its very design, includes certain interests and excludes others. This does not mean that designers are always aware that they are making political and ethical decisions. In fact they are mostly not. They are mostly trying to solve very mundane everyday problems. For example, the ATM bank machine assumes a particular person in front of it. It assumes a person that is able to see the screen, read it, remember and enter a personal identification number (PIN), and so on. It is not difficult to imagine a whole section of society that does not conform to this assumption. If you are blind, are in a wheelchair, have problems remembering, or are unable to enter a PIN because of a disability, then your interest in getting access to your account will be excluded by the actual design of the ATM.
If information technology is political—i.e., it already includes/excludes certain interests—then it is also immediately ethical. For the constructivist it is the particular way in which interests become built into the technology, and into the practices within which it is embedded, that is ethically significant (Brey 2000). Indeed the particular concern is the way information technology ‘hides’ these values and interests in the logic of software algorithms and hardware circuits (Introna & Nissenbaum 2000). Everyday technologies such as utensils are mostly open to scrutiny by the reflexive user. Information technology, on the other hand, is mostly not open to such scrutiny (Brey 2000). Important assumptions and biases in information technology are mostly obscured and subsumed in ‘black boxes’ in ways that make it difficult even for experts to scrutinize them. It is not possible for the ordinary computer user to scrutinize the assumptions and biases embedded in the code of the Microsoft Windows operating system. Embedded in the software and hardware of information technology applications are complex rules of logic and categorization that may have material consequences for those using them and for the production of social order more generally (Introna & Wood 2004). In this view of information technology ethics, the task of ethics is to open up the ‘black box’ of information technology and reveal or disclose the values and interests it embodies for scrutiny and reflection—not only in its final design but also in the process of development. Such an approach to the ethics of information technology is most often informed by technology studies within the science, technology and society (STS) tradition.
3.3 Information Technology, Ethics and our Human way of Being
For the phenomenological approach the ethical questions and ways of thinking outlined above are important. Nevertheless, it should be clear from the discussion above that the phenomenological approach to information technology and ethics would tend not to concern itself with this or that artifact or technology. It would rather be concerned with the attitude or mood that made these artifacts or technologies seem necessary or obvious in the first place. It would also be concerned with the ways in which particular technologies ‘frame’ us as we draw on them. The phenomenologist would claim that it is this ongoing co-constitution that we should focus on if we are to understand the social and ethical implications of information technology. This does not preclude the possibility that we should also consider the impact of particular technologies, as well as unpack particular technologies to understand the values and interests they embody. However, the phenomenologist would argue that impact analyses and disclosive studies ought to be situated within a broader phenomenological approach if they are to be critically and normatively effective. Let us consider this approach in more detail through an example of virtuality.
4. Phenomenology, Ethics and Information Technology: The Case of Virtuality
As indicated above, it would be misleading to suggest that there is a vast literature on the phenomenological approach to the social and ethical implications of information technology. Clearly, the work of Heidegger, Dreyfus, Borgmann and Ihde discussed above could be described as critical work that aims to open up a horizon for social and ethical reflection. Nevertheless, there seems to be at least one information technology theme that has attracted some sustained attention from phenomenologists (especially with regard to its ethical implications)—the phenomenon of virtualization or virtuality. The term ‘virtuality’ is used here to refer to the mediation of interaction through an electronic medium between humans and humans as well as between humans and machines. The World Wide Web (or Cyberspace as it is known in cultural discourse) is the most evident example of the virtualization of interaction.
The development of the Internet and the subsequent extension of computer networks into all domains of everyday life have prompted much speculation about the way in which this information technology will change human existence, especially our notion of sociality and community. Much of this speculation suggests that the virtualization of human interaction will lead to a multitude of new possibilities for humans—such as cyber communities, virtual education, virtual friendships, virtual organizations, virtual politics, and so forth. Clearly such claims about the transformation of the social domain have important implications for our understanding of ethics as most of our current thinking about ethics implies a certain sense of community based on reciprocal moral obligations that are largely secured through situated, embodied practices and institutions that are often overlapping and mutually inclusive. If these practices and institutions become virtualized then it would seem that we need to reconsider some of our most fundamental human categories.
The proponents of the virtualization of society (and its institutions) argue that virtuality extends the social in unprecedented ways (Fernback 1997, Rheingold 1993a, 1993b, Turkle 1995, 1996, Benedikt 1991, Horn 1998). They argue that it opens up an entirely new domain of social being. For example Rheingold (1993a) argues that it offers “tools for facilitating all the various ways people have discovered to divide and communicate, group and subgroup and regroup, include and exclude, select and elect. When a group of people remain in communication with one another for extended periods of time, the question of whether it is a community arises. Virtual communities might be real communities, they might be pseudocommunities, or they might be something entirely new in the realm of social contracts.” (p. 62). According to the proponents this new social space is novel in that it offers completely new ways to be and relate. They argue that through the plasticity of the medium it is possible to conceive, construct and present our identities in almost boundless ways. Turkle (1996) suggests that cyberspace “make possible the construction of an identity that is so fluid and multiple that it strains the very limits of the notion [of authenticity]. People become masters of self-presentation and self-creation. There is an unparalleled opportunity to play with one's identity and to ‘try out’ new ones. The very notion of an inner, ‘true self’ is called into question… the obese can be slender, the beautiful can be plain. The ‘nerdy’ can be elegant. The anonymity of MUDs (you are known only by the name you gave your characters) provides ample room for individuals to express unexplored ‘aspects of the self’” (p. 158). The claims by Rheingold, Turkle and others are certainly bold. 
If they are right then virtuality may indeed represent entirely new possibilities for humans to relate, extend, and express themselves; possibilities that should be encouraged, especially for those who have become excluded from the traditional domains of social relations because of disability, for example.
Those who treat the internet as an artifact may suggest that we look at the impact of mediation (or virtualization) on communication and relations of power; for example, the fact that certain social prejudices are circumvented because the individual responding to my online application for a particular service is not confronted with my physical appearance. They may also suggest, as Turkle (1995, 1996) has done, that we look at the way virtualization makes the presentation of self and identity more plastic, and that we think through the consequences of this for ongoing social interaction. The social constructivists may suggest that we need to look at the assumptions and values embedded in the artifacts as such. They may, for example, suggest that we consider the implicit assumptions about the nature of communication when considering e-mail applications—for instance, the fact that most e-mail applications assume and emulate the structure of a physical letter. They would argue that we need to trace through how people interpret this ‘letter’ structure to communicate and share objects (such as files and pictures) with others, as well as the sorts of communication such a structure excludes.
Phenomenologists would suggest that these responses are all important but that they assume something more primary—i.e., the conditions that render such acts as the presentation of the self, ongoing communication and sharing meaningful and significant in the first instance. They might suggest that these social acts are all grounded in an already presumed sense of community. They might further argue that social interaction, community and identity (as we know it) are phenomena that are local, situated and embodied, and that are characterized by mutual involvement, concern and commitment (Dreyfus 2001; Borgmann 1999; Ihde 2002; Introna 1997; Coyne 1995; Heim 1993). In other words, these phenomena draw on an implied sense of involvement, place, situation, and body for their ongoing meaning. Borgmann (1999) argues that the “unparalleled opportunity” of virtuality suggested by Turkle comes at a ‘cost.’ To secure “the charm of virtual reality at its most glamorous, the veil of virtual ambiguity must be dense and thick. Inevitably, however, such an enclosure excludes the commanding presence of reality. Hence the price of sustaining virtual ambiguity is triviality” (p. 189). Indeed such a ‘fluid and multiple’ identity is only feasible as long as it is “kept barren of real consequences”. Dreyfus (1999, 2001) argues in a similar vein that without a situated and embodied engagement there can be no commitment and no risk. Both argue that in such an environment moral engagement is limited and human relations become trivialized. Ihde (2002) does not go as far as Borgmann and Dreyfus in discounting the virtual as ‘trivial.’ Nevertheless, he does claim that “VR bodies are thin and never attain the thickness of flesh. The fantasy that says we can simultaneously have the powers and capabilities of the technologizing medium without its ambiguous limitations is a fantasy of desire” (p. 15).
Coyne (1995) argues that the proximity of community has nothing to do with physical distance. He argues that proximity is rather a matter of shared concerns—i.e., my family is ‘close’ to me even if they are a thousand miles away and my neighbors may be ‘distant’ to me even if they are next door. Levinas (1991, 1996) takes this claim even further. He suggests that proximity has nothing to do with either social or geographic distance. For him proximity is an ethical urgency that unsettles our egocentric existence. Proximity is the face—or our always already facing—of the Other (all other human beings) that unsettles the ongoing attempts by the ego to ‘domesticate’ the infinitely singular Other (a proper name) into familiar categories (race, ethnicity, gender, etc). For the phenomenologist any electronic communication (or any other communication) will find its meaning in a prior horizon of proximity. If we do not already share certain concerns then electronic mediation will not create proximity even if it does seem to break down the geographic distances between us—even if it is ‘shrinking’ the world as it were. These authors suggest that our sense of community and the moral reciprocity it implies comes from a sustained and situated engagement where mutual commitments and obligations are secured in the proximity of an already shared horizon of ongoing meaning.
Not all agree with this phenomenological analysis. Feenberg (1999, 2004), although not a phenomenologist as such, nevertheless uses phenomenological insights to argue that the messages exchanged are not ‘thin’ but can also be ‘thick.’ The messages exchanged through e-mail, for example, are also situated and already imply a certain minimal level of reciprocity and commitment—the mere fact of their exchange implies such a minimal horizon of meaning for the act of exchange itself to make sense. He argues: “the interpreted message stands in for the world, is in effect a world. In the case of mediated communication, a person and the social context of their presence are delivered in the message” (emphasis added). He argues that community is an intersubjectively constructed phenomenon that emerges from a mutual connection that may imply a physical co-presence but is not restricted to it. He acknowledges that a mediated community may be different and have its own particular problems. Nevertheless, he insists that “community needs to be interpreted from the inside out, not as a geographical fact”.
The phenomenological critique of virtuality is important because it forces us to reconsider some of our most fundamental human categories. Since there seem to be many economic and political reasons why virtualization is attractive (in spite of obvious problems), these debates need to be had. From these debates it is apparent that a simple dichotomy that poses the virtual as ‘thin and trivial’ and situated embodied co-presence (often referred to as the ‘real’) as ‘thick and significant’ is too simple to be helpful. From the phenomenological analysis presented it follows that one of the most important aspects of community is the presumed density of its mutually referring concerns and involvements (irrespective of mediation). Traditional communities often develop this dense referentiality quite implicitly, by sharing involvement (and therefore concern) in many overlapping practices and institutions. It is conceivable that individuals who already share certain involvements and concerns (such as individuals who share a debilitating disease) could become an online virtual community because they are in a very real sense already a community. It seems equally evident that individuals who share limited involvements and concerns (such as playing games in MUDs) are unlikely to become a community simply because they share a virtual space. Without a dense horizon of mutual involvement and concern upon which we can base ourselves, all choices and actions become equally significant or insignificant—i.e., trivial. Taylor (1991), in discussing the ethics of authenticity, provides a good summary of the importance of this communal horizon of significance for the construction of a ‘thick’ self and therefore a ‘thick’ community: “The agent seeking significance in life, trying to define him- or herself meaningfully, has to exist in a horizon of important questions [shared concerns]. That is what is self-defeating in modes of contemporary culture that concentrate on self-fulfillment in opposition to the demands of society, or nature, which shut out history and the bonds of solidarity. These self-centred ‘narcissistic’ forms are indeed shallow and trivialised” (p. 40).
The purpose of this entry is to provide the reader with a sense of the phenomenological approach to information technology and its social and ethical implications by contrasting it with two other approaches. It might be useful to summarize our discussion in the Table below. Obviously such a summary must be seen within the context of the whole entry and will be subject to the normal problems of such summative statements, namely that they do not always do justice to the ideas so summarized. Nevertheless, as a general guide it might help to further clarify some of the contrasts that the entry tried to suggest as useful in understanding the distinctiveness of the phenomenological approach.
| Approach or view | View of technology/society relationship | Approach to ethical implications of technology |
| --- | --- | --- |
| Artifact / tool | Technologies are tools that society draws upon to do certain things it would not otherwise be able to do. When tools become incorporated into practices they tend to have a more or less determinable impact on those practices. | The task of ethics is to analyze the impact of technology on practices by applying existing or new moral theories to construct guidelines or policies that will ‘correct’ the injustices or infringements of rights caused by the implementation and use of the particular technology. |
| Social constructivist | Technology and society co-construct each other from the start. There is an ongoing interplay between social practices and technological artifacts (both in their design and in their use). This ongoing interplay means that technological artifacts and human practices become embedded in a multiplicity of ways that are mostly not determinable in any significant way. | The task of ethics is to be actively involved in disclosing the assumptions, values and interests being ‘built into’ the design, implementation and use of the technology. The task of ethics is not to prescribe policies or corrective action but to continue to open the ‘black box’ for scrutiny and ethical consideration and deliberation. |
| Phenomenological | Technology and society co-constitute each other; they are each other's condition of possibility to be. Technology is not the artifact alone; it is also the technological attitude or disposition that made the artifact appear as meaningful and necessary in the first instance. However, once in existence, artifacts and the disposition that made them meaningful also disclose the world beyond the mere presence of the artifacts. | The task of ethics is to point back to the attitudes and moods that made particular technologies show up as meaningful and necessary. It seeks to interrogate these assumptions and attitudes so as to problematize our ongoing relationship with technology. |
Bibliography
- Bijker, W. E., 1995, Of Bicycles, Bakelites and Bulbs: Toward a Theory of Sociotechnical Change, Cambridge, MA: MIT Press.
- Bijker, W. E., T. P. Hughes, and T. Pinch (eds.), 1987, The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, Cambridge, MA: MIT Press.
- Borgmann, A., 1984, Technology and the Character of Contemporary Life, Chicago: University of Chicago Press.
- -----, 1999, Holding On to Reality, Chicago/London: The University of Chicago Press.
- Brey, P., 2000, “Disclosive Computer Ethics,” Computers and Society, 30/4: 10-16.
- Coyne, R., 1995, Designing Information Technology in the Postmodern Age: From Method to Metaphor, Cambridge, MA: MIT Press.
- Dreyfus, H. L., 1992, What Computers Still Can't Do: A Critique of Artificial Reason, Cambridge, MA: The MIT Press.
- -----, 1999, “Anonymity versus commitment: The dangers of education on the internet,” Ethics and Information Technology, 1/1, 15-20.
- -----, 2001, On the Internet, London: Routledge.
- Feenberg, A., 1999, “Technology and Meaning,” in Questioning Technology, London and New York: Routledge, 183-199.
- -----, 2004, “Active and Passive Bodies: Comments on Don Ihde's Bodies in Technology,” available at http://www-rohan.sdsu.edu/faculty/feenberg/Ihde1.htm (last accessed 1 June 2004).
- Fernback, J., 1997, “The Individual within the Collective: Virtual Ideology and the Realization of Collective Principles,” in Steven G. Jones (ed.), Virtual Culture: Identity and Communication in Cybersociety, London: Sage, 36-54.
- Gert, B., 1999, “Common Morality and Computing,” Ethics and Information Technology, 1/1, 57-64.
- Gorniak, K., 1996, “The Computer Revolution and the Problem of Global Ethics,” Science and Engineering Ethics, 2/2, 177-190.
- Heidegger, M., 1977, The Question Concerning Technology and Other Essays, New York: Harper Torchbooks.
- Heim, M., 1993, The Metaphysics of Virtual Reality, New York: Oxford University Press.
- -----, 1999, Electric Language, New Haven: Yale University Press.
- Horn, S., 1998, Cyberville: Clicks, Culture, and the Creation of an Online Town, New York: Warner Books.
- Ihde, D., 1990, Technology and the Lifeworld: From Garden to Earth, Bloomington and Indianapolis: Indiana University Press.
- -----, 2002, Bodies in Technology, Minneapolis: University of Minnesota Press.
- Introna, L. D., 1997, “On Cyberspace and Being: Identity, Self and Hyperreality,” Philosophy in the Contemporary World, 4/1&2, 16-25.
- Introna, L.D. & H. Nissenbaum, 2000, “The Internet as a Democratic Medium: Why the politics of search engines matters,” Information Society, 16/3, 169-185.
- Introna, L.D. & F.M. Ilharco, 2003, “The Ontological Screening of Contemporary Life: A Phenomenological Analysis of Screens,” European Journal of Information Systems, 13/3, 221-234.
- Introna, L.D. & D. Wood, 2004, “Picturing Algorithmic Surveillance: The Politics of Facial Recognition Systems,” Surveillance and Society, 2/2&3, pp 177-198.
- Johnson, D. G., 1985, Computer Ethics, Englewood Cliffs, NJ: Prentice-Hall.
- -----, 1994, Computer Ethics, 2nd edition, Englewood Cliffs, NJ: Prentice-Hall.
- Latour, B., 1991, “Technology is society made durable,” in J. Law (ed.), A Sociology of Monsters: Essays on Power, Technology and Domination, London: Routledge, 103-131.
- Law, J. (ed.), 1991, A Sociology of Monsters: Essays on Power, Technology and Domination, London: Routledge.
- Levinas, E., 1991, Otherwise than Being or Beyond Essence, Dordrecht: Kluwer Academic Publishers.
- -----, 1996, “Ethics as First Philosophy,” in The Levinas Reader, S. Hand (ed.), London: Blackwell, 75-87.
- Moor, J. H., 1985, “What is computer ethics?” Metaphilosophy, 16/4, 266-279.
- Pitt, J. C., 2000, Thinking about Technology: Foundations of the Philosophy of Technology, New York: Seven Bridges Press.
- Postman, N., 1993, Technopoly: The Surrender of Culture to Technology, New York: Alfred A. Knopf.
- Rheingold, H., 1993a, “A Slice of Life in My Virtual Community,” in L. Harasim (ed.), Global Networks: Computers and International Communication, Cambridge, MA: The MIT Press, 57-80.
- -----, 1993b, The Virtual Community: Homesteading on the Electronic Frontier, Reading, MA: Addison-Wesley. [Preprint available online.]
- Taylor, C., 1991, The Ethics of Authenticity, Cambridge, MA: Harvard University Press.
- Turkle, S., 1995, Life on the Screen: Identity in the Age of the Internet, New York: Simon and Schuster.
- -----, 1996, “Parallel lives: Working on identity in virtual space,” in D. Grodin & T. R. Lindlof (eds.), Constructing the Self in a Mediated World, London: Sage, 156-175.