This is the paper I presented at our network meeting in Rotterdam on 16th December 2004. As we attach ourselves in numerous ways to others (friends, websites, buildings), traces of our behaviour are recorded, stored, processed and handed over to security agencies and/or sold to commercial enterprises. The profiles that are generated from these traces impact us, even when they remain implicit: they confront us with images of ourselves; they may limit access to goods we need or desire; and they often open up new possibilities. To answer the question of whether, and if so in what way, this should worry us, I oppose David Brin’s Transparent Society to Jeffrey Rosen’s Naked Crowd. As I find both their projects illuminating but unsatisfactory, I will explore how a relational theory of law, as written into our project, could conceive of the legal subject in relation to human imbroglios and their profiles.
1. Transparent society and the naked crowd
The case of Brin vs Rosen
1.1 An issue raised
On 15th September 2004, 90 civil rights organisations (and 89 IT&C companies) raised the issue of the mandatory retention of online traffic data, as proposed in the Council of the European Union’s Draft Framework Decision on this matter. The Framework Decision intends to regulate the mandatory retention of online traffic data, to be recorded, stored and made available by public communication networks to law enforcement agencies ’for the purpose of prevention, investigation, detection and prosecution of crime and criminal offences including terrorism’. Privacy International and EDRi co-ordinated the response to the European Commission’s call for comments, rejecting the proposed regime of indiscriminate retention of personal data as hazardous: Invasive, Illusory, Illegal and Illegitimate. Invasive, because ’traffic data can now be used to create a map of human associations and more importantly, a map of human activity and intention’ (see f.i. Hill 2004 on Bush funding US spying on Internet chat rooms). Illusory, because ’the linking of one individual to a set of actions through checking logs is a tenuous link at best’ (producing heaps of false positives and false negatives). Illegal, because in accordance with art. 8 of the ECHR ’citizens should have notice of the circumstances in which the State may conduct surveillance, so that they can regulate their behaviour to avoid unwarranted intrusions’. Illegitimate, because the retention regime is mandated by Directive 2002/58/EC on Privacy and Electronic Communications, and this Directive, like many national laws on data retention, was passed in response to terrorism, while in fact data retention has little to do with investigating terrorism and is more commonly used for investigations and surveillance of criminal infractions and tax evasion.
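To make concrete what ’a map of human associations’ means, here is a minimal sketch (with invented names and a made-up log format, not the format of any actual retention scheme): each retained traffic record links two parties, and simply aggregating the records produces a social graph, without any access to the content of the communications.

```python
# A minimal sketch of how retained traffic data yields a "map of human
# associations": aggregate who-contacted-whom into weighted edges.
from collections import Counter

# hypothetical retained traffic records: (caller, callee, timestamp)
traffic_log = [
    ("alice", "bob",   "2004-09-15T09:01"),
    ("alice", "bob",   "2004-09-15T18:22"),
    ("alice", "carol", "2004-09-16T10:05"),
    ("bob",   "carol", "2004-09-16T11:40"),
    ("alice", "bob",   "2004-09-17T08:55"),
]

# edge weights: how often each (unordered) pair communicates
associations = Counter(frozenset((a, b)) for a, b, _ in traffic_log)

for pair, n in associations.most_common():
    print(sorted(pair), "communicated", n, "times")
# Repeated over months of data, such counts reveal which associations and
# relationships constitute a person's private life.
```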
A similar worry can be found in the words of Caspar Bowden, director of the Foundation for Information Policy Research (UK), when he writes about Part 11 of the UK Anti-Terrorism, Crime and Security Bill (ATCS): ’Automated trawling of traffic databases is a powerful form of mass-surveillance over the associations and relationships that constitute private life. It also reveals the sequence and pattern of thought of individuals using the Internet; it could be described as CCTV for the inside of your head’ (Bowden 2002).
1.2 The problem of data retention and automatic monitoring
What is the problem with data retention? Imagine that your online behaviour is observed, registered, monitored and stored - for instance by Google. Imagine that your medical records are shared across national borders and between all institutions engaged in healthcare - to optimise your health, of course (think of the advance of preventive healthcare in combination with what is called evidence-based medicine; see Rose (2003: 77) on the imbroglios of global pharmaceutical companies, venture capital and universal healthcare). Imagine that your offline behaviour (shopping, visiting theatres, applying for permits, using your mobile phone, borrowing books from a library, having a drink somewhere) is observed, registered, monitored and stored. Imagine that the databases that store this information are integrated by buying and selling data on individuals or groups (the commodification of information, Prins 2004). Imagine that automatic monitoring creates a diversity of profiles out of these data - obtained legally or illegally - that track you down as a possible customer, a possible employee, a possible investor, a candidate for some terrible disease, a credit risk, a security risk, a potential terrorist, a militant activist, a health risk, an insurance risk. Whatever. These profiles may be simply wrong, in the sense that the data are incorrect; they may also be wrong because the inferences made during the processing of the data inevitably produce false negatives and false positives. They may even be right (based on correct data while the inferences are adequate). (Why) should we worry about them?
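Why the inferences ’inevitably produce false negatives and false positives’ can be made precise with a back-of-the-envelope calculation. The following sketch uses invented numbers, but the base-rate effect it illustrates holds for any screening of a whole population for a rare trait: even a highly accurate profile flags far more innocent people than actual targets.

```python
# A minimal sketch (invented numbers) of the base-rate effect in
# population-wide automated profiling.
population   = 10_000_000   # people whose traffic data are screened
true_targets = 100          # actual offenders among them
sensitivity  = 0.99         # P(flagged | target)
specificity  = 0.99         # P(not flagged | non-target)

true_positives  = true_targets * sensitivity                       # ~99
false_positives = (population - true_targets) * (1 - specificity)  # ~99,999

precision = true_positives / (true_positives + false_positives)
print(f"people flagged       : {true_positives + false_positives:,.0f}")
print(f"share actually target: {precision:.2%}")  # well under 1%
```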
Profiles are constructs, and as such they have a life of their own; they may influence all kinds of decisions concerning your options in life - most of the time without you being aware of it. They sort of define you, per context, as a risk, a nobody or a potential. Once again, they may be wrong, but they will probably often be right. In The Power of Knowledge: Ethical, Legal, Technological Aspects of Data Mining and Group Profiling in Epidemiology, Custers distinguishes five possible disadvantages of (group) profiling technologies: selection, stigmatisation, confrontation, customisation and de-individualisation (Custers 2004:74-79). One of the points he makes is that these disadvantages or dangers do not always relate to the use of incorrect profiles, but also to the unfair use of correct profiles. I will first focus on the impact of profiling even in the case of the ’fair’ use of correct profiles.
1.3 The impact of profiles on our construction of self
When amazon.com constructs a profile and offers me books that might interest me, I am confronted with an image of myself that I may find stupid, silly, interesting or revealing. It may disclose something to me about the way others look at me (in this case, the way a certain software program looks at me). Whatever my opinion about this image, there is a kind of impact, a subtle reconstruction of the self (also, and maybe especially, when I think the image is false). Actually, the confrontation may invigorate me, strengthen my belief in who I think I am. This is nothing very special; it happens every day and lies - as far as I am concerned - at the basis of our ’self’, ’identity’, ’subjectivity’ or whatever you want to call it. The self is under permanent reconstruction, nourished by this curious reflection created by looking back at ourselves via the other. With G.H. Mead, Plessner and others I think the self or the subject is born in this ’taking the perspective of another’: when you address a child using the word ’you’, it will first refer to itself as ’you’. The moment the child suddenly realises that this ’you’ is its ’I’, it is born as a subject, beginning the development of an identity (as someone that is different from ’you’ and from ’her’ and from ’him’ and - curiously - identical with the ’I’ of yesterday and tomorrow, even if very different at the same time).
So, the way others perceive us, or to be more precise, the way we perceive others to perceive us, constitutes our ever-shifting self. Ever-shifting, because the self is an event, a performative action, a recurring attempt to cohere a multiplicity of perspectives as belonging to oneself (not always easy, and a lot of fun sometimes).
1.4 Warwick and his building: a human imbroglio
What does this mean for the advance of identification and profiling technologies? Imagine you have a chip implanted beneath the skin of your wrist. This chip is what is called an intelligent agent, or a digital persona (Clarke 1994): it communicates with intelligent devices in the building where you work. It learns what time you enter the building, where you hang your coat, when you turn on the light, how you want your coffee, when your computer should be turned on, which doors should open when. This is an experience/experiment undergone/carried out by Kevin Warwick, ’famous’ professor of cybernetics at Reading University, who studies artificial intelligence, robotics etc. You could say, as he frequently does, that Warwick is a cyborg at that moment, because some kind of machinery is integrated into his body - just like, as a matter of fact, people who wear glasses or pacemakers are cyborgs. However, he didn’t do it because he was physically handicapped, but - so he claimed - to enhance or upgrade his human-ness. More interesting to me than his entanglement with the chip beneath the skin is the fact that Warwick attached himself to the building. Via his digital persona he correlated himself to the building, thus forming an imbroglio, a correlated human, with the building. When I asked one of his researchers how Warwick felt when the chip was taken out of him after several months, he said he felt amputated. I like that expression. It demonstrates how the building became part of him, just like the feather of Merleau-Ponty’s lady with a hat. It extended his entanglement (imbrogliare is Italian for ’to entangle’) with the world in a very direct - phenomenological - sense. Warwick dreams of enhancing humanity, speaking of upgrading humans and even calling cyborgs posthuman (Warwick 2003). As Don Ihde describes in his Technology and the Lifeworld (1990) and Bodies in Technology (2002), the mixing of human and technology is as old as human society: humans with hammers, glasses, pacemakers, guns and silicone breast implants can very well be understood as cyborgs (imbroglios of humans and machines). Or, as Latour would say: ’you are a different person with the gun in your hand’ (Latour 1993:179).
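The kind of learning the building performed can be sketched very simply. The class below is hypothetical (it is not Warwick’s actual system): an agent that merely logs arrivals and coffee orders can already infer a routine from the accumulated observations.

```python
# A minimal, hypothetical sketch of an ambient-intelligence agent that
# learns a person's routine from logged interactions.
from statistics import mean
from collections import Counter

class BuildingAgent:
    def __init__(self):
        self.arrivals = []            # minutes after midnight
        self.coffee_orders = Counter()

    def observe_arrival(self, hour, minute):
        self.arrivals.append(hour * 60 + minute)

    def observe_coffee(self, order):
        self.coffee_orders[order] += 1

    def expected_arrival(self):
        m = int(mean(self.arrivals))
        return f"{m // 60:02d}:{m % 60:02d}"

    def preferred_coffee(self):
        return self.coffee_orders.most_common(1)[0][0]

agent = BuildingAgent()
for h, m in [(8, 55), (9, 5), (8, 50), (9, 0)]:
    agent.observe_arrival(h, m)
for order in ["black", "black", "espresso", "black"]:
    agent.observe_coffee(order)

print(agent.expected_arrival(), agent.preferred_coffee())  # 08:57 black
```

Note that every inference the agent makes is itself a new piece of personal data, which is exactly the point taken up next.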
As Warwick himself realises, the electronic device he wears (inside or outside his body) raises issues of privacy. The way Warwick attaches himself to or correlates with the building produces - in real time - another type of correlation, generated by this kind of ambient intelligence. As the intelligent agents communicate with your intelligent agent, a massive amount of personal data is generated (recorded, processed and stored). These data will be continuously correlated to enable the agents to learn and accommodate your inferred wishes, and of course these correlated data (or profiles) can again be recorded, processed and stored. After that they can be sold to commercial enterprises, or delivered to law enforcement agencies. In other words, when our attachments produce profiles, and when we are confronted with them - even unknowingly - these profiles will impact the construction of self (constrain and constitute the way we go about things). This impact is obvious at the moment when our environment interacts intelligently with the strings of profiles that our digital persona produces: these profiles enable the ubiquitous interactions between my electronic agent and my environment. But the impact also occurs as new products are offered to or withheld from us, an insurance policy is accepted or rejected, access to jobs is limited or opened up.
As Lawrence Lessig, author of the classic Code and Other Laws of Cyberspace, indicates (Lessig 1999:143-146), the new thing about the information society is not so much the fact that we attach ourselves, or even that we are observed or monitored. Whoever lived in a village or any kind of face-to-face community knows all about observation, monitoring and the way this can impact our reconstruction of self. The new thing is the fact that our attachments produce searchable records or profiles to a far larger extent than thinkable in the past. On top of that, the transaction costs of searching these records are minimal in comparison to what they would have been before the advance of computerised search engines, and the search can be done without us being aware of it (in a way this search is less intrusive, which is often used as an argument to claim there is no invasion of privacy). The combination of traceable records, the reduced cost of searching these records and the ubiquitous, unobtrusive way the search can be performed makes for an enormous reservoir of data and profiles that can boomerang back to us, surprising us with information about our lifestyle and personality - perhaps seeming to know us better than we know ourselves.
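Lessig’s point about transaction costs has a simple technical core. A minimal sketch (with invented records): once records are digital, building an index makes every subsequent search nearly free, whereas searching a paper archive scaled with the size of the archive.

```python
# A minimal sketch of why digital records are cheap to search: an
# inverted index is built once, after which each lookup is a single
# dictionary access, however large the archive grows.
from collections import defaultdict

records = {
    1: "borrowed book on explosives from library",
    2: "bought theatre tickets",
    3: "applied for a building permit",
    4: "borrowed book on gardening from library",
}

index = defaultdict(set)            # word -> record ids
for rid, text in records.items():
    for word in text.split():
        index[word].add(rid)

print(index["library"])     # {1, 4}
print(index["explosives"])  # {1}
```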
It is also interesting that Custers claims that ’group profiling using data mining and group profiling based on empirical statistical research (...) may be different’ (Custers 2004:56-58). The first reason he gives is that the scale on which data can be correlated is nearly unlimited (the idea of taking a sample and extrapolating to a population is part of traditional methodologies; the availability of enormous amounts of data seems to make possible research on the population itself). Second, unlike traditional social science research, data mining actually generates hypotheses (f.i. multifactor regression analysis can generate correlations without, however, implying causal relationships; it is interesting to note that mathematical models like regression analysis started in the 1970s as marketing techniques). Third, he points out that trivial information can be linked to sensitive information, after which the trivial data can be used to circumvent liability on grounds of discrimination (this is sometimes called masking); a sketch of this mechanism follows below. Fourth, he refers to the ’lack of forgetfulness’ of information and communication technologies, which implies that once information is disclosed it is very hard to withdraw.
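The third point, masking, deserves a concrete illustration. The sketch below uses fabricated toy data: a trivial attribute (postcode) happens to correlate with a sensitive one (a health condition), so that a decision-maker can select on the sensitive attribute without ever asking for it.

```python
# A minimal sketch (fabricated data) of "masking": selecting on a
# trivial attribute that proxies for a sensitive one.

# (postcode, has_condition) -- synthetic individuals
people = [("1011", True)] * 8 + [("1011", False)] * 2 \
       + [("2022", True)] * 1 + [("2022", False)] * 9

for postcode in ["1011", "2022"]:
    group = [cond for pc, cond in people if pc == postcode]
    rate = sum(group) / len(group)
    print(postcode, f"condition rate: {rate:.0%}")
# 1011 condition rate: 80%
# 2022 condition rate: 10%

# Rejecting all applicants from postcode 1011 is formally a decision on
# a trivial attribute, yet it effectively discriminates on the health
# condition -- while evading liability for doing so.
```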
1.5 Brin vs Rosen: transparent society vs the naked crowd
How to strike a balance between this new feature of searchable records, based on statistical inference from often real-time monitoring, on the one hand, and our need to reconstruct ourselves beyond our past behaviour and to develop and nourish desires shielded from the gaze of profiling technologies on the other? And how to balance this need for an anonymous space for private reconstruction (which can very well occur in the public domain) with the need to hold human imbroglios accountable for their actions? Or, in other words, how to prevent our lives from being dictated entirely on the basis of plausibilities (extrapolation on the basis of past behaviour); how to construct spaces within which possibilities can be explored, starting from a clean slate?
David Brin, PhD in space physics, wrote a plea for a transparent society, in which not only law enforcement agencies have access to all these data and profiles, but also you, me and everybody else. Not only regarding suspects of crime or terrorism, but regarding nearly everything else. Not only surveillance and sousveillance (Mann, Nolan and Wellman 2004), but co-veillance. Not a panopticon but an omnopticon (Rosen 2004:11). You spy on me, I spy on you. As far as Brin is concerned, living in a glass house is the only option that enables accountability. His first point is that to ensure security we must organise ways to hold each other accountable, which is only possible if our interactions become transparent. His second point is that access to the glass house should not be centralised but brought under the anarchic control of all citizens, while its scope should include all government and business agents (rename the White House the Glass House, Rosen 2004:195).
Legal scholar Jeffrey Rosen (George Washington University) wrote a more traditional plea for a society in which transparency is limited to those in power, claiming that disclosure of sensitive data is destructive of freedom, with a forceful argument that the problem with profiles is that they confront us with sets of data taken out of context (as correlations cannot be equated with causes or reasons, profiles are disentangled from the desires, motivations, hopes and fears of the humans that produced them). His point is that ’dataveillance threatens to make it harder for individuals to redefine themselves; it is a technology of classification and exclusion that attempts to put people in different boxes, predicting their behavior in the future based on their behavior in the past’. This point is somehow related to the advance of actuarial justice, which focuses on statistically inferred risks of re-offending - redefining individuals as members of groups produced by statistical inference and mathematical models (think of the Hare checklist for psychopathy, De Ruiter ).
As to Brin’s compelling picture of our options in the information society, Lessig asks why we should have to choose between anarchic transparency and authoritarian panoptical surveillance: ’Why can’t we both control spying and build in checks on the distribution of spying techniques?’ Lessig’s second doubt about Brin’s solution concerns the effectiveness of this anarchic transparency: how could counter-spying work in holding people accountable? Lessig rightly argues that this presumes shared norms according to which one is accountable: whose norms? Are we then opting for contingent power relations? Tyranny of a majority, or of those that make time to raise an issue? Rosen in the meantime believes that Brin ’exaggerates the benefits of living in a transparent society and seriously underestimates the costs’. These costs are overlooked because - according to Rosen - Brin conflates information (correlated data) with knowledge (something that takes time to grow into a coherent picture, while being part of a particular context - deictic knowledge vs propositional knowledge?) (Rosen 2004:197-198).
As to Rosen’s Naked Crowd, a penetrating smell of nostalgia surrounds his book, as far as I am concerned. He seems lost in a paradise that never was. His tendency to define privacy as the right to be left alone is very 19th-century liberal. I agree that we don’t want to be part of a naked crowd whose every secret is disclosed in function of the common good (J.S. Mill and Isaiah Berlin had a good point there about the importance of what Berlin termed negative freedom; it seems to me that Brin is only interested in positive freedom). But the scale of anonymity in the virtual urban society cannot be unlimited; we need mechanisms for accountability, because without them we end up in another anarchy. However, I agree with Rosen that Brin’s ’belief’ in an overall reduction of crime after cameras have been installed everywhere is very naïve: as a 2002 research study of the UK Home Office (Welsh and Farrington 2002) tells us, there are no hard facts that demonstrate that crime rates have gone down substantially after the installation of CCTV (a phenomenon dominating the entire UK landscape). Whereas Brin’s solution to the proliferation of dataveillance is the emergence of popular co-veillance, Rosen puts his hope in the legislator. Pointing to the European model of protecting personal data by means of data protection law, he pleads for Congress to further codify the principles of fair information practices that should ensure notice, choice, access and security (the obligation to provide notice to consumers, to gain consent before using data for unrelated purposes, to allow consumers access to and correction of their personal data, and to provide redress in case of violation of these principles). He dismisses the Supreme Court as an effective remedy for the protection of privacy, since the Court has reduced this protection to expectations of privacy. He rightly argues that this can only be a sensible criterion if the expectation is not understood in a purely subjective way. The point is what a citizen should expect to be consistent ’with the aims of a free and open society’ (Rosen 2004:205, referring to La Forest 2002). Rosen thus hopes for a reasonable balance between liberty and security on the basis of ’political oversight’, in which checks and balances are constructed to produce the mix of positive and negative freedom that is distinctive of the democratic constitutional state.
This brings me to the second part of my talk. What can be the meaning and significance of law in relation to human attachments and profiling?
2. WP 8: Correlated humans and the legal subject
A relational theory of law
2.1 Legalism and a relational concept of legal norms
A central tenet of a relational conception of law is that law should keep its conceptual framework open. According to Foqué and ’t Hart (1990), legal concepts should be understood as counterfactual (and counteridealistic), which basically indicates that the meaning of a concept cannot be taken for granted or determined entirely in advance (compare Hart’s open texture). This implies that such a conception of law should distance itself from any kind of legalism. Legalism - in an epistemological sense, not to be confused with the fetishism of statute law, which is just one form of legalism - implies, according to Foqué, four things: (1) an absolute emphasis on the imperative character of law, conflating law with the national state (Kelsen) and reducing law to the commands of a sovereign (Hobbes, Austin); (2) an absolute emphasis on the instrumental character of law, leading to instrumentalism (especially after the 19th-century advance of the social sciences); (3) an absolute emphasis on the systematic character of law (American legal formalism, German Begriffsjurisprudenz, English stare decisis, French legalism in the sense of statute fetishism); and (4) a positivistic emphasis on a correspondence theory of truth, connected with the idea that law is the corresponding offprint of a naturalised social order. Instead of this one-sided emphasis on state authority, instrumentalism, systematics and positivism, Foqué defends a conception of law that builds checks and balances into a world that is thoroughly relational. This way of understanding law recognises the delicate balance between the authoritative and normative aspects of legal norms (and between written and unwritten law); it recognises the double instrumentality of law, which constitutes and constrains state powers at the same time (this double instrumentality excludes trade-offs between instrumentality and protection, since they have an internal connection); it also recognises the importance of resistance against attempts to create a perfect system of law, since this would nullify law’s capacity to mediate between different perceptions of the world; and, last but not least, it rejects legal positivism even while emphasising the importance of positive law.
2.2 The legal subject and the human imbroglio
Another central tenet of this relational conception of law is that the human imbroglio (the human of flesh and blood and all its attachments) has to be protected from disciplinary redefinition (or dominating frames of interpretation, ’t Hart 1993). This protection is instrumented by the concept of the legal subject or legal person, a term that refers to the mask (persona) of ancient theatre. The fact that the legal subject is not conflated with the embedded, situated, attached human imbroglio creates the negative freedom that is constitutive for the positive freedom of our democratic constitutional state. This positive freedom, the freedom to act in the public domain, is different from the positive freedom that David Brin seems to embrace, precisely because of its relation to the negative freedom that is its precondition. This negative freedom opens the space for the exploration of possibilities, dreams, desires, change or, as Hannah Arendt would say, ’natality’.
2.3 Attachments and profiles: two types of correlated humans?
If we now look back at the correlated humans of the first part of this paper, I see two types: (1) the human imbroglio that consists of a human that has attached itself to friends, websites, buildings, cars and guns, and (2) the profiles that are generated by these attachments, which - unlike the human imbroglios just mentioned - have a life of their own, disentangled from the humans that produced them. The profiles seem to be taken out of context, while at the same time creating a new context, a perspective that redefines the humans that were used to generate them.
2.4 Protection of the human imbroglio
At this point I think that a relational conception of law can provide us with the legal instruments to protect the human imbroglio with all its attachments, while regulating the generation and use of profiles. Building on the original and stimulating work of Paul de Hert and Serge Gutwirth, we could distinguish legal instruments of opaqueness (limiting access to home, family life and personal data) and legal instruments of transparency (regulating access where it is allowed). As far as I am concerned, both instruments protect the right to privacy, understood as ’the freedom from unreasonable constraints on the construction of one’s own identity’ (Agre and Rotenberg 1998:7). As elaborated before, I understand identity as the perspective from which we hold together our sense of self as human imbroglios with all their attachments. Legal and also technological instruments of opaqueness protect this imbroglio from deterministic (or plausibilistic, if that is a term) redefinition by limiting access; legal and technological instruments of transparency do the same by making access conditional in terms of the principles of fair information practices.
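What a technological instrument of transparency might look like can be sketched as follows. The class below is a hypothetical API of my own devising, not an existing system: access to personal data is not forbidden outright (opaqueness) but made conditional on the purpose the data subject consented to, echoing the fair-information-practice principles mentioned above.

```python
# A minimal, hypothetical sketch of purpose-bound access to personal
# data: release is conditional on the declared purpose matching consent.
class PersonalDataStore:
    def __init__(self):
        self._data = {}      # subject -> record
        self._consent = {}   # subject -> set of permitted purposes

    def deposit(self, subject, record, purposes):
        self._data[subject] = record
        self._consent[subject] = set(purposes)

    def access(self, subject, declared_purpose):
        if declared_purpose not in self._consent.get(subject, set()):
            raise PermissionError(
                f"no consent for purpose '{declared_purpose}'")
        return self._data[subject]

store = PersonalDataStore()
store.deposit("alice", {"condition": "asthma"}, purposes={"treatment"})

print(store.access("alice", "treatment"))  # permitted
# store.access("alice", "marketing")       # would raise PermissionError
```

Of course, such a mechanism only enforces the rules it is given; as argued in the next section, whether the rules take hold is a matter of habits, not of code alone.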
2.5 Legal rules and human habits
The problem is that data protection legislation doesn’t seem to work. The principles of fair information practices are codified, but compliance is very questionable. On top of that, it does not seem to be a matter of concern to many citizens, especially since September 11, 2001. The issue-raisers mentioned at the beginning are mainly civil rights advocates and ICT companies, while the publics, in the sense of those that are and will be affected, are not really interested [my sense of public is different from the way some of us refer to Dewey’s public].
I think that we need legal tools that are better tuned to the technological instruments, and technological tools that are better tuned to the legal instruments. These two types of instruments should form legal-technological imbroglios to accommodate both the freedom of the human imbroglio and the dealings with information. The point is, however, that law is more than a set of legal rules that can be imposed and implemented, and more than a set of judicial decisions - just like technology is more than a set of tools that dictate their use (again referring to Ihde, where he speaks of multistable technology, Ihde 1990:144-151).
Both the construction and the use of technologies depend on the habits developed by the human imbroglios that produce these technologies. These habits - norms or rules in the Wittgensteinian sense - are not necessarily legal norms, but from a relational perspective on law it is precisely the normative aspect of legal norms that needs to be faced here. This normative aspect concerns our social habits (Winch 1958:57-62, Hildebrandt 2002:33-102), or the expectations we expect others to have regarding our actions. The question is how to create habits, in the practices of citizens, corporate enterprise, government officials, legal experts, scientists and technologists, regarding the construction and use of these new technologies (identification, biometrics, ambient intelligence, profiling). These habits are the core business of the law: a rule on paper, a legal text, has no meaning until the relevant practices have incorporated the rule as a habit. So if it is the case that data protection legislation imposes rules that don’t work, however legal and legitimate they may be, these rules may have to be reconsidered, precisely because law is more than a set of written legal rules. And perhaps the way to change or reinvent these rules demands more interdisciplinary research and communication with the relevant publics (those that will be affected by these new technologies).
Bibliography:
David Brin, The Transparent Society. Will Technology Force Us to Choose Between Privacy and Freedom?, Reading, Massachusetts: Perseus Books 1998
Caspar Bowden, CCTV for inside your head. Blanket Traffic Data Retention and the Emergency Anti-Terrorism Legislation, Computer and Telecommunications Law Review 2002-2
Roger Clarke, The Digital Persona and its application to data surveillance, The Information Society (10) June 1994, published at
www.anu.edu.au/people/Roger.Clarke/DV/DigPersona.html
Michael Hill, Bush Funds US Spying on Internet Chat Rooms, published October 12, 2004 by Associated Press, to be accessed at
http://www.commondreams.org/headlines04/1012-02.htm
Don Ihde, Technology and the Lifeworld. From Garden to Earth, Bloomington and Indianapolis: Indiana University Press 1990
Don Ihde, Bodies in Technology, Minneapolis London: University of Minnesota Press 2002
Justice Gérard V. La Forest, Opinion on Video Surveillance, written for the Canadian Privacy Commissioner, April 5 2002 (www.privcom.gc.ca/media/nr-c/opinion_02410_e.asp)
Bruno Latour, We Have Never Been Modern, Cambridge: Harvard University Press 1993
Lawrence Lessig, Code and other laws of cyberspace, Basic Books 1999
Steve Mann, Jason Nolan and Barry Wellman, Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments, Surveillance and Society 1(3), p. 331-355
J.E.J. Prins, The propertization of personal data and identities, Electronic Journal of Comparative Law (8) 2004-3, www.ejcl.org
Privacy International responds to the European Commission on Data Retention, Invasive, Illusory, Illegal and Illegitimate. Privacy International and EDRi response to the Consultation on a Framework Decision on Data Retention, 15th September 2004, to be accessed at
http://www.privacyinternational.org/issues/terrorism/rpt/responsetoretention.html
Hilary Rose, The Commodification of Virtual Reality. The Icelandic Health Sector Database, in: Alan H. Goodman, Deborah Heath, M. Susan Lindee (eds.), Genetic Nature/Culture. Anthropology and Science beyond the Two-Culture Divide, Berkeley and Los Angeles: University of California Press 2003
Jeffrey Rosen, The Naked Crowd. Reclaiming security and freedom in an anxious age, New York: Random House 2004
Brandon C. Welsh and David P. Farrington, Crime prevention effects of closed circuit television: a systematic review, Home Office Research Study 252, August 2002