Category Archives: Writings

Just Got My Advance Copy of ‘Robot Law’!

This is exciting: just got my first copy of “Robot Law,” a book I edited with Ryan Calo and Ian Kerr. I suppose I might be a little biased, but I think it’s a pretty darn good collection that will give anyone interested in how society will cope with robots plenty to think about.

Robot Law is apparently going to list for $165 when it’s out in (very) late March, which is a lot, but you can pre-order it for less, or buy an online copy for much less. Meanwhile, you can peek inside and read my introductory essay, which gives you a tour of the wonderful contributions by our extraordinarily varied contributors. This is not a book just by some law profs: it’s an attempt to do real interdisciplinary work and, more importantly, to foster an ongoing series of interdisciplinary conversations.

Of course, the real-life place where we do that is at We Robot. Registration for this year’s conference is now open, and the early-bird discounted registration ends Friday.

Posted in Robots, Writings | Comments Off on Just Got My Advance Copy of ‘Robot Law’!

Kewl

The HTTP 451 Error Code for Censorship Is Now an Internet Standard.

I believe this action of the IETF is consistent with the claims I made in my article Habermas@discourse.net: Toward a Critical Theory of Cyberspace, 116 Harv. L. Rev. 749 (2003).
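Since the post is about a status code, here is a minimal, hypothetical sketch (my illustration, not part of the original post) of what serving HTTP 451 can look like in practice, using Python’s standard library; the hostname, port, and “blocked-by” URL are invented for the example:

```python
# Toy server that answers every request with HTTP 451 "Unavailable For Legal
# Reasons", roughly in the spirit of the 451 specification (RFC 7725).
# Hostnames, port, and explanation text are made up for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LegalBlockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"This resource is unavailable for legal reasons.\n"
        self.send_response(451, "Unavailable For Legal Reasons")
        # The spec suggests identifying the blocking entity with a
        # "blocked-by" link relation; the URL below is hypothetical.
        self.send_header("Link", '<https://authority.example>; rel="blocked-by"')
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8451), LegalBlockHandler).serve_forever()
```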

Posted in Law: Internet Law, Writings | Comments Off on Kewl

From Anonymity to Identification

The inaugural issue of a new open-access journal carries my article From Anonymity to Identification, based on a talk I gave in Heidelberg last December. I’m in good company: other authors in this issue are Markus Beckedahl, Jeanette Hofmann, Marianne Kneuer, Milton L. Mueller, Ekkehart Reimer, William Binney, Kai Cornelius, Myriam Dunn Cavelty, Sebastian Harnisch and Wolf J. Schünemann.

The full text of this open-access journal is available online, including a .pdf of From Anonymity to Identification. As Larry Solum likes to say, download it while it’s hot.

Here’s the abstract for “From Anonymity to Identification”:

This article examines whether anonymity online has a future. In the early days of the Internet, strong cryptography, anonymous remailers, and a relative lack of surveillance created an environment conducive to anonymous communication. Today, the outlook for online anonymity is poor. Several forces combine against it: ideologies that hold that anonymity is dangerous, or that identifying evil-doers is more important than ensuring a safe mechanism for unpopular speech; the profitability of identification in commerce; government surveillance; the influence of intellectual property interests in requiring hardware and other tools that enforce identification; and the law at both national and supranational levels. As a result of these forces, online anonymity is now much more difficult than previously, and looks to become less and less possible. Nevertheless, the ability to speak truly freely remains an important ‘safety valve’ technology for the oppressed, for dissidents, and for whistle-blowers. The article argues that as data collection online merges with data collection offline, the ability to speak anonymously online will only become more valuable. Technical changes will be required if online anonymity is to remain possible. Whether these changes are possible depends on whether the public comes to appreciate and value the option of anonymous speech while it is still possible to engineer mechanisms to permit it.

Posted in Law: Internet Law, Surveillance, Writings | Comments Off on From Anonymity to Identification

Into the SOUPS

I’m off to Ottawa for the 2nd Annual Privacy Personas and Segmentation (PPS) Workshop, which is being held in conjunction with the Symposium on Usable Privacy and Security (SOUPS).

The organizers selected me to give the keynote for the workshop, and I’ve produced a provocation for them. Here is the introduction:

Users are notoriously bad at safeguarding their online privacy. They do not read privacy policies, which in any case are mostly contracts of adhesion. They make over-optimistic assumptions about protections and dangers [15]. They use weak passwords (and repeat them), accept cookies, and leave their cell phones on, thus facilitating location tracking, which is vastly more destructive to privacy than almost any user grasps [8]. Contrary to Alan Westin’s privacy segmentation analysis [31], most privacy choices are not knowing and deliberate because they are not within the user’s control (e.g. surveillance in public). Other ‘choices’ happen because users believe, correctly, that they in fact have no choice if they want the services (e.g. Google, mobile telephony) that large numbers of consumers consider necessary for modern life [27].

The systematic exposure of the so-called “privacy vulnerable” user [27] suits important public and private interests. Marketers, law enforcement, and (as a result) hardware and software designers tend towards making technology surveillance-friendly and towards making communications and transactions easily linkable.

If we each have only one identity capable of transacting, even if it is mediated through multiple logins, and if our access to communications resources, such as ISPs and email, requires payment or authentication, then all too quickly everything we do online is at risk of being linked to one master dossier. The growth of real-world surveillance, and the ease with which cell phone tracking and face recognition will allow linkage to virtual identities, only adds to the size of that dossier. The consequence is that one is, effectively, always being watched as one speaks or reads, buys or sells, or joins with friends, colleagues, co-religionists, fellow activists, or hobbyists. In the long term, a world of near-total surveillance and endless record-keeping is likely to be one with less liberty, less experimentation, and certainly far less joy [16] (except maybe for the watchers). In a country such as the US, where robust data-protection law is deeply unlikely, a technological solution is required if privacy is to continue to be relevant in the era of big data; one such, perhaps the best such, technological improvement would be to create an IMA designed to give every person multiple privacy-protective transaction-empowered digital personae. Roger Clarke provides a good working definition of the “digital persona” as “a model of an individual’s public personality based on data and maintained by transactions, and intended for use as a proxy for the individual.” [4]

Whereas Clarke presciently saw (and critiqued) the ‘dataveillance’ project as an effort to create a single, increasingly accurate, digital persona connected to the person, the objective here is to undermine that linkage by having multiple personae that would not be as easy to link to each other or to the person.
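To make the idea of multiple, hard-to-link personae concrete, here is a small hypothetical sketch (mine, not from the workshop paper) of how an identity-management tool might derive per-site pseudonyms from independent per-persona secrets, so that identifiers used under one persona cannot be correlated with those used under another; all names in it are invented:

```python
# Hypothetical sketch: each persona gets its own random secret, and per-site
# pseudonyms are derived from that secret rather than from any shared master
# identity, so pseudonyms from different personae are not linkable.
import hmac, hashlib, secrets

class Persona:
    def __init__(self, label: str):
        self.label = label                       # e.g. "work", "activism", "hobby"
        self._secret = secrets.token_bytes(32)   # independent secret per persona

    def pseudonym(self, site: str) -> str:
        """Stable identifier for this persona at this site; without the secret,
        values for different sites or personae cannot be correlated."""
        return hmac.new(self._secret, site.encode(), hashlib.sha256).hexdigest()[:16]

work = Persona("work")
activism = Persona("activism")
print(work.pseudonym("forum.example"))      # same value every time for this persona/site
print(activism.pseudonym("forum.example"))  # unrelated value for the other persona
```

The design point of the sketch is that linkage requires the per-persona secrets, which stay with the user.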

(Updated to correct link to workshop.)

Posted in Talks & Conferences, Writings | 1 Comment

Link to My Paper

I neglected to link to Lessons Learned Too Well: Anonymity in a Time of Surveillance, the paper I’m presenting at #yalefesc. A very, very small number of people will recognize this as a partial redraft of a paper I started a few years ago, but never published because it didn’t seem quite right. My plan is to get it as right as I can in the next few months, which is why I’m workshopping it.

Posted in Talks & Conferences, Writings | Comments Off on Link to My Paper

IETF’s Habermasian Resolve to Work Against Pervasive Monitoring

The IETF has issued RFC 7258, aka Best Current Practice 188, “Pervasive Monitoring Is an Attack”. This is an important document. Here’s a snippet of the intro:

Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis, (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise.

The IETF community’s technical assessment is that PM is an attack on the privacy of Internet users and organisations. The IETF community has expressed strong agreement that PM is an attack that needs to be mitigated where possible, via the design of protocols that make PM significantly more expensive or infeasible. Pervasive monitoring was discussed at the technical plenary of the November 2013 IETF meeting [IETF88Plenary] and then through extensive exchanges on IETF mailing lists. This document records the IETF community’s consensus and establishes the technical nature of PM.

The term “attack” is used here in a technical sense that differs somewhat from common English usage. In common English usage, an attack is an aggressive action perpetrated by an opponent, intended to enforce the opponent’s will on the attacked party. The term is used here to refer to behavior that subverts the intent of communicating parties without the agreement of those parties.

The conclusion is simple, but powerful: “The IETF will strive to produce specifications that mitigate pervasive monitoring attacks.”
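To illustrate the sort of mitigation the RFC has in mind (my example, not the IETF’s), here is a minimal Python sketch of wrapping a connection in TLS, the kind of encryption-by-default that leaves a passive monitor on the path with ciphertext and traffic metadata rather than content:

```python
# Illustrative sketch: encryption by default makes pervasive monitoring
# expensive. A passive observer on the path sees a TLS handshake and packet
# sizes, not the request or the reply.
import socket, ssl

context = ssl.create_default_context()   # certificate verification on by default
with socket.create_connection(("www.ietf.org", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="www.ietf.org") as tls:
        print("TLS version:", tls.version())
        print("cipher suite:", tls.cipher()[0])
        tls.sendall(b"GET / HTTP/1.1\r\nHost: www.ietf.org\r\nConnection: close\r\n\r\n")
        print("reply starts:", tls.recv(64).split(b"\r\n")[0].decode())
```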

I can’t help but see this as a shining example of the IETF living up to its legitimate-rule-making potential, as I described in my 2003 Harvard Law Review article Habermas@discourse.net: Toward a Critical Theory of Cyberspace.

Below, I reprint my abstract:

Posted in Internet, Surveillance, Writings | Comments Off on IETF’s Habermasian Resolve to Work Against Pervasive Monitoring