The controversial Loi sur le renseignement (“Intelligence Law”) authorising mass surveillance of French citizens has just come into force and continues to cause turmoil. In the midst of this particularly tumultuous period, we would like to present a series of publications on creative strategies that can be used to skirt surveillance and to pinpoint the normalisation of online behaviour that surveillance entails.
Thursday, October 8, 2015
Finn Brunton and Helen Nissenbaum have just published their guide Obfuscation: A User's Guide for Privacy and Protest (MIT Press), a “survival kit” teaching us how to cope in a world governed by widespread digital surveillance, in which our data is widely collected by governments, companies, advertisers and hackers. In addition to common privacy-protection techniques such as cryptography, they suggest also using “obfuscation”, that is to say deliberately producing ambiguous and unclear information in order to make the collected data murky. The manual targets ordinary people who are not experts ‒ i.e. those who are not in a position to delete their traces or control the use of their data on the net ‒ and aims to provide them with tools to counter the unbalanced relationships of power and knowledge that exist today between the surveillants and the surveilled.
In their book, Brunton and Nissenbaum, who are both professors in Media, Culture and Communication at New York University, review a wide range of strategies that can be chosen from according to each person’s needs and context. They were inspired by both old and contemporary examples, such as the strips of metal foil dropped by planes during the Second World War to blur enemy radar signals (the chaff countermeasure), the armies of “Twitter bots” deployed by the Russian and Mexican governments to manipulate public opinion, or browser plug-ins such as AdNauseam, which helps obscure one’s search profile.
Could you tell us what “obfuscation” means to you? What forms can it take?
Finn Brunton: I don't think we can do better, for simplicity, than our definition in the book: "Obfuscation, at its most abstract, is the production of noise modeled on an existing signal in order to make a collection of data more ambiguous, confusing, harder to exploit, more difficult to act on, and therefore less valuable. The word 'obfuscation' was chosen for this activity because it connotes obscurity, unintelligibility, and bewilderment and because it helps to distinguish this approach from methods that rely on disappearance or erasure. Obfuscation assumes that the signal can be spotted in some way and adds a plethora of related, similar, and pertinent signals—a crowd with which an individual can mix, mingle, and, if only for a short time, hide."
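This abstract definition, noise modeled on a real signal, can be sketched in a few lines of Python. This is a toy illustration, not any tool's actual implementation; the `decoy_pool` of plausible cover signals is assumed to be supplied by the user:

```python
import random

def obfuscate(real_signal, decoy_pool, n_decoys=5):
    """Mix one genuine signal into a crowd of plausible decoys,
    so that an observer recording the output cannot tell which
    item was genuinely the user's."""
    decoys = random.sample(decoy_pool, n_decoys)
    crowd = decoys + [real_signal]
    random.shuffle(crowd)  # the real signal hides in the crowd
    return crowd
```

Note that the real item is still present; nothing is erased or encrypted. It is merely rendered ambiguous, which is precisely what distinguishes obfuscation from methods based on disappearance.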
As highlighted by you in your introduction, obfuscation is not a new strategy. Could you remind us of some ways in which it was used in the past and explain in what way these examples are also contemporary?
F.B.: Part of what makes obfuscation interesting is that it's fairly intuitive -- the idea of hiding like with like has a rich history in fact, folklore, and myth. We have examples from the Second World War, from the struggle against Apartheid, even -- with examples dearest to my heart -- from things like the Roman ancilia, where a sacred object would be kept with excellent fakes to deter thieves. It's not that these techniques in particular are relevant, so much as that they suggest the diversity of contexts in which obfuscation is usefully applied.
The design of your book cover refers to “chaff”. Could you explain to us how this anti-radar countermeasure worked and how it influenced your thinking?
F.B.: Chaff was a starting point for us because it came to mind as a point of comparison with TrackMeNot, and the fact that there was a similar mechanism at work in such disparate things suggested there was a larger approach to the problem of surveillance that we could identify and discuss. It's also a great way to frame obfuscation because it's so humble -- just pieces of foil, useful only for a limited period of time and a very specific adversary, but quite effective for its purpose. When you can't hide the radar echo of the plane, you make many echoes.
You argue that obfuscation is a necessary strategy in order to rebalance relations between the governed and the governing in the age of mass data.
F.B.: It is obvious that information collection takes place in asymmetrical power relationships: we rarely have a choice as to whether or not we are monitored, what is done with any information that is gathered, or what is done to us on the basis of conclusions drawn from that information. Obfuscation is related to this problem of power asymmetry: as the camouflage comparison suggests, it is an approach suited to situations in which we can't easily escape observation but must still move and act. A second aspect, the informational or epistemic asymmetry, is a deeper and more pernicious problem: we don't know what near-future algorithms, techniques, hardware and databases will be able to do with our data.
The problem is the convergence of asymmetries: those who know about us have power over us. They can deny us employment, deprive us of credit, restrict our movements, refuse us shelter, membership or education.
Your guide includes several examples in connection with the obfuscation of data. What advice would you give to a non-expert user?
F.B.: This is actually a harder question than you might think, but for an important reason: obfuscation isn't a one-size-fits-all solution, or even a specific approach that can be readily slotted into a set of practices everyone should adopt. Our goal for this book is to present this general technique, which can be applied in a lot of different areas by different parties. We want people to start thinking in obfuscation terms about infringements on their data, and see what they can do with it and how it can supplement the existing world of privacy and protest systems.
Some forms of obfuscation can be carried out by one individual, while others are based on cooperation, such as collective identities, the exchanges of loyalty cards or the anonymity network TOR. Do you think that a massive involvement in these obfuscation strategies could produce a decisive political impact?
F.B.: For specific techniques, it really depends. (For instance, much broader adoption of Tor -- particularly in setting up relays, exit nodes, etc. -- would be a fantastic thing; broader adoption of some of the other obfuscation methods, by bringing greater attention to them, could make them more vulnerable.) In general, though, massive participation would, we hope, herald public dissatisfaction and resistance on a scale that would provoke business and governmental change.
Are artists often involved in these obfuscation strategies? Is it because this practice requires a certain degree of creativity?
F.B.: This is a great question! I think one of the really valuable things about obfuscation is that it's an expressive technique, which is why it's useful for protest as well as privacy. Depending on the precise approach, adversary, and goal, you're not just hiding but making something. Noise, fake profiles, masks, card-swapping events. There's a particularly playful, trouble-making element of obfuscation as compared to other, more rigorous techniques specifically for protecting privacy, and that expressive, playful aspect of what can be done with obfuscation can be a fruitful place for artistic work.
You also provide several examples of how obfuscation can be used by governments, for instance the Twitter bots used in Russia or Mexico to cloud their opponents’ channels.
F.B.: Indeed! Obfuscation is used by the powerful -- just not often, because they have much more immediately effective tools (like direct political influence, wealth and legal recourse) on their side. Another example, too recent to be in the book, is the use of an obfuscation approach to conceal traffic by "personas" managed by the US to manipulate conversations on forums and social network sites.
On 21 October 1999, the Electronic Disturbance Theater organised Jam Echelon Day, which encouraged internet users to send as many e-mails as possible containing keywords thought to trigger the surveillance network. The organisers believed that if the volume of flagged e-mails grew large enough, they would overload the intelligence system and diminish its efficiency. In spring 2015, when the French parliament passed the Loi sur le renseignement (Intelligence Law), making the mass surveillance of citizens legal, the project sur-ecoute.org was launched: an application which, every time you post on Twitter or Facebook, generates a cloud of keywords thought to be suspicious to the “black boxes” the government plans to install at internet service providers. Do you see obfuscation as an act of protest or rather as a strategy which can actually protect us?
F.B.: It's a mix of the two. With all due respect to the EDT, a major signals intelligence operation is in no danger of being overworked, even with all the additional input! GCHQ, we've recently learned, is storing and managing "events" -- metadata records -- on a scale of trillions, with billions of new additions per day. It's not about trying to swamp the systems of state intelligence services, but rather about finding targets, occasions, or adversaries where obfuscation can work effectively. In the French case, for instance, the utility of generating trigger words could lie not in overwhelming the system technically, but in making the words -- and their use -- less actionable by French law enforcement, as well as in a protest action that's visible on social networks and mocks the attempt to use language as an automated flagging system. (I don't know enough specifics about the "black boxes" in this case.) Which is to say that obfuscation must be applied thoughtfully, relative to one's threat model and goals.
How can we assess the actual efficiency of these techniques, given the sophisticated analysis tools that exist today? Wouldn’t it be easier to just learn cryptography?
F.B.: This is a related case. We'd never discourage people from using cryptography -- quite the contrary! It's a question of when obfuscation could be useful (potentially in conjunction with crypto, or in a context where one cannot encrypt content but can still try to render it confusing, uncertain, ambiguous, etc). One of the really interesting areas of research for obfuscation in software is figuring out how to model the kinds of analysis it will be subject to. Can we beat them? What about situations where we just need to briefly avoid detection rather than sustained scrutiny? Are certain kinds of obfuscation better against human observers than machine learning, or vice versa?
Obfuscation seems only a temporary option. Wouldn’t disengaging oneself completely from the Internet be a more radical solution?
F.B.: We talk a bit about the fantasy of opting out on p 54. It's very, very difficult in an industrialized and urbanizing country to really "opt out," and the tradeoffs are severe. Many activist movements face the hard choice between ubiquitous, easy-to-use corporate platforms like Facebook and what are often self-marginalizing alternatives. There's something deeply unsatisfying, as well, about being in a position of privilege where you can choose to avoid all these data-collecting systems (presumably you don't need to work!) and then simply abandoning those less fortunate to their fate.
How do you see the future of this method?
F.B.: We are very excited for the wider application of these ideas! In particular, I'm interested to see if they can be used to help companies provide a truly robust promise of long-term privacy to their users. A tailored obfuscation layer can make it so a company can provide a service to the user, and be unable to use that data for any other purpose. More important, no one else can use that data either -- so if they go out of business, or have a change of leadership, your data can't be put to use in some very different context than you planned. The company can make a promise -- that your data is safe with them -- that doesn't require you to trust them and anticipate all future circumstances.
Here are some projects that use obfuscation:
TrackMeNot (Daniel C. Howe, Helen Nissenbaum, Vincent Toubiana): runs as a background process in your browser, periodically issuing searches on the search engines you use (for example Yahoo!, Google or Bing). TrackMeNot hides your real searches in a cloud of “ghost” queries, with the aim of making user profiling more complex and ultimately ineffective.
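TrackMeNot's real query generator is considerably more elaborate (it seeds and evolves its query lists over time rather than drawing from a fixed pool), but a minimal sketch of the ghost-query idea, with an invented seed list, might look like this:

```python
import random

# Hypothetical seed phrases; the real extension builds and
# mutates a much larger list of plausible-looking queries.
SEED_QUERIES = ["weather tomorrow", "banana bread recipe",
                "cheap flight prices", "local news today",
                "python tutorial beginners"]

def ghost_query():
    """Return one decoy query, lightly varied so the decoy
    stream does not repeat itself verbatim."""
    words = random.choice(SEED_QUERIES).split()
    # occasionally drop a word to vary the phrasing
    if len(words) > 1 and random.random() < 0.5:
        words.pop(random.randrange(len(words)))
    return " ".join(words)

def ghost_stream(n):
    """A batch of decoy queries to interleave, at random
    intervals, with the user's genuine searches."""
    return [ghost_query() for _ in range(n)]
```

The effect is that any profile built from the search log describes the ghost crowd rather than the person behind it.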
AdNauseam (Daniel C. Howe, Helen Nissenbaum, Mushon Zer-Aviv) clicks on ads for you. This browser extension aims to obfuscate browsing data and protect users from tracking by advertising networks by automatically clicking on all the adverts in the background, thereby polluting user profiles and sowing distrust between advertisers and the networks they pay per click. Through AdNauseam users can also visually explore the adverts served to them.
I like what I see by Steve Klise is a Chrome extension that automatically clicks on all the “Like” links on Facebook.
FaceCloak: a research project in the form of a Firefox extension that replaces your personal information with fake information before sending it to a social network. Your real information is encrypted and stored securely elsewhere; only authorised friends have access to it, and FaceCloak substitutes the real information back in when they view your profile.
ScareMail by Ben Grosser is a browser extension that makes e-mails “scary” in order to disrupt NSA surveillance, appending algorithmically generated narrative text containing keywords likely to be searched for by the intelligence agency.
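ScareMail generates full narrative prose from a published surveillance keyword list; a much-simplified sketch of the keyword-injection idea, using invented trigger words and sentence templates rather than the project's actual generator, could look like this:

```python
import random

# Illustrative stand-ins only; the real extension draws on a far
# larger keyword catalogue and a proper text generator.
TRIGGER_WORDS = ["plot", "facility", "agent", "cloud", "exercise"]

def scary_postscript(n_sentences=3):
    """Generate innocuous-looking filler sentences salted with
    trigger keywords, to be appended to an outgoing e-mail."""
    templates = [
        "The {} was ready long before the {}.",
        "She mentioned the {} somewhere near the {}.",
    ]
    sentences = []
    for _ in range(n_sentences):
        t = random.choice(templates)
        sentences.append(t.format(*random.sample(TRIGGER_WORDS, 2)))
    return " ".join(sentences)

def obfuscated_email(body):
    """Append the keyword-laden postscript to the message body."""
    return body + "\n\n-- \n" + scary_postscript()
```

If enough mail carries such postscripts, keyword matches stop being a useful signal, since every flagged message is as likely to be noise as not.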
CacheCloak (Meyerowitz and Choudhury) offers access to location-based services while keeping the user’s location anonymous. “Whereas other methods attempt to obscure a user’s route by deleting parts of it, we obscure the user’s location by surrounding it with other routes.”
Facial Weaponization Suite by Zach Blas involves creating, through workshops, 3D plastic masks modelled on aggregated facial data from the participating communities. The result evokes a collective, bulging, shapeless face, an inhuman and paranoid figure that cannot be parsed by algorithms. The aim is to confront the audience with the spread of biometrics and to provide protection against “digital eyes”.
Invisible, a project by the American artist Heather Dewey-Hagborg, presents two spray products for erasing one’s DNA traces: “Erase” destroys 99.5% of the DNA, and “Replace” obfuscates the remaining 0.5%. The recipe is available open source on the new platform biononymous.me and represents a first step towards claiming biological privacy.
Posted by Marie Lechner in Frankreich at 09:58