• 13 Feb

    (I know I’ve been MIA. I have all kinds of ideas, but just haven’t had time to get them down. I’m not giving up, though!)

    Yesterday, ASU’s Center for Science and the Imagination organized an event with visiting author Cory Doctorow to discuss issues of hacktivism in light of Aaron Swartz’s recent suicide. I was honored to be invited to take part in the discussion. The full name of the event was “Hackers + Activism: Aaron Swartz, Anonymous, and the Ethics of Digital Community.” (The full video of the event is also available at that link if you’re interested in seeing it.)

    Before the panel, I jotted down some notes to myself, as I thought through the panel topic and had a few ideas about where the discussion might lead. Some of the points came up in our conversation (which was really a lot of fun), and some didn’t. I thought I’d post them here, as much for my own eventual future reference as to share. Who knows, maybe we can continue the conversation in this different context? I’d love to know how other people approach these issues.


    Hacking is a complex, contested term. In my understanding, it refers to a set of skills that can be used to circumvent boundaries or to reframe, remix, and rework systems or processes. That skill set (like, say, knowing how to hotwire a car) can be used for a variety of purposes, some more legal and/or ethical than others.

    Hacking culture is about creative disruption. Hacking can be done purely “for the lulz,” as relatively harmless trolling; for criminal purposes, like theft or blackmail; or for the collective good. The latter is what we typically call “hacktivism.” In general, hacking culture is built around principles of freedom and flexibility. Many people are uncomfortable with others’ anonymity but want privacy for themselves. The issue is where to draw those boundaries: who gets to have privacy, and who is required to be transparent?

    The problem, of course, is that “good” is also a relative term. Whose good? Who is the collective? And more importantly, what political, economic, and/or cultural power structures are being disrupted by the hacking? This is where things get sticky.

    We have institutions, like corporations and governments, which are embedded in larger legal and economic systems. These institutions and systems are pretty well entrenched and are built around order and restrictions of movement. This is the kind of power they rely on. And there’s nothing inherently wrong with that: it’s good to know what the laws are, so you know if you’re breaking them, and to have some reasonable expectation that they’ll be applied consistently. We want to know that our currency will be accepted in exchange for goods and services.

    The problem is that these systems are wholly inadequate when it comes to addressing today’s knowledge and information environment. Institutions change slowly, and those who have benefited from the status quo aren’t very eager to give up their power and economic interests. So, creative disruption can be a force for change. Legal systems don’t much like change, as a rule, and so often become even more rigid as a result, bristling like a hedgehog.

    Last week, in my social media class, I had students look up the Terms of Service of some of their favorite social media sites. Many of them were appalled when they saw what they’d agreed to when joining these sites: the rights they’d given up, in some cases, and, even more often, the rights the sites claimed for themselves. These terms are usually written in very broad language by lawyers who want to leave their clients plenty of leeway for future technological advances and… commercial opportunities. In the case of Aaron Swartz (and others who’ve been prosecuted under the CFAA, or Computer Fraud and Abuse Act), these TOS agreements have been treated less as private contracts and more like actual laws, with much more severe penalties.

    They say that history is written by the victors. Recently you’ve probably all heard the story about the scientists who unearthed the bones of England’s Richard III, who has a pretty terrible reputation as a monarch. But there’s long been evidence that he was beloved by many of his subjects, and he instituted some important reforms, like actually repealing war taxes once sufficient funds had been raised, and trying to make the legal system more equitable. The Tudors, anxious to secure the throne, had a vested interest in ensuring that Richard and the Plantagenets went down in history as terrible leaders, the better to reinforce their own “obvious” superiority as rulers of the populace.

    Everyone wants to be the voice of history. Hacktivists, I think, see themselves as having a moral duty to use their specialized knowledge and skills to prevent those with economic and political power from being the ultimate victors. Meanwhile, the powerful have both the interest and the means to depict these creative disruptors as threats to security, to privacy, to society. And using ambiguous language (like the term “hacker,” as far as the general public is concerned) is a big part of how they do it. And when the war of language and public opinion isn’t enough, they can unleash their institutional power to crush those who pose a real threat to the system.

    This, it seems, is where Aaron Swartz ran into trouble. He was too creatively disruptive for his own good. A lot of people see him as a hero for the work he did against SOPA, but he made a lot of powerful enemies in the process (as well as with the PACER incident, even before JSTOR).

    In a lot of ways, unfortunately, his situation proved his larger point: freedom of information and knowledge is the only way to empower citizens to fight against actors and institutions who seek to limit their access, and it’s the one thing those actors and institutions really fear.

    At the same time, information as power means that, if knowledge is freely available, power will also fall into the hands of people with nefarious goals. Of course, there are certainly malicious hackers, and sociopathic trolls who don’t care who they harm or what damage they leave in the wake of their lulz. This can make it even harder to figure out who the good guys are. And even good guys can make mistakes, or push too far.

    The only defense we have is to become knowledgeable about the subject, and knowledge, in turn, brings responsibility. This includes the responsibility of demanding that our lawmakers and law enforcers understand the online environment. It’s easy to feel as though these topics are far away from us personally, that we don’t really have an individual stake in them, especially for those of us who aren’t technologically inclined. But we who are privileged enough to be educated and who work in media-related careers should understand clearly that our current laws actively prevent much of our collective knowledge from being shared with the public, and there are immense pressures on legislators to restrict access even more. I think that being a hacktivist, or even just a regular activist, is an ethical imperative for those who really understand the issues at stake. If we create a caste system of access to knowledge–or harden the existing barriers even further–then we’ll just increase the divide between haves and have-nots in terms of political, economic, and cultural power.


    I also started to write some thoughts about the world of academic publishing, in case we went down that road, but since those aren’t written in complete sentences (and we didn’t address the topic in the panel discussion) I’m going to leave those out for now. It’s definitely a related subject, though, insofar as it’s another case of entrenched knowledge systems in which many actors have vested economic and cultural stakes, which finds itself at odds with a fast-changing world of rapid reorganization and an increasing demand for open access to knowledge.