In the early 1990s, Daniel Bernstein, a Berkeley mathematics PhD student, wanted to publish the source code for an encryption algorithm he had written, along with an accompanying mathematics paper. In the age of GitHub, such an event would go largely unnoticed, particularly if the author were a student; students usually struggle to get their professors to read their work, let alone anyone else. But in the 1990s, this was groundbreaking. Until what became known as Bernstein v. Department of Justice, the US government designated encryption as a ‘munition,’ classifying it alongside a range of deadly weapons and thus making it subject to export restrictions.
The Electronic Frontier Foundation (EFF) disagreed with this designation and sued the US government on behalf of Bernstein. During the course of the case, Judge Marilyn Hall Patel ruled that:
This court can find no meaningful difference between computer language, particularly high-level languages as defined above, and German or French… Like music and mathematical equations, computer language is just that, language, and it communicates information either to a computer or to those who can read it… Thus, even if Snuffle source code, which is easily compiled into object code for the computer to read and easily used for encryption, is essentially functional, that does not remove it from the realm of speech… For the purposes of First Amendment analysis, this court finds that source code is speech.
On a first reading, this is not only an important ruling but a critical step in the development of the very algorithms that can keep our data relatively safe and offer us some privacy protection. On closer reading, however, it raises challenges that need addressing if we are to find meaningful ways to protect our data privacy in the future. As always, I begin this process of understanding by re-framing the question, as failings are seldom the result of giving the wrong answer, but of giving the right answer to the wrong question. In this case: do questions about code really resolve themselves in the form of ‘is it speech?’
I would argue such questions are in error. In the first place, framing this as an issue of freedom of speech is problematic because, in many cases, laws which ‘protect’ freedom of speech don’t actually speak to our right to express ourselves; rather, they circumscribe other people’s, or more usually our government’s, ability to violate or restrict that freedom. This may seem like semantics, but it is the heart of the issue, because it shines a light on the tangible things that are happening (censorship, prohibition) rather than on an abstraction: is it speech?
The other issue with protecting things by categorising them as speech is that many things which ought not to be protected are also forms of speech: sexual harassment, incitement to violence. The crucial task is not levering ever more abstract things into the notion of speech, but ensuring that freedom of expression and freedom of dissent remain permissible while safeguarding the individual’s privacy and security.
In this context, the right question is whether government regulation of code ultimately threatens freedom of expression: for example, by restricting the use of the algorithms on which services such as internet banking and messaging apps rely.
The benefit of such an approach for privacy is that it would move away from the existing model, which invariably frees companies from their obligations under the spurious argument that to regulate their practices is to curtail freedom of speech. A point to remember the next time your private moments are leveraged to improve a marketing algorithm: the unintended consequence of ‘code is speech.’