The New Cathedral Door

In taking up the keyboard again, having finished 100DaysToOffload last year, I have been digging through my thoughts folder (if you will permit the reification) and came across some musings from last year, when I was listening to the Big Brother Watch podcast. One episode in particular, Social Media Censorship and the Impact on Free Speech, presented some chilling changes which took place as our first spate of lockdowns began and the Covid outbreak grew into a global pandemic.

On April 22, 2020, Twitter announced it was broadening its guidance on unverified claims about Covid-19.

They defended the decision with some very plausible examples of why the policy had been expanded. For example:

The National Guard just announced that no more shipments of food will be arriving for two months — run to the grocery store ASAP and buy everything

or

5G causes coronavirus — go destroy the cell towers in your neighborhood!

Dangerous and True

However, to this writer these examples recall Justice Oliver Wendell Holmes, Jr.'s opinion in the United States Supreme Court case Schenck v. United States. In that opinion he held that:

The most stringent protection of free speech would not protect a man falsely shouting fire in a theatre and causing a panic… The question in every case is whether the words used are used in such circumstances and are of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent. It is a question of proximity and degree.

The Supreme Court Justice was seeking to draw a line between speech which is dangerous and false and speech which is dangerous and true. The notion is that speech, even if dangerous, should be protected if true, but that speech which is false should not be protected by the right to free speech.

While this would seem to support Twitter's, and others', censorship choices, it differs in one important way. In a formal court ruling, the ancient concept of auctoritas patrum [authority of the fathers] is in play: a notion which carries with it something that falls short of a command yet is much more than mere advice; advice which should not be ignored, owing to its authoritative nature. This goes to the heart of online censorship and raises two critical questions:

  1. Is it censoring that which is dangerous and false or that which is dangerous and true?
  2. Is the censorship decision following a deliberative process and arriving at an authoritative decision?

At the time of the Twitter policy change, several commentators raised concerns about both the transparency of big tech's censorship processes and the arbitrary way in which their rules can be enforced. As judge, jury and executioner on their own platforms, with little or no authoritative or transparent grounding for their rulings, and certainly no recourse of appeal to a higher and independent court, the process is more akin to authoritarian decree than democratic deliberation.

Minding the Door

Fast-forward ten months and the aftershocks of allowing a company to determine what is and is not true should create a sense of unease no matter which side of the debate you are on.

This is because of what the internet is rather than what it seems. It seems to be a near-unlimited tangle of free speech in which people say what they want, when they want and how they want. The reality, even in 'free' and easygoing western democracies, is that the net has come increasingly under the control of a very small number of providers. Even among those with the technical knowledge and budget to live the POSSE dream, most are in one way or another bound to services such as AWS, Azure and app stores such as Google Play: unelected and absolute arbiters of who will be heard and who will have their service terminated.

While many will have cheered the blackout of Parler, many of those same voices have flocked to services such as Element in an attempt to keep their conversations under their control, safe from data-mining and ads. Yet for all its noble ideals, Element too found itself on the receiving end of an arbitrary decision to ban its Android client from the Play Store. Although the Element ban was swiftly reversed, it highlighted the fundamental problem: that the modern-day cathedral door remains as tightly controlled by a minority as it was in 1517.

French Finance Minister Bruno Le Maire put it well when observing:

What shocks me is that Twitter is the one to close his [Donald Trump’s] account. The regulation of the digital world cannot be done by the digital oligarchy.

Source: France Inter

Cédric O drew a similar conclusion:

Translation: The closure of Donald Trump’s account by @Twitter, while it can be justified as some form of emergency protection, still raises fundamental questions. The regulation of public debate by the main social networks with regard to their T & Cs alone …

I am certainly not the first to recognize this growing problem, as the quotes above show; nor am I immediately offering solutions. A recent article by ProtonMail, which rekindled my long-dormant thinking on this from a year ago, offered the following solutions:

  1. While it is not perfect, we support the European Union's proposal, the Digital Markets Act. If vigorously and swiftly enforced, it could effectively curb Google's and the other monopolists' anticompetitive behavior. If you live in Europe, write or call your MEP and tell them you are in favor of a strong DMA.
  2. And we are encouraged by the dozens of US state attorneys general who have filed suit against Google, Facebook, and other Big Tech companies for antitrust violations. If you live in the United States, write or call your elected representatives and tell them you are in favor of strong antitrust investigations into Big Tech.
  3. Finally, you can choose to do business with companies and organizations that respect your privacy and freedom.

Source: ProtonMail Blog

Safeguarding the Dangerous

While I would certainly advocate the third solution, the first two bring us full circle to the line between speech which is dangerous and false and speech which is dangerous and true; more specifically, to who makes the judgement call and what process is used to arrive at that decision. To replace a Facebook, Twitter or Amazon with another entity is, to use a geo-political metaphor, merely to change the flag while maintaining the same living conditions within the country.

However, while the democratic process may be an imperfect one, particularly in the age of cancel culture, where vocal minorities (I use this term in the sense of a small number) frequently seek to ban speech contrary to their beliefs, it is preferable to the arbitrary rule of a corporation. Yet to latch on to legislation and the breaking up of monopolies as the best solution will fall short of the desired outcome in the long term. History teaches as much if we look at the breakup of past monopolies, such as Standard Oil. One hundred years on, one would be hard pressed to make the case that control of the energy supply has been liberated from minority control or the arbitrary will of corporations.

Yet hope is not lost. Light can always be found, even in the darkest of times, if one only remembers to turn on the light. Thus it is seldom about who controls, how few control, or even whether the process is legislated. Rather, it is about the first principles on which our software and hardware infrastructure is managed.

Does it truly safeguard that which is dangerous and true while restricting that which is dangerous and false, and does it arrive at this determination through a process of deliberation? Or does it merely replace one arbitrary determination about that which is dangerous with another?

Goodnight and good luck.


Hercules’ fight with the Nemean lion by Peter Paul Rubens (1577-1640) is licensed under Public Domain.