
Trump Blocking and the Responsibility for Online Harms

In a turn of events that seemed to horrify everyone yet surprise no one, a mob of Trump supporters stormed and breached the US Capitol on 6 January 2021 in protest against Joe Biden’s presidential victory. This article examines what followed, namely the wide-reaching block placed on Trump by Big Tech and its implications in the broader context of online harms.


Trump, Big Tech and Online Harms

Following the storming of the Capitol, major social media platforms such as Twitter, Facebook and Snapchat blocked the then-president’s accounts. This was a welcome response from Big Tech, given their usual claims of neutrality. Many were happy to see the intermediaries finally taking a stance on the incitement of violence, particularly by such an authoritative figure.


However, it can be argued that these actions were ‘too little, too late’. As Catherine Bennet put it, ‘it only took four-plus years of incendiary tweets, racist statements and violent threats’ for platforms to finally react (Bennet, 2021). It is difficult to claim that the violence following the election results was unexpected. It had been simmering throughout the 2020 election, encouraged by the spread of conspiracy theories and misinformation via social media platforms, until it reached boiling point on 6 January 2021. Had digital intermediaries combated the spread of fake news more actively earlier in the year, the US Capitol riot might have been prevented. Arguably, Big Tech’s reactive involvement is no different from inaction.


Understanding Big Tech’s role in preventing online harms is significant because there are countless instances of harm being propagated online. For all the opportunities the digital world provides, it carries just as many risks. The Covid-19 pandemic has only confirmed this, with Lord David Puttnam describing a ‘“pandemic of misinformation” that poses an existential threat to our democracy and way of life’ (Dubbins, 2021). Platforms such as Facebook and YouTube acted as breeding grounds for Covid-19 and vaccine sceptics, who not only shared disinformation online but also acted on their distrust publicly by refusing to comply with lockdown restrictions or wear masks. The platforms are also a prime location for those with violent tendencies – in 2020 alone, thousands of extremist groups used Facebook to organise anti-democratic armed actions, including a shooting in Wisconsin that left two dead and the plot to kidnap Michigan governor Gretchen Whitmer (Paul, 2021). Similarly, YouTube has proven slow to take down videos promoting outrageous conspiracy theories such as QAnon, with its algorithms at times increasing their popularity instead.


Overall, the level of online harm is worryingly high. Survey data show that across both adults and 12 to 15-year-olds, social media platforms are the most commonly cited sources of potential online harm (Ofcom, 2020). Some 61% of adults and 79% of teen internet users report having had at least one potentially harmful experience in the past twelve months (Ofcom, 2019). The harms reported stretch far wider than misinformation, including fraud, bullying, child sexual abuse material, material promoting self-harm, hate speech and violent or disturbing content (Ofcom, 2019). It therefore comes as no surprise that online violence has found an outlet in the real world.


The role of digital intermediaries

So, who should we look to for a response to online harms? Before jumping to any conclusions, it is important to understand the role social media platforms and other digital intermediaries play in the overall media environment.


The term ‘digital intermediary’ covers a broad range of services that provide the infrastructure for communications, including broadband providers, search engines, social networks and other publishing platforms (Rowbottom, 2018). Initially, such platforms were viewed merely as technology companies, as exemplified by the enactment of a range of safe harbours and exemptions for digital intermediaries in the 1990s to promote the emerging internet market (Frosio, 2017). The digital world has developed considerably since then. The growth of large social media platforms and their advances in surveillance, encryption and obfuscation lead us to question whether such a lenient approach towards digital intermediaries remains appropriate.


The general consensus appears to be that things have changed and that digital intermediaries should be viewed as important media actors in their own right. As Rowbottom puts it, such services are now ‘a sector of the media that complements the traditional media institutions’ (Rowbottom, 2018). They not only provide a platform for speakers but also play a role in curating online content, making them crucial to the functioning of media in the digital era. As their significance in the media industry grows, it seems logical that their responsibilities should grow proportionately.


Yet that is not the case – many intermediaries continue to behave as though they merely provide technology, and thus claim that they should not be held accountable for any harms taking place on their platforms. In what has been characterised as the ‘techlash’, digital intermediaries are increasingly criticised for falling short of the standards one could reasonably expect of them (Sithigh, 2019). The European Commission has emphasised that ‘the open digital spaces [online platforms] provide must not become breeding grounds for spaces that escape the rule of law’ (EC, 2016). Since digital intermediaries increasingly take centre stage in providing access to information and content, more focus should be placed on the responsibilities that come with such a role. As Lynskey puts it, platforms are gatekeepers which ‘control what content we access and the terms on which this content can be accessed’, while individuals ‘lack the knowledge and power to have a disciplining influence’ because they do not know how this control is exercised (Lynskey, 2017).


Should intermediaries take responsibility?


Despite calls for platforms to recognise their role, Big Tech has continued to cling to its supposed neutrality. Not only do the platforms argue against being held responsible for the content published on their services, they also endorse free-speech absolutism. On this view, any attempt at regulation or moderation on their part amounts to potential censorship of users, which they fundamentally oppose.


This is problematic in two ways. Firstly, although it is users who publish content, this does not necessarily render the platforms neutral. The argument is increasingly made that service providers’ design choices are motivated by ‘the business interest in maintaining user engagement without regard to any possible side effect’ (Woods, 2019). Similarly, Lessig argues that while the law, the market and social norms also constrain behaviour, it is ultimately code – the architecture of cyberspace – that shapes what people do online (Lessig, 1999). For example, it has been noted that people click on one of the first three results displayed in an online search more than 70% of the time (Costa and Halpern, 2019). Moreover, disinhibited behaviour is observed more often in online decision-making (ranging from purchases to bullying), and platforms take advantage of this through subtle design cues. This suggests that the fight against online harms should not be restricted to users alone. Digital intermediaries need to shoulder responsibility for how the design of their platforms may increase the risk of harm.


Secondly, it may be time for ‘a move to take place towards an understanding of speech moderation as a matter of public health’, as pointed out by Professor Ethan Zuckerman of the University of Massachusetts Amherst (Ortutay, 2021). It is not disputed that freedom of speech is a fundamental human right, worthy of the extensive protection it receives worldwide. Yet the absolutist US approach may not be best suited to the new role played by digital intermediaries. Other jurisdictions, such as the UK, have been willing to recognise that not all speech is equal. This is particularly well explained by Lady Hale in Campbell, where she placed different forms of speech along a spectrum of protection – there is clearly a difference between the level of protection warranted by political speech and that warranted by hate speech (Campbell v MGN Ltd [2004]). We should therefore seek to strike a balance in the digital world between censorship on the one hand and the abuse of our rights to the detriment of others on the other.


Digital intermediaries cannot remain complacent in the face of online harms. Some action has been taken, such as YouTube’s crackdown on QAnon content and Facebook’s removal of hate groups inciting violence on its platform, but these reactive responses are not enough, since the harm has already been done. Big Tech needs to cooperate with governments to develop the most effective form of regulation, steering clear of censorship while ensuring some moderation of both content and algorithms. As Nash cautions, it would be naïve to think that a uniform regulatory approach could be designed that is effective and proportionate in tackling every problem to which social media platforms contribute (Nash, 2019). But we should at least begin addressing inherently problematic behaviours on the part of both designers and consumers. Big Tech has to stop acting as though the responsibility lies solely on the shoulders of states.


What happens next?


With social media platforms taking such a drastic step against Trump, what happens next? Will the block lead Big Tech to hold other world leaders to a similar standard? After all, it is not only this former US president who has a controversial online presence. Or will the effects be even more widespread, with the intermediaries engaging more actively with the harms taking place on their platforms? Certainly, the banning of Trump’s accounts represents a move away from the social media giants’ claims of complete neutrality and some evidence of active engagement with online harms. However, we should bear in mind that action was only taken against Trump once the companies knew he no longer presented a threat. It remains unclear what path will be taken to regulate online harms. We can only hope that more responsibility will be shouldered by those in a position to make a change.


A piece by Julia Laganowska



Bibliography


Bennet C (2021). After cutting Trump off, Big Tech charts new course in Washington. France24 [online] Available at: https://www.france24.com/en/americas/20210110-after-cutting-off-trump-big-tech-charts-new-course-in-washington [accessed 22 Jan 2021]

Campbell v MGN Ltd [2004] UKHL 22

Costa E and Halpern D (2019). The Behavioural Science of Online Harm and Manipulation, and What to Do about It: An Exploratory Paper to Spark Ideas and Debate. Behavioural Insights Team. Available at: https://www.bi.team/wp-content/uploads/2019/04/BIT_The-behavioural-science-of-online-harm-and-manipulation-and-what-to-do-about-it_Single.pdf

Dubbins J (2021). The Capitol-coup shows online harms are now real-world harms – are your ads funding them? The Drum [online] Available at: https://www.thedrum.com/opinion/2021/01/07/the-capitol-coup-shows-online-harms-are-now-real-world-harms-are-your-ads-funding [accessed 21 Jan 2021]


European Commission (2016). Online Platforms and the Digital Single Market: Opportunities and Challenges for Europe. COM (2016)


Frosio GF (2017). Reforming Intermediary Liability in the Platform Economy: A European Digital Single Market Strategy. Northwestern University Law Review 112, 19


Lessig L (1999). The Law of the Horse: What Cyberlaw Might Teach. 113 Harvard Law Review 501


Lynskey O (2017). Regulating ‘Platform Power’. LSE Law, Society and Economy Working Papers 1/2017


Nash V (2019). Revise and Resubmit? Reviewing the 2019 Online Harms White Paper. Journal of Media Law 11 (1) 18-27


Ofcom (2019). Internet users’ concerns about and experience of potential online harms. Jigsaw research May 2019. Available at: https://www.ofcom.org.uk/__data/assets/pdf_file/0028/149068/online-harms-chart-pack.pdf


Ofcom (2020). Internet users’ experience of potential online harms: summary of survey research. Jigsaw research Jan/Feb 2020. Available at: https://www.ofcom.org.uk/__data/assets/pdf_file/0025/196414/concerns-and-experiences-online-harms-2020-chart-pack-accessible.pdf


Ortutay B (2021). Tech giants banished Trump. Now things get complicated. Japan Today [online] Available at: https://japantoday.com/category/features/opinions/tech-giants-banished-trump.-now-things-get-complicated [accessed 23 Jan 2021]


Paul K (2021). Twitter and Facebook lock Donald Trump’s accounts after video address. The Guardian [online] Available at: https://www.theguardian.com/us-news/2021/jan/06/facebook-twitter-youtube-trump-video-supporters-capitol [accessed 22 Jan 2021]


Rowbottom J (2018). Media Law. Hart Publishing.


Sithigh DM (2019). The road to responsibilities: new attitudes towards Internet intermediaries. Information & Communications Technology Law 29(1), 1-21


Woods L (2019). The duty of care in the Online Harms White Paper. Journal of Media Law 11 (1) 6-17
