The GDHRNet Working Paper Series
The Working Papers of the Global Digital Human Rights Network contain both an idealistic and a practice-oriented dimension. It is often difficult but always unavoidable for academia to reach out to the “real world”. Scholars working with digital human rights have for some time realized that in the digital domain of human rights, theory matters less and technical solutions matter more. The Working Paper series, again idealistically, attempts to reverse this pattern. How pragmatic this goal proves to be depends on the Network’s capability to break, or at least question, the strengthening grip of online companies as powerful actors in defining the image of human rights in the digital landscape.
The current inaugural edition clearly shows how turbulent times accelerate the solidification of the novel “digital paradigm” in human rights protection. What during ordinary times could have taken decades may emerge as an important trend within a brief period due to the pandemic crisis. This means the “normalization” of features which were previously considered contestable. For example, the absence of transparency and foreseeability as inherent characteristics of private content governance has long been tacitly accepted because the focus shifted from the content assessment process to its outcome. But during difficult times, people expect answers and justification for decisions that impact how they can communicate.
The pandemic crisis has turned private platforms into even more powerful actors that largely set speech standards freely. Operating models of content governance are sometimes at odds with human rights principles developed by courts in and for the offline domain. This study saliently shows how the increased role of private platforms in crisis communication translates into daily decisions on what to delete and what not to delete. In doing so, platforms have become essential communicative actors in pandemic times. But do they enjoy the same level of legitimacy? How have they acted in different countries? What is their relation to states? Edited by Matthias C. Kettemann and Martin Fertmann at the Leibniz Institute for Media Research and based on submissions by COST Action participants from 20 countries, this first paper sets out to find answers. Many more will follow throughout the lifetime of the COST Action.
Prof. Dr. Mart Susi
Chair of the COST Action Global Digital Human Rights Network
Selected Trends in Covid-19-Related (Dis)Information Governance on Platforms
While the Covid-19 pandemic has forced schools, many jobs and most social interactions to go online, the transformative power of online communication is not a new phenomenon. Already in 2015, the European Court of Human Rights (ECtHR) noted that the Internet “has now become one of the principal means by which individuals exercise their right to freedom to receive and impart information and ideas, providing [...] essential tools for participation in activities and discussions concerning political issues and issues of general interest.” A lot of this communication takes place in online settings that are ruled and regulated by private companies.
These rules are increasingly sophisticated but continue to be widely criticized for the opacity of their development, the arbitrariness of their application, and the implausibility of their execution. The general commitment of private actors to the Ruggie Principles as a “social license to operate” often ends when economic questions become dominant. It is therefore essential to underline states’ primary obligation to respect, protect and ensure human rights on and vis-à-vis private platforms, coupled with a secondary obligation of companies to apply local law in light of international human rights standards. As one of us put it:
“States have a duty to protect their citizens with regard to the internet (and regarding their online activities, including the exercise of freedom of expression). Companies, too, have a corporate social responsibility to respect human rights within their sphere of influence, which – on the internet – is growing rapidly as the majority of relevant communicative acts take place in private spaces. The special role of intermediaries is another challenge for regulating the internet. As the majority of online spaces lie in private hands, it is private law that prima facie frames many norm conflicts online. When states react belatedly through laws or judgments, these may lead to overblocking or legal conflicts between competing jurisdictions. This is why states, offline just as online, have both the negative obligation to refrain from violating the right to freedom of expression and other human rights in the digital environment, and the positive obligation to protect human rights and create an enabling and safe environment for everyone. Due to the horizontal effects of human rights, the positive obligation to protect includes a duty for states to protect individuals from the actions of private parties by making intermediaries comply with relevant legal and regulatory frameworks.”
This duty to protect becomes especially important in pandemic times. The possible effects of health-related disinformation in a global health emergency, but also the effects of measures taken to tackle such disinformation are raising the stakes in the ongoing, and largely open-ended, discussion. How and by whom should rules for online communication be formulated and enforced?
How can we as researchers navigate the plethora of platforms, governance approaches, disinformation narratives and the respective societal contexts in such a volatile situation? We are convinced that this can only be achieved through collective scientific action. This study may therefore function as a proof of concept.
This study explores the spread of disinformation relating to the Covid-19 pandemic on the Internet, dubbed by some as the pandemic’s accompanying “infodemic,” and the societal reactions to this development across different countries and platforms. Its focus is on the role of states and platforms in combatting online disinformation.
Through synthesizing answers to twelve questions submitted by more than 40 researchers about 20 countries within the GDHR Network, this exploratory study provides a first overview of how states and platforms have dealt with Corona-related disinformation. This can also provide incentives for further rigorous studies of disinformation governance standards and their impact across different socio-cultural environments.
The individual submissions within this study are not intended to function as stand-alone, comprehensive assessments of the respective country. Rather, they function as pixels that collectively constitute a picture of the Covid-19 disinformation landscape.
The initial situation in terms of the popularity of digital platforms in the surveyed countries offers a largely homogenous picture: in almost all responses, Facebook and YouTube belong to the top five, accompanied mostly by Instagram and Twitter and sometimes by Pinterest and LinkedIn. The search engine Google and the messaging service WhatsApp are mentioned less often, but where they are, they rank first or second on the list of platforms. This finding most likely hints at different definitions of the term “platform,” which in some instances appears to refer to social media services, while in other cases content intermediaries like Google and messaging services like WhatsApp are included as well. For presumably similar reasons, popular national and regional news sites appear on some countries’ top lists as popular websites and “platforms for content.”
Said differentiation between (social media) platforms and messaging services appears to play an even more important role with regard to the spread of Corona-related (dis)information. While Facebook and, to a slightly lesser extent, Twitter, Instagram and other popular platforms are nearly always mentioned as a spreading medium, some replies explicitly point towards an increasing importance of messaging apps in circulating Covid-related disinformation. One report explicitly mentions the increasing practice of “chain-messaging via Viber and WhatsApp platforms, with disinformation about various aspects of the pandemic.” Adding to this, a reference from Israel argues that in the country “WhatsApp's groups are more dangerous in this time than public platforms such as Twitter, [as] the spreader identity provides credibility to the message delivered.” This is in line with a (non-representative) inquiry from Germany, arguing that disinformation is published on content platforms like YouTube and spread via messaging services or social networks like WhatsApp and Facebook.
In terms of distribution channels, several submissions have also shown a shift from social networks towards – in terms of measures against disinformation – more lenient, if not indifferent, messaging services like Telegram and Viber. It could be argued that groups/channels on such messaging platforms are on the rise internationally as adversarial (and non-observable) public fora. This international dimension also includes the transnational spread of certain disinformation, especially due to the common Russian language proficiency in Eastern Europe, as the example of Latvia shows: its submission details that “several fake news have also been distributed in Russian or have been translated from this language”.
The reported counter-measures against such disinformation generally belong to one of two categories: the first one concerns the platforms’ own efforts to counter disinformation, e.g. labels for potentially harmful, misleading information on Twitter; Covid-19-related content moderation rules on YouTube; a WHO chat bot on WhatsApp; and increased content moderation in cooperation with third-party fact-checkers on Facebook. Those measures, however, are not country-specific and are applied (apparently step by step) without significant national differences – if differences in language and cooperating fact-checking organizations are set aside. Of all surveyed countries, only South Africa appears to be an exception here, as “misinformation is removed in response to public outrage or the possibility of criminal prosecution rather than any measures imposed by the social media platforms themselves.”
Besides the platforms’ own efforts, there have been notable examples of external initiatives to counter disinformation on platforms. One is a service to support journalists in verifying social network content: the platform called “Truly Media” was developed already before the pandemic, but has recently gained additional traction in Cyprus. Another platform-external effort is a new bottom-up initiative to counter disinformation online: a regional central/south-eastern European consortium of fact-checking portals created its own public Viber Channel titled (translation) “Covid-19 Checked” – thereby entering the area of direct messaging to counter disinformation. Apart from such single cases, governments and traditional news media have been named by many submissions as providers of specific initiatives against disinformation outside the platforms, but sometimes also as “trusted sources” in cooperation with social networks.
In general, governments of all surveyed countries have urged the public to consume information about Covid-19 in a responsible manner and warned against the dangers of disinformation in social networks. While in most countries the administrations’ argument refers to the danger for individual and public health, some warned against a threat to public order as such. Beyond the governments’ usual public relations outreach, several submissions describe the establishment of new governmental institutions to counter online disinformation during the pandemic, e.g. an anti-fake-news task-force in Italy or a special team at the Belgian police to search specifically for Covid-19-related disinformation.
When it comes to public comments on the topic by politicians and other officials, the majority of reported statements include the importance of a careful handling of Covid-19-related information. However, they do not elaborate on potential sources for the disinformation. If those are mentioned, the diagnosis is inconsistent: while some argue there was no “organized disinformation, rather more emotion-driven circulation of false information,” in other cases the involvement of foreign actors is at least implied. Similarly inconsistent is their attribution of responsibility for a proper handling of disinformation. On the one hand, politicians “expect social networks to live up to their responsibility,” on the other hand, they emphasize that “the main emphasis is precisely on individual responsibility and not one of the platforms.”
Beyond abstract mentions of Covid-19-related disinformation, some officials explicitly responded to specific content, for example the conspiracy theory that Covid-19 was caused or spread by 5G mobile communication technology or that potential vaccinations were intended to implant subdermal microchips. These and similar conspiracy theories occur in many of the surveyed countries, often including certain “perpetrators” or “scapegoats” on which the pandemic is blamed. Most common in this regard is the idea that Bill Gates was “behind” Covid-19. Likewise prevalent is the notion that the virus was a biological weapon developed by China or the US.
In addition to such theories of the origin of Covid-19, the pandemic is used to foster existing prejudices against minorities and already vulnerable societal groups – including Jews, Asians, and migrants. Equally notable is the reported prejudice mentioned in the reports from Lithuania and Serbia that Covid-19 was spread by NATO troops stationed in the respective country.
Court decisions on the removal of Covid-19-related content were not available to all researchers. This might also be due to the impact the pandemic had on court proceedings. However, information concerning criminal proceedings against users was available in multiple countries. In that regard, contributions reflect different national approaches to governing speech online: while there is a group of countries in which individual users have been prosecuted for spreading “fake news” as a misdemeanor under the respective national criminal law, reported criminal proceedings in other countries are limited to violations of social distancing rules offline, documentations of such violations in content posted online or cases of online-incitement to commit offline-violations (such as to attend prohibited protests).
Overall, it seems the pandemic has not (yet) reframed the way private actors are conceived of as potential enforcers of public rules. To be sure, the enforcement of mask-wearing obligations by railway companies was debated in three countries, and shopkeepers and educational institutions were widely obliged to ensure mask-wearing and social distancing within their respective spaces. However, as for the broader discussion on private enforcement of public rules, participating countries are both individually and collectively far from a consensus on whether private actors, be they railway companies, shopkeepers or online platforms, should or should not “play policeman”.
Across all submitting countries, online platforms have been used to disseminate governmental or municipal restrictions and suggestions pertaining to Covid-19, underlining the importance platforms such as Facebook or Twitter have for communicating governmental information.
The use of these platforms seems to focus on spreading easy-to-access overviews of rules and suggestions, likely in response to the complexity and volatility of infection prevention rules and suggestions: in the pandemic, platforms seem to become increasingly crucial spaces to receive information about changes to (infection prevention) rules and, perhaps even more importantly, to receive information that helps to make sense of the fast-changing letter of the law. In this respect, governmental entities have in two submitting countries formed new alliances with social media influencers as a means to convey accessible Covid-19-related information.
According to the submitting researchers’ individual, qualitative assessments, the role of platforms in dealing with Covid-19-related discourses/disinformation has not (yet) significantly impacted the way these platforms are considered. Although some submissions point to an increasing public awareness regarding issues such as Covid-19 disinformation, most of those assert, in one form or another, that “it does not appear (for now) that (…) Covid-19-related disinformation has impacted public opinion regarding the role of platforms (...)”, assessing that any current debate’s “focus is usually on specific instances of moderation decisions, (…) and a wider or a more systematic reconsideration of the role of platforms is lacking.”
Assessments regarding the question whether platforms have dealt with Covid-19-related discourses and disinformation sensibly vary: some submissions assert that platforms have done rather well in striking the necessary balance between respecting freedom of expression and necessary intervention, while other researchers fear that the platforms’ removal of Covid-19-related content might be a gateway to overly invasive content moderation practices in general.
The importance attached to the problems of private content moderation seems to vary with differences in the perceived reliability or trustworthiness of other, official information about the spread of Covid-19: where official information on Covid-19 is scarce, addressing this scarcity is considered a more pressing step (than private intervention) towards limiting the spread of disinformation. Moreover, contributions underscore the need for further research into the pandemic’s effect on private content moderation practices. Overall, responses to the question whether platforms dealt with the issue sensibly were positive. In most states, platforms succeeded in providing access to authoritative information on the pandemic. Covid-19-related moderation in some cases even led to positive spillover effects on moderation practices regarding hate speech. The interplay of information provided by states, traditional media outlets and platforms is explicitly mentioned as fruitful in combating disinformation in some cases.
Some submissions, however, point out inadequacies in the moderation of disinformation on platforms. This relates to inadequate expertise and insufficient staffing, lack of effort, lack of a country-tailored approach, missing interlinkage with reliable official sources, and unclear duties of platforms. The main challenges identified in some countries relate to ensuring the authenticity of information on platforms, the misuse of disinformation on the pandemic as a tool for party politics, and a lack of private-public cooperation in combating disinformation, for example when content flagged as disinformation by officials was removed by platforms in only 50% of the cases. The Portuguese submission suggests, instead of removal, a real-time fact-checking system that uses a color scheme to classify information – “green for OK, yellow for unchecked, and red for confirmed ‘fake news’”. In addition, the interests behind content should be made transparent, e.g., on the funding of the respective sites. A challenge to such a system could be, as pointed out in another submission, conflicting statements from experts and investigative journalists and inadequacies of the official information system.
There are some common denominators to be identified in the recommendations on the roles of state authorities, companies/platforms, and civil society. One of these denominators is the need for more efforts regarding active information and transparency: states should act more transparently themselves regarding their emergency measures and the reasons for them, and communicate in an accurate, timely and responsible manner. To be able to do so in a credible manner, the quality of the underlying emergency legislation is important, as contradictory norms result in contradictory governmental communication. Another common denominator is a call for active cooperation of states, platforms and civil society. One concrete recommendation in this regard is the establishment of contact points in every country that coordinate cooperation. Critical information on (seemingly) divisive topics, such as purchasing agreements for vaccines in the present climate, should be communicated particularly transparently by governments.
States should actively use platforms in their efforts, according to some of the recommendations. One submission calls for positive incentives provided by the state for platforms to prioritize “truth” instead of profits. Others point towards a need for restraint from the state when trying to legislate against the spread of disinformation in order not to harm freedom of expression. When considering legislative action, some contributions call for specific national regulation of social media platforms, while the contribution from Germany, where such regulation is already in place, underlines the need to assess the possible adverse impacts of this legislation (NetzDG) during the pandemic.
Researchers recommend that platforms continue their efforts in enhancing access to reliable information as well as in removing disinformation. A number of contributions underline the need for platforms to communicate more transparently the extent to which Covid-19-related disinformation is removed. Recommended new models include the establishment of co-regulatory measures on a country-by-country basis. This would represent a paradigm shift, considering the fact that platforms have, so far, succeeded in ensuring the opposite: the submissions show that the nationalization of platforms’ responses to Covid-19-related disinformation is limited to fairly narrow, globally rolled-out “docking sites” for national authorities within their platforms (e.g. featured spaces for health authorities or access to chatbot-channels).
There is a broad consensus within the submissions that platforms should actively and transparently check content to prevent disinformation and provide access to reliable information. To do so effectively, they should not merely rely on algorithms, but use sufficient human moderators and provide adequate funding. One part of the recommendations explicitly calls for self-regulation of platforms in this regard. Platforms should moderate bearing in mind their users’ right to freedom of expression and avoid the impression of censorship. There is no clear preference for either deletion or flagging of content conveying disinformation. Algorithmic content classification should be further developed to be able to take context into account.
Public/private collaborations for spreading official information are contextualized in significantly different ways across submissions. While the Finnish submission evaluates the national authority’s approach of collaborating with influencers as multipliers for reliable information as an efficient way to combat disinformation, the German submission focuses on the dangers of misusing platforms’ power to magnify governmental information. It explicitly calls for limiting such governmental use of platforms to the ongoing health crisis. It can be concluded that this should hold true also for other, comparable crises.
The (present and future) role of civil society is mainly portrayed as that of a provider of (social) media literacy, a multiplier of reliable information, a fact-checker, a flagger of disinformation, and a watchdog keeping platforms and governments in check.
Concluding from the above observations and recommendations, the present health and information crisis has led to broad common understandings in many aspects. Lawmaking, political communication, the creation of information and the power structures behind it, and the moderation of content ought to be more transparent – with or without a crisis. However, the statements we analyzed in this study also highlight some of the disputed territories of (social) media regulation in Europe. The underlying questions about the existence of objectifiable truth (as opposed to “fake news”), the danger of opening Pandora’s box of governmental control over platforms as private entities used to disseminate this unclear “truth”, and the danger of encouraging overbroad content governance by private actors are among the most pressing of these questions. These potential negative impacts on human rights might also explain the preference of some commentators to remain within the boundaries of platform self-regulation and not to overstress states’ responsibilities. Some submissions, on the other hand, show an openness for more government intervention and regulation – a development that is well underway during the Covid-19 pandemic. The nature and exact scope of the related national norm-making, if implemented, together with the harmonization efforts at the EU level, will play a key role in the shaping of the post-pandemic information society.
Contributions by Question and Country
The survey preceding this study was completed by experts from 20 countries. Their responses to 12 questions, together with further material, are attached to this study in full.
Please go to the National Contributions tab to read all submissions.
Making Sense of Conspiracy Theories
The conspiracy theorist tends to adopt a stance of savvy, world-weary cynicism, always expecting the worst of officials and experts, all too ready to suspect anyone’s motives as corrupt. This default “hermeneutic of suspicion” has much in common with the politically progressive project of critique that also tries to delve beneath the confusion of surface detail to find the real sources of power that shape our societies. Indeed, many commentators have worried that precisely because critique has come to resemble conspiracy theory it has run out of steam. But at the same time the conspiracy theorist’s view of how history works is oddly naïve – gullible even. It can end up distracting us from a more convincing explanation of the world’s problems, and diverting political energies from actually doing something about them. Where those trained in social sciences see the complex interaction of social and economic forces, powerful institutions, ideological persuasion and conflicts of vested interests, the conspiracy theorist personifies those abstractions and focuses instead on a story of the intentional actions of a small, but hidden group of conspirators. For the social scientist, there is no need for a conspiracy theory to explain why, for example, the 1% succeed in shaping the world to their will. The elite as a social class with shared interests openly pursue their transparent goals of self-advancement, and it does not take a secret conspiracy of obscure plotters for them to be able to achieve this. In addition, experience suggests that what we’re witnessing with the pandemic is not the result of some four-dimensional chess (whether by Dominic Cummings or the Illuminati), but an omnishambles created by a government finding itself serially out of its depth, convinced of its superior wisdom and repeatedly resorting to cronyism.
Conspiracy theory, we might therefore say, functions as a form of pop sociology, with the crucial difference that (in Michael Butter’s terms) it engages either in deflection (it identifies the right issue, but blames the wrong people) or distortion (it latches onto the right group to blame, but for the wrong reasons). It is not surprising that the more that people feel powerless in the face of political, financial and technological vested interests, the more they turn to narratives involving powerful but shadowy agents behind the scenes pulling the strings. It might be scary and depressing to believe that there is a vast, evil conspiracy secretly controlling events, but that can be oddly comforting because it leaves open the possibility that the righteous might one day take hold of the levers of power themselves. There’s a New Yorker cartoon that sums up the position that a lot of us find ourselves in. We know that there is probably not a vast conspiracy that has made the world as fucked up as it is, but we can’t shake the nagging feeling that it sure looks as if someone planned it. The cartoon shows a lone guy protesting on the street with a placard that reads, “We are being CONTROLLED by random outcomes of a complex system.”
For some people conspiracy theories undoubtedly fulfil psychological needs, especially in times of crisis, conflict or rapid social change. The stories of how a particular individual came to embrace full-blown conspiracism are regularly fascinating and moving. Psychologists now tend to think that belief in conspiracy theories is not the product of abnormal psychology, but the result of cognitive biases that we all share to a greater or lesser extent, coupled with specific emotional and social needs. We are attracted to explanations that promise to make sense of the seeming randomness and complexity of current affairs; we like to feel that we are one of the clever few who have managed to see through the lies and manipulation; we are drawn to theories that make us feel not so powerless; and we reach out for compelling accounts of why our particular group or nation is being victimised. But these insights into the psychological mechanisms at work downplay other social and political reasons why sizeable numbers of people are attracted to conspiracy thinking in particular historical moments. People believe in conspiracy theories not (or, not merely) because they are misinformed or stupid or crazy or their brains are hard-wired to see patterns, but because conspiracy theories fulfil the need to find someone to blame for genuine problems in society. However, we also now need to be alert to the possibility that there are malicious groups (both foreign disinformation units, and domestic political groups and alt-right trolls) engaged in campaigns of so-called coordinated inauthentic behaviour on social media to promote conspiracy theories and other forms of “problematic information.” Often the motivation is not to champion one particular alternative view but to sow the seeds of doubt about all evidence, science and expertise. 
The aim of polluting the online information environment is to increase distrust, stoke resentment and destabilise society, and this might well be the most damaging effect of online conspiracism.
Likewise, we need to think about the financial incentives of the “conspiracy entrepreneurs” who make a healthy living from promoting conspiracy theories, along with their side-line in snake-oil cures (e.g. “Miracle Mineral Solution” and “Colloidal Silver”). Professional charlatans such as Alex Jones and David Icke make a living from peddling their speeches, books and other merch, and it was no surprise to find the latter veteran conspiracy-monger jumping on the bandwagon of the coronavirus pandemic with his ready-made conspiracy explanations that mix bizarre alien fantasies with all-too-familiar antisemitic myths. Finally, we need to be alert to the possibility that sometimes conspiracy theories are not the sincere expression of a deeply held belief, but are a pragmatic, tactical stance that people adopt to help bolster other positions they do genuinely believe. For example, research has shown that climate change conspiracy theories are often used strategically by those opposed to the political consequences of recognising climate change as real. If you are as a matter of ideological faith against government regulation of markets, then it’s politically convenient to claim that climate scientists are corrupt and it’s all a hoax.
The stereotypical picture of the conspiracy theorist is a socially awkward guy in his parents’ basement, a keyboard warrior wearing a tin foil hat. But research has shown that this clichéd portrait is not entirely accurate. In general, men are no more likely to believe in conspiracy theories than women, but it all depends on the particular example. Surveys show that most hard-core moon landing conspiracy theorists are men, for example, but anti-vaxxers are more likely to be women. In a similar fashion, there’s not that much difference in general between young and old, black and white, religious or not when it comes to conspiracy belief, but once again it depends on the particular case. The only significant difference comes with income and education: the richer and the better educated you are, the less likely you are to believe in conspiracy theories. In the case of coronavirus conspiracy theories, for example, a recent survey in the US found that 48% of those with only a high school level of education think it is probably or definitely true that powerful people intentionally planned the COVID-19 outbreak, whereas only 15% of those with a postgraduate degree think that is the case. The only other significant predictor is that if you believe in one conspiracy theory, you tend to believe in many – which makes sense, if you start from the conviction that everything is connected.
But what about political belief: are those on the right wing more likely to believe in conspiracy theories than those on the left? Again, it all depends on context, not least where you live and what’s happening politically. Belief in conspiracy theories is often partisan, with people – unsurprisingly – more likely to believe in conspiracy theories about the authorities when the party they identify with is not in power. (The exception to this rule is Trump, of course, who promoted conspiracy theories about Obama and Hillary Clinton when he was on the campaign trail, but continued to do so while in office.) Research in a number of countries indicates that in general conspiracy belief is higher at the extreme ends of the political spectrum. However, there are reasons to think that there is increasingly a connection between conspiracism and right-wing politics. If you think that, as Ronald Reagan famously said, government is the problem not the solution, then it stands to reason that you might well view any encroachment of the “nanny state” into your personal life as part of a bigger conspiracy to deprive you of your freedoms.
Conspiracy theories have a long history, but have the internet and social media made conspiracy theories go viral? There are good reasons to think that the internet and conspiracy theory are made for one another. Not only is it simple for anyone to distribute professional-looking materials online with virtually no gate-keeping and at incredible speed, but it is now easy to find a like-minded audience in ways that were unthinkable in the past. Some commentators have suggested that conspiracy theorists often become trapped in digital “echo chambers” where they only engage with like-minded fellow believers. This is coupled with the power of search engine results to create a “filter bubble” effect, in which individuals only receive information that reinforces their blinkered worldview. While this is undoubtedly sometimes the case, the online world is far more diverse than the filter bubble and echo chamber theories suggest. Search engine results are rarely completely uniform, and online communities are seldom totally immune to outside influence. People’s media diet is in reality quite varied. When an echo chamber does emerge online, it is not necessarily caused by the inherent nature of the technology itself but by a process of social self-selection by participants that is also visible in the offline world. Likewise, there is a tendency to exaggerate the power of online communication, suggesting that viral memes – like actual viruses – can take over the mind and body of a vulnerable recipient, brainwashing them. Those who engage in online conspiracy communities are far from passive, and we therefore need to understand both their personal involvement but also the group dynamics that particular platforms generate.
However, fuelled by the financial incentive of encouraging ever more divisive, emotive and engaging content, the recommendation algorithms of social media platforms can end up pushing some users down the rabbit hole of radicalisation. With their seductive rhetoric, conspiracy theories play a central role in this process. The social media companies have been slow to acknowledge the role that their platform design choices play in encouraging the spread of harmful misinformation and hateful extremism, hiding behind the defence that their algorithms are merely giving people more of what they like. But this ignores the tendency of recommendation algorithms to promote content that is ever more extreme. In the case of Dylann Roof, who killed nine African Americans in a church in Charleston in 2015, detectives were able to reconstruct his browser history, showing his online journey into violent white supremacism. In the face of a public outcry about this and other mass shootings in which the gunman had clearly been heavily invested in online racist conspiracy-mongering, social media platforms such as YouTube began in 2019 to remove some conspiracist content and to reduce its prominence by changing their algorithms. With the coronavirus pandemic, the platforms have taken a more proactive stance on content moderation, removing material that promotes harmful medical misinformation relating to COVID-19. In October 2020, for example, Facebook announced that it would ban ads that discourage people from getting vaccinated, tightening its earlier policy, which had banned only ads that actively promoted vaccine misinformation. But the volume, speed and viral spread of misinformation mean that the platforms are often trying to close the stable door long after the horse has bolted. Ultimately, their business model is based on stoking controversy to generate engagement and advertising revenue, and conspiracy theories fit the bill perfectly.
If, as we’ve been arguing, conspiracy theories are highly resistant to correction, then no amount of fact checking, flagging mechanisms and promotion of accurate information on the part of the platforms is likely to make much difference. Those approaches are just as likely to make red-pilled conspiracy theorists dig in their heels, convinced that Silicon Valley is itself part of the conspiracy to suppress the truth. Conspiracy theories about the coronavirus are spreading not so much because people are unable to access vital information, but because they distrust official sources of information – even fact checkers. That doesn’t mean we should give up on putting out correct information about COVID-19 and linking to point-by-point debunking of conspiracy myths, but we need to be realistic: the facts won’t simply speak for themselves and win the argument.
So, what can we do about conspiracy theories in the time of corona? First, independent regulation of social media platforms is vital, although we have to recognise that it is not a panacea and that it needs to be nuanced. Outright deplatforming is sometimes necessary for content that clearly promotes hatred and violence, but making borderline problematic content harder to find, or demonetising it, might be enough to stop some stories going viral. One of the investigations we are running on the Infodemic project examines the effectiveness of the various changes that internet companies have introduced during the pandemic. Social media platforms need to change their algorithms to ensure that they are not actively promoting harmful conspiracy materials, and they need to allow independent auditing of their black-box technologies. Second, we need to choose which battles to fight. Hard-core believers often make up only a small percentage of the total number of those who show an interest in a conspiracy theory, and they might well be a lost cause. It therefore makes more sense to engage with people who don’t fully believe in a theory, but don’t fully disbelieve in it either. Teaching analytical thinking skills and digital media literacy is undoubtedly important in the fight against the pollution of the online information ecosystem, but it has its limitations. For one thing, conspiracy theorists often seem to have learned the lessons of information literacy all too well: they are the first to cast suspicion on a story in the press, pointing out vested interests and techniques of persuasion.
But this might give us our first way in. If you’re so sceptical, this line of engagement goes, then maybe you need to be a bit more sceptical about your own beliefs and sources of information – starting with the financial motives of the conspiracy entrepreneurs. We can also encourage believers to consider, with a more sceptical eye, what else would have to be true if a secret cabal really were pulling the strings behind the scenes as they claim. Of course, there is no guarantee that this approach will have any effect, but it has the advantage of opening up a conversation rather than instantly descending into a face-off of my facts against your facts. Establishing a sense of connection with a conspiracy believer is crucial. Tempting though it is to ridicule anyone willing even to entertain such ideas, we need to show a bit of empathy. We need to understand that conspiracy theories can be a way for people to give vent to a sense of grievance about the injustices of the world (or, at the very least, their own situation in life). Those grievances are often very real, even if the specific theories and scapegoats are wide of the mark. Conspiracy theorists are often motivated by a sense of justice or patriotism or anger that we can all identify with, even if we think that their explanations of what is happening are completely mistaken. We also need to recognise the pleasures and thrills of conspiracy theorising, to try to understand why these kinds of stories are so appealing to so many people. It’s unlikely, however, that the popularity of conspiracy theories will diminish unless people have more reason to trust that we are all, genuinely, in this together.