S3, E1: Something Wicked This Way Comes - A Primer
Series 3: Misinformation and Manipulation - by Their Works You Shall Know Them
“By the pricking of my thumbs,
Something wicked this way comes.”
Macbeth, act 4, scene 1 (second of three witches)
Misinformation is a twofold offence because its creation is accompanied by the displacement or destruction of valid information. A fact is not simply replaced by a falsehood on a one-to-one basis: everything that depends on that fact is immediately threatened too. Think of it as ‘truth Jenga’.
Every time misinformation deprives someone of seeing, hearing or being able to speak the truth, the crime is compounded. Misinformation is alive, active, fecund and difficult to kill. It survives in public discourse, feeding like a parasite, infecting like a virus and spreading like a cancer1.
It’s not possible to destroy information or knowledge without creating a void and that is where dogmatists move in to take up space.
What power have you got?
Where did you get it from?
In whose interest do you exercise it?
To whom are you accountable?
How can we get rid of you?
“Five Questions to Power”, Tony Benn, UK Statesman
The Power of Misinformation
Misinformation is a route to influence and, in some cases, an illegitimate way to acquire power. It can bend reality to suit a purpose, but in doing so it destroys the common ground that makes sensible debate possible.
The distribution of misinformation requires the collaboration of acolytes, who often unwittingly act against their own interests by evangelising for the special interests of oligarchs or dogmatists.
What questions can be asked in any given situation to awaken the senses to the possibility of being duped? A good place to start is with Tony Benn's 'five questions to power’.
"What power have you got?" (1/5)
The first question relates to the limits of power, or equivalently, of influence. Those who acquire disproportionate power and influence have the ability to impact more lives for better or worse. Having power is seductive and the opportunity and license it gives to self-interest is corrupting.
Even those with selfless aims can quickly have their good intentions subverted by a life without practical limits in a bubble of sycophancy. Yet power attracts not only a fawning entourage but also those who want influence: those who will debase themselves and appeal to ego or greed to buy it. And yes, whether by flatterer or lobbyist, through patronage or donation, ‘buy’ is the correct term. It is always a trade.
So the first question to ask yourself becomes ‘do I consent to the power you have over me?’
"Where did you get it from? ” (2/5)
Where the misinformation originated would be a clue to the interests it was intended to serve, but that is less of a factor in why it spreads, because there is a ‘pay-off’ for everyone who passes it on.2
It’s personally rewarding to share information that helps somebody; misinformation can be given a foothold in our social network when we believe it to be true and innocently pass it on. Yet more frequently misinformation is knowingly shared.
Researchers at the USC Marshall School of Business and the USC Dornsife College of Letters, Arts and Sciences identified various factors that contribute to a willingness to share misinformation; unsurprisingly, these include political beliefs aligned with the particular misinformation and a lack of critical skills.
To determine the dominant drivers of misinformation they conducted a paid online survey of 2,476 active Facebook users between the ages of 18 and 89.
Due to the reward-based learning systems on social media, users form habits of sharing information that attracts others' attention. Once habits form, information sharing is automatically activated by cues on the platform without users considering response outcomes such as spreading misinformation. As a result of user habits, 30 to 40% of the false news shared in our research was due to the 15% most habitual news sharers. Suggesting that sharing of false news is part of a broader response pattern established by social media platforms, habitual users also shared information that challenged their own political beliefs.
“Sharing of Misinformation Is Habitual, Not Just Lazy or Biased” Ceylan, Anderson and Wood
The ‘15%’ reminds me of what Malcolm Gladwell calls ‘mavens’: the knowledgeable minority responsible for disseminating most of the information. Mr Gladwell assumes that mavens are trustworthy and without ulterior motive, yet I question whether those are really preconditions for the function they perform.
I argue that the ‘knowledgeability’ dimension is more to do with how credible the provider of the information appears to be in a given situation; it is the credibility that makes the information appear acceptable. This is why arguments from authority are common. Yet a maven’s information might be incorrect, perhaps deliberately so, although that is not something Mr Gladwell’s model entertains.
This is similar to the super-spreaders in a pandemic who disproportionately infect large numbers of people. There may be many other confounding elements in play, but ultimately transmission depends on their mobility in the population, or to put it another way, their social habits. It’s the habitual aspect that finds its counterpart in the study.
We recognize that habitual users are integral to social media sites’ ad-based profit models …, and thus these sites are unlikely to create reward structures that encourage thoughtful decisions that impede habits. However, social media reward systems built to maximize user engagement are misaligned with the goal of promoting accurate content sharing, especially among regular, habitual users.
“Sharing of Misinformation Is Habitual, Not Just Lazy or Biased” Ceylan, Anderson and Wood
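To make the quoted arithmetic concrete, here is a minimal toy simulation of my own (not the study’s model) in which a small habitual minority simply shares more often than everyone else. Every number below is an assumption chosen for illustration; with them, the habitual 15% end up responsible for roughly 38% of the false news shared, squarely inside the 30-40% band the study reports.

```python
import random

random.seed(42)

# Toy model (my assumptions, not the study's): 15% of users are
# 'habitual' sharers who share 3.5x as often as casual users, with
# the same proportion of false items in what they share.
N_USERS = 10_000
HABITUAL_FRACTION = 0.15
CASUAL_SHARES = 2       # assumed shares per casual user
HABITUAL_SHARES = 7     # assumed shares per habitual user (3.5x)
FALSE_RATE = 0.2        # assumed fraction of shared items that are false

habitual_false = casual_false = 0
for i in range(N_USERS):
    is_habitual = i < N_USERS * HABITUAL_FRACTION
    shares = HABITUAL_SHARES if is_habitual else CASUAL_SHARES
    false_shares = sum(random.random() < FALSE_RATE for _ in range(shares))
    if is_habitual:
        habitual_false += false_shares
    else:
        casual_false += false_shares

total_false = habitual_false + casual_false
print(f"Habitual 15% share of false news: {habitual_false / total_false:.0%}")
```

The point of the sketch is that no difference in gullibility or bias is needed: sheer frequency of sharing is enough to concentrate the spread of false news in a small group.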
The Genetics of Misinformation
Richard Dawkins created the neologism ‘meme’ as conceptually comparable to a ‘gene’. It describes how concepts are transferred between humans to be replicated in the minds of others. The power of this analogy has become more evident with social media because ideas replicate, mutate and evolve so much quicker; in fact, even the application and understanding of the term ‘meme’ has evolved.3
Viruses are genetic and self-replicating, so the analogy of information ‘going viral’ on social media is probably more apt than you imagined.
An article that discussed the Ceylan et al. study made this observation about the reward mechanism:
[on motivation] much like any video game, social media has a rewards system that encourages users to stay on their accounts and keep posting and sharing. Users who post and share frequently, especially sensational, eye-catching information, are likely to attract attention.
“Study reveals the key reason why fake news spreads on social media”, Phys.org
The reward is attention through reactions and comments but in essence that is all about being validated by others4. I argue that group membership can provide this benefit too.
Once habits form, information sharing is automatically activated by cues on the platform without users considering critical response outcomes, such as spreading misinformation.
ibid.
This reward-seeking behaviour is Pavlovian, but instead of salivating at the sound of a bell, a promising sharing opportunity or a notification triggers the anticipation of social reward and a habitual routine to close the deal. Users are effectively trained to react to the stimulus by the prospect of reward rather than to consider the veracity of the content before sharing.
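As a sketch of that conditioning loop, consider the following toy model. It is my own illustration, not anything from the study: every rewarded share nudges up the propensity to share on the next cue, and nothing in the update ever consults the truth of the content.

```python
import random

random.seed(1)

share_propensity = 0.2   # assumed starting habit strength
LEARNING_RATE = 0.1      # assumed reinforcement step size
REWARD_RATE = 0.8        # assumed chance a share attracts likes/comments

for _ in range(50):      # each pass is a notification or sharing opportunity
    if random.random() < share_propensity:   # the habit fires on the cue
        if random.random() < REWARD_RATE:    # validation arrives
            # Reinforcement strengthens the habit toward certainty;
            # the veracity of the shared content never appears here.
            share_propensity += LEARNING_RATE * (1 - share_propensity)

print(f"Habit strength after 50 cues: {share_propensity:.2f}")
```

The study sets out the terms of what it considers to be misinformation.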
Misinformation can be defined in various ways …, and in the present research, we focus on information content that has no factual basis … as well as content that, although not objectively false, propagates one-sided facts (i.e., partisan-biased news). Such misinformation changes perceptions of and creates confusion about reality …
“Sharing of Misinformation Is Habitual, Not Just Lazy or Biased” Ceylan, Anderson and Wood
Of course, in considering the factors that influence sharing, the research article also says:
One answer is that people often lack the ability to consider the veracity of such information (i.e., limited reflection, inattention). … false claims may seem novel and surprising and thereby activate emotional, noncritical processing … Furthermore, older people and those with weaker or less critical reflection tendencies may fail to detect the veracity of information and thus be less discerning in their sharing [another explanation is] people are motivated to evaluate news headlines in biased ways that support their identities (i.e., motivated reasoning) [example] rumor cascades in online social platforms are most marked when shared within homogenous and polarized communities of users.
ibid.
So those are all important factors, but the study found that the biggest influence on sharing misinformation was habit, not any kind of affiliation. This may well be a distinction without a real difference. The Phys.org article on the study makes this succinct observation:
“Surprisingly, the researchers found that users' social media habits doubled and, in some cases, tripled the amount of fake news they shared.”
“Study reveals the key reason why fake news spreads on social media”, Phys.org
However, it is worth recalling that the purpose of the habit is to generate validating responses, an objective that overlaps with the attraction of group membership, i.e. mutual validation.
So the second question to ask yourself becomes ‘do I consent to being a vector for this message even if it diminishes me?’
"In whose interest do you exercise it?" (3/5)
So, from the study, ‘my side’ bias is a lesser driver of misinformation than the social media habits that the platforms reward. Following on from my comment at the end of the previous section, the observation that misinformation is mostly shared within homogenous groups suggests a preference for homogeneity and the desire to be in a group.
Some Informal Observations and Research Candidates
There is nothing startling about the attraction of ‘belonging’, which must include ad hoc online groups that come together in common cause. Of course, they generally have a self-selected membership on the basis of similar opinion or world-view. But that invites the question of what ‘self-selection’ really means.
It is reasonable to suppose that the desire for acceptance (arguably the reward that social media offers) is the same driver behind radicalisation or the forming of in-groups from discarded out-groups.5
In 2022 I started to monitor the activity of some lobbying groups who were advocating for or against certain energy technologies. By casual inspection some of the traits of these groups became very obvious.
I recognise that this is anecdotal, speculative and unsupported by data, but much of it seems self-evident. The following conjectures might be profitably tested by a controlled study.
1. Online groups tend to become more homogenised with time, which I suggest is a result of group pressure and in-group incentives.
2. Group opinion is steered by a few influential people, and these individuals are identifiable by the reverence given to them by other group members.
3. What is deemed to be objectively true or false by the group depends more on the source of the information than on the rationale or evidence, viz.
- In-group contributions tend to be uncritically accepted.
- Out-group contributions are usually summarily rejected.
4. As a consequence of #3, there is a tendency to avoid internal conflict, and ongoing group membership becomes predicated on agreement.
5. Also from #3, just as the group will readily reject something that is objectively true because it was asserted by someone in the out-group, it will also fail to correct a group member who says something that is obviously false. These have similar presentations but slightly different outcomes.6 In the first case the group acts on the individual to discourage engagement with certain facts; in the second case the group offers the membership benefit of not being contradicted.7
Beyond those conjectures, the same groups also display:
- Rejection of scientific or engineering studies as ‘fake’ when they are inconvenient to the group narrative.
- Attacks on information believed to be antagonistic to their position, usually in ways that demonstrate unfamiliarity with the content.
- A fear of exposure to contrary viewpoints.
Can this group behaviour be characterised as having a conditioning effect on the habitual responses of members? I think so. The reward on offer is the prospect of being recognised and respected by the group.
The Cost-Benefit of Misinformation
The closest thing to conflict within these groups that I observed directly is what I call ‘competitive outrage’. It might be that this is a safe way to establish a pecking order within such groups. This is still consistent with the study findings that attracting attention is the prime motivator.
"The habits of social media users are a bigger driver of misinformation spread than individual attributes. We know from prior research that some people don't process information critically, and others form opinions based on political biases, which also affects their ability to recognize false stories online.”
Gizem Ceylan, study lead at USC Marshall (now a postdoctoral researcher at the Yale School of Management), as quoted in “Study reveals the key reason why fake news spreads on social media”, Phys.org
"However, we show that the reward structure of social media platforms plays a bigger role when it comes to misinformation spread."
ibid. quoting Gizem Ceylan
Therefore the factors that take priority in a given case are subject to a kind of unconscious cost-benefit assessment.
“In another experiment, the researchers found that habitual sharing of misinformation is part of a broader pattern of insensitivity to the information being shared. In fact, habitual users shared […]—news that challenged their political beliefs—as much as concordant news that they endorsed.”
ibid.
There is a cost to blindly seeking social reward or avoiding opprobrium: it is disempowering, requiring the sacrifice of many aspects of self, including personal integrity and opinions. Yet the drive for attention far outstrips any other consideration on social media. What is really in play is a cost-benefit assessment between sharing and getting ‘likes’; the more significant cost does not enter into the equation.
Algorithms determine what people see based on previous engagement, and this drives the scrolling behaviour that can be monetised via advertisements. What is rewarded is compliance.
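A minimal sketch of that loop, using a purely hypothetical feed ranker of my own devising (no real platform’s algorithm is implied): items are ordered by engagement alone, so a sensational falsehood outranks a careful correction, and every reaction feeds the next ranking.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A post in a hypothetical feed; truth plays no part in ranking."""
    text: str
    clicks: int = 0
    shares: int = 0
    is_true: bool = True  # tracked here, but deliberately never consulted

    @property
    def engagement(self) -> int:
        # Shares weighted above clicks: they propagate content further.
        return self.clicks + 3 * self.shares

def rank_feed(items: list[Item]) -> list[Item]:
    # Pure engagement ordering: what complies with the reward loop is
    # what gets surfaced, not what is accurate.
    return sorted(items, key=lambda item: item.engagement, reverse=True)

feed = [
    Item("Dry but accurate explainer", clicks=40, shares=2, is_true=True),
    Item("Sensational false claim", clicks=90, shares=30, is_true=False),
    Item("Careful correction", clicks=25, shares=1, is_true=True),
]

for item in rank_feed(feed):
    print(f"{item.engagement:4d}  {item.text}")
```

Note that the `is_true` flag is carried in the data but never influences the ordering; that indifference is the whole point.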
So the third question to ask yourself becomes ‘do I consent to serving these interests even if they don’t align with what I care about?’
"To whom are you accountable" (4/5)
To be accountable is to take personal responsibility, but without a society that is willing to hold people to account, it is useless. That can’t happen if the conversation is forbidden.
Misinformation cannot be banned, because who would determine what constitutes misinformation? The crowd, the mob, the elites, or people paying to have their content made more conspicuous? It is not just a free speech issue (although that is no trivial matter): if we suppress anything, we risk suppressing the truth, the conversation and any chance of accountability.
Ensuring the appropriate distinction is made between agreed facts, assumptions and opinion is far better than legislating for speech, making offence a crime, or allowing ourselves to float free of any axioms in a soup of post-structuralism.
The Ceylan et al. paper is optimistic about the ability of social media platforms to move away from this reward model, but…
Yet how do we democratise truth? To allow that would itself be a failure of democracy.
So the fourth question to ask yourself becomes ‘do I consent to being held accountable for this information?’
“How can we get rid of you?” (5/5)
It takes a lot of effort to deconstruct a lie because it often involves trying to prove a negative, and even then, it may make no difference. People don’t tend to welcome the suggestion that they have been duped; similarly, those organisations, institutions and publications that have unwittingly provided a platform for false narratives are unlikely to risk their own reputation in pursuit of the truth either. In this series I give examples where group membership has become more important than the truth they profess to prioritise.
Influence that can be bought favours those with the most resources; it provides access to a form of executive-class democracy without any added accountability. Lobbying may or may not be in the service of misinformation, but insofar as it is intended to unfairly change the narrative, its purpose is identical. What if what we are supporting actually removes our ability to collectively change our minds democratically?
It is not possible to legislate against misinformation for many reasons, not the least of which is: who would define it? How do we rid ourselves of it? First, we need to recognise it for ourselves. Then we can withhold our power by refusing to share it.
So the fifth and final question to ask yourself becomes ‘do I consent to being someone else’s misinformation-zero? Is this how I want to use my credibility?’
If your answer to both of those is ‘no’, you already know how to get rid of all the misinformation downstream of you.
Notes
1. This is my understanding of Richard Dawkins’ original concept of a meme, which I take to essentially mean ‘contagious patterns of thoughts or thinking’.
2. It is of course possible to be an unwitting vector of misinformation, but to do so willingly is not to be a blameless, passive conduit. It is a cost-benefit decision: to share misinformation deliberately, it is necessary to believe that the benefit to self carries more weight than the cost to society. The power of misinformation has to come from people, and they have to disempower themselves to relinquish it.
3. Take a social media dance craze as an example. The original typically contains a visually original interpretation of a catchy tune or sample that ‘goes viral’. Initially it is imitated faithfully, but as the trend takes hold, the most successful versions include new elements that are emulated in most of the generations that follow. The craze evolves.
4. You have surely noticed people on social media perpetually reminding everyone that they are ‘comfortable with who they are’ and ‘don’t care what anyone thinks’. They are obviously looking for the validation they claim not to need. It is this deep unhappiness with personal recognition that can be exploited: misinformation can be a way of giving someone a social presence, or the ability to be seen.
5. Perhaps targeted radicalisation and recruitment are exceptions to ‘self-selection’, but perhaps not if the individual signalled, consciously or unconsciously, that they might be persuadable. What seems clear to me is that if group acceptance is the main incentive, it can be used as leverage, i.e. compliance may quickly become a condition of ongoing ‘membership’. That may well open the way to ideological creep.
6. In her thought-provoking article, linked below, the author warns us against broad classifications of certain crimes that may obscure the underlying problem. The temptation to apply labels as a means to telegraph our contempt for criminal actions might be a mistake.
7. This can lead to the bizarre situation where someone knowledgeable in a specialist area uses their credibility to argue against the axioms of their own discipline. I found this very difficult to understand until I read the study; it accords with my assertion that ‘mavens’ need not necessarily be incorruptible. It is evident in the discussions about domestic hydrogen pilot schemes, where residents have been recruited by anti-hydrogen lobbyists into acting (in my opinion) against their own interests. I noticed many instances of condescension whereby people are humoured in their misapprehensions in order to keep them onside. I have several articles that touch on this, including some that are yet to be published; a good primer would be Whitby Hydrogen Village and the Weaponisation of Fear. But I think there is a way to sort through this garbage to establish a baseline of veracity. I agree that the social media reward system is a strong factor in sharing information before verifying it.