
Bayesian models of conspiracy theorists

Psychology & Neuroscience, asked on August 22, 2021

Are there any theories in cognitive psychology that try to model the belief in conspiracy theories through the lens of Bayesian decision theory?

For reference, in Bayesian decision theory a rational agent often behaves so as to minimize its expected (projected) loss. This expected loss is subjective and involves:

  1. An estimated probability over a set of events (or possible explanations)
  2. The loss the subject individually assigns to, or perceives as associated with, a given event (or explanation)

Under this model, a rational agent can make decisions as per (a toy numerical sketch follows the definitions below):

$d^* = \underset{d}{\operatorname{argmin}} \; \mathrm{E}^{\pi}\left[L\left(\theta, d\right) \mid \text{D}\right]$

where we have:

  • $L$ is the (subject’s) loss function
  • $\pi$ is the subject’s posterior (or prior) beliefs over a set of parameters / events / explanations $\theta$
  • $d$ is the decision the agent is trying to make
  • $\text{D}$ is the observed data (i.e. the evidence available to the subject)
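
To make the expectation concrete, here is a minimal Python sketch of the decision rule for a toy problem with two states of the world ("the theory is true" / "it is not") and two decisions ("accept", i.e. behave as if it were true, or "reject"). All of the numbers (the 5% posterior and the individual losses) are hypothetical and only serve to illustrate the computation:

    # pi(theta | D): the subject's posterior over the two states, given the evidence D
    posterior = {"true": 0.05, "false": 0.95}

    # L(theta, d): the subject's loss for each decision d under each state theta
    loss = {
        ("true",  "accept"): 1.0,    # prepared for a threat that turns out to be real
        ("true",  "reject"): 20.0,   # caught off guard by a real threat
        ("false", "accept"): 5.0,    # wasted effort and social cost
        ("false", "reject"): 0.0,    # nothing happens
    }

    def expected_loss(d):
        """E^pi[L(theta, d) | D] for a single candidate decision d."""
        return sum(posterior[theta] * loss[(theta, d)] for theta in posterior)

    # d* = argmin_d E^pi[L(theta, d) | D]
    d_star = min(("accept", "reject"), key=expected_loss)
    print(d_star, {d: round(expected_loss(d), 2) for d in ("accept", "reject")})
    # -> reject {'accept': 4.8, 'reject': 1.0}

With these particular losses the sparse evidence dominates and the agent rejects the theory; the sections below only change the numbers.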

Fear and loss aversion

One could argue that if a subject assigns a high loss to a specific belief (e.g. a conspiracy theory that the subject is particularly afraid of), the subject may choose to believe it, or at least behave as if it were true, even if there is little evidence to support it. In other words, subjects may act on and believe in a conspiracy theory out of fear and loss aversion.
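
A small sketch of that argument, reusing the hypothetical toy problem above: the posterior stays fixed at 5%, and only the subjective loss of being caught off guard is inflated by fear. Once that loss is large enough, the argmin flips from rejecting to accepting the theory:

    posterior = {"true": 0.05, "false": 0.95}   # unchanged, hypothetical posterior

    def best_decision(loss_if_caught_off_guard):
        loss = {
            ("true",  "accept"): 1.0,
            ("true",  "reject"): loss_if_caught_off_guard,   # the feared outcome
            ("false", "accept"): 5.0,
            ("false", "reject"): 0.0,
        }
        exp = {d: sum(posterior[t] * loss[(t, d)] for t in posterior)
               for d in ("accept", "reject")}
        return min(exp, key=exp.get), exp

    print(best_decision(20.0))    # moderate fear -> ('reject', ...): the evidence wins
    print(best_decision(200.0))   # extreme fear  -> ('accept', ...): loss aversion wins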

Side observation: Note that a model in which the agent instead maximizes the utility it derives from a given belief is mathematically equivalent, since utility is just negative loss; loss minimization and regret minimization are simply the canonical formulations of both views.
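
In symbols, writing $U = -L$ for the corresponding utility:

$\underset{d}{\operatorname{argmin}} \; \mathrm{E}^{\pi}\left[L\left(\theta, d\right) \mid \text{D}\right] = \underset{d}{\operatorname{argmax}} \; \mathrm{E}^{\pi}\left[U\left(\theta, d\right) \mid \text{D}\right]$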

Ill-conditioned optimization

From a computational standpoint, the optimization (minimization) of the expected loss can be ill-conditioned if the probability $\pi$ is small (little evidence supporting an explanation) but the assigned loss $L$ is large (big implications if true). This could lead different agents to believe and act very differently depending on how they approximate and optimize the above expectation.

To illustrate this point, consider a conspiracy theorist arguing: "I know there is little direct evidence for [conspiracy theory] X, but what if it’s true?". This tension could lead the conspiracy theorist to "accept" the theory and behave accordingly.
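
The sensitivity is easy to see numerically. In the sketch below (same hypothetical losses as above, with the feared loss set to 200), two agents whose small posteriors differ by a single percentage point end up on opposite sides of the argmin:

    def best_decision(p_true, loss_if_caught_off_guard=200.0):
        posterior = {"true": p_true, "false": 1.0 - p_true}
        loss = {("true", "accept"): 1.0, ("true", "reject"): loss_if_caught_off_guard,
                ("false", "accept"): 5.0, ("false", "reject"): 0.0}
        exp = {d: sum(posterior[t] * loss[(t, d)] for t in posterior)
               for d in ("accept", "reject")}
        return min(exp, key=exp.get)

    for p in (0.02, 0.03):   # two nearly identical estimates of a tail probability
        print(p, best_decision(p))
    # 0.02 -> reject, 0.03 -> accept: a one-point difference in pi flips d*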

Confirmation bias still applies

Note that the above still allows a direct bias in $\pi$ (e.g. evidence selection driven by confirmation bias) to heavily influence $d^*$. This Bayesian, subjective model simply also allows the subject’s perceived loss or utility to contribute to how a given conspiracy theory shapes the agent’s behavior, conclusions or beliefs.
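
A sketch of that effect, using sequential Bayesian updating in odds form with invented likelihood ratios (values above 1 support the theory, values below 1 count against it): an agent who updates on all of the evidence barely moves from the prior, while an agent who keeps only the confirming items inflates $\pi$ dramatically, and with it $d^*$ can change:

    def posterior_odds(prior_odds, likelihood_ratios):
        odds = prior_odds
        for lr in likelihood_ratios:          # sequential Bayes updates in odds form
            odds *= lr
        return odds

    def to_prob(odds):
        return odds / (1.0 + odds)

    evidence = [3.0, 0.2, 2.0, 0.25, 4.0]     # mixed, roughly balanced evidence
    prior_odds = 0.05 / 0.95

    full   = posterior_odds(prior_odds, evidence)                          # use everything
    biased = posterior_odds(prior_odds, [lr for lr in evidence if lr > 1]) # cherry-pick

    print(round(to_prob(full), 3), round(to_prob(biased), 3))
    # -> 0.059 0.558: cherry-picked evidence alone moves pi from ~6% to ~56%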

Most interestingly, perhaps, this framework shows that the computation of beliefs and behavior can naturally be very ill-conditioned, so small differences in how different agents aggregate sparse evidence and model their losses can lead them to draw significantly different conclusions.

Note: I’m not familiar with the psychology of conspiracy theories, so apologies if I am missing a trivial connection in the literature.
