By Eric Vandenbroeck and
co-workers
Why bias occurs across a wide variety of
judgment domains
We all have biases, and one of the reasons is how
we have been socialized. Microsoft’s online unconscious-bias (UB) training, also available publicly, includes videos depicting various
everyday workplace scenarios. In one, the only woman on a team tries to add her
views and is interrupted repeatedly until another member finally notices
and asks her to speak.
Yet bias occurs
across a wide variety of judgment domains. People in all demographic groups
display it, and it is exhibited even by expert reasoners, the highly educated,
and the highly intelligent. It has been demonstrated in research studies across
various disciplines, including, as we have seen,
psychology and psychiatry, political science, behavioral economics,
legal studies, cognitive neuroscience, and the informal reasoning literature.
Bias has been found to occur at every
stage of information processing. Studies have shown tendencies toward
biased search for evidence, biased evaluation of evidence, biased assimilation
of evidence, biased memory of outcomes, and biased evidence generation.
Our problem is not that we cannot value and respect
truth and facts, but that we cannot agree on commonly accepted truths and facts.
We believe that our side knows the truth. Post-truth? That describes the other
side. The inevitable result is political polarization. Today, however, science
can tell us a good deal about confirmation bias: how common it is, how to avoid it, and
what purposes it serves.
For example,
models that focus on the properties of acquired beliefs, rather than on the cognitive
processes of the people who acquire them, provide better frameworks for studying confirmation bias.
And
because confirmation bias is not predictable from traditional
psychological measures, we will next explain how it creates a true blind spot
among, for example, cognitive elites. Cognitive elites (those high in
intelligence, executive functioning, or other valued psychological
dispositions) often predict that they themselves are less biased than other
people when queried about other well-known psychological biases (overconfidence
bias, omission bias, hindsight bias, anchoring bias).
Researchers in several areas of political and social psychology have found content
factors to be more potent predictors than individual-difference
variables. Confirmation bias might be another area of psychology
where the properties of the beliefs themselves are more predictive than the
individual psychological characteristics (such as intelligence
or open-mindedness) of the subjects who hold those beliefs.
Studying confirmation bias is important because the tragedy of the communications
commons arises when both sides in public policy debates that could be settled
by evidence instead process new information with a confirmation bias. When
prior beliefs have been arrived at by properly
accommodating previous evidence, some degree of projecting the prior
probability onto new evidence, a local confirmation bias, is normatively
justified. When we lack previous evidence on the issue in question, we should
use the principle of indifference and set our prior probability at .50, which
will not influence our evaluation of the new evidence. Instead, what most of us
tend to do in this situation is assess how the proposition in question relates
to some distal belief of ours, such as our ideology, set our prior accordingly, and then project this
prior probability onto our evaluation of the new evidence. This is how our
society ends up with political partisans on both sides of any issue, seemingly
unable to agree on the facts of that issue and never reaching Bayesian
convergence.
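To make this concrete, here is a minimal sketch in Python (our illustration, not drawn from any study cited here; every number in it is hypothetical). It compares an observer who updates honestly from an indifferent prior with two partisans who project a distal prior onto the evidence itself by shrinking the diagnosticity of uncongenial data:

def bayes_update(prior, lik_h, lik_not_h):
    """One step of Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    num = lik_h * prior
    return num / (num + lik_not_h * (1.0 - prior))

# Mixed evidence: five observations favoring H, five favoring not-H.
# Each tuple is (P(E|H), P(E|not-H)); the values are hypothetical.
EVIDENCE = [(0.8, 0.4)] * 5 + [(0.4, 0.8)] * 5

def final_belief(favors_h, discount):
    """Update through the evidence stream. A discount of 1.0 means honest
    updating; smaller values pull the likelihood ratio of each uncongenial
    datum toward 1 -- the local confirmation bias described above."""
    p = 0.5  # principle of indifference
    for lik_h, lik_not_h in EVIDENCE:
        congenial = (lik_h > lik_not_h) == favors_h
        if not congenial:
            # Project the distal prior onto the datum itself.
            lik_h = lik_not_h + discount * (lik_h - lik_not_h)
        p = bayes_update(p, lik_h, lik_not_h)
    return p

print(f"Indifferent observer: P(H) = {final_belief(True, 1.0):.2f}")   # 0.50
print(f"Pro-H partisan:       P(H) = {final_belief(True, 0.25):.2f}")  # ~0.94
print(f"Anti-H partisan:      P(H) = {final_belief(False, 0.25):.2f}") # ~0.06

The particular numbers do not matter; the structure does. Once the prior is projected onto the evaluation of the evidence itself, two people can read the very same evidence stream and move further apart, so more shared information no longer produces convergence.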
We can at least mitigate, if not remedy, the tragedy of the
communications commons by rethinking how we relate to our beliefs. First, it
would help if we could realize that we have thought our way to our beliefs much
less often than we may have imagined, that the distal beliefs we hold are
largely a function of our social learning within the valued groups to which we
belong and of our innate propensities to be attracted to certain types of
ideas. Dual-inheritance theories of culture have stressed for some time that
most people feel in control of their culture and believe they came by most of
it by choice. But the truth is, we often have much less choice than we think.
We treat our beliefs as possessions, assuming that we have thought our way
to them and that they serve us. The meme’s-eye view leads us to question both
assumptions: whether we have really thought our way to these beliefs and
whether they serve our personal ends. Our memes want to
replicate whether they are good for us or not, and they do not care how they get
into us, whether through our conscious thoughts or simply through an
unconscious fit with our innate psychological dispositions. The focus of memetics
on the properties of beliefs rather than on the psychological characteristics of
those who hold them is also consistent with research showing
that the degree of confirmation bias is better predicted by the
former than by the latter. We are often in situations where we have to
calibrate the reasoning of others. This calibration often involves judging the
degree of confirmation bias they exhibit, one of the trickiest
judgments we have to make.
Subjects who score
higher on rational-thinking-dispositions scales are better able to avoid
most biases. This is not true of confirmation bias, where it is the
strength of the belief itself, rather than the cognitive sophistication of the person
holding it, that predicts the level of bias. This
situation presents a particular obstacle for cognitive elites when
evaluating confirmation bias. Their assumption that they are less
biased than other people is actually correct for most biases in the heuristics-and-biases
literature. However, this assumption does not hold
for confirmation bias, which contributes to our currently vexing
partisan political standoffs, both in terms of what knowledge is acquired and
how that knowledge is acquired.
There is no strong
support in the empirical literature for attributing a unique problem of
rationality to Trump voters. Those who do not find this conclusion palatable
might object that the analysis so far seems too narrow. We have discussed that
to think rationally, a person needs to take appropriate action given the
person’s goals and beliefs and to hold beliefs congruent with the available
evidence; but rational thought sometimes also requires evaluating the goals
themselves. This third consideration moves us, critically, from a narrow
conception of instrumental rationality to a broad one. Traditional views of
instrumental rationality are narrow theories because a person’s goals and
beliefs are accepted as they are, and evaluation centers only on whether a
person optimally satisfies their desires.
It might seem that a
narrow conception of rationality that fails to evaluate desires allows a great
deal of bad thinking to escape evaluation. But most work in cognitive science
disproportionately addresses this narrow form of rationality for a good reason.
Broad theories implicate some of the most difficult and vexing issues in
philosophy, such as “When is it rational to be (narrowly) rational?” and “What
goals are rational to pursue?” Our earlier discussion, in fact, strayed into
the territory of broad rationality with my Ted Cruz versus Al Sharpton
thought experiment. With that example, I was trying to illustrate the
difficulty of evaluating goals. From the standpoint of a voter with the
global-and-groups worldview, the choice was between a candidate who shared the voter’s
worldview but whose temperament was poorly suited to the presidency (Sharpton)
and a candidate whose worldview was unpalatable to the voter but whose
temperament was much better suited to the presidency (Cruz). The point was not
to show that one or the other choice was correct for this voter, but to
illustrate the difficulty of this type of trade-off and to provoke some
associated recognition of the fact that the voter with a citizen-and-country
worldview was presented with a similarly difficult trade-off when faced with a
Trump versus Clinton choice. It highlighted the
potential confirmation bias involved in such judgments: a Democrat
feeling the attraction of Sharpton over Cruz should similarly understand the
attraction of Trump over Clinton for Republicans.
Of course, the
thought experiment does not depend on a one-to-one feature analogy between
Trump versus Clinton and Cruz versus Sharpton, only a gross similarity in the
feature trade-off (worldview versus fitness for office). The exercise also
reveals the confirmation bias that operates when we attempt to evaluate
goals.
Also, recent
developments within the university are making it harder to teach students the
decoupling and decontextualizing skills necessary for
avoiding confirmation bias.
Why is cognitive
decoupling central to avoiding confirmation bias? Two critical functions are
enabled by decoupling: inhibition and sustained simulation. The first function,
suppression of the automatic response, is akin to the inhibitory processes
studied in the executive-functioning literature. The second function, sustained
simulation, underlies hypothetical reasoning: we create temporary models of the
world and test out actions in that simulated world. Decoupling our
representations from the world is what enables this. Dealing
with these “secondary representations,” and keeping them decoupled, is costly
in terms of cognitive capacity. However, the tendency to initiate such
decoupling for simulation is a dispositional variable, separable from cognitive
capacity. This tendency can be developed through experience and training.
Many different
theorists have emphasized the importance of decontextualizing in the
development of higher-level thought. Thus Jean Piaget’s (1972)
conceptualization of formal operational thought places the mechanisms of
decontextualizing in positions of paramount importance, and many scholars in
the critical thinking literature have emphasized the decontextualizing modes of
decentering, detaching, and depersonalizing as the foundational skills of
rational thought. Looming large in that literature is the ability to adopt
perspectives other than one’s own. The avoidance of confirmation bias
is dependent on these perspective-switching abilities and tendencies.
But our ability to
switch perspectives will be limited because our human brains are cognitive
misers: their basic tendency is to default to processing mechanisms of low
computational expense. This is a well-established theme throughout the past
fifty years of research in psychology and cognitive science. Miserly cognitive
processing arose for sound evolutionary reasons of computational efficiency.
Still, that same efficiency will guarantee that perspective switching (to
avoid confirmation bias) will not be the default processing action.
As we have known for some time, it is cognitively demanding to process
information from another person's perspective. Thus we must practice the
perspective switching needed to avoid confirmation bias until it becomes
habitual. But identity politics prevents this from happening by locking in an
automatized group perspective, contextualizing based on preapproved group
stances, and viewing perspective switching through decoupling as a sellout to
the hegemonic patriarchy.
True perspective
switching, reframing that allows us to conceptualize the world in new ways,
requires alienation from the self. It requires that we sometimes avoid framing
things from the perspective that is easiest to model, which is inevitably our
own or that of our most important affinity groups. Yet university
undergraduates are at precisely the stage of life when young adults need to
learn other framing strategies.
Perspective switching
is a type of cognitive broccoli. Taking students out of the comfort zone of
their identities or those of their tribes was once seen as one of the key
purposes of university education. But when the university affirms students in
identities they had assumed before they arrived on campus, it is hard to see
the value added by the university anymore. In stressing identity politics, the
university is simply cheerleading for ice cream. Instead, students need to be
taught that the benefits of leaving the comfort and safety of perspectives they
have long held are worth the risks, that, in the long run, confirmation
processing will never lead them to a deep understanding of the world in which
they live.
If we are ever to
remedy the tragedy of the science communications commons, if we are ever to
have a society that can converge on the truth about important social and public
policy issues, we must have institutions that foster decoupling by discouraging
the projection of convictions onto evidence. Although they once served that
purpose, universities are now the primary incubators and purveyors of
intellectually deadening identity politics, whose thinking styles have spread
widely into the corporate world in recent years. The James Damore incident at
Google is a case in point: an employee was fired for circulating an essay that
contained fair comments based on largely accurate social-science findings on
sex differences.
Members of the
identity-politics left have succeeded in making certain research conclusions
within the university verboten and making it very hard for any university
professor to publish and promote any conclusions they dislike. Faculty now
self-censor on a range of topics. The identity politics ideologues have won the
on-campus battle to suppress views that they do not like. But they have made
the public rightly skeptical about any conclusions that now come out of
universities on charged topics, even conclusions that are congenial to the
political positions that the ideologues advocate.
Rather than
inculcating specific beliefs, open inquiry used to be the sine qua non of the
university. With the advent of diversity statements, the goal seems to be
tribal: requiring allegiance to specific political content from faculty and
students. If state universities do not abandon diversity statements, state
legislatures should withhold funds from them until they do. Although
administrators and faculty organizations may view my recommendation as an
attack on the institution as a whole, it is not. Rather, compelling state
universities to do away with these statements and, one hopes, persuading
private universities to follow their example would represent a valid attempt by
the public to steer all universities back to their true mission. Only then will
the universities staunch the confirmation bias that is ruining our
public communications commons.
Once we have decided
on a partisan side, we tend to allow party elites to bundle specific issue
positions for us. In many cases, we have given no thought to the issue beyond
knowing that our party endorses a certain stance toward it.
And of course, this does
not show up only at universities; the media are another example.
The selective-exposure problems surrounding media entities like Fox News have
increased as other networks (CNN, MSNBC) and traditional outlets such
as the New York Times (particularly after the 2016 election) adopt or imitate
its business model.
Many true scientists
in sociology are demoralized because their field has transitioned from being a
social science into a social movement. Soon, granting agencies, along with the
state legislatures and taxpayers who fund state universities, will become more
aware of this ideological bias, and of the fact that the diversity policies
universities employ aim not to foster intellectual diversity but instead focus
on demographic groups.