By Eric Vandebroeck and
co-workers
As seen in part one, since the mid-2000s artificial intelligence (AI) has rapidly expanded as a field in academia and as an industry. A small number of powerful technology corporations now deploy AI systems at a planetary scale, with their systems hailed as comparable or even superior to human intelligence.
At the height of AI hype, many technology vendors claimed that their algorithms could ingest incomplete or poor-quality data yet were smart enough to find patterns and make predictions regardless. As we have seen in part one, this is not the case.
The new Mappa Mundi?
The label artificial intelligence, as we have seen, is arguably a misnomer. How computers process data is not at all how humans interact with the world. The conclusions computers draw do not involve the wisdom, common sense, and critical thinking that are crucial components of human intelligence. Artificial unintelligence is more apt, in that even the most advanced computer programs are artificial but not intelligent.
Computers do not understand information (text, images, sounds) the way humans do. For example,
our mind’s flexibility allows us to handle ambiguity easily and go back and
forth between the specific and the general. We can also recognize things
we have never seen before, such as a tomato plant growing in a wagon, a wagon
tied to a kangaroo’s back, an elephant swinging a wagon with its trunk. Humans
can use the familiar to recognize the unfamiliar. Computers are a long way from
being able to identify unfamiliar, indeed incongruous, images. Humans also
notice unusual colors, but our flexible, fluid mental categories can deal with
color differences. For humans, the essence of a wagon is not its color. White
wagons, red wagons, and red-and-white wagons are still wagons. Not so for a
pixel-matching computer program.
Each way of defining
artificial intelligence is doing work, setting a frame for how it will be
understood, measured, valued, and governed. If AI is defined by consumer
brands for corporate infrastructure, then marketing and advertising
have predetermined the horizon. If AI systems are seen as more reliable
or rational than any human expert, able to take the “best possible action,”
then it suggests that they should be trusted to make high-stakes decisions in
health, education, and criminal justice. When specific algorithmic techniques
are the sole focus, it suggests that only continual technical progress matters,
with no consideration of the computational cost of those approaches and their
far-reaching impacts on a planet under strain.
As we have
seen, AI is neither artificial nor intelligent. Rather,
artificial intelligence is both embodied and material, made from natural
resources, fuel, human labor, infrastructures, logistics, histories, and
classifications. AI systems are not autonomous, rational, or able to discern
anything without extensive, computationally intensive training with large
datasets or predefined rules and rewards.
The expanding reach
of AI systems may seem inevitable, but this is contestable and incomplete. The
underlying visions of the AI field do not come into being autonomously but
instead have been constructed from a particular set of beliefs and perspectives.
The chief designers of the contemporary atlas of AI are a small and homogenous
group of people, based in a handful of cities, working in an industry that is
currently the wealthiest in the world. Like medieval European mappae mundi, which illustrated religious and classical concepts as much as coordinates, the maps made by the AI industry can be seen as political interventions rather than as neutral reflections of the world.
The information structure of AI
One of the less
recognized facts of artificial intelligence is how many underpaid workers are
required to help build, maintain, and test AI systems. This unseen labor takes
many forms: supply-chain work, on-demand crowd work, and traditional
service-industry jobs. Exploitative forms of work exist at all stages of the AI
pipeline, from the mining sector, where resources are extracted and
transported to create the core infrastructure of AI systems, to the software
side, where distributed workforces are paid pennies per microtask. Mary Gray and Siddharth Suri refer to such hidden labor as “ghost work”; Lilly Irani calls it “human-fueled automation.” These scholars have drawn attention to the experiences of crowd workers or microworkers who perform the repetitive digital tasks that
underlie AI systems, such as labeling thousands of hours of training data and
reviewing suspicious or harmful content. Workers do the repetitive tasks that
backstop claims of AI magic, but they rarely receive credit for making the
systems function.
Although this labor
is essential to sustaining AI systems, it is usually very poorly compensated.
A study from the United Nations’ International Labour Organization surveyed 3,500 crowd workers from seventy-five countries who routinely offered their labor on popular task platforms like Amazon Mechanical Turk, Figure Eight, Microworkers, and Clickworker. The report found that a substantial number of people earned below their local minimum wage, even though the majority of respondents were highly educated, often with specializations in science and technology. Not surprisingly, concerns have also arisen regarding Mechanical Turk (MTurk) data quality, leading to questions about the utility of MTurk for psychological research.
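To make the arithmetic behind such findings concrete, here is a minimal sketch of how an effective hourly wage works out on a microtask platform. The rates below are hypothetical assumptions for illustration, not figures from the ILO report.

```python
# Hypothetical microtask economics; none of these numbers come from the
# ILO survey itself.
PAY_PER_TASK_USD = 0.05   # assumed payment per completed microtask
SECONDS_PER_TASK = 45     # assumed time to complete one task
UNPAID_OVERHEAD = 0.25    # assumed share of time spent searching for work

paid_tasks_per_hour = (3600 / SECONDS_PER_TASK) * (1 - UNPAID_OVERHEAD)
effective_hourly_wage = paid_tasks_per_hour * PAY_PER_TASK_USD
print(f"Effective wage: ${effective_hourly_wage:.2f}/hour")
# -> $3.00/hour under these assumptions, well below, for example, the
#    U.S. federal minimum wage of $7.25/hour
```

Because pay is framed per task rather than per hour, and time spent finding work goes unpaid, effective earnings easily fall below any local minimum wage, which is precisely what the ILO survey found.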
But without this kind
of work, AI systems won't function. The technical AI research community relies
on cheap, crowdsourced labor for many tasks that can't be done by machines.
Between 2008 and 2016, the term “crowdsourcing” went from
appearing in fewer than a thousand scientific articles to more than twenty
thousand, which makes sense, given that Mechanical Turk launched in 2005. But
during the same time frame, there was far too little debate about what ethical
questions might be posed by relying on a workforce that is commonly paid far
below the minimum wage.
Of course, there are strong incentives to ignore this dependency on underpaid labor from around the world. All the work these workers do, from tagging images for computer-vision systems to testing whether an algorithm is producing the right results, refines AI systems much more quickly and cheaply than alternatives such as paying students would.
The Myth of Clean Tech
Minerals
are the backbone of AI, but its lifeblood is still electrical energy.
Advanced computation is rarely considered in terms of carbon footprints, fossil
fuels, and pollution; metaphors like “the cloud” imply something floating and
delicate within a natural, green industry. Servers are hidden in nondescript
data centers, and their polluting qualities are far less visible than the
billowing smokestacks of coal-fired power stations. The tech sector heavily
publicizes its environmental policies, sustainability initiatives, and plans to
address climate-related problems using AI as a problem-solving tool. It is all
part of a highly produced public image of a sustainable tech industry with no
carbon emissions. In reality, it takes a gargantuan amount of energy to run
the computational infrastructures of Amazon Web Services or Microsoft’s Azure,
and the carbon footprint of the AI systems that run on those platforms is
growing.
As Tung-Hui Hu writes
in A Prehistory of the Cloud, “The cloud is a resource-intensive,
extractive technology that converts water and electricity into computational
power, leaving a sizable amount of environmental damage that it then displaces
from sight.” Addressing this energy-intensive infrastructure has become a
major concern. Certainly, the industry has made significant efforts to make
data centers more energy-efficient and to increase their use of renewable
energy. But already, the carbon footprint of the world's computational
infrastructure has matched that of the aviation industry at its height, and it
is increasing at a faster rate. Estimates vary, with researchers like
Lotfi Belkhir and Ahmed Elmeligi estimating that the
tech sector will contribute 14 percent of global greenhouse emissions by 2040,
while a team in Sweden predicts that the electricity demands of data centers
alone will increase about fifteenfold by 2030.
By looking closely at
the computational capacity needed to build AI models, we can see how the desire
for exponential increases in speed and accuracy is coming at a high cost to
the planet. The processing demands of training AI models, and thus their energy consumption, are still an emerging area of investigation. One of the early papers in this field came from AI researcher Emma Strubell and her team at the University of Massachusetts Amherst in 2019. Focusing on the carbon footprint of natural language processing (NLP) models, they began to sketch out potential estimates by running AI models over hundreds of thousands of computational hours. The initial numbers were striking. Strubell’s team found that training only a single NLP model produced more than 660,000 pounds of carbon dioxide emissions, the equivalent of five gas-powered cars over their total lifetimes (including their manufacturing) or 125 round-trip flights between New York and Beijing.
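To show the shape of such estimates, here is a minimal sketch of the standard accounting: energy drawn by the hardware, scaled by datacenter overhead and grid carbon intensity. The workload figures are assumptions for illustration, not Strubell’s measurements; the PUE and carbon-intensity constants follow commonly cited U.S. averages.

```python
# A minimal sketch of training-emissions accounting; the workload numbers
# are illustrative assumptions, not measurements from Strubell et al.
GPU_HOURS = 100_000         # assumed total GPU-hours for one training run
GPU_POWER_KW = 0.25         # assumed average power draw per GPU, kilowatts
PUE = 1.58                  # power usage effectiveness: datacenter overhead
CO2_LBS_PER_KWH = 0.954     # commonly cited U.S.-average grid intensity
CAR_LIFETIME_LBS = 126_000  # rough lifetime emissions of one car, fuel included

energy_kwh = GPU_HOURS * GPU_POWER_KW * PUE
emissions_lbs = energy_kwh * CO2_LBS_PER_KWH
print(f"{energy_kwh:,.0f} kWh -> {emissions_lbs:,.0f} lbs CO2 "
      f"(~{emissions_lbs / CAR_LIFETIME_LBS:.2f} car-lifetimes)")
```

The striking totals in the paper come from the same multiplication applied to far larger experiments, including neural architecture searches that train many model variants.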
Worse, the researchers noted that this modeling is, at minimum, an optimistic baseline estimate. It does not reflect the true commercial scale at which companies like Apple and Amazon operate, scraping internet-wide datasets and feeding them into their own AI systems. This computational infrastructure is made of the earth, and to keep it growing requires expanding resources and layers of logistics and transport that are in constant motion.
The dizzying
spectacle of logistics and production displayed by companies like Amazon would
furthermore not be possible without the development and widespread acceptance
of a standardized metal object: the cargo container. Like submarine cables,
cargo containers bind the industries of global communication, transport, and
capital, a material exercise of what mathematicians call “optimal transport,” in this case an optimization of space and resources across the trade routes of the world.
Here, too, the most
severe costs of global logistics are borne by the Earth’s atmosphere, the
oceanic ecosystem, and low-paid workers. The corporate imaginaries of AI fail to
depict the lasting costs and long histories of the materials needed to build
computational infrastructures or the energy required to power them. The
rapid growth of cloud-based computation, portrayed as environmentally
friendly, has paradoxically driven an expansion of the frontiers of resource
extraction. It is only by factoring in these hidden costs, these wider
collections of actors and systems, that we can understand what the shift toward
increasing automation will mean.
AI and algorithmic exceptionalism
Tung-Hui Hu, the author of A Prehistory of the Cloud (2015), describes the cloud as we know it as not just a technology but also a fantasy made by people. In fact, artificial intelligence is not an objective,
universal, or neutral computational technique that makes determinations
without human direction. Its systems are embedded in social, political,
cultural, and economic worlds, shaped by humans, institutions, and imperatives
that determine what they do and how they do it. They are designed to
discriminate, amplify, and encode narrow classifications. When applied in
social contexts such as policing, the court system, health care, and education,
they can reproduce, optimize, and amplify existing structural inequalities.
This is no accident: AI systems are built to see and intervene in the world in
ways that primarily benefit the states, institutions, and corporations that
they serve. In this sense, AI systems are expressions of power that emerge from
wider
economic and political forces created to increase profits and centralize
control for those who wield them. But this is not how the story of artificial
intelligence is typically told.
The standard accounts
of AI often center on a kind of algorithmic exceptionalism, the idea that
because AI systems can perform uncanny feats of computation, they must be
smarter and more objective than their flawed human creators. Consider the diagram DeepMind published of AlphaGo Zero, an AI program designed to play strategy games. It shows how the system “learned” to play the Chinese strategy game Go by evaluating more than a thousand options per move. In the paper announcing this development, the authors write: “Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance.” DeepMind cofounder Demis Hassabis has described these game engines as akin to an alien intelligence: “It doesn’t play like a human, but it also doesn’t play like computer engines. It plays in a third, almost alien way... It’s like chess from another dimension.” When the next iteration mastered Go within three days, Hassabis described it as “rediscovering three thousand years of human knowledge in 72 hours!”
The Go diagram shows
no machines, no human workers, no capital investment, no
carbon footprint, just an abstract rules-based system endowed with
otherworldly skills. Narratives of magic and mystification recur throughout AI's
history, drawing bright circles around spectacular displays of speed,
efficiency, and computational reasoning. It’s no coincidence that one of the
iconic examples of contemporary AI is a game.
The enchanted determinism of games without
frontiers
Games have been a preferred testing ground for AI programs since the 1950s. Unlike everyday life, games offer a closed world with defined parameters and clear victory conditions. AI’s historical roots stem from military-funded research during World War II in signal processing and optimization, which sought to simplify the world, rendering it more like a strategy game. A strong emphasis on rationalization and prediction emerged, along with a faith that mathematical formalisms would help us understand humans and society. Built on the belief that accurate prediction can be achieved by layering abstract representations of data on top of each other, enchanted determinism acquires an almost theological quality. That deep learning approaches are often uninterpretable, even to the engineers who created them, gives these systems an aura of being too complex to regulate and too powerful to refuse. As the social anthropologist F. G. Bailey observed, the technique of “obscuring by mystification” is often employed in public settings to argue for a phenomenon’s inevitability. We are told to focus on the innovative nature of the method rather than on what is primary: the purpose of the thing itself. Above all, enchanted determinism obscures power and closes off informed public discussion, critical scrutiny, or outright rejection.
Enchanted determinism
has two dominant strands, each a mirror image of the other. One is a form of
tech utopianism that offers computational interventions as universal solutions
applicable to any problem. The other is a tech dystopian perspective that
blames algorithms for their negative outcomes as though they are
independent agents, without contending with the contexts that shape them
and in which they operate. At an extreme, the tech dystopian narrative ends in
the singularity, or superintelligence, the theory that a machine intelligence
could emerge that will ultimately dominate or destroy humans. This view rarely
contends with the reality that so many people worldwide are already dominated
by systems of extractive planetary computation.
These dystopian and
utopian discourses are metaphysical twins: one places its faith in AI as a
solution to every problem, while the other fears AI as the greatest peril.
Each offers a profoundly ahistorical view that locates power solely within
technology itself. Whether AI is abstracted as an all-purpose tool or an
all-powerful overlord, the result is technological determinism. AI takes the
central position in society’s redemption or ruin, permitting us to ignore the
systemic forces of unfettered neoliberalism, austerity politics, racial
inequality, and widespread labor exploitation. Both the tech utopians and the dystopians frame the problem with technology always at the center, inevitably
expanding into every part of life, decoupled from the forms of power that it
magnifies and serves.
When AlphaGo defeats
a human grandmaster, it’s tempting to imagine that some otherworldly
intelligence has arrived. But there's a far simpler and more accurate
explanation. AI game engines are designed to play millions of games, run
statistical analyses to optimize for winning outcomes, and then play millions
more. These programs produce surprising moves uncommon in human games for a
straightforward reason: they can play and analyze far more games at a far
greater speed than any human can. This is not magic; it is statistical
analysis at scale. Yet the tales of preternatural machine intelligence
persist. Over and over, we see the ideology of Cartesian dualism in AI: the
fantasy that AI systems are disembodied brains that absorb and produce
knowledge independently from their creators, infrastructures, and the world at
large.
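As a toy illustration of that claim, here is a minimal sketch of move selection by brute statistical sampling, using the simple game of Nim rather than Go. It is an assumption-laden miniature, not DeepMind’s method, which pairs Monte Carlo tree search with deep neural networks; but it shows how “surprising” moves can fall out of nothing more than playing many random games and counting wins.

```python
import random

# Nim: players alternately take 1-3 stones; whoever takes the last stone wins.

def random_playout(stones: int, our_turn: bool) -> bool:
    """Finish the game with random moves; return True if we win."""
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return our_turn  # whoever just moved took the last stone
        our_turn = not our_turn

def best_move(stones: int, playouts: int = 5_000) -> int:
    """Pick the move whose random playouts win most often: statistics, not magic."""
    scores = {}
    for take in range(1, min(3, stones) + 1):
        if stones - take == 0:
            scores[take] = 1.0  # taking the last stone wins outright
            continue
        wins = sum(random_playout(stones - take, our_turn=False)
                   for _ in range(playouts))
        scores[take] = wins / playouts
    return max(scores, key=scores.get)

print(best_move(10))  # usually 2, leaving the opponent a multiple of 4
```

With enough playouts, the sampler converges on the mathematically optimal move without ever being told the theory of the game; scale the same idea up by many orders of magnitude, add learned evaluation functions, and the “alien” play of game engines looks far less otherworldly.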
Enchanted determinism
also explains newspaper headlines such as “Can algorithms prevent suicide?” or “Can AI read emotions?” Nobody has come close to such remarkable results, yet the dream that a machine can read emotions or stop people from ending their lives is stronger than ever. These illusions distract from the far more relevant questions: Whom
do these systems serve? What are the political economies of their
construction? And what are the wider planetary consequences?
The description of AI
as fundamentally abstract distances it from the energy, labor, and capital
needed to produce it and the many different kinds of mining that enable
it. Thus, for example, AI is also born from Nevada's
lithium mines, one of the many mineral extraction sites needed to power
contemporary computation. Mining is where we see the extractive politics of AI
at its most literal. The tech sector’s demand for rare earth minerals, oil, and
coal is vast, but the industry itself never bears this extraction's true costs.
On the software side, building models for natural language processing and computer vision is enormously energy-hungry. The competition to produce faster and more efficient models has driven computationally greedy methods that expand AI’s carbon footprint. From the last trees in Malaysia harvested to produce latex for the first transatlantic undersea cables to the giant artificial lake of toxic residues in Inner Mongolia, we can trace the environmental and human birthplaces of planetary computation networks and see how they continue to terraform the planet.
Consider the digital pieceworkers paid pennies on the dollar to click through microtasks so that data systems can seem more intelligent than they are; the Amazon warehouse employees who must keep in time with the algorithmic cadences of a vast logistical empire; the Chicago meat laborers on the disassembly lines where animal carcasses are cut apart and prepared for consumption; and the workers who are protesting against the way that AI systems increase surveillance and control for their bosses.
Labor is also a story
about time. Coordinating humans' actions with the repetitive motions of robots
and line machinery has always involved controlling bodies in space and time.
From the invention of the stopwatch to Google’s
TrueTime, the process of time coordination is at
the heart of workplace management. AI technologies both require and create the
conditions for ever more granular and precise mechanisms of temporal
management. Coordinating time demands increasingly detailed information about
what people are doing and how and when they do it.
Facial recognition and language prediction
We can add that all publicly accessible digital material, including data that is personal or potentially damaging, is treated as open to being harvested for the training datasets used to produce AI models. There are gigantic datasets full of people’s selfies, hand gestures, people driving cars, babies crying, and newsgroup conversations from the 1990s, all used to improve algorithms that perform such functions as facial recognition, language prediction, and object detection. When these collections of data are no longer seen as people’s personal material but merely as infrastructure, the specific meaning or context of an image or a video is assumed to be irrelevant. Beyond the serious issues of privacy and ongoing surveillance capitalism, the current practices of working with data in AI also raise ethical, methodological, and epistemological concerns.
Sociologist Karin Knorr Cetina’s notion of epistemic machinery helps describe how contemporary systems use labels to predict human identity, commonly relying on binary gender, essentialized racial categories, and problematic assessments of character and creditworthiness. A sign will stand in for a system, a proxy will stand for the real, and a toy model will be asked to substitute for the infinite complexity of human subjectivity. By looking at how classifications are made, we see how technical schemas enforce hierarchies and magnify inequity. Machine learning presents us with a regime of normative reasoning that, when in the ascendant, takes shape as a powerful governing rationality.
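As a small illustration of how such a schema hard-codes identity, here is a hypothetical sketch; the fields and categories below are invented for illustration and are not drawn from any real dataset.

```python
from enum import Enum

# A hypothetical label schema of the kind critiqued above: every person
# must be forced into fixed, pre-defined categories at encoding time.

class Gender(Enum):
    MALE = 0
    FEMALE = 1  # a binary; anyone outside it cannot be represented at all

def encode_person(record: dict) -> dict:
    """Flatten a person into the schema; ambiguity and context are discarded."""
    return {
        "gender": Gender(record["gender"]),            # must be 0 or 1, or crash
        "creditworthy": bool(record["creditworthy"]),  # a whole history -> one bit
    }

print(encode_person({"gender": 1, "creditworthy": 0}))
```

The point is that the schema, not the person, decides what can be said: whatever does not fit the predefined categories is either forced in or rejected.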
One could explore the history of affect recognition, the idea that facial expressions hold the key to revealing a person’s inner emotional state, or consider the psychologist Paul Ekman’s claim that there is a small set of universal emotional states that can be read directly from the face. Tech companies are now deploying this idea in affect recognition systems, part of an industry predicted to be worth more than seventeen billion dollars. But there is considerable scientific controversy around emotion detection, which is at best incomplete and at worst misleading. Despite the unstable premise, these tools are being rapidly implemented into hiring, education, and policing systems.
Or consider the ways in which AI systems are used as a tool of state power. The military past and present of artificial intelligence have shaped the practices of surveillance, data extraction, and risk assessment we see today. AI is a deep manifestation of highly organized capital, backed by vast systems of extraction and logistics, with supply chains that wrap around the planet; the interconnections between the tech sector and the military are now being reined in to fit a strong nationalist agenda. Meanwhile, the intelligence community’s extralegal tools have dispersed, moving from the military world into the commercial technology sector to be used in classrooms, police stations, workplaces, and unemployment offices. The military logics that have shaped AI systems are now part of the workings of municipal government, and they are further skewing the relation between states and subjects.
Or consider how artificial intelligence functions as a structure of power that combines infrastructure,
capital, and labor. From the Uber driver being nudged to the undocumented
immigrant being tracked to the public housing tenants contending with facial
recognition systems in their homes, AI systems are built with the logic of
capital, policing, and militarization, and this combination further widens the
existing asymmetries of power. These ways of seeing depend on the twin moves of
abstraction and extraction: abstracting away the material conditions of their
making while extracting more information and resources from those least able to
resist.
But these logics can
be challenged, just as systems that perpetuate oppression can be rejected. As
conditions on Earth change, calls for data protection, labor rights, climate
justice, and racial equity should be heard together. When these interconnected
movements for justice inform how we understand artificial intelligence,
different conceptions of planetary politics become possible.
In “Five real-world AI and machine learning trends that will make an impact in 2021,” it has been suggested that organizations without experience in analytics should consider getting an assessment of how to turn data into a competitive advantage. For example, the Advanced Analytics Lab at SAS offers an innovation and advisory service that provides guidance on value-driven analytics strategies, helping organizations define a roadmap that aligns with business priorities, from data collection and maintenance through analytics deployment to execution and monitoring, in order to fulfill the organization’s vision.
For updates click homepage here