By Eric Vandebroeck and co-workers

As seen in part one, since the mid-2000s artificial intelligence (AI) has rapidly expanded as a field in academia and as an industry. Now a small number of powerful technology corporations deploy AI systems at a planetary scale, with their systems hailed as comparable or even superior to human intelligence.

At the height of AI hype, many AI technology vendors claimed that their algorithms could ingest incomplete or poor-quality data and were still smart enough to find patterns and make accurate predictions. As we have seen in part one, this is not the case.

 

The new Mappa Mundi?

The label artificial intelligence, as we have seen, can be said to be a misnomer. How computers process data is not at all how humans interact with the world. The conclusions computers draw do not involve the wisdom, common sense, and critical thinking that are crucial components of human intelligence. "Artificial unintelligence" is more apt, in that even the most advanced computer programs are artificial but not intelligent.

Computers do not understand information (text, images, sounds) the way humans do. For example, our mind's flexibility allows us to handle ambiguity easily and to move back and forth between the specific and the general. We can also recognize things we have never seen before: a tomato plant growing in a wagon, a wagon tied to a kangaroo's back, an elephant swinging a wagon with its trunk. Humans can use the familiar to recognize the unfamiliar. Computers are a long way from being able to identify unfamiliar, indeed incongruous, images. Humans also notice unusual colors, but our flexible, fluid mental categories can accommodate color differences. For humans, the essence of a wagon is not its color: white wagons, red wagons, and red-and-white wagons are all still wagons. Not so for a pixel-matching computer program.
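To make the pixel-matching point concrete, here is a minimal sketch, using tiny made-up "images" and a crude distance function of my own devising (not any real vision system), showing how a pure pixel comparison judges a red wagon and a white wagon to be almost entirely dissimilar even though a human sees the same object in both:

```python
import numpy as np

# Hypothetical toy images: a "red wagon" and a "white wagon" as 2x2 RGB arrays.
# Same object to a human; very different raw pixel values to a machine.
red_wagon = np.array([[[200, 30, 30], [200, 30, 30]],
                      [[200, 30, 30], [200, 30, 30]]], dtype=float)
white_wagon = np.array([[[240, 240, 240], [240, 240, 240]],
                        [[240, 240, 240], [240, 240, 240]]], dtype=float)

def pixel_distance(a, b):
    """Mean absolute per-channel difference: a crude pixel-matching score."""
    return float(np.mean(np.abs(a - b)))

# A pure pixel matcher scores these images as highly dissimilar (about 153
# on a 0-255 scale), with no notion that both depict "a wagon."
print(pixel_distance(red_wagon, white_wagon))
```

The point of the sketch is only that raw pixel similarity carries no concept of "wagon"; category flexibility has to come from somewhere else entirely.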

Each way of defining artificial intelligence is doing work, setting a frame for how it will be understood, measured, valued, and governed. If AI is defined by consumer brands for corporate infrastructure, then marketing and advertising have predetermined the horizon. If AI systems are seen as more reliable or rational than any human expert, able to take the "best possible action," then it suggests that they should be trusted to make high-stakes decisions in health, education, and criminal justice. When specific algorithmic techniques are the sole focus, it suggests that only continual technical progress matters, with no consideration of the computational cost of those approaches and their far-reaching impacts on a planet under strain.

As we have seen, AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards.

The expanding reach of AI systems may seem inevitable, but this is contestable and incomplete. The underlying visions of the AI field do not come into being autonomously but instead have been constructed from a particular set of beliefs and perspectives. The chief designers of the contemporary atlas of AI are a small and homogeneous group of people, based in a handful of cities, working in an industry that is currently the wealthiest in the world. Like medieval European mappae mundi, which illustrated religious and classical concepts as much as coordinates, the maps made by the AI industry can in some ways also be seen as political interventions rather than neutral reflections of the world.

 

The information structure of AI

One of the less recognized facts of artificial intelligence is how many underpaid workers are required to help build, maintain, and test AI systems. This unseen labor takes many forms: supply-chain work, on-demand crowd work, and traditional service-industry jobs. Exploitative forms of work exist at all stages of the AI pipeline, from the mining sector, where resources are extracted and transported to create the core infrastructure of AI systems, to the software side, where distributed workforces are paid pennies per microtask. Mary Gray and Sid Suri refer to such hidden labor as "ghost work"; Lilly Irani calls it "human-fueled automation." These scholars have drawn attention to the experiences of crowd workers or microworkers who perform the repetitive digital tasks that underlie AI systems, such as labeling thousands of hours of training data and reviewing suspicious or harmful content. Workers do the repetitive tasks that backstop claims of AI magic, but they rarely receive credit for making the systems function.

Although this labor is essential to sustaining AI systems, it is usually very poorly compensated. A study from the United Nations International Labour Organization surveyed 3,500 crowd workers from seventy-five countries who routinely offered their labor on popular task platforms like Amazon Mechanical Turk, Figure Eight, Microworkers, and Clickworker. The report found that a substantial number of people earned below their local minimum wage, even though the majority of respondents were highly educated, often with specializations in science and technology. Not surprisingly, concerns arose regarding Mechanical Turk (MTurk) data quality, leading to questions about the utility of MTurk for psychological research.
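The arithmetic behind such below-minimum-wage findings is simple. A minimal sketch, with illustrative numbers rather than figures from the ILO report:

```python
# Back-of-envelope sketch of microtask pay (illustrative numbers only).
# Unpaid time spent searching for and qualifying for tasks drags the
# effective rate down further, which is one finding of wage studies.

def effective_hourly_wage(pay_per_task_usd, seconds_per_task,
                          unpaid_search_seconds=0):
    """Hourly earnings, optionally counting unpaid time between tasks."""
    total_seconds = seconds_per_task + unpaid_search_seconds
    return pay_per_task_usd * 3600 / total_seconds

# A $0.05 labeling task taking 60 seconds, plus 30 seconds of unpaid
# searching, works out to $2.00 per hour:
print(round(effective_hourly_wage(0.05, 60, 30), 2))  # → 2.0
```

Even doubling the per-task pay in this sketch leaves the worker far below most minimum wages, which is why per-task pricing so reliably produces sub-minimum earnings.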

But without this kind of work, AI systems won't function. The technical AI research community relies on cheap, crowdsourced labor for many tasks that can't be done by machines. Between 2008 and 2016, the term "crowdsourcing" went from appearing in fewer than a thousand scientific articles to more than twenty thousand, which makes sense, given that Mechanical Turk launched in 2005. But during the same time frame, there was far too little debate about the ethical questions posed by relying on a workforce that is commonly paid far below the minimum wage.

Of course, there are strong incentives to ignore this dependency on underpaid labor from around the world. All the work these workers do, from tagging images for computer-vision systems to testing whether an algorithm is producing the right results, refines AI systems far more quickly and cheaply than paying students to do it would.

 

The Myth of Clean Tech

Minerals are the backbone of AI, but its lifeblood is still electrical energy. Advanced computation is rarely considered in terms of carbon footprints, fossil fuels, and pollution; metaphors like "the cloud" imply something floating and delicate within a natural, green industry. Servers are hidden in nondescript data centers, and their polluting qualities are far less visible than the billowing smokestacks of coal-fired power stations. The tech sector heavily publicizes its environmental policies, sustainability initiatives, and plans to address climate-related problems using AI as a problem-solving tool. It is all part of a highly produced public image of a sustainable tech industry with no carbon emissions. In reality, it takes a gargantuan amount of energy to run the computational infrastructures of Amazon Web Services or Microsoft's Azure, and the carbon footprint of the AI systems that run on those platforms is growing.

As Tung-Hui Hu writes in A Prehistory of the Cloud, "The cloud is a resource-intensive, extractive technology that converts water and electricity into computational power, leaving a sizable amount of environmental damage that it then displaces from sight." Addressing this energy-intensive infrastructure has become a major concern. Certainly, the industry has made significant efforts to make data centers more energy-efficient and to increase their use of renewable energy. But already, the carbon footprint of the world's computational infrastructure has matched that of the aviation industry at its height, and it is increasing at a faster rate. Estimates vary, with researchers like Lotfi Belkhir and Ahmed Elmeligi estimating that the tech sector will contribute 14 percent of global greenhouse gas emissions by 2040, while a team in Sweden predicts that the electricity demands of data centers alone will increase about fifteenfold by 2030.

By looking closely at the computational capacity needed to build AI models, we can see how the desire for exponential increases in speed and accuracy is coming at a high cost to the planet. The processing demands of training AI models, and thus their energy consumption, are still an emerging area of investigation. One of the early papers in this field came from AI researcher Emma Strubell and her team at the University of Massachusetts Amherst in 2019. With a focus on trying to understand the carbon footprint of natural language processing (NLP) models, they began to sketch out potential estimates by running AI models over hundreds of thousands of computational hours. The initial numbers were striking. Strubell's team found that training just a single NLP model produced more than 660,000 pounds of carbon dioxide emissions, the equivalent of five gas-powered cars over their total lifetime (including their manufacturing) or 125 round-trip flights from New York to Beijing.
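Estimates of this kind are typically assembled from a few measurable quantities: hardware power draw, training time, datacenter overhead, and the carbon intensity of the electrical grid. A hedged sketch of that general method follows; every number here is an illustrative placeholder, not a figure from Strubell et al.'s paper:

```python
# Sketch of how training-emissions estimates are assembled from first
# principles. All parameter values below are illustrative assumptions.

def training_co2_lbs(gpu_count, hours, watts_per_gpu, pue, lbs_co2_per_kwh):
    """Estimate CO2 (lbs) emitted by a training run.

    pue: power usage effectiveness, the datacenter overhead multiplier
         (cooling, networking, etc.) applied on top of hardware draw.
    lbs_co2_per_kwh: carbon intensity of the local electrical grid.
    """
    kwh = gpu_count * hours * watts_per_gpu / 1000 * pue
    return kwh * lbs_co2_per_kwh

# Hypothetical run: 64 GPUs for 240 hours at 300 W each, PUE of 1.58,
# and a grid emitting ~0.95 lbs of CO2 per kWh:
print(round(training_co2_lbs(64, 240, 300, 1.58, 0.95)))  # → 6917 (lbs)
```

The structure of the calculation, energy multiplied by carbon intensity, is what makes the headline numbers so sensitive to where and how long a model is trained.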

Worse, the researchers noted that this modeling is, at minimum, an optimistic baseline estimate. It does not reflect the true commercial scale at which companies like Apple and Amazon operate, scraping internet-wide datasets to feed their models. AI's infrastructure is of the earth, and to keep it growing requires expanding resources and layers of logistics and transport that are in constant motion.

The dizzying spectacle of logistics and production displayed by companies like Amazon would furthermore not be possible without the development and widespread acceptance of a standardized metal object: the cargo container. Like submarine cables, cargo containers bind the industries of global communication, transport, and capital, a material exercise of what mathematicians call "optimal transport": in this case, an optimization of space and resources across the trade routes of the world.

Here, too, the most severe costs of global logistics are borne by the Earth's atmosphere, the oceanic ecosystem, and low-paid workers. The corporate imaginaries of AI fail to depict the lasting costs and long histories of the materials needed to build computational infrastructures or the energy required to power them. The rapid growth of cloud-based computation, portrayed as environmentally friendly, has paradoxically driven an expansion of the frontiers of resource extraction. It is only by factoring in these hidden costs, these wider collections of actors and systems, that we can understand what the shift toward increasing automation will mean.

 

AI and algorithmic exceptionalism 

Tung-Hui Hu, the author of A Prehistory of the Cloud (2015), describes the cloud as we know it as not just a technology but also a fantasy made by people. In fact, artificial intelligence is not an objective, universal, or neutral computational technique that makes determinations without human direction. Its systems are embedded in social, political, cultural, and economic worlds, shaped by humans, institutions, and imperatives that determine what they do and how they do it. They are designed to discriminate, amplify, and encode narrow classifications. When applied in social contexts such as policing, the court system, health care, and education, they can reproduce, optimize, and amplify existing structural inequalities. This is no accident: AI systems are built to see and intervene in the world in ways that primarily benefit the states, institutions, and corporations that they serve. In this sense, AI systems are expressions of power that emerge from wider economic and political forces, created to increase profits and centralize control for those who wield them. But this is not how the story of artificial intelligence is typically told.

The standard accounts of AI often center on a kind of algorithmic exceptionalism, the idea that because AI systems can perform uncanny feats of computation, they must be smarter and more objective than their flawed human creators. Consider this diagram of AlphaGo Zero, an AI program designed by Google's DeepMind to play strategy games. The image shows how it "learned" to play the Chinese strategy game Go by evaluating more than a thousand options per move. In the paper announcing this development, the authors write: "Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance." DeepMind cofounder Demis Hassabis has described these game engines as akin to alien intelligence. "It doesn't play like a human, but it also doesn't play like computer engines. It plays in a third, almost alien way... It's like chess from another dimension." When the next iteration mastered Go within three days, Hassabis described it as "rediscovering three thousand years of human knowledge in 72 hours!"

[Image: AlphaGo Zero diagram]

The Go diagram shows no machines, no human workers, no capital investment, no carbon footprint, just an abstract rules-based system endowed with otherworldly skills. Narratives of magic and mystification recur throughout AI's history, drawing bright circles around spectacular displays of speed, efficiency, and computational reasoning. It's no coincidence that one of the iconic examples of contemporary AI is a game.

 

The enchanted determinism of games without frontiers

Games have been a preferred testing ground for AI programs since the 1950s. Unlike everyday life, games offer a closed world with defined parameters and clear victory conditions. AI's historical roots stem from World War II military-funded research in signal processing and optimization that sought to simplify the world, rendering it more like a strategy game. A strong emphasis on rationalization and prediction emerged, along with a faith that mathematical formalisms would help us understand humans and society, and a belief that accurate prediction fundamentally requires reducing the world to abstract representations of data layered on top of one another. In this framing, enchanted determinism, the perception of AI systems as enchanted and beyond the known world yet deterministic in their discoveries, acquires an almost theological quality. That deep learning approaches are often uninterpretable, even to the engineers who created them, gives these systems an aura of being too complex to regulate and too powerful to refuse. As the social anthropologist F. G. Bailey observed, the technique of "obscuring by mystification" is often employed in public settings to argue for a phenomenon's inevitability. We are told to focus on the innovative nature of the method rather than on what is primary: the purpose of the thing itself. Above all, enchanted determinism obscures power and closes off informed public discussion, critical scrutiny, or outright rejection.

Enchanted determinism has two dominant strands, each a mirror image of the other. One is a form of tech utopianism that offers computational interventions as universal solutions applicable to any problem. The other is a tech dystopian perspective that blames algorithms for their negative outcomes as though they are independent agents, without contending with the contexts that shape them and in which they operate. At an extreme, the tech dystopian narrative ends in the singularity, or superintelligence, the theory that a machine intelligence could emerge that will ultimately dominate or destroy humans. This view rarely contends with the reality that so many people worldwide are already dominated by systems of extractive planetary computation.

These dystopian and utopian discourses are metaphysical twins: one places its faith in AI as a solution to every problem, while the other fears AI as the greatest peril. Each offers a profoundly ahistorical view that locates power solely within technology itself. Whether AI is abstracted as an all-purpose tool or an all-powerful overlord, the result is technological determinism. AI takes the central position in society's redemption or ruin, permitting us to ignore the systemic forces of unfettered neoliberalism, austerity politics, racial inequality, and widespread labor exploitation. Both the tech utopians and the tech dystopians frame the problem with technology always at the center, inevitably expanding into every part of life, decoupled from the forms of power that it magnifies and serves.

When AlphaGo defeats a human grandmaster, it's tempting to imagine that some otherworldly intelligence has arrived. But there's a far simpler and more accurate explanation. AI game engines are designed to play millions of games, run statistical analyses to optimize for winning outcomes, and then play millions more. These programs produce surprising moves uncommon in human games for a straightforward reason: they can play and analyze far more games at a far greater speed than any human can. This is not magic; it is statistical analysis at scale. Yet the tales of preternatural machine intelligence persist. Over and over, we see the ideology of Cartesian dualism in AI: the fantasy that AI systems are disembodied brains that absorb and produce knowledge independently from their creators, infrastructures, and the world at large.
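The statistical core of that explanation can be sketched in a few lines: estimate a move's value by simulating many games and counting wins. In this toy model the "game" is reduced to a fixed win probability, which is nothing like a real engine's learned policies and tree search, but it illustrates how sheer simulation volume, not insight, produces reliable estimates:

```python
import random

# Toy sketch of Monte Carlo playout statistics. A candidate move is
# modeled as having some true (unknown to the engine) win probability;
# the engine approximates it by brute repetition of simulated games.

def estimate_win_rate(win_probability, n_playouts, seed=0):
    """Simulate n_playouts games and return the observed win fraction."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    wins = sum(rng.random() < win_probability for _ in range(n_playouts))
    return wins / n_playouts

# With 100,000 playouts the estimate lands very close to the true 0.6;
# volume of play, at machine speed, is what makes the numbers sharp.
print(estimate_win_rate(0.6, 100_000))
```

No human could run a hundred thousand playouts per move, which is the whole asymmetry the "alien intelligence" rhetoric obscures.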

Enchanted determinism also explains newspaper headlines such as "Can algorithms prevent suicide?" or "Can AI read emotions?" Nobody has come close to such remarkable results, yet the dream that a machine can read emotions or stop people from ending their lives is stronger than ever. These illusions distract from the far more relevant questions: Whom do these systems serve? What are the political economies of their construction? And what are the wider planetary consequences?

The description of AI as fundamentally abstract distances it from the energy, labor, and capital needed to produce it and the many different kinds of mining that enable it. Thus, for example, AI is also born from Nevada's lithium mines, one of the many mineral extraction sites needed to power contemporary computation. Mining is where we see the extractive politics of AI at its most literal. The tech sector's demand for rare earth minerals, oil, and coal is vast, but the industry itself never bears the true costs of this extraction. On the software side, building models for natural language processing and computer vision is enormously energy-hungry. The competition to produce faster and more efficient models has driven computationally greedy methods that expand AI's carbon footprint. From the last trees in Malaysia that were harvested to produce latex for the first transatlantic undersea cables to the giant artificial lake of toxic residues in Inner Mongolia, we can trace the environmental and human birthplaces of planetary computation networks and see how they continue to terraform the planet.

Consider the digital pieceworkers paid pennies on the dollar to click on microtasks so that data systems can seem more intelligent than they are; the Amazon warehouse employees who must keep in time with the algorithmic cadences of a vast logistical empire; the Chicago meat laborers on the disassembly lines where animal carcasses are cut apart and prepared for consumption. And consider the workers who are protesting against the way that AI systems are increasing surveillance and control for their bosses.

Labor is also a story about time. Coordinating human actions with the repetitive motions of robots and line machinery has always involved controlling bodies in space and time. From the invention of the stopwatch to Google's TrueTime, the process of time coordination is at the heart of workplace management. AI technologies both require and create the conditions for ever more granular and precise mechanisms of temporal management. Coordinating time demands increasingly detailed information about what people are doing and how and when they do it.

 

Facial recognition and language prediction

It is now widely assumed that all publicly accessible digital material, including data that is personal or potentially damaging, is open to being harvested for the training datasets used to produce AI models. There are gigantic datasets full of people's selfies, hand gestures, people driving cars, babies crying, and newsgroup conversations from the 1990s, all used to improve algorithms that perform such functions as facial recognition, language prediction, and object detection. When these collections of data are no longer seen as people's personal material but merely as infrastructure, the specific meaning or context of an image or a video is assumed to be irrelevant. Beyond the serious issues of privacy and ongoing surveillance capitalism, the current practices of working with data in AI also raise ethical, methodological, and epistemological concerns.

Sociologist Karin Knorr Cetina's notion of epistemic machinery helps describe how contemporary systems use labels to predict human identity, commonly relying on binary gender, essentialized racial categories, and problematic assessments of character and creditworthiness. A sign will stand in for a system, a proxy will stand for the real, and a toy model will be asked to substitute for the infinite complexity of human subjectivity. By looking at how classifications are made, we see how technical schemas enforce hierarchies and magnify inequity. Machine learning presents us with a regime of normative reasoning that, when in the ascendant, takes shape as a powerful governing rationality.

One could explore the history of affect recognition, the idea that facial expressions hold the key to revealing a person's inner emotional state, or consider the psychologist Paul Ekman's claim that there is a small set of universal emotional states that can be read directly from the face. Tech companies are now deploying this idea in affect recognition systems, as part of an industry predicted to be worth more than seventeen billion dollars. But there is considerable scientific controversy around emotion detection, which is at best incomplete and at worst misleading. Despite the unstable premise, these tools are being rapidly implemented in hiring, education, and policing systems.

Or consider the ways in which AI systems are used as a tool of state power. The military past and present of artificial intelligence have shaped the practices of surveillance, data extraction, and risk assessment we see today. AI is a deep manifestation of highly organized capital, backed by vast systems of extraction and logistics; the interconnections between the tech sector and the military are now being reined in to fit a strong nationalist agenda. Meanwhile, the intelligence community's extralegal tools have dispersed, moving from the military world into the commercial technology sector to be used in classrooms, police stations, workplaces, and unemployment offices. The military logics that have shaped AI systems are now part of the workings of municipal government, and they are further skewing the relation between states and subjects.

Or consider how artificial intelligence functions as a structure of power that combines infrastructure, capital, and labor. From the Uber driver being nudged, to the undocumented immigrant being tracked, to the public housing tenants contending with facial recognition systems in their homes, AI systems are built with the logic of capital, policing, and militarization, and this combination further widens the existing asymmetries of power. These ways of seeing depend on the twin moves of abstraction and extraction: abstracting away the material conditions of their making while extracting more information and resources from those least able to resist.

But these logics can be challenged, just as systems that perpetuate oppression can be rejected. As conditions on Earth change, calls for data protection, labor rights, climate justice, and racial equity should be heard together. When these interconnected movements for justice inform how we understand artificial intelligence, different conceptions of planetary politics become possible.

In "Five real-world AI and machine learning trends that will make an impact in 2021," it has been suggested that organizations without experience in analytics should consider getting an assessment of how to turn data into a competitive advantage. For example, the Advanced Analytics Lab at SAS offers an innovation and advisory service that provides guidance on value-driven analytics strategies, helping organizations define a roadmap that aligns with business priorities, from data collection and maintenance through analytics deployment, execution, and monitoring, to fulfill the organization's vision.

 
