Cartography of generative AI
Estampa
The popularisation of artificial intelligence (AI) has given rise to imaginaries that invite alienation and mystification. At a time when these technologies seem to be consolidating, it is pertinent to map their connections with human activities and more-than-human territories. What set of extractions, agencies and resources allows us to converse online with a text-generating tool or to obtain images in a matter of seconds?
01. Generative AI tools are used to automate tasks such as writing or generating images. This automation is achieved not by programming the concrete steps to be taken, but by using examples. If we have many examples of a case, we can process them with statistical networks that configure themselves by analysing their recurring patterns. Whether it is words, pixels or sound frequencies, we can obtain a statistical model by analysing and exploring a training dataset. We could say that generative AI tools disassemble language (visual, textual) in order to reassemble it based on the calculation of probabilities. If until a few years ago these tools were trained to produce concrete expressions (the image of a face, text in a certain style), they now go beyond such specificity to produce many types and styles of content. This ability to generalise rests on the processing of much larger and more heterogeneous datasets, so as to respond to all kinds of prompts. As a result, the change of scale in generative AI has been so large that it has required new economies to be mobilised and has accelerated its reliance on ecosystems.
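The principle of disassembling language and reassembling it from probabilities can be illustrated with a deliberately minimal sketch. This is not how production models work (they rely on neural networks with billions of parameters), and the toy corpus is invented for illustration, but the logic is the same: count recurring patterns in examples, then generate new text by sampling from the resulting probabilities.

```python
import random
from collections import defaultdict, Counter

# Toy training set: the "examples" the model learns from.
corpus = "the map is not the territory and the territory is not the map".split()

# Count recurring patterns: which word tends to follow which.
transitions = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    transitions[current][following] += 1

def generate(start: str, length: int = 10) -> str:
    """Reassemble language by sampling each next word from the learned probabilities."""
    word, output = start, [start]
    for _ in range(length):
        counts = transitions.get(word)
        if not counts:
            break
        words, weights = zip(*counts.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```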
02. Datasets. The industrial compilation of training datasets is achieved by extracting content from the largest digital archive in existence: the Internet. The data is obtained through the automated 'scraping' of online content published and shared by millions of Internet users. The original motivation for this extraction did not foresee today's commercial exploitation by start-ups and platforms; it was driven by scholarly and non-commercial research. Now that these huge digital archives have been used to generate texts and images on demand, we are faced with a series of paradoxes and controversies within the cultural industries. If, on the one hand, the ideology of big data treats Internet content as a vast repository to be extracted, processed and automated, on the other, this extractivist drive is seen by other cultural actors as a massive privatisation of the creativity of millions of Internet users.
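A minimal sketch of the kind of automated scraping described above, using only Python's standard library. The URL is a placeholder; real crawls, such as those behind the large public web corpora used for training, operate at a vastly larger scale, across billions of pages.

```python
import re
from html.parser import HTMLParser
from urllib.request import urlopen

class TextExtractor(HTMLParser):
    """Collects the visible text of a page, discarding markup, scripts and styles."""
    def __init__(self):
        super().__init__()
        self.chunks, self._skip = [], False
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

url = "https://example.com/"  # placeholder page
html = urlopen(url).read().decode("utf-8", errors="ignore")

parser = TextExtractor()
parser.feed(html)
text = re.sub(r"\s+", " ", " ".join(parser.chunks))
print(text[:200])  # harvested text like this is appended, page after page, to a training corpus
```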
03. Statistical imitation models. The cultural industries have produced many of the images, texts and sounds that feed AI models, and they are in turn their main current and potential users. The work of photographers, designers, illustrators, musicians, composers, screenwriters, writers, developers, animators and filmmakers is being processed to train these statistical imitation models. Although the aesthetics of the generated content is, for now, not very different from that of image, photo or sound banks, the speed at which these automated services operate is unlikely to be matched by any human competitor. For this reason, the most widespread and precarious jobs in cultural production will be both the most disadvantaged by and the most dependent on generative AI.
04. Fine-tuning. While datasets are the raw material, they are not enough to achieve the current level of personalised interaction offered by online AI services. To make the models more user-friendly, the industry relies on pockets of micro-workers dedicated to refining them: scoring the answers they generate, tagging images or text, annotating, and other evaluation processes that involve cognitive work (often consisting of clicking on one of the options displayed on the screen). Large corporations outsource these services to third-party companies, which in turn export them to countries in the global south with high poverty rates, where the hourly cost of each worker becomes residual. Some of these outsourced companies have been documented operating in refugee camps, training displaced people in Lebanon, Uganda, Kenya and India to perform data micro-tasks, exploiting their economic hardship (Jones, 2022).
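A hypothetical sketch of what a single unit of this micro-work looks like as data. The field names are invented for illustration, but records of this shape, a prompt, two candidate answers and a worker's choice, are what human-feedback fine-tuning pipelines aggregate by the thousands.

```python
from dataclasses import dataclass

@dataclass
class PreferenceRecord:
    """One unit of outsourced micro-work: a worker picks the better of two model answers."""
    prompt: str
    answer_a: str
    answer_b: str
    chosen: str          # "a" or "b": the worker's click
    worker_id: str       # anonymised ID of the annotator
    seconds_spent: float # tasks are often paid per item, not per hour

record = PreferenceRecord(
    prompt="Explain what a data centre is.",
    answer_a="A data centre is a building full of servers...",
    answer_b="Data centres are clouds in the sky...",
    chosen="a",
    worker_id="annotator-0042",
    seconds_spent=11.4,
)

# Thousands of such records are aggregated to train a reward model that scores
# new answers, steering the base model during fine-tuning.
print(record.chosen)
```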
05. Filtering. At a later stage, it is necessary to filter the content generated by the large AI models. The most common tasks at this stage are those aimed at moderating so-called toxic content: hate, political controversy, extreme or explicit sex and violence, originally contained in the training datasets. This moderation work is carried out in the labour markets of countries such as Kenya or Uganda (or even in the large metropolises of the North, among migrant communities) and is dedicated to identifying and classifying texts and images depicting violence, murder, rape or child abuse. Behind the autonomous appearance of AI tools, then, we find several layers of human labour displaced to different geographies, precarised and rendered invisible by the technological innovation industry.
06. AI start-ups. This deployment of offshored human resources ultimately depends on generative AI start-ups (OpenAI, DeepMind, Anthropic and others), companies buoyed by the fetish value of AI models and by renewed waves of Silicon Valley tech speculation. Generative AI start-ups have established themselves around expertise and specialised research, but also as global players, articulating these micro-labour markets, allying with the big computing platforms and attracting financial capital. They are the stars of the current hype in the global digital innovation market, compensating for the profit slowdown of tech venture capital, conveniently stimulated by a discourse that focuses the AI debate on the danger of human extinction that it supposedly poses.
07. Public discourses. The AI panic discourse, channelled by institutes and philanthropic foundations and endorsed by the "visionary" leaders of start-ups, is amplified by the media and has the desired alarming effect on public opinion. At a time when these technologies are undergoing their first regulatory processes (the first AI law was passed by the European Parliament in early 2024), the agitation of existential-threat messages is intended to bolster the industry's demand that public administrations allow it to regulate itself. Meanwhile, in the context of a crisis of standardised formats of credibility, defined for some years now by the term "post-truth", social networks are already filling up with synthetic messages, images and texts generated by these tools. The automation of public discourse and its implications in a context of growing misinformation and political polarisation will be at the core of the media agenda in the coming years.
08. Computing. The emergence of the young AI industry would not have been possible without the alliance with the platforms of the big data wave (Microsoft, Google, Amazon, Meta and others). These technological giants, which have built their economic hegemony on the extraction and commercialisation of user data from online services, now have a computing infrastructure of planetary proportions. In their data centres, they process the vast amounts of images, text and sound extracted from the web. This is a task that can only be handled by specialised supercomputers: huge concentrations of dedicated servers working around the clock to train the latest update of an AI model, exponentially larger than the previous one.
09. Computing power. At the heart of these infrastructures lies a key device: the graphics processing unit (GPU). GPUs provide the computing power needed to accelerate machine learning workloads, a power that AI research discovered a decade ago in the graphics cards built for the demanding video game industry. These devices are in the hands of a few companies that enjoy a near monopoly worldwide (most notably Nvidia, which has consolidated its proprietary system). Their manufacture is outsourced to the semiconductor market, which is even more concentrated: the Taiwan Semiconductor Manufacturing Company (TSMC) produces 90% of the most advanced chips and depends on the lithography equipment of the Dutch company ASML. This industrial conglomerate ultimately manufactures the core server components for more than 8,000 data centres around the world.
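Why GPUs matter here can be shown with a minimal sketch, assuming the PyTorch library and, for the second measurement, an available CUDA device: it times the same large matrix multiplication, the basic operation of neural network training, on CPU and GPU.

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time one large matrix multiplication, the core workload of model training."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")  # typically far faster on dedicated hardware
else:
    print("No CUDA GPU available on this machine.")
```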
10. Raw materials. The semiconductor chips that power servers, as well as the mobile devices we use to share information, are the end product of this complex conglomeration of investments, manufacturers and equipment. But synthesising the tiny integrated circuits, as well as the batteries, power supplies, power distribution units and other components of electronic devices, requires large quantities of metals, minerals and other raw materials. Jennifer Gabrys, a professor specialising in the materiality of digital media, details the resources needed to make a microchip: "To produce a two-gram memory microchip, 1.3 kilograms of fossil fuels and materials are required. In this process, just a fraction of the material used to manufacture microchips is actually contained in the final product, with as much as 99 percent of materials used discarded during the production process. Many of these discarded materials are chemicals–contaminating, inert, or even of unidentified levels of toxicity" (Gabrys, 2011). The supply chain that links the clean rooms of technological innovation with the extraction of these minerals is shrouded in a veil of convenient opacity, facilitated by companies and intermediate suppliers that do not certify the origin of the materials they work with.
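A small back-of-the-envelope calculation using only the figures quoted from Gabrys makes the disproportion explicit: the inputs to a single memory chip outweigh the chip itself by orders of magnitude.

```python
# Figures quoted from Gabrys (2011): a 2-gram memory chip requires 1.3 kilograms
# of fossil fuels and materials, and as much as 99% of the material is discarded.
chip_mass_g = 2
inputs_g = 1300
discarded_share = 0.99

ratio = inputs_g / chip_mass_g
waste_g = inputs_g * discarded_share

print(f"Inputs per chip: {ratio:.0f} times its own weight")   # 650x
print(f"Material discarded per chip: about {waste_g:.0f} g")  # ~1,287 g
```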
11. The mining industry that supplies the major digital hardware manufacturers spans the globe, but is concentrated in countries of the global south.
I. Manufacturers use copper in the most powerful chips because of its higher electrical conductivity. One of the poles of copper mining is found in the South American countries of the Pacific coast, mainly Chile and Peru. In the south of Peru lies the so-called "mining corridor" (exploited by the Chinese company MMG Ltd, the Swiss company Glencore and the Canadian company Hudbay). In Peru, mineral exports are one of the pillars of the economy, but also one of the main sources of conflict, owing to inequalities in the distribution of mining revenues and to the health problems caused among the local population by water pollution.
II. Another conductive material prized by the industry is gold. It is used in the production of smartphones, computers and servers, and part of the supply chain of the major technology platforms imports it from Brazil, where 28% of extraction is illegal. Although Brazilian law does not allow mining on indigenous land, illegal gold mining in the Brazilian Amazon has skyrocketed since 2019. Researchers have documented tens of thousands of small-scale miners and more than 320 illegal mines, and the actual numbers are likely much higher. Small-scale gold mining has led to widespread deforestation and high levels of mercury pollution (Manzolli, 2021).
III. Battery production depends on one key component: lithium, and Chile is one of the world's leading producers of this coveted mineral. The Salar de Atacama, an area nearly four times the size of Santiago de Chile, is home to one of the world's largest lithium mines. Rising demand increasingly affects local communities, threatening their access to water and degrading the region's unique biodiversity.
IV. The production of lithium batteries also requires cobalt, and almost half of the world's reserves of this mineral are concentrated in Africa, mainly in the militarised mines of the Democratic Republic of the Congo, where the use of child labour and violations of the most basic human rights have been documented. In all these cases the same pattern is repeated: foreign companies negotiate the exploitation of the land with a local elite, sidelining the interests of local communities. In this sense, the private supercomputing industry can be understood as having been built on the colonial foundations of resource extraction in the countries of the global south.
12. Energy. Since generative AI entered the public consciousness, the technology has placed unprecedented demands on power. The surge in investment, applications and media coverage in recent years has multiplied the power requirements of the servers housed in data centres. Today, a single data centre can consume as much electricity as 50,000 homes. And AI has only deepened this energy dependency: whereas a rack of servers consumed 5–10 kilowatts three years ago, today's dedicated AI racks require more than 60 kilowatts. This sudden jump entails investments in new equipment and huge energy costs, and part of the additional power is currently covered by diesel generators (Pasek, 2023).
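The jump can be made concrete with a short calculation based only on the figures cited above, assuming continuous operation over a year.

```python
# Figures cited above: a conventional rack drew 5-10 kW three years ago,
# while a dedicated AI rack now requires more than 60 kW (Pasek, 2023).
hours_per_year = 24 * 365          # racks run around the clock

conventional_kw = 10               # upper bound of the older range
ai_rack_kw = 60

conventional_mwh = conventional_kw * hours_per_year / 1000
ai_rack_mwh = ai_rack_kw * hours_per_year / 1000

print(f"Conventional rack: ~{conventional_mwh:.0f} MWh/year")     # ~88 MWh
print(f"Dedicated AI rack: ~{ai_rack_mwh:.0f} MWh/year")          # ~526 MWh
print(f"Increase factor: {ai_rack_mwh / conventional_mwh:.0f}x")  # 6x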
13. Fossil fuels. The electricity used by data centres is estimated to account for 0.3% of total carbon emissions, and when personal connected devices such as laptops, smartphones and tablets are included, the total rises to 2% of global carbon emissions (Monserrate, 2022). The continued growth of computing infrastructure, with the number of data centres projected to grow by 10% per year (Espinoza, Aronczyk, 2021), is unsustainable in a future defined by global warming and species extinction. Most of the electricity consumed by data centres comes from fossil fuels, and although the platforms are investing heavily in reducing their emissions, they do not necessarily do so by avoiding fossil fuels (the digital platforms themselves are increasingly becoming partners of the oil and gas industry), but by exploiting the carbon offset economy, i.e. investing in forestry or wind projects that are often more symbolic than real.
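A brief illustration of what sustained 10% annual growth implies: compounding at that rate doubles the installed base in just over seven years.

```python
import math

annual_growth = 0.10  # projected 10% growth in data centres per year (Espinoza, Aronczyk, 2021)

# Years needed for the installed base to double at this compounding rate.
doubling_time = math.log(2) / math.log(1 + annual_growth)
print(f"Doubling time: {doubling_time:.1f} years")  # ~7.3 years

# Size of the installed base relative to today, over two decades.
for years in (5, 10, 20):
    print(f"After {years:2d} years: {(1 + annual_growth) ** years:.1f}x today's data centres")
```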
14. Carbon offset. A large proportion of carbon offset projects are located on lands managed by indigenous communities around the world. These are places that still have low rates of deforestation and that attract investment because of their regenerative capacity. Land-use decisions often exclude the interests of the communities themselves, leading to conflict or displacement (Kramarz et al., 2021). Moreover, the carbon market does not solve the emissions problem; it sustains the utopia of infinite growth without consequences for the computing industry and reduces the climate problem to the buying and selling of solutions. Similarly, investment in energy efficiency and renewable energy has its limits. Large solar and wind farms need viable sites, and these are not free of community conflict. New green power plants will not be enough to achieve decarbonisation if they must also meet the growing demand for computing and, more fundamentally, they will not be able to sustain the computing loads envisaged by the emerging AI platforms. This evidence is not lost on their CEOs, who have already begun to invest in the nuclear fission industry. The proliferation of large generative AI models requires ever more computing and ever more energy to power it. Meanwhile, the carbon footprint of planetary computing has already surpassed that of the airline industry.
15. Heat. The environmental footprint of AI is not limited to carbon emissions. The digital industry cannot function without generating heat. Processing digital content raises the temperature of the rooms that house server racks in data centres, and unchecked heat puts the proper functioning of the equipment at risk, so it must be constantly evacuated. To contain this thermodynamic threat, data centres rely on air conditioning, equipment that consumes more than 40% of a centre's electricity (Weng et al., 2021). But this is not enough: as the additional power drawn to accommodate AI generates more heat, data centres also need alternative cooling methods, such as liquid cooling systems. Servers are connected to pipes carrying cold water, pumped from large neighbouring stations and returned to cooling towers, where large fans dissipate the heat and fresh water is drawn in. According to Google, this water consumption ranges from 4 to 9 litres per kWh consumed by the servers, a significant amount for a company that is usually more attentive to sustainability than many of its peers. Water consumption in the company's data centres has increased by more than 60% in the last four years, an increase that parallels the rise of generative AI.
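A rough, illustrative extrapolation, not a measured figure, can be made by combining two numbers cited in this text: Google's 4 to 9 litres of water per kWh and the roughly 60 kW draw of a dedicated AI rack mentioned above.

```python
# Figures cited in this text: 4-9 litres of water per kWh (Google)
# and a dedicated AI rack drawing around 60 kW.
rack_kw = 60
litres_per_kwh_low, litres_per_kwh_high = 4, 9

kwh_per_day = rack_kw * 24
low = kwh_per_day * litres_per_kwh_low
high = kwh_per_day * litres_per_kwh_high

print(f"One AI rack, one day: {kwh_per_day} kWh")
print(f"Estimated cooling water: {low:,.0f} to {high:,.0f} litres per day")
```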
16. Water. The construction of new data centres puts pressure on local water resources and compounds the water scarcity caused by climate change. Droughts lower groundwater levels in particularly water-stressed areas, and conflicts between local communities and the interests of the platforms are beginning to emerge. In 2023, Montevideo residents suffering from water shortages staged a series of protests against plans to build a Google data centre. Faced with the controversy over high consumption, the PR teams of Microsoft, Meta, Amazon and Google have committed to being "water positive" by 2030, a commitment based partly on investments in closed-loop systems, but also on recovering water elsewhere to compensate for the inevitable consumption and evaporation in cooling systems.
17. Waste. In data centres, air conditioners, transformers, batteries and power supplies are regularly removed and disposed of when their warranties expire. The dismantled equipment then joins the stream of electronic waste. This waste is difficult to recycle and, despite some local initiatives in Europe, is rarely reused, at least in the field of high-performance computing. With more than three devices per person worldwide and an average lifespan of under two years, the digital gadget market, driven by the fetish for the latest innovation or upgrade, generates waste constantly. We currently produce an average of 7.3 kg of e-waste per person per year, 82.6% of which ends up in landfill or is informally recycled (Forti et al., 2020). The recycling of the raw materials contained in e-waste is a largely unregulated market based on exports to third countries (64% of the e-waste from European recycling centres is sent to Africa).
18. Fossils. Tens of thousands of tonnes of electronic waste each year, which will take millennia to decompose, end up in informal dumps such as Agbogbloshie in Ghana, where hazardous materials are incinerated, exposing the people who survive by recycling them to toxic fumes and radioactive elements. Mercury, copper, lead and arsenic leach into the soil and waterways, accumulating harmful chemicals in the ecosystem and its food chains. Despite growing awareness and the gradual introduction of new regulations, the waste of the digital industry is one of the most visible signs of our fossil legacy: the resources, minerals, metals and energy invested in computing devices end up forming a particular sediment that will persist in deep geological time.
19. Aesthetic turn. All these infrastructures, extractions, transformations, investments, externalisations, computations, statistical models and labour markets interrelate and ultimately shape what is known as generative AI. It is a socio-technical phenomenon that has emerged from the dominance of probability as an epistemological model to face the challenges of the present. The implementation of these statistical tools in more and more contexts of human activity has taken a particular turn in recent years. If initially they were mainly used to track, extract and analyse information from the content of networked communication, this analysis has begun to be used to synthesise the very forms of communication. In this sense, the phenomenon of generative AI can be understood as an aesthetic turn. If the mimicry of human cognitive capacities has driven machine learning research to the present day, this aesthetic turn has directed research towards areas more characteristic of human expression and creativity. The trajectory of this research has mutated in several ways in recent years.
20. Scale. In general terms, generative AI research has moved from academia and science to industry, sparking a wave of economic speculation. In the process, several changes of scale have been triggered. First, the scale of computation: from 2012, when the AlexNet project first used GPUs to win an image recognition competition, to 2024, when Nvidia announced that it would triple production of its latest generation of GPUs to 2 million units, a shift that affects and involves the entire supply chain described above. Second, the scale of the datasets: although the large platforms now also seem interested in smaller-scale models, the focus in recent years has been on generalist task automation, which means that, in addition to being large, a dataset must contain a great variety of records. To achieve this diversity, the models have had to be trained by crawling and processing an immeasurable proportion of web-accessible content. In parallel with these changes of scale, the privatisation of tools and knowledge in the field has increased. Most AI tools are open source, and even those that are not are often based on publicly available academic papers, so sooner or later someone will make a free version. As the models grow ever larger, however, the barrier to entry for researchers becomes harder to overcome. When the GPT-2 text generation model, ChatGPT's predecessor, appeared, anyone with the necessary knowledge and a moderately powerful computer could download the network and train it on their own dataset. The next generation, GPT-3, with more parameters and greater capacity, was offered only as a closed model, with use and fine-tuning confined to the platform's servers. This paradigm shift has marked the consolidation and acceptance of these tools in recent years, along with the promotion of user interfaces and subscription-based payment systems. Commercial agreements between start-ups and platforms such as Microsoft, Google and Amazon have turned generative AI techniques into infrastructures in just a few years, in an unprecedented exponential escalation in which what is truly generative is not the synthetic content they provide, but the generation of equipment, ancillary industries and environmental impacts that each new update brings.
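The contrast between the two paradigms can be sketched briefly, assuming the open-source transformers library: GPT-2's weights can still be downloaded and run (or fine-tuned) on a moderately powerful personal computer, whereas its successors are reachable only through a metered, closed API.

```python
# A sketch of the older, open paradigm, assuming the Hugging Face `transformers`
# library: GPT-2's weights are downloaded to the local machine and run there.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # ~500 MB of weights, stored locally

inputs = tokenizer("A map of generative AI shows", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Its successors, by contrast, never leave the provider's servers: every prompt
# becomes a billed request to a closed model behind an API key.
```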
21. Counter-mapping. The set of relationships presented here forms a mosaic that is difficult to grasp, because it links objects and kinds of knowledge of different natures and scales. The discourses surrounding AI often carry a strong mythic charge and are accompanied by a series of recurring metaphors and imaginaries: algorithmic agencies detached from human action, a non-negotiable technology that imposes the future on us, the universality of data, or the ability to produce models free of bias or worldviews. The discourses that surround these technologies, whether specialised or popular, end up shaping them in one way or another. For this reason, the Cartography of Generative AI project is motivated by the desire to offer a conceptual map covering a large part of the actors and resources involved in this complex and multifaceted object we call generative AI. Drawing on a long genealogy of critical cartographies dedicated to contesting the function of maps as producers of hegemonic truths, this visualisation aims to map the phenomenon while taking into account the tensions, controversies and ecosystems that make it possible.
Crawford, K. (2021). Atlas of AI: power, politics, and the planetary costs of artificial intelligence. New Haven, Yale University Press.
Espinoza, M. I., Aronczyk, M. (2021). Big data for climate action or climate action for big data? Big Data & Society, 8(1).
Forti, V., Baldé, C. P., Kuehr, R., Bel, G. (2020). The Global E-waste Monitor 2020: Quantities, flows and the circular economy potential. United Nations University (UNU).
Gabrys, J. (2011). Digital Rubbish: A Natural History of Electronics. Ann Arbor, University of Michigan Press.
Monserrate, S. G. (2022). “The Cloud Is Material: On the Environmental Impacts of Computation and Data Storage.” MIT Case Studies in Social and Ethical Responsibilities of Computing, Winter 2022 (January).
Hogan, M. (2021). “The data center industrial complex”. In: Jue, M., Ruiz, R. (eds) Saturation: An Elemental Politics. Durham, NC, Duke University Press, 283–305.
Jones, P. (2022). Work Without the Worker: Labour in the Age of Platform Capitalism. Verso Books.
Kramarz, T., Park, S., Johnson, C. (2021). “Governing the dark side of renewable energy: A typology of global displacements”, Energy Research & Social Science, vol 74.
Manzolli, B. et al. (2021). Legalidade da produção de ouro no Brasil. IGC/UFMG.
Pasek, A. (2023). “How to Get Into Fights With Data Centers: Or, a Modest Proposal for Reframing the Climate Politics of ICT.” White Paper. Experimental Methods and Media Lab, Trent University, Peterborough, Ontario.
Weng, C., Wang, Z., Xiang, J., Chen, F., Zheng, S., Yu, M. (2021). “Numerical and experimental investigations of the micro-channel flat loop heat pipe (MCFLHP) heat recovery system for data centre cooling and heat recovery”, Journal of Building Engineering, vol 35.
About
Estampa is a collective of programmers, filmmakers and researchers working in the fields of audiovisual media and digital environments. Our practice is based on a critical and archaeological approach to audiovisual technologies, on researching the tools and ideologies of artificial intelligence and on the resources of experimental animation.
Freely licensed images from the following authors have been used to create the cartography: macrovector, storyset, fullvector, slidesgo, freepik.
This work was supported by the grants for research and innovation in the visual arts of the Generalitat de Catalunya - Oficina de Suport a la Iniciativa Cultural (OSIC).