The Fetish of Autonomy: AI, Hidden Labor, and the Political Geography of the Digital World Order
Richard Kirk (University of California, Los Angeles)
“The worker becomes all the more a mere appendage of the machine…”
—Karl Marx and Friedrich Engels, The Communist Manifesto, 1848
From OpenAI’s ChatGPT to Google’s Gemini, from facial recognition tools in airports to AI hiring assistants and predictive policing systems, artificial intelligence is increasingly presented as an autonomous force—intelligent, immaterial, totally unburdened by human labor. This framing is not only seductive; it is ideological, inviting us to imagine that “thinking machines” are replacing human work through leaps in technical capacity alone. This is factually wrong. Current AI systems function only through the exploitation of human labor—often invisibilized, devalued, and racialized—and the appropriation of human cognitive and affective capacities on a planetary scale (Gray and Suri 2019; Tubaro et al. 2020).
I begin with a straightforward claim: AI is fetishized. That is, it is treated as autonomous, intelligent, and self-generating—though it relies fundamentally on both living and dead labor. This fetishization is ideological, obscuring the relations of production that make AI possible. I amplify a materialist rejoinder to both techno-optimist and liberal reformist visions of automation. The intervention lies in refusing automation’s dominant capitalist ideological frame. Scholars like Crawford (2021), Birhane (2021), and Pasquale (2015) have revealed that AI marks not the obsolescence of labor but its mutation—its intensification through invisibilization. The linkage between AI development and ghost work (Gray and Suri 2019), data colonialism (Couldry and Mejias 2019), and feminized digital labor (Casilli 2019) reframes automation as a project of labor, not a merely technical one. Recent AI iterations do not replace human work but obscure, redistribute, and devalue it.
Yet this dynamic also reveals something more: AI is a manifestation of capitalism’s internal contradictions—its relentless drive to eliminate labor while remaining dependent on it, its fetishization of value while devaluing those who create it (Marx 1976). These contradictions are sharpening. While the current reality of AI involves vast amounts of ghost work and exploitation, one can envision a future in which AI becomes truly generalized—capable of performing even this hidden labor on its own. Even ghost work, then, is rendered obsolete. And at such a juncture, we face a historical fork in the road: the consolidation of a necropolitical capitalist hellscape of managed surplus populations and extreme inequality (Kirk 2024; Mbembe 2003; Peters et al. 2025), or the possibility of a socialist restructuring—one involving redistributive mechanisms like universal basic income. Or, perhaps, this hellscape will arrive first, acting as a pressure-cooker for an inevitable proletarian revolution.
AI, in this sense, is not merely a technology. It is a fetish object and a crucible of crisis—expressing capitalism’s desire for laborless accumulation, even as it destabilizes the very foundations on which capitalist social relations depend. The question is whether this contradiction will merely deepen inequality—or open the door to something else.
In what follows, I briefly trace the layered dependencies of AI on human labor, spatially organized across global circuits of production. I emphasize the role of ghost work and data annotation, the global division of digital labor, and the extractive logic of data colonialism. I conclude by calling for a political geography of AI that not only demystifies its fetishized presentation but also situates it within the evolving geopolitical order. AI is a manifestation of capitalism’s internal contradictions that may well propel us into a post-capitalist future—but only if we do not allow it, as a capitalist project, to destroy us first.
The Labor Beneath the “Intelligence”
At the heart of contemporary AI systems—particularly large language models and computer vision tools—lies a process of training and refining vast algorithmic systems through labeled data. These labels do not appear magically. They are produced through what Mary Gray and Siddharth Suri (2019) call “ghost work”: the behind-the-scenes labor of data cleaning, content moderation, annotation, and tagging. Millions of workers, often underpaid and invisible, are tasked with deciding whether a post violates community guidelines, identifying objects in images, or evaluating chatbot responses for bias, offensiveness, or inaccuracy.
These workers are employed via platforms like Amazon Mechanical Turk, Scale AI, Appen, Remotasks, and Clickworker, and are asked to perform repetitive and often traumatic labor (e.g. screening violent or pornographic content, labeling surveillance footage, or identifying hate speech) without access to labor protections, psychological support, or collective bargaining rights (Roberts 2019). Their work is modular, algorithmically tracked, and governed by rating systems that determine future access to tasks. In short, they are made algorithmically precarious. This is not a labor market replacing work with automation but one that reorganizes and devalues work under the illusion of automation.
That illusion is not self-generated. It is actively cultivated by tech executives, startup founders, government contractors, and media figures—those with material interests in maintaining the fantasy of autonomous AI. The AI model appears intelligent only because thousands of humans have already evaluated and shaped its outputs, with workers’ surplus value abstracted and commodified through the digital platform. Their labor produces the value that makes AI systems profitable, yet the “intelligence” is attributed not to workers but to the machine itself.
Planetary Infrastructures and the Global Division of Digital Labor
While AI is often described in the language of the cloud—evoking weightlessness and placelessness—it is deeply embedded in material infrastructures and global geographies of labor (Crawford 2021; Graham 2020). Ghost work is not evenly distributed. It flows along familiar colonial and racial-capitalist lines, shaped by disparities in labor cost, infrastructure, and regulation. Kenya, the Philippines, India, Venezuela, and Brazil are major hubs for data annotation work, content moderation, and call center outsourcing (Berg et al. 2018; Gray and Suri 2019). These regions are attractive to tech firms not because of a surplus of “AI talent”, but because of weak labor protections, favorable exchange rates, and a young, digitally connected workforce. In 2023, Time Magazine exposed how OpenAI subcontracted content moderation work to Sama, a firm based in Kenya, where workers earning as little as $1.32 per hour were tasked with filtering disturbing content—including scenes of murder, sexual abuse, and suicide—from the training data that would be used to “cleanse” ChatGPT (Perrigo 2023).
Indeed, this geography of production mirrors patterns long established under global capitalism: the Global South as a site of extraction and exploitation, the Global North as a site of capital accumulation (Arrighi 2001). In this way, AI development follows the logic of the maquiladora or the special economic zone: flexible, border-spanning production that depends on legal arbitrage and racialized labor discipline. What appears as technical innovation is in fact the recombination of old colonial patterns of value extraction, dressed in new technological garb.
Here, too, Marx’s (1976) concept of “dead labor” becomes crucial: the servers, datasets, and architectures that undergird AI systems are themselves the accumulated result of past labor. These material traces—accumulated in code, infrastructure, and capital—anchor the illusion of AI’s autonomy. Contemporary AI relies not only on living workers but on the vast sedimentation of dead labor that renders those workers interchangeable, extractable, and globally mobile.
Data Colonialism and the Appropriation of Human Experience
Alongside ghost work, AI systems depend on the mass appropriation of human behavioral data: images, text, video, voice, facial expressions, search queries, and biometric signals. This data is scraped, aggregated, and repurposed without consent, becoming raw material for predictive models. Nick Couldry and Ulises Mejias (2019) provocatively term this process “data colonialism”—a new form of extraction which treats human life itself as a free resource for capital accumulation.
Via data colonialism, tech corporations extend capitalist logics of enclosure into the digital realm. Language models like GPT-4 are trained on billions of words produced in unpaid public forums—Reddit threads, Wikipedia entries, news articles, books—without compensating the writers or acknowledging their labor (Bender et al. 2021). Facial recognition tools are trained on massive image datasets compiled from social media and surveillance footage, often involving the biometric data of racialized populations without consent (Benjamin 2019; Crawford 2021).
These data forms, too, are repositories of dead labor—residual traces of human work, expression, and sociality that are now expropriated and enclosed within private AI models. These practices raise questions about surveillance, consent, and the ownership of meaning, language, and culture. Who “owns” the collective labor of producing public knowledge and communication? Who profits from its enclosure into proprietary models?
The Fetish of the Machine
If AI systems are reliant on human labor, planetary infrastructure, and exploitative data practices, why do they appear autonomous, intelligent, and value-neutral? Here we must turn to Marx’s concept of the fetishism of the commodity—the process by which “the commodity-form, and the value-relation of the products of labour within which it appears, have absolutely no connection with the physical nature of the commodity and the material relations arising out of this” (1976: 165). This is to say that the social relations of labor are obscured and replaced by the appearance of intrinsic value in the product. AI is not only a commodity but a fetish object: it appears to possess independent intelligence, despite being wholly dependent on labor.
AI systems, like the factory in Marx’s (1976) analysis, are lifeless mechanisms animated only through the structured activity of living labor. To attribute value or generative capacity to AI itself is to fall into the trap of appearances that Marx warns against—to mistake the accumulated dead labor embedded in algorithms and infrastructure for autonomous intelligence. But the illusion is not incidental. As Kirsch and Mitchell (2004) emphasize, the networks that turn social relations into ossified things—whether machines, data models, or workplaces—are assembled through the logic and necessity of capital accumulation. There is social intentionality in how these systems are built and deployed: not only do they reflect the priorities of capital, but they are designed to dominate labor. This is the deeper meaning of Marx’s claim that “[c]apital is dead labour which, vampire-like, lives only by sucking living labour” (1976: 342). AI, too, comes to appear as an actor in its own right, obscuring the capitalist relations which give it form and function.
This fetishism has profound ideological consequences. It allows corporations to present AI as a solution to social problems—from teacher shortages to housing allocation—while masking the social relations that produce those problems in the first place. It legitimizes layoffs and public sector austerity under the guise of “technological progress”. It converts political questions about justice into technical problems to be solved via algorithmic optimization.
A Political Geography of AI
What would it mean to de-fetishize AI and recover the social relations and geographies that make it possible? First, it means recognizing AI not as a technical marvel but as a social and spatial arrangement—a system of labor, infrastructure, and power rooted in global capitalism.
Second, it requires tracing the geopolitical order emerging around AI. The US enforces chip export bans on China to stifle its advancement, while Gulf states like the UAE court US tech firms and normalize ties with Israel to gain access to cutting-edge AI systems. Trump’s recent visit to the Gulf to broker AI deals is emblematic of this alignment between digital capital and geopolitical power. At the same time, Chinese firms like DeepSeek challenge US dominance, offering a competing model of technocratic sovereignty. The AI frontier is not only computational. It is geopolitical.
Third, it means centering the workers whose labor underwrites AI. From data annotators in Nairobi to moderators in Manila, these workers are not marginal to technological development but its engine. Their demands for recognition, fair wages, mental health protections, and union representation should form the basis of a new politics of digital labor. A political geography of AI not only locates these workers within global circuits of value production; it insists that their struggles are central to the politics of the future.
Fourth, it means refusing the techno-optimist capitalist narrative that automation will liberate us from work if we only allow it to proceed unchecked. So long as AI development is driven by capitalist imperatives, it will not reduce exploitation and immiseration; it will only make them harder to see. True liberation would require not just new tools, but new social relations: collective ownership of digital infrastructures, democratic governance of data, and a decommodified approach to knowledge and care.
A political geography of AI must therefore be more than descriptive. It must be insurgent and expose the global architectures of domination that sustain AI’s development while amplifying the voices of those most harmed by its current trajectory. It must ask: Whose intelligence? Whose future? Whose world is being made? And it must be capable of imagining—and fighting for—a radically different one.
References
Arrighi G (2001) Global capitalism and the persistence of the North–South divide. Science & Society 65(4):469–476
Bender E M, Gebru T, McMillan-Major A and Shmitchell S (2021) On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency https://doi.org/10.1145/3442188.3445922
Benjamin R (2019) Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity
Berg J, Furrer M, Harmon E, Rani U and Silberman M S (2018) “Digital Labour Platforms and the Future of Work: Towards Decent Work in the Online World.” International Labour Organization https://www.skillsforemployment.org/sites/default/files/2024-01/edmsp1_225025.pdf (last accessed 24 June 2025)
Birhane A (2021) Algorithmic injustice: A relational ethics approach. Patterns 2(2) https://doi.org/10.1016/j.patter.2021.100205
Casilli A A (2019) En attendant les robots: Enquête sur le travail du clic. Paris: Seuil
Couldry N and Mejias U A (2019) The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford: Stanford University Press
Crawford K (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press
Graham M (2020) Regulate, replicate, and resist: The conjunctural geographies of platform urbanism. Urban Geography 41(3):453–457
Gray M L and Suri S (2019) Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt
Kirk R (2024) Neoliberal necropolitics and the global competition for urban dominance. Geoforum 155 https://doi.org/10.1016/j.geoforum.2024.104107
Kirsch S and Mitchell D (2004) The nature of things: Dead labor, nonhuman actors, and the persistence of Marxism. Antipode 36(4):687–705
Marx K (1976) Capital: A Critique of Political Economy, Volume I (trans B Fowkes). London: Penguin
Mbembe A (2003) Necropolitics (trans L Meintjes). Public Culture 15(1):11–40
Pasquale F (2015) The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press
Perrigo B (2023) Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time Magazine 18 January https://time.com/6247678/openai-chatgpt-kenya-workers/ (last accessed 26 June 2025)
Peters F, Clare N and Davies T (2025) Necropolitics and geography. Progress in Human Geography https://doi.org/10.1177/03091325251348613
Roberts S T (2019) Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press
Tubaro P, Casilli A A and Coville M (2020) The trainer, the verifier, the imitator: Three ways in which human platform workers support artificial intelligence. Big Data & Society 7(1) https://doi.org/10.1177/2053951720919776
Featured image: self-portrait, “Smiling Robot with Waving Hand”, by ChatGPT, 30 July 2025
