



Superintelligence as a Digital Plantation
The New Cotton Fields: AI, Data, and Digital Colonialism
By Stefan Youngblood w/ai

History doesn't just repeat itself — it upgrades. Plantations didn't vanish; they evolved. The cotton fields became corporate farms, the overseers became CEOs, and now, with the rise of artificial intelligence, the plantation economy has gone digital. Instead of stolen bodies, it's stolen data. Instead of forced labor, it's algorithmic exploitation. And instead of whips, it's the invisible hand of automation deciding who eats and who gets left behind.

We're told AI is the great equalizer. A tool of progress. A way to make life easier. But for whom? Because when we look beneath the glossy marketing of superintelligence, we see something disturbingly familiar — Black and Brown communities fueling AI's rise, but locked out of its wealth, its governance, and its power.

This is the digital plantation. And if we don't confront it now, the chains will tighten before we even recognize they're there.

⸻

From Chattel to Data Slavery: The Plantation Never Closed

The plantation economy was always about one thing: extraction. For centuries, Black bodies were both the labor and the product — our ancestors built the wealth that others inherited while they remained enslaved. Fast forward to today, and the same thing is happening with data.

Every time we post, scroll, react, speak, or even exist online, we generate data. That data is the raw material that trains AI. It powers ChatGPT, fuels Google's search algorithms, and refines Meta's ad targeting. We are making these systems smarter — while receiving nothing in return.

Google, OpenAI, and Meta extract our digital labor without consent, just like enslaved people picked cotton without pay. And when that data is used to create billion-dollar AI models, who profits? Not us. The profits are staggering — OpenAI reached an $80 billion valuation in 2023, while the marginalized communities whose data powers these systems remain economically disadvantaged.
Timnit Gebru, former co-lead of Google's AI ethics team, called it out plain and simple: "The current AI paradigm is extractive. It takes from the many to benefit the few, concentrating both power and profit in ways that mirror colonial systems of the past."

The plantation never shut down — it just upgraded to the cloud. And the numbers prove it: the top five tech companies hold more wealth than the bottom 50% of American households combined, creating a digital aristocracy built on our collective data.

⸻

Invisible Labor & Algorithmic Enslavement

AI doesn't train itself. It's not magic. It's labor. And just like before, that labor is hidden in the shadows.

In Kenya, content moderators were paid $1.50 an hour to sift through the most violent, traumatic content imaginable to "clean up" AI models for Western corporations. Many developed PTSD, with one worker describing it as "digital trauma bondage."

In the Philippines, data annotators work for pennies, manually labeling images and text so AI can appear seamless and "intelligent." A 2023 study found these workers earn less than 10% of what their American counterparts would make for identical work.

OpenAI, the company behind ChatGPT, paid African workers under $2 an hour to label toxic content — work that left many with lasting psychological trauma. Meanwhile, their CEO earned $44 million in 2023 alone.

These workers are the digital equivalent of field hands, ensuring AI runs smoothly while the wealth generated at the top never trickles down. The gap is stark: for every dollar of value these workers create, they receive less than three cents in compensation.

Ruha Benjamin, author of Race After Technology, puts it bluntly: "Automated systems encode inequalities in ways that make them harder to challenge.
By embedding exploitation in code rather than policy, tech companies have found the perfect shield against accountability." In other words, AI is working exactly as it was designed to — protecting power while keeping certain communities locked out.

When Sama, a major AI data labeling company, laid off 300 Kenyan workers in 2023, many were given just one hour's notice and minimal severance — revealing the disposability of the very people making AI "intelligent."

⸻

Algorithmic Oppression: The New Jim Code

They tell us AI is neutral. They tell us it's fair. But let's be real — who builds these systems? Who designs the algorithms? Who decides what data is used?

Bias isn't a glitch in AI. It's a feature. The demographics speak volumes: Black people represent just 2.7% of the AI workforce at major tech companies, despite being 13.6% of the US population. This isn't coincidence — it's structural exclusion.

Facial recognition technology misidentifies Black faces at alarmingly high rates — 34% higher for darker-skinned women than for lighter-skinned men — leading to wrongful arrests, like what happened to Robert Williams in Detroit in 2020, forced to explain to his daughters why police took him away in handcuffs for a crime he didn't commit.

Healthcare algorithms systematically underserve Black patients, allocating fewer resources to them than to equally sick white patients. A 2019 Science study found that this algorithmic bias affected over 200 million Americans, potentially costing thousands of lives annually.

Hiring algorithms filter out Black applicants before they even get a chance to interview. One study found that 44% of AI hiring systems showed racial bias against "Black-sounding" names, perpetuating centuries of workplace discrimination in a new, "objective" form.

Joy Buolamwini calls it the Coded Gaze — a digital version of redlining that determines who gets opportunity and who gets shut out.
Her research with the Algorithmic Justice League found facial recognition error rates as high as 46% for darker-skinned women, compared to less than 1% for lighter-skinned men.

AI doesn't just reflect society — it automates oppression. And if we don't intervene, the same systems that upheld slavery, Jim Crow, and mass incarceration will be written into the code of our future. As Safiya Noble writes in Algorithms of Oppression: "These are not neutral technologies; they are political technologies designed to concentrate power."

⸻

The Illusion of Liberation: Who Really Controls AI?

Tech execs love to talk about "democratizing AI." They say it'll create opportunities, make knowledge accessible, and bring in a new era of abundance. But let's be real — abundance for whom?

The top 10 AI companies have a combined market cap of over $10 trillion — while employing fewer than 0.1% of the global workforce. For context, that's more wealth than the entire GDP of Africa, Latin America, and Southeast Asia combined.

The World Economic Forum estimates AI will displace 85 million jobs by 2026, with Black and Brown workers being hit the hardest. According to McKinsey, 45% of jobs in predominantly Black communities are at high risk of automation, compared to 27% in predominantly white communities.

AI models rely on our data, but the wealth they generate never flows back into our communities. Of the $232 billion invested in AI startups between 2021 and 2023, less than 0.5% went to Black founders.

The plantation never gave land to the enslaved. And today's AI economy isn't giving ownership to the people who built it. Consider this: if the data we collectively produce were valued as labor, each active internet user would be owed approximately $7,500 annually from tech companies — money that instead becomes executive bonuses and shareholder dividends.

This is not liberation — it's digital sharecropping.
When Sam Altman speaks of "democratizing AI" while commanding a company with fewer than 30 Black employees out of 770 total staff, the contradiction is clear. And just like before, if we want change, we have to take it.

⸻

Reclaiming AI: The Blueprint for Resistance

If we don't want another century of extraction, we need to flip the system. Here's how:

Data Sovereignty: Own What We Create

Our data is our labor. That means we should have ownership, rights, and compensation for it. Organizations like Data for Black Lives and the Indigenous Data Sovereignty Network are already leading the way — now we need to amplify and expand these efforts.

The Data Dividend Project proposes that tech companies pay users directly for their data — potentially generating thousands of dollars annually for marginalized communities. California's proposed Data Dividend Act would be the first step toward recognizing our digital labor as valuable and compensable.

Decentralized AI: Community-Controlled Tech

AI shouldn't belong to a handful of companies in Silicon Valley. Projects like Masakhane (African AI for African languages) prove that AI can be built by us, for us — without corporate gatekeepers.

Distributed AI collectives like BlackInAI have already developed models that outperform corporate offerings on tasks relevant to Black communities, while maintaining community ownership. Their work on culturally responsive language models shows that democratization can be real, not rhetorical.

HBCUs & Black AI Think Tanks: Power Through Knowledge

Black institutions must lead in AI development. Howard's partnership with Google is a step, but we need more. HBCUs should be the hubs where AI is built, studied, and controlled by Black scholars, researchers, and engineers. The HBCU AI Consortium, launched in 2022, connects 25 historically Black colleges with $20 million in funding to develop AI curriculum and research.
This is the foundation for technological sovereignty — ensuring the next generation of AI is designed with our values and needs at the center.

Policy & Regulation: Demand Accountability

Governments must force AI companies to be transparent. That means laws that:

✅ Address racial bias in AI through mandatory algorithmic impact assessments focused on racial equity
✅ Guarantee fair wages for digital labor, including content moderation and data annotation
✅ Protect marginalized communities from algorithmic harm through enforceable AI liability standards
✅ Require diverse representation in AI development teams as a condition for public contracts

The Algorithmic Justice and Online Platform Transparency Act, introduced in 2023, would be the first comprehensive attempt to regulate AI in the service of racial justice. We need to make it law.

Reparative AI: Build Tech That Heals, Not Harms

AI doesn't have to be a tool of oppression. It can be a tool of justice — but only if we design it that way. Imagine AI systems that:

• Identify and correct for racial bias in lending, housing, and hiring
• Preserve and revitalize endangered indigenous languages
• Democratize access to legal representation and healthcare knowledge
• Return wealth to communities whose data and creativity fuel the digital economy

Dr. Yeshimabeit Milner said it best: "We need technologies that repair rather than replicate harm. AI built with reparative justice at its core could help heal centuries of extraction and exploitation." That's the future we should be building.

⸻

The Plantation Must Fall. AI Must Be Reclaimed.

Superintelligence isn't inherently bad. But if we let it follow historical patterns, it will reinforce the same systems of exploitation we've been fighting for centuries. Plantations never just disappeared. They were dismantled. The same must happen now.

AI is the next battlefield in the fight for liberation. We either shape it — or we become trapped inside it. The choice is ours.
And the stakes couldn't be higher. This isn't just about who profits from technology — it's about who gets to be human in a world increasingly defined by machines. It's about whether centuries of struggle for dignity and freedom will be coded away or amplified through new tools.

The plantation economy survived the end of legal slavery because its logic of extraction remained intact. If we want AI to truly serve humanity — all of humanity — we must break that logic at its root.

In the words of Octavia Butler: "There is nothing new under the sun, but there are new suns." Let's make sure AI is a new sun — not the same old chains.



