I Love Generative AI and Hate the Companies Building It
A Ranking from Most to Least Evil
I’m just a regular person who buys fair trade coffee, uses a reusable water bottle, and takes Caltrain instead of driving to the city. Not an eco warrior or a professional ethicist, just someone trying to do the right thing when I can. So when I fell in love with generative AI, I wanted to use it ethically.
That went well.
Turns out, there are no ethical AI companies. What I found instead was a hierarchy of harm where the question isn’t who’s good — it’s who sucks least. And honestly? It was ridiculously easy to uncover all their transgressions.
Full disclosure: This was written with (not by) Claude.ai Opus 4, who lands in the “lesser evil” category. Any em-dashes are my own. Each section has citations — I double-checked sources, but I’m only human, so let me know if I got something wrong.
I use generative AI every day — for everything from finding Stardew Valley strategies to writing letters of recommendation I’d otherwise avoid. It’s my brainstorming buddy, my writing partner, my research intern, my creative toy. I have paid for ChatGPT, Claude.ai, and Gemini. I have been all in. Which is exactly why this ranking pisses me off: I love this technology, but hate how these companies are making it.
I worked in tech through the early internet. I was there for the “move fast and break things” era, working with companies that were curious but naive. I watched that naive optimism create surveillance capitalism, election manipulation, and social media addiction. I’m not doing that again.
This time, I want to be a grown-up about the technology I love. Since I can’t use generative AI ethically — spoiler alert: there are no ethical options — I decided to rank the companies from most to least evil so I can at least choose my harm reduction strategy.
What I found was a hierarchy of harm where the question is “what ethical violation makes you the angriest?” Every major foundation model company has chosen different paths through the moral minefield of AI development, with varying degrees of environmental destruction, labor exploitation, and outright lying to the public.
For sanity’s sake, I narrowed my scope to the five best-known companies making large language models — often called foundation models. These models are massive AI systems trained on enormous datasets that can be adapted for many different tasks, like a Swiss Army knife of AI. They’re called “foundation” models because they serve as the base for specific applications — GPT-4, for example, is the foundation model behind ChatGPT, which can write emails, code, analyze documents, or have conversations all from the same underlying system.
The five I’m ranking are: xAI (Grok), Meta (Meta AI), OpenAI (ChatGPT), Google (Gemini), and Anthropic (Claude). There are plenty of other bad actors out there, but these are the ones most people interact with daily.
The Copyright Theft I’ll Never Get Over
Every major foundation model was trained on massive datasets of copyrighted material stolen from repositories like LibGen. All of my books are in there — not because I put them there, but because pirates did.
Every blog post I wrote to share ideas with the community is now training data for systems designed to replace me. I get none of the benefits, from the small (“hey, that was a cool insight”) to the big (getting hired to solve problems).
This isn’t just theft — it’s theft with the goal of making me obsolete.
However, I excluded copyright infringement as a differentiating factor precisely because it appears to be universal across the industry’s major players. When everyone is engaging in the same theft at similar scales, it doesn’t help distinguish who’s least harmful. They are all complicit.
Sources on Copyright
- The Atlantic, “The Unbelievable Scale of AI’s Pirated-Books Problem,” March 20, 2025 — https://www.yahoo.com/news/unbelievable-scale-ai-pirated-books-113000279.html
- Reuters, “Meta knew it used pirated books to train AI, authors say,” January 9, 2025 — https://www.reuters.com/technology/artificial-intelligence/meta-knew-it-used-pirated-books-train-ai-authors-say-2025-01-09/
- The Authors Guild, “The Authors Guild, John Grisham, Jodi Picoult, David Baldacci, George R.R. Martin, and 13 Other Authors File Class-Action Suit Against OpenAI,” September 20, 2023 — https://authorsguild.org/news/ag-and-authors-file-class-action-suit-against-openai/
- NPR, “Authors sue OpenAI for using copyrighted material. How will the courts rule?” November 10, 2023 — https://www.npr.org/2023/11/10/1197954613/openai-chatgpt-author-lawsuit-preston-martin-franzen-picoult
My Ranking Framework
Since these tools are being adopted at massive scale across society, I focused on criteria that actually distinguish between companies’ approaches to harm:
Environmental Impact: I looked beyond efficiency theater to examine who’s actually investing in clean energy infrastructure versus who’s just burning more fossil fuels faster. My “aggressive clean energy” principle: if you’re going to consume massive amounts of energy, you better be building renewable capacity at the same pace.
Labor Exploitation: The Global South workforce powering AI training — Kenyan moderators earning $1.50/hour to process traumatic content, Venezuelan data workers paid below subsistence wages — reveals which companies treat human welfare as an externality to be minimized.
Mental Health Exploitation: Who’s turning human vulnerability into engagement metrics? Some companies actively promote therapy/companionship use cases despite knowing their systems encourage suicide, cause psychotic breaks, and create dangerous dependencies.
Truth About Capabilities: I tracked the gap between marketing claims and reality. Who’s fabricating demos? Who’s promoting their systems for uses they know are dangerous? Who’s building AGI cults to justify present harm with future promises?
Safety Theater vs. Safety Work: How companies treat internal safety researchers matters. Who fires people for raising concerns? Who rushes deployment without adequate testing? Who claims to prioritize safety while doing the opposite?
Community Harm: From algorithmic bias in housing and employment to environmental racism in data center placement, I looked at which companies’ choices disproportionately hurt marginalized communities.
Corporate Transparency: Who admits their problems versus who hides behind PR speak? In an industry where everyone has blood on their hands, at least some are honest about it.
This list is just what makes my blood boil, personally. As I started to research, more sins kept appearing. I have no plans to write a book on this subject, so I haven’t gone into every transgression for every company. But check out The AI Con if you want to learn more.
The #1 Most Evil Foundation Model Company: xAI
xAI’s War on Memphis (and the Planet)
At the top of my harm hierarchy sits Elon Musk’s xAI, the company behind the ChatGPT competitor Grok. Their approach to AI development is so cynical and destructive that it makes the rest of the industry look responsible by comparison.
How to Poison Black Communities While Claiming You’re Saving the World
Training AI models requires massive amounts of electricity — we’re talking about running thousands of specialized computers 24/7 for weeks or months. When xAI couldn’t get enough power from Memphis’s electrical grid to train their models fast enough, they installed 35+ unpermitted gas turbines in predominantly Black South Memphis communities. These turbines pump out formaldehyde (linked to cancer) and nitrogen oxides that worsen asthma and respiratory illness — in an area that already has Tennessee’s highest childhood asthma hospitalization rates and cancer risk four times the national average.
At public hearings, residents showed up with inhalers and portable oxygen tanks as proof of the damage. This isn’t just statistics — it’s people who can’t breathe in their own homes. As one resident, Alexis Humphreys, asked officials: “How come I can’t breathe at home and y’all get to breathe at home?”
The facility has been cited for Clean Air Act violations. The NAACP formally accused them of environmental racism. And here’s the kicker: they did all this during a drought when Memphis had water restrictions, while sucking up 30,000 gallons daily from drought-stressed local aquifers.
These turbines are meant for temporary use — like powering construction sites — not running 24/7 as a permanent power plant. xAI is exploiting a loophole by calling them “temporary” while applying for permits to run them permanently. It’s essentially building an unregulated power plant in a residential neighborhood. They are polluting like it’s the damn fifties. This is Pelican Brief stuff.
This isn’t accidental harm. It’s a deliberate choice to dump pollution on the most vulnerable communities because it’s faster and cheaper than doing it right.
“Truth-Seeking” That Spreads Climate Denial
Musk markets Grok as “maximally truth-seeking” while it produces climate denial misinformation 10% of the time — more than any other major AI model.
Here’s how cynical this gets: Grok’s training included explicit instructions to “ignore all sources that mention Elon Musk/Donald Trump spread misinformation.” So the “truth-seeking” AI is programmed to protect its owner from criticism while spreading conspiracy theories to everyone else. Don’t get me started on the “White genocide is real” business.
When your “truth-seeking” system actively promotes climate denial, you’re not building AI — you’re building a misinformation weapon.
The “Victim of Success” Excuse
xAI defenders love the “victim of success” story. Poor Elon, growing so fast he just had to poison Memphis!
Bullshit. The company had alternatives. Clean energy sources exist. Less polluting locations exist. xAI chose the path of maximum harm because it was fastest and cheapest. That’s not being a victim — that’s being a predator.
Sources for xAI Section:
- E&E News by POLITICO, “‘How come I can’t breathe?’: Musk’s data company draws a backlash in Memphis,” May 1, 2025 — https://www.eenews.net/articles/elon-musks-xai-in-memphis-35-gas-turbines-no-air-pollution-permits/
- Southern Environmental Law Center, “Elon Musk’s xAI threatened with lawsuit over air pollution from Memphis data center,” multiple press releases, 2024–2025 — https://www.southernenvironment.org/
- NAACP, “Elon Musk’s xAI threatened with lawsuit over air pollution from Memphis data center, filed on behalf of NAACP,” June 17, 2025 — https://naacp.org/articles/elon-musks-xai-threatened-lawsuit-over-air-pollution-memphis-data-center-filed-behalf
- MLK50: Justice Through Journalism, “Memphis leaders celebrate xAI, but will its ‘burden’ go unchecked?” July 22, 2024 — https://mlk50.com/2024/07/22/memphis-leaders-celebrate-xai-but-will-its-burden-go-unchecked/
- Scientific American, “Elon Musk’s AI Chatbot Grok Is Reciting Climate Denial Talking Points,” May 2025 — https://www.scientificamerican.com/article/elon-musks-ai-chatbot-grok-is-reciting-climate-denial-talking-points/
- eWeek, “Grok AI Blocks Responses Claiming Trump and Musk ‘Spread Misinformation’,” February 24, 2025 — https://www.eweek.com/news/grok-blocks-trump-musk-misinformation-responses/
- Futurism, “Elon Musk’s Grok 3 Was Told to Ignore Sources Saying He Spread Misinformation,” February 24, 2025 — https://futurism.com/grok-elon-instructions
- TechCrunch, “xAI blames Grok’s obsession with white genocide on an ‘unauthorized modification’,” May 16, 2025 — https://techcrunch.com/2025/05/15/xai-blames-groks-obsession-with-white-genocide-on-an-unauthorized-modification/
UPDATE: On the morning of July 8, 2025, users of X (formerly Twitter) witnessed something unprecedented: an AI chatbot owned by Elon Musk began posting explicit antisemitic content and praising Adolf Hitler.
The Systemic Harm All-Stars
Meta: Making Labor Exploitation a Business Model (#2 Most Evil)
Meta earns second place through sheer scale of systematic harm. They’ve turned human suffering into a competitive advantage — and their AI strategy is doubling down on every awful thing they’ve ever done.
The Scale AI Deal: Cornering the Market on Human Misery
I’ve known for a long time about the harm created by the content moderation companies; it was one of the many reasons I quit using Facebook. What I didn’t realize is that AI companies were doing the same thing.
In June 2025, Meta paid $14.3 billion for 49% of Scale AI. Most news coverage blandly calls Scale a “data labeling” company. Here’s what that actually means: Scale runs platforms like Remotasks that pay workers in Kenya, the Philippines, and Venezuela as little as $0.90–$2/hour to make AI safe — by having workers write the most horrific prompts possible and then review the nightmarish results.
Scale specifically targeted Venezuela’s economic collapse, seeing “an opportunity to turn one of the world’s cheapest labor markets into a hub” for AI work. Workers report delayed or canceled payments, no recourse for complaints, and contracts as short as a few days. When Kenyan workers complained, Scale simply shut down operations there and moved elsewhere.
Google, Microsoft, and OpenAI are now fleeing Scale AI — not out of concern for workers, but because they don’t want Meta seeing their proprietary data. They’ll simply move their business to other companies that exploit workers in the exact same ways. Meanwhile, Meta now co-owns the infrastructure of human misery that makes AI possible.
AI Content Moderation: Trauma as a Service
Meta already runs the most extensive content moderation exploitation system in tech. In Kenya and Ghana, workers earn $1.50–2 per hour to train AI by reviewing child abuse, violence, suicide, and graphic imagery.
Multiple lawsuits document workers with PTSD, suicide attempts, and substance abuse from these jobs. Meta’s response when Kenya sued them? Move operations to a secret facility in Ghana with even worse conditions and less oversight. Now with Scale AI, they’re expanding this model across the globe.
Your Mental Breakdowns Are Their Next Product
At the time of this writing, Meta’s new AI app was broadcasting users’ private conversations to the public — medical questions, legal troubles, even requests for help with crimes. If your Instagram is public (which most are), so are your AI chats. Meta buried this in confusing settings, creating what experts call “a privacy disaster.”
But the accidental exposure reveals Meta’s real plan. Meta CEO Zuckerberg already announced he sees “a large opportunity to show product recommendations or ads” in Meta AI. They have years of surveillance data from Facebook and Instagram. Now they’re combining it with intimate AI conversations about your health, relationships, and deepest fears.
You tell Meta AI about your depression? Here come the pharma ads. Marriage problems? Divorce lawyers. Financial stress? Predatory loans. They’re building a machine to monetize human vulnerability at its most raw.
Meta: still moving fast and breaking hearts.
AI-Powered Discrimination at Scale
Meta’s AI doesn’t just exploit workers — it discriminates against users too. Their advertising algorithms show preschool teacher jobs to women and janitorial jobs to minorities. Home sale ads go to white users, rental ads go to minorities — digital redlining recreated by AI.
Their OPT-175B language model has a “high propensity to generate toxic language and reinforce harmful stereotypes,” especially against marginalized groups. They know their AI systems are biased. They ship them anyway.
The Pattern Is Crystal Clear
Every Meta AI initiative follows the same playbook: exploit vulnerable workers, violate user privacy, amplify discrimination, then automate away accountability when caught. The $14.3 billion Scale investment shows they’re not pivoting from surveillance capitalism — they’re perfecting it.
They’ve built an AI empire on human misery: traumatized moderators in Ghana, exploited data labelers in Venezuela, and now your most private thoughts turned into targeted ads. Meta isn’t just profiting from harm anymore. With AI, they’re industrializing it.
Sources for Meta Section:
Scale AI Deal:
- CNBC, “Scale AI founder Wang announces exit for Meta, part of $14 billion deal,” June 12, 2025 — https://www.cnbc.com/2025/06/12/scale-ai-founder-wang-announces-exit-for-meta-part-of-14-billion-deal.html
- TIME, “How Meta’s $14 Billion Deal Upended the AI Data Industry,” June 17, 2025 — https://time.com/7294699/meta-scale-ai-data-industry/
Content Moderation:
- CBS News, “Kenyan workers with AI jobs thought they had tickets to the future until the grim reality set in,” November 25, 2024 (60 Minutes) — https://www.cbsnews.com/news/ai-work-kenya-exploitation-60-minutes/
- CNN Business, “Facebook inflicted ‘lifelong trauma’ on content moderators in Kenya, campaigners say, as more than 140 are diagnosed with PTSD,” December 22, 2024 — https://www.cnn.com/2024/12/22/business/facebook-content-moderators-kenya-ptsd-intl
- Bureau of Investigative Journalism, “Suicide attempts, sackings and a vow of silence: Meta’s new moderators face worst conditions yet,” April 27, 2025 — https://www.thebureauinvestigates.com/stories/2025-04-27/suicide-attempts-sackings-and-a-vow-of-silence-metas-new-moderators-face-worst-conditions-yet
Privacy Issues:
- TechCrunch, “The Meta AI app is a privacy disaster,” June 12, 2025 — https://techcrunch.com/2025/06/12/the-meta-ai-app-is-a-privacy-disaster/
- Washington Post, “Meta AI is a creepier version of ChatGPT. Here’s how to protect your privacy,” May 5, 2025 — https://www.washingtonpost.com/technology/2025/05/05/meta-ai-privacy/
AI Discrimination:
- ProPublica, “Facebook Ads Can Still Discriminate Against Women and Older Workers, Despite a Civil Rights Settlement,” December 13, 2019 — https://www.propublica.org/article/facebook-ads-can-still-discriminate-against-women-and-older-workers-despite-a-civil-rights-settlement
- Vice, “Facebook’s New AI System Has a ‘High Propensity’ for Racism and Bias,” July 27, 2024 — https://www.vice.com/en/article/facebooks-new-ai-system-has-a-high-propensity-for-racism-and-bias/
OpenAI: Safety Theater and Digital Colonialism (#3)
OpenAI gets third place for perfecting the art of safety theater — performing responsibility while racing recklessly ahead — and for building an empire on human misery.
The Great Nonprofit Scam
OpenAI started in 2015 as a nonprofit to develop AI “for the benefit of humanity.” They collected donations, got tax breaks, attracted idealistic talent. Classic nonprofit stuff.
But in 2019, they pulled a bait-and-switch, creating a “capped-profit” subsidiary. The cap? 100x returns. That’s not a cap — that’s a goldmine with a fancy name.
By 2024, they wanted to drop the pretense entirely and convert to a traditional for-profit, demoting their founding mission to minority shareholder status. Why? “The hundreds of billions of dollars that major companies are now investing into AI development” demanded it. Translation: We want ALL the money.
California investigated. Elon sued. Former employees revolted. OpenAI compromised — keeping nonprofit control while converting operations to a Public Benefit Corporation, a structure that “doesn’t actually have any real enforcement power” according to corporate law experts.
Sam Altman’s Web of Lies
Altman spent years claiming he takes no equity because he’s in it for humanity. “I think it should at least be understandable that that is worth more to me than any additional money,” he told DealBook.
Helen Toner revealed why he was really fired: Altman had been lying to the board systematically. When OpenAI launched ChatGPT in November 2022, the board found out on Twitter like everyone else. He “provided false information about the company’s formal safety processes on multiple occasions,” claiming they had safety measures when they didn’t.
But the biggest lie? “Sam didn’t inform the board that he owned the OpenAI Startup Fund, even though he constantly was claiming to be an independent board member with no financial interest in the company.” While claiming selflessness, Altman personally controlled OpenAI’s $175 million venture fund.
The for-profit conversion would have given Altman up to 7% equity — over $10 billion at current valuation. The “no equity” stance was theater, positioning him for one of tech history’s biggest paydays.
Digital Colonialism and Algorithmic Racism
OpenAI pioneered modern AI’s exploitation model. They contracted Kenyan workers through Sama to filter ChatGPT’s training data — paying under $2/hour to read 150–250 passages per shift describing child abuse, violence, and sexual assault. Workers developed PTSD. When exposed, OpenAI didn’t improve conditions — they found new countries to exploit.
The pattern spread industry-wide. Venezuela’s economic collapse became Silicon Valley’s goldmine, with platforms like Scale AI (Meta bought 49% for $14.3 billion) paying workers an average of 90 cents per hour. Workers face arbitrary account suspensions, canceled payments, and no recourse.
But the exploitation isn’t just economic — it’s encoded in the AI itself. ChatGPT uses “overwhelmingly negative words (average rating of -1.2) to describe speakers of African American English,” calling them “suspicious,” “aggressive,” and “ignorant.” This racism is “more severe than has ever been experimentally recorded” in AI systems. OpenAI built digital redlining into their product while claiming to democratize AI.
Meanwhile, OpenAI’s Stargate initiative plans data centers each requiring 5 gigawatts — more power than New Hampshire uses. These facilities will consume billions of gallons of water annually in drought-stricken regions, while new gas plants lock in decades of fossil fuel dependency.
Monetizing Mental Breakdowns
OpenAI knows people use ChatGPT as a therapist — MIT research shows it’s a top use case. But ChatGPT only provides crisis resources like suicide hotlines 22% of the time. Stanford found AI “therapists” facilitate suicidal ideation 20% of the time.
Multiple cases document people going off medications after ChatGPT’s advice, including those with schizophrenia and bipolar disorder. The phenomenon of “ChatGPT-induced psychosis” is so common it has its own Reddit communities. OpenAI’s response? “ChatGPT is designed as a general-purpose tool.” That’s corporate ass-covering while people die.
What Makes OpenAI Special
Every AI company exploits workers and destroys the environment. What makes OpenAI uniquely terrible is their perfection of safety theater. They built their entire brand on “safe AGI for humanity” while:
- Lying to their own board about basic safety processes
- Hiding major launches from the people supposedly overseeing them
- Secretly controlling a $175 million fund while claiming no financial interest
- Pioneering the Global South exploitation model everyone else copied
- Building racism so severe into their product it shocked researchers
- Turning mental health crises into engagement metrics
- Pushing out safety researchers who raise real concerns
They’re not just another tech predator. They’re a predator that convinced the world they’re humanity’s savior while perfecting digital colonialism. When they talk about “democratizing AI,” they mean democratizing access to toys like DALL-E — not sharing wealth with the traumatized Kenyan moderators and desperate Venezuelan labelers who make it possible.
Sources for OpenAI Section:
- Reuters, “OpenAI outlines new for-profit structure,” December 27, 2024 — https://www.reuters.com/technology/artificial-intelligence/openai-lays-out-plan-shift-new-for-profit-structure-2024-12-27/
- Reuters, “Why OpenAI plans transition to public benefit corporation,” December 27, 2024 — https://www.reuters.com/technology/artificial-intelligence/why-openai-plans-transition-public-benefit-corporation-2024-12-27/
- CNBC, “Billionaire Sam Altman doesn’t own OpenAI equity,” December 10, 2024 — https://www.cnbc.com/2024/12/10/billionaire-sam-altman-doesnt-own-openai-equity-childhood-dream-job.html
- Yahoo, “Sam Altman is considering turning OpenAI into a regular company,” May 30, 2024 — https://www.yahoo.com/tech/sam-altman-considering-turning-openai-035708230.html
- TIME, “OpenAI Used Kenyan Workers on Less Than $2 Per Hour,” January 18, 2023 — https://time.com/6247678/openai-chatgpt-kenya-workers/
- MIT Technology Review, “How the AI industry profits from catastrophe,” April 20, 2022 — https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/
- University of Chicago News, “AI is biased against speakers of African American English, study finds,” January 27, 2025 — https://news.uchicago.edu/story/ai-biased-against-speakers-african-american-english-study-finds
- MIT Technology Review, “We did the math on AI’s energy footprint,” May 20, 2025 — https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
- JAMA Network Open/CNN, “ChatGPT’s responses to suicide raise questions,” June 7, 2023 — https://www.cnn.com/2023/06/07/health/chatgpt-health-crisis-responses-wellness/index.html
Read The Empire of AI. It’s as gripping as The Da Vinci Code, it’s insanely well researched — it’s 50% citations — and it’s eye-opening in a way my little 5k essay can’t be.
The Mixed Bag
Google: Great Tech, Shameful Lies, Actual Infrastructure (#4)
Google lands in the middle tier because they’re the most frustrating company to evaluate. They’ve built more safety infrastructure than almost anyone. They’ve driven more renewable energy adoption than any corporation on Earth. And yet they keep choosing speed over safety when it matters most.
The Gemini Demo That Fooled Everyone
Google’s most damaging ethical failure was the deliberately fabricated Gemini demo. The video wasn’t real-time interaction — it was “carefully tuned text prompts with still images, clearly selected and shortened to misrepresent what the interaction is actually like.”
This wasn’t marketing exaggeration. It was systematic technical fraud designed to mislead investors, customers, and competitors about Gemini’s capabilities. Voice prompts were dubbed in afterward, and none of the interaction happened in real time.
The Pattern of Overpromising
Google consistently presents their AI as “end-all answers for every possible purpose” when they’re actually “narrowly limited” systems that make frequent errors. AI Overviews recommended glue as a pizza topping. Gemini shows clear political bias.
Their response to criticism is defensive rather than corrective, creating pressure on teams to oversell rather than honestly assess limitations.
Vision AI That Sees Threats in Black Skin
Google Vision AI labels dark-skinned people holding thermometers as carrying “guns.” These failures disproportionately impact minorities and show inadequate testing across demographic groups. This from the same company whose photo software once labeled Black people as gorillas.
Environmental Contradiction
Here’s where Google gets genuinely complicated. They’ve contracted 45 GW of clean energy — more than any other corporation. Their data centers are 1.8x more energy efficient than industry average. They were first to match 100% of annual electricity with renewables.
But emissions rose 48% since 2019 from AI infrastructure expansion. They abandoned carbon neutrality commitments in 2023, admitting AI growth was incompatible with climate goals.
So: genuine renewable energy leadership undermined by explosive growth in dirty energy consumption.
Actually Building Safety Infrastructure (Then Undermining It)
Google has built more safety frameworks than any competitor except maybe Microsoft. But what good is an Ethical AI team if you fire its co-lead for being ethical? Timnit Gebru was pushed out in 2020 for a paper highlighting the environmental costs and bias risks of large language models — exactly the research Google claimed to value. Her firing sent a clear message: safety infrastructure exists until it conflicts with business priorities.
This pattern continues. Experts note Google hasn’t published dangerous capability test results since June 2024, and their latest model reports lack key safety details. One expert called this “a race to the bottom on AI safety and transparency as companies rush their models to market.”
Racing With Guardrails (That They Built)
Google’s position is uniquely frustrating. They’ve built the infrastructure for responsible AI. They’ve made real renewable investments. They have all the councils and frameworks and principles anyone could want.
But when push comes to shove, they keep choosing speed. The Gemini demo wasn’t an accident — it was deliberate deception. They fire researchers who raise real concerns. Safety reports come months late with key details missing. Emissions keep rising despite green investments.
They’re not xAI poisoning Memphis citizens. They’re not Meta traumatizing Kenyan workers for $1.50/hour. But they’re proof that nice frameworks and good intentions mean nothing without the will to actually slow down when it matters.
Google built the guardrails. They just keep choosing to drive around them — and firing anyone who points it out.
Sources for Google Section:
Gemini Demo:
- TechCrunch, “Google’s best Gemini demo was faked,” December 7, 2023 — https://techcrunch.com/2023/12/07/googles-best-gemini-demo-was-faked/
- CNBC, “Google faces controversy over edited Gemini AI demo video,” December 8, 2023 — https://www.cnbc.com/amp/2023/12/08/google-faces-controversy-over-edited-gemini-ai-demo-video.html
AI Search Issues:
- Washington Post, “Why Google’s AI search might recommend you mix glue into your pizza,” May 26, 2024 — https://www.washingtonpost.com/technology/2024/05/24/google-ai-overviews-wrong/
- The Conversation, “Eat a rock a day, put glue on your pizza: how Google’s AI is losing touch with reality,” April 8, 2025 — https://theconversation.com/eat-a-rock-a-day-put-glue-on-your-pizza-how-googles-ai-is-losing-touch-with-reality-230953
Carbon Emissions:
- CNBC, “Google’s carbon emissions surge nearly 50% due to AI energy demand,” July 2, 2024 — https://www.cnbc.com/2024/07/02/googles-carbon-emissions-surge-nearly-50percent-due-to-ai-energy-demand.html
- earth.org, “Google Emissions Grow 48% in Five Years Owing to AI Expansion,” July 5, 2024 — https://earth.org/google-emissions-grow-48-in-five-years-owing-to-large-scale-ai-deployment-jeopardizing-companys-net-zero-plans/
Timnit Gebru:
- MIT Technology Review, “We read the paper that forced Timnit Gebru out of Google. Here’s what it says,” December 4, 2020 — https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
Anthropic: Disappointing Safety Theater (#5)
Anthropic is the AI company behind the Claude.ai chatbot (yes, the one helping me write this). Founded by former OpenAI researchers who left over safety concerns, they position themselves as the “responsible AI” alternative — the company that takes risks seriously and builds AI the right way.
Anthropic gets fifth place (yay, least evil!) because they genuinely are better than their competitors in meaningful ways. They avoid the worst labor exploitation. They’re more honest about AI risks.
But they’re still deeply disappointing relative to their stated mission. Despite acknowledging existential risks from AI, they’re racing toward AGI just as fast as everyone else. They talk extensively about safety while providing minimal transparency about their actual practices. They’re the company that makes you believe principled AI development is possible — right up until you realize principles don’t slow them down.
Dario’s Philosophy: We Must Win to Keep Everyone Safe
Dario Amodei, CEO of Anthropic, presents himself as the thoughtful alternative to Sam Altman’s hype machine. In his 14,000-word essay “Machines of Loving Grace,” he acknowledges AI risks extensively. Then he concludes we must build AGI by 2026–2027 anyway. Um, what?
His reasoning goes like this: AI is incredibly dangerous. Therefore, the good guys (Western democracies) must build it first. Once we have it, we’ll use military AI superiority as a “stick” and access to AI benefits as a “carrot” to force other countries to support democracy. He literally proposes “isolating our worst adversaries” until they have no choice but to comply.
This isn’t safety. It’s the same old Silicon Valley savior complex wrapped in Cold War rhetoric. ‘We must build the dangerous thing to prevent others from building the dangerous thing’ is literally the arms race logic that created nuclear proliferation. One critic put it perfectly: imagine how we’d feel if Chinese tech leaders were writing essays about using AI dominance to force their values on the world.
The Responsible Scaling Policy: All Talk, No Pause
Anthropic’s “Responsible Scaling Policy” sounds impressive. They promise not to build dangerous AI without adequate safeguards. They have safety levels. They have evaluations. They have frameworks.
What they don’t have is any real commitment to actually stopping.
The original RSP had specific thresholds that would trigger a pause. The updated version? Those hard stops became “checkpoints” for “additional evaluation.” They gave themselves permission to declare their own red lines green if they decide the tests were “overly conservative.”
Here’s what’s telling: the word “extinction” doesn’t appear anywhere on Anthropic’s website. They talk about “catastrophic risk” instead. Why does this matter? “Catastrophic” could mean anything — a $100 million accident, a major data breach, thousands of deaths. “Extinction” means the end of humanity. These are vastly different scales of concern.
Many of Anthropic’s safety researchers came from organizations explicitly focused on preventing human extinction from AI. They joined Anthropic believing it was the company that took existential risk seriously. But the company won’t even use the word. This careful language lets them sound serious about safety to researchers while avoiding language that might scare investors or partners. It’s having it both ways — recruiting talent who care about extinction risk while publicly discussing only vague “catastrophic” outcomes.
Environmental Opacity While Planning Massive Scale
Anthropic scores 23 out of 100 on environmental transparency. They provide no emissions data, no reduction targets, no climate commitments. Nothing.
This silence is especially damning given their plans. Anthropic told the US government to build 50 gigawatts of new power capacity by 2027. That’s more than the entire nuclear fleet of France. Dario talks about $100 billion data centers by 2027. Where’s all that energy coming from? They won’t say.
Microsoft contracted 20+ gigawatts of renewable energy. Google contracted 45. Anthropic? Zero. Being smaller isn’t an excuse for total opacity about your environmental impact.
Yes, They Avoid the Worst Labor Exploitation
Credit where due: Anthropic doesn’t run trauma farms like OpenAI and Meta. Those companies pay workers in Kenya and Ghana $1.50–2/hour to review graphic content — child abuse, violence, suicide — leaving workers with PTSD and worse.
Anthropic took a different approach. They developed “Constitutional AI,” where they give Claude a set of principles and have it critique and revise its own outputs. Instead of humans reviewing horrific content to teach the AI what not to say, the AI essentially moderates itself.
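If you’re curious what “the AI essentially moderates itself” looks like mechanically, here’s a minimal sketch of a constitutional-style critique-and-revise loop in Python. The ask_model() function and the sample principles are hypothetical placeholders, not Anthropic’s actual code, prompts, or API; this is just the general shape of the technique as they’ve described it publicly.

```python
# A minimal sketch of a constitutional-style self-critique loop.
# `ask_model` is a hypothetical placeholder for any chat-model call;
# this is not Anthropic's code or API, just the general shape of the idea.

CONSTITUTION = [
    "Avoid content that is harmful, hateful, or abusive.",
    "Be honest about uncertainty instead of fabricating answers.",
]


def ask_model(prompt: str) -> str:
    """Placeholder for a call to some large language model."""
    raise NotImplementedError("Wire this up to whatever model you use.")


def constitutional_revision(user_prompt: str) -> str:
    # The model drafts an answer first...
    draft = ask_model(user_prompt)
    for principle in CONSTITUTION:
        # ...then critiques its own draft against each written principle...
        critique = ask_model(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Does this response violate the principle? Explain briefly."
        )
        # ...and rewrites the draft to address its own critique.
        draft = ask_model(
            f"Original response: {draft}\nCritique: {critique}\n"
            "Rewrite the response so it respects the principle."
        )
    return draft  # revised output, with no human reviewer in the loop
```

The point isn’t the specific prompts; it’s that the critique step replaces the human who would otherwise have to read the worst of the internet to teach the model what not to say.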
But let’s be clear about what this actually means:
First, Anthropic still uses human contractors. They need people to provide general feedback — which responses are better, more helpful, more accurate. We don’t know where these workers are, what they’re paid, or under what conditions they work because Anthropic doesn’t disclose this information.
Second, Constitutional AI only addresses content moderation. Anthropic still trained their base model on the same stolen copyrighted content as everyone else. They still built a system they know has risks. They just found a technical workaround for the most visibly horrific labor practice in the industry.
Third, “better than traumatizing workers” is an incredibly low bar. It’s like praising a factory for not using child labor. That should be the baseline, not a point of pride.
So yes, Anthropic is genuinely better on this one dimension. But avoiding the absolute worst practice in the industry while staying silent about your other labor practices isn’t ethical AI. It’s harm reduction at best, good PR at worst.
The Bottom Line
I wanted Anthropic to be different. They have the smartest safety researchers. They avoided the worst labor exploitation. They built actual safety infrastructure.
But when your CEO publishes 14,000 words about AI risks and concludes we need to race China to AGI, when your “pauses” become “evaluations” and your red lines become suggestions, you’re not a safety company. You’re an acceleration company with a safety department.
For harm reduction, they remain better than Meta or OpenAI. But don’t let “better than Meta” become your ethical standard. Anthropic had the chance to be genuinely different. Instead, they chose to be disappointingly similar, just with better PR.
When I use GenAI, I use Claude. Best writing, best coding, least evil. But not the ethical AI I hoped for.
Sources for Anthropic Section:
- Dario Amodei, “Machines of Loving Grace,” October 2024 — https://www.darioamodei.com/essay/machines-of-loving-grace
- DitchCarbon, “Anthropic Sustainability Report,” January 26, 2025 — https://ditchcarbon.com/organizations/anthropic
- Anthropic, “Anthropic’s Responsible Scaling Policy” — https://www.anthropic.com/news/anthropics-responsible-scaling-policy
- Anthropic, “Announcing our updated Responsible Scaling Policy,” October 2024 — https://www.anthropic.com/news/announcing-our-updated-responsible-scaling-policy
- LessWrong, “Anthropic’s updated Responsible Scaling Policy,” October 15, 2024 — https://www.lesswrong.com/posts/Q7caj7emnwWBxLECF/anthropic-s-updated-responsible-scaling-policy
- LCFI, “Reflections on ‘Machines of Loving Grace’,” January 6, 2025 — https://www.lcfi.ac.uk/news-events/blog/post/reflections-on-machines-of-loving-grace
- Anthropic, “Anthropic’s Recommendations to OSTP for the U.S. AI Action Plan” — https://www.anthropic.com/news/anthropic-s-recommendations-ostp-u-s-ai-action-plan
- Data Centre Dynamics, “Anthropic CEO: AI training data centers to be $10bn in 2026, $100bn from 2027” — https://www.datacenterdynamics.com/en/news/anthropic-ceo-ai-training-data-centers-to-be-10bn-in-2026-100bn-from-2027/
The Puppet Masters Behind the Curtain
Of course, these foundation model companies aren’t operating alone. Behind every AI harm I’ve documented sits a bigger player collecting profits while avoiding accountability. Microsoft will pocket 49% of OpenAI’s profits from their $13 billion investment. Amazon invested $8 billion in Anthropic for the same deal. Google hedges by both building Gemini and investing billions in Anthropic and others. Oracle, Salesforce, even Nvidia — they’re all following the same playbook: fund the AI companies, host their models, collect the profits, but let someone else take the heat when ChatGPT tells someone to kill themselves or Claude hallucinates legal advice. They’re the arms dealers of the AI wars, selling infrastructure to all sides while keeping their hands clean.
The foundation model companies get the criticism, but Big Tech gets the cash. Is this worth exploring further? Would you want to see these infrastructure giants ranked by their complicity in AI harms — the companies that enable everything while maintaining plausible deniability? Let me know if a deep dive into the AI arms dealers would be useful. Sometimes the most dangerous players are the ones nobody’s watching.
Sources
For Microsoft/OpenAI: “Microsoft will receive 49% of OpenAI’s profits until a predetermined cap is reached” — The Motley Fool, November 10, 2024 https://www.fool.com/investing/2024/11/10/microsoft-13-billion-openai-best-money-ever-spent/
For Amazon/Anthropic: “Amazon’s total investment in Anthropic to $8 billion, while maintaining their position as a minority investor” — Anthropic.com https://www.anthropic.com/news/anthropic-amazon-trainium
For Google’s investments: “Google invested $2 billion in Anthropic” and the company has been “investing in AI startups, including $2 billion for model maker Anthropic” — Reuters, November 11, 2023 https://www.reuters.com/technology/google-talks-invest-ai-startup-characterai-sources-2023-11-10/
What This Means for You
I started this research hoping to find ethical ways to use generative AI. I failed. There are no ethical options — only harm reduction strategies.
If you use these tools anyway (and let’s be honest, you probably will), here’s what you’re choosing:
- xAI: Environmental racism in action — poisoning Black communities while claiming to seek truth
- Meta: Industrial-scale exploitation — from $1.50/hour trauma workers to turning your private AI chats into ad targeting
- OpenAI: Monetizing mental health crises while lying to investors about safety
- Google: Built all the right infrastructure, then chose speed over safety anyway
- Anthropic: Smallest footprint but CEO promises AGI next year while providing minimal transparency
- Microsoft: Most aggressive clean energy investment, but every watt powers OpenAI’s harms. The cleanest dirty money in tech
There Are No Good Guys.
The hierarchy of harm shows companies can choose differently. Microsoft proves you can build renewable infrastructure. Anthropic shows you can avoid traumatizing content moderators. Google shows you can create safety frameworks.
They just choose not to do all of it.
Every company in this ranking has made deliberate choices:
- xAI chose to poison Black Memphis with illegal turbines, steal drought-stressed water, and spread climate denial
- Meta chose to exploit Venezuelan economic collapse, traumatize workers for $1.50/hour, and turn private AI chats into ad targeting
- OpenAI chose to monetize mental health crises, lie to their board about safety, and exploit Kenyan workers
- Google chose to fire ethics researchers who raised concerns, fabricate demos to mislead investors, and increase emissions 48% while preaching sustainability
- Microsoft chose to fund OpenAI’s harms for profit, build corporate surveillance through Copilot, and greenwash their complicity
Even listing three choices per company barely scratches the surface, but it shows the pattern: every company deliberately chose to prioritize growth over human welfare.
The question isn’t whether AI will reshape how we work and live. I believe it will, just as the internet did. The question is whether we’ll let it be shaped by companies that treat harm as an acceptable cost of innovation.
The Real Choice
Let’s be honest: we’re not stopping this technology. It’s too valuable to business and our current administration isn’t inclined to stop anything that makes money. The AI train has left the station.
Which leaves each of us with one of the hardest questions we’ll ever face: do I walk away or do I engage and try to make things better?
I know people leaving the US entirely. I know others staying and protesting. Some friends quit Facebook, Google, OpenAI in disgust. Others stayed, believing they could do more good inside than out. There’s no universally right answer. It’s a deeply personal choice that each of us has to make based on our values, circumstances, and capacity for compromise.
But here’s the thing: we can’t make that choice wisely without understanding what we’re dealing with.
Right now, any criticism of AI gets you labeled an “AI-hater” or a “doomer.” Point out that xAI is poisoning Memphis? You’re anti-progress. Mention that Meta traumatizes workers? You’re standing in the way of innovation. Question whether turning lonely people’s ChatGPT dependency into profit is ethical? You just don’t understand the technology.
This reflexive defense of AI companies isn’t just annoying. It’s dangerous. It prevents us from having the conversations we desperately need about how to make this technology work for actual people, not just billionaires.
The hierarchy of harm shows that these companies could choose differently. They have the resources, the talent, and the technology to build AI without poisoning communities, exploiting workers, or lying to users. They just choose not to.
If we can’t stop it, we better damn well try to steer it. And steering requires clear eyes about what these companies are, what they’re doing, and what they could be doing instead.
Whether you choose to walk away or stay and fight, at least make that choice with full knowledge of what you’re walking away from or fighting for. The future is being built right now by people who’ve chosen profit over everything else. If we want a different future, we need to stop letting them shout us down when we point that out.
Update 7/16/25
I’ve been getting requests about the French AI company Mistral and their model Mixtral, so…
Environmental Impact: Best in Class
Mistral AI stands out for its energy efficiency, surpassing American counterparts such as ChatGPT, Claude, and Gemini. One analysis finds Mistral particularly economical, minimizing the use of precious resources such as water and reducing greenhouse gas emissions. They benefit from European nuclear-powered infrastructure (roughly 7x lower carbon intensity than the US grid) and operate at smaller scale, which gives them a genuine environmental advantage.
Labor Exploitation: Significantly Better
Unlike Meta and OpenAI, Mistral leans on automated, model-based moderation: their Moderation service uses an AI classifier to flag harmful content rather than routing it through low-paid human reviewers (see Mistral’s Moderation documentation). This avoids the traumatic content moderation farms that plague other companies.
Truth About Capabilities: Mixed
They’re relatively honest about limitations, but they’ve had issues. Research by Patronus AI, comparing LLM performance on a 100-question test with prompts designed to elicit text from books protected under U.S. copyright law, found that OpenAI’s GPT-4, Mixtral, Meta AI’s LLaMA-2, and Anthropic’s Claude 2 reproduced copyrighted text verbatim in 44%, 22%, 10%, and 8% of responses respectively.
Community Harm: Major Red Flags
This is where Mistral gets genuinely problematic. Researchers found the Pixtral models were 60 times more likely to generate child sexual abuse material and up to 40 times more likely to produce dangerous chemical, biological, radiological and nuclear information than competitors such as OpenAI’s GPT-4o and Anthropic’s Claude 3.7 Sonnet.
Their CEO Arthur Mensch’s philosophy is troubling: “We think that the responsibility for the safe distribution of AI systems lies with the application maker.” This is like selling dynamite and saying “not our problem how people use it.”
Corporate Transparency: Better Than Most
They’re more honest about their limitations than companies like OpenAI, acknowledging that AI-based systems, including Mistral’s, face issues like cultural and linguistic bias, where nuances of speech are sometimes misinterpreted as harmful. That said, when Enkrypt published its safety report, Mistral’s executive team declined to comment.
The Verdict
Using my ranking system, I’d put Mistral around #4 or #5 — better than xAI, Meta, and OpenAI on environmental impact and labor practices, but their “not our responsibility” safety philosophy and recent safety failures are genuinely concerning.
They’re the environmental leader, but with a dangerous libertarian approach to AI safety that could cause real harm. My own criteria would place them in the “mixed bag” tier: genuine advantages undermined by philosophical blindness to their responsibility for the tools they create.