Contradictions: 6 Unmissable Signals + 3 Clicks
A curated roundup of 6 unmissable signals worth watching, and 3 clicks that caught my attention recently.
The viral post warning about AI displacement was written by AI. YouTube is fighting AI slop while rolling out features that let creators use their AI likeness. What started as two AI companies drawing a hard line against mass surveillance ended with one signing a contract with the Department of War. The gap between what people say and what they do keeps widening.
1. OpenClaw, Abi Awomosu, and What People Actually Want From AI
Across TikTok, Hacker News, Reddit, and YouTube, people kept saying “I want Jarvis.” The data showed they meant “I want a wife.”
Peter Steinberger hacked together a weekend project. Two months later it had 100,000 GitHub stars, a forced rename after a trademark dispute, and 2 million visitors in a single week. It also had a security disaster: attackers compromised roughly 12% of the plugin marketplace with malware, and a separate database misconfiguration exposed 1.5 million API tokens. Thousands of people handed over the keys to their digital lives to a weekend project without reading the warnings. Then the founder joined OpenAI.
Abi Awomosu ran the numbers on what people actually built versus what they celebrated: 34% developer tools, 19% productivity, 17% home automation. But 83% of what got screenshotted and written about was feminized labor, such as email management, calendar coordination, morning briefings, and smart home delegation. One function representing 1% of skills captured 92% of press coverage.
People want the future. They just don’t know what they’re handing over or who they’re replacing to get it.
2. Zoe Scaman: “The Six Loops” and “The Brand That Thinks”
Zoe Scaman mapped the six repeating scripts keeping AI discourse stuck: fear, hype, efficiency, exceptionalism, tactical minimizing, and meta-conversation about the conversation itself. None of them wrong. None of them interesting. None of them moving anything forward.
She argues we’re stuck in loops because they’re comfortable. Fear is easy. Hype is easy. Talking about talking about AI is easy. Building systems that respect people is hard. The discourse goes in circles while the actual decisions about who benefits and who gets displaced happen in product roadmaps and infrastructure choices.
Then she wrote the follow-up. Her argument: we’re moving from the brand as monument to the brand as agent. Not a static manual of fonts and rules but a living system with encoded judgment — the reasoning behind decisions, the tensions the brand holds, the calls nobody wrote down. Bake that in and the brand stops being a fixed script and becomes something you can co-create with. Every interaction becomes a step in compounding intelligence rather than a repeat of one.
Read her pieces and get out of the loops.
3. Anthropic, the Pentagon, and the Most Convenient 24 Hours in AI
Anthropic had a $200M contract with the Pentagon. Claude was the first frontier model deployed on classified networks. The terms included two hard limits: no use for autonomous weapons, no mass domestic surveillance. When the Pentagon pushed to remove those restrictions, Anthropic said no. They were designated a national security supply chain risk, a label historically reserved for foreign adversaries like Chinese companies.
Then look at what happened on the same day. Amazon’s $50B OpenAI investment announced at 8am. OpenAI all-hands declaring the same red lines as Anthropic at 1:30pm. Trump terminated Anthropic’s contracts at 3:31pm. OpenAI’s Pentagon contract finalized by 8pm. Polly Allen documented the full timeline, worth a read.
No company trying to thrive in capitalism is perfectly ethical. But I’ll take ethical choices over none.
4. Matt Shumer: “Something Big is Coming”
Matt Shumer is 26. He built an AI startup, now invests in the space. A few weeks ago, he posted a 5,000-word essay on X arguing that AI’s disruption could be bigger than COVID, that we’re in the “this seems overblown” phase of February 2020, right before everything changed. He says 50% of entry-level white-collar jobs could disappear in 1-5 years. He points to GPT-5.3-Codex, which OpenAI says “played a key role in its own creation.” Recursive self-improvement. AI building itself.
The post hit 60 million views. Alexis Ohanian: “Great writeup. Strongly agree.” Gary Marcus: “Weaponized hype, filled with vivid narrative and marketing speech.” Ed Zitron wrote an annotated version calling out all the BS in the essay.
One side sees an alarm. The other sees marketing. Both might be right. He later confirmed he used Claude to write it, and said that was kind of the point. Whether that proves his argument or undermines it probably depends on which side you’re already on.
5. YouTube CEO: Fighting AI Slop While Building AI Features
YouTube’s CEO just published the company’s 2026 priorities. Top of the list: combating “low-quality, repetitive AI content” (slop). Also top of the list: features that let creators make Shorts using their AI likeness.
While the ability to generate videos or games from a single text prompt lowers the barrier to entry and could accelerate the flood of low-effort content, YouTube is betting on a new “Content ID for Likeness.” The tech is designed to help creators identify and remove unauthorized uses of their image, protecting their digital identities while letting them “license” their AI selves for brand deals and passive content creation.
It’s a high-stakes pivot: YouTube wants to be the world’s most powerful AI creative suite and its strictest AI police force at the same time.
6. The Guardian: India's Female AI Content Moderators
Women in India are watching hours of abusive content every day to train AI systems. Paid roughly $2-3 an hour. The work is the invisible infrastructure behind every “intelligent” system we use.
The Guardian talks to workers who describe feeling “blank” at the end of shifts. They watch violence, abuse, explicit material, the worst of what gets uploaded, and label it so the AI learns what to filter out. The work is outsourced specifically to developing nations where labor protections are weak, such as Kenya, the Philippines, Venezuela, India, and Colombia.
Every time someone talks about AI getting smarter, someone had to watch the worst of the internet to make it possible.
+ 3 Clicks
The ultimate reason to throw a party, a reminder about culture and how to stay safe online.
The White Paper’s Seven reasons why hosting a silly little potluck is essential to defeating fascism: We should all be gathering, partying, cooking, convening.
Matt Klein (ZINE)’s “Culture Is a Dinner Party”: Culture happens around tables, not on feeds. Klein’s always good at reminding us what we keep forgetting.
Cosmopolitan’s “Your Overdue Digital Safety Intervention”: I wrote a piece for young women on how to stay safe online. Cosmopolitan just published it in print in the Middle East. Share it with a young woman in your life.