Hello from The AI Night,
Today in AI:
Anthropic Accuses DeepSeek, Moonshot and MiniMax of Illicit Distillation
Citrini Research Models an AI-Driven “Economic Collapse Scenario”
Anthropic Finds Users Stop Thinking When AI Outputs Look Polished

Image Source: Anthropic blog
Here's the deal: Anthropic revealed that three Chinese AI labs (DeepSeek, Moonshot and MiniMax) ran coordinated campaigns to extract Claude's capabilities for training their own models. Together, they generated over 16 million exchanges through roughly 24,000 fraudulent accounts, violating Anthropic's terms of service and regional access restrictions.
The Breakdown:
The labs targeted Claude's most differentiated capabilities: agentic reasoning, tool use and coding
MiniMax ran the largest operation with 13M+ exchanges, Moonshot followed with 3.4M and DeepSeek used 150K+
DeepSeek prompts attempted to extract "chain of thought" reasoning data and to generate censorship-safe alternatives to politically sensitive queries
Labs accessed Claude through commercial proxy services running "hydra cluster" networks of fraudulent accounts
Anthropic attributed the campaigns with high confidence using IP correlation, metadata and infrastructure indicators
When Anthropic released a new model mid-campaign, MiniMax pivoted within 24 hours to target it
The bigger picture: Distilled models likely strip safety guardrails, creating national security risks. Anthropic argues these attacks undermine export controls and calls for coordinated industry and policy response before the extraction window widens further.
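For readers unfamiliar with the mechanics, distillation here means harvesting a stronger model's outputs as training data for a weaker one. A minimal illustrative sketch in Python, where a stand-in dictionary plays the role of the teacher model (all names and data are hypothetical, not Anthropic's or the labs' actual tooling):

```python
# Toy sketch of distillation-by-API-harvesting.
# The "teacher" is a stand-in dict; in the reported campaigns the teacher
# was a commercial frontier model queried at massive scale.

def query_teacher(prompt: str) -> str:
    """Stand-in for an API call to a stronger 'teacher' model."""
    canned = {
        "Write a sort function": "def sort(xs): return sorted(xs)",
        "Explain recursion": "A function that calls itself on smaller inputs.",
    }
    return canned.get(prompt, "(teacher response)")

def harvest(prompts: list[str]) -> list[dict]:
    """Collect (prompt, completion) pairs: a distillation dataset."""
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

dataset = harvest(["Write a sort function", "Explain recursion"])
# Each pair would then become a fine-tuning example for the student model.
```

Run at the scale Anthropic describes (16M+ exchanges), this simple loop is enough to transfer a meaningful slice of a model's behavior.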
Citrini Research Models an AI-Driven “Economic Collapse Scenario”

Image Source: Citrini Research
Here's the deal: Citrini Research published a detailed speculative scenario exploring what happens if rapid AI capability gains trigger a self-reinforcing economic downturn. Written in February 2026 but framed as a macro memo from June 2028, the piece models a negative feedback loop where AI-driven layoffs reduce consumer spending, which forces more AI adoption, which accelerates further displacement.
The Breakdown:
Central mechanism: white-collar workers represent ~50% of US employment and drive ~75% of discretionary spending. Replacing them shrinks the consumer economy that funds everything else
The scenario traces disruption from SaaS pricing collapse, to agentic commerce eliminating intermediation fees, to private credit defaults on PE-backed software deals, to stress on the $13 trillion mortgage market
The authors coin "Ghost GDP": output that shows up in national accounts but never circulates through households
This is explicitly a scenario, not a prediction. The authors stress it models an underexplored left-tail risk
The bigger picture: This scenario forces investors and builders to stress-test their portfolios against one key question: what if AI's productivity gains flow to capital, not labor, fast enough to break the consumer economy before policy can respond?
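The feedback loop the memo describes can be caricatured in a few lines of Python. This is a toy sketch, not Citrini's actual model; every coefficient below is invented for illustration. Each round of AI-driven layoffs cuts consumer spending, and weaker spending pushes firms toward more automation:

```python
# Toy simulation of the layoffs -> spending -> adoption feedback loop.
# All coefficients are invented for illustration only.

employment = 1.0   # white-collar employment, normalized to 1.0
spending = 1.0     # discretionary consumer spending, normalized to 1.0

for quarter in range(8):
    # Baseline automation plus extra pressure when demand weakens
    layoffs = 0.02 + 0.05 * (1.0 - spending)
    employment *= (1.0 - layoffs)
    # Most discretionary spending is tied to these jobs (the ~75% figure)
    spending = 0.25 + 0.75 * employment

print(round(employment, 3), round(spending, 3))
```

Even with mild parameters, the loop compounds: each quarter's layoff rate exceeds the last, which is exactly the left-tail dynamic the memo is probing.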

Anthropic Finds Users Stop Thinking When AI Outputs Look Polished

Image Source: Anthropic blog
Here's the deal: Anthropic released its AI Fluency Index, a baseline measurement tracking 11 observable behaviors across 9,830 anonymized Claude.ai conversations from January 2026. The study uses the 4D AI Fluency Framework to quantify how effectively people collaborate with AI.
The Breakdown:
85.7% of conversations showed iteration and refinement, which correlated with 2.67 additional fluency behaviors (roughly double the non-iterative rate)
Iterative conversations were 5.6x more likely to involve users questioning Claude's reasoning and 4x more likely to identify missing context
In artifact-producing conversations (12.3% of the sample), users were more directive upfront but less evaluative: -5.2pp for identifying missing context, -3.7pp for fact-checking and -3.1pp for questioning reasoning
Only 30% of users explicitly set collaboration terms with Claude
The framework defines 24 behaviors in total; the study measures 11, and the other 13 occur outside the chat interface
The bigger picture: As AI outputs become more polished, users appear to lower their guard precisely when scrutiny matters most. This baseline gives Anthropic a measurable foundation to track whether fluency improves or erodes as capabilities scale.
Want to get the most out of ChatGPT?
ChatGPT is a superpower if you know how to use it correctly.
Discover how HubSpot's guide to AI can elevate your productivity and creativity and help you get more done.
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.
What else you need to know:
OpenAI's Realtime API now supports low-latency multimodal interactions across WebRTC, WebSocket and SIP connections with native speech-to-speech, audio transcription and image input capabilities.
OpenAI partnered with BCG, McKinsey, Accenture and Capgemini as "Frontier Alliance" partners to help enterprises deploy its Frontier platform for building and managing AI coworkers at scale.
Google is rolling out new templates for Veo 3.1 in the Gemini app that let users create videos from a gallery of presets customized with reference photos and descriptions.
Google's Antigravity team blocked OpenClaw users whose OAuth token usage overloaded the backend and promised a reinstatement path for those unaware they violated the terms of service.
CrowdStrike, Cloudflare, Okta and Palo Alto Networks dropped 5–9% after Anthropic launched Claude Code Security, a preview tool that scans codebases for vulnerabilities before deployment.
That’s it for today’s edition of The AI Night.
Our goal is to cut through the noise, surface what actually changed, and explain why it matters.
If this was useful, you’ll get the same signal here tomorrow.



