Hello from The AI Night,
Today in AI:
Anthropic Launches Claude Sonnet 4.6
Figma Launches Code to Canvas Workflow With Claude Code
Manus Brings Its Full AI Agent to Telegram Chat
Anthropic
Anthropic Launches Claude Sonnet 4.6

Image Source: Anthropic blog
Here's the deal: Anthropic released Claude Sonnet 4.6, a full upgrade over Sonnet 4.5 across coding, computer use, long-context reasoning, and agent planning. It is now the default model on free and Pro plans, priced at $3/$15 per million tokens, and includes a 1M token context window in beta.
The Breakdown:
In Claude Code testing, users preferred Sonnet 4.6 over Sonnet 4.5 roughly 70% of the time and over Opus 4.5 (the November 2025 frontier model) 59% of the time
OSWorld computer-use scores show steady gains across sixteen months of Sonnet models, with early users reporting human-level capability on tasks like complex spreadsheets and multi-step web forms
Prompt injection resistance is a major improvement over Sonnet 4.5, performing on par with Opus 4.6
Supports adaptive thinking, extended thinking, and context compaction in beta
Claude in Excel now supports MCP connectors for tools like S&P Global, PitchBook and FactSet
The bigger picture: Tasks that previously required Opus-class models, including economically valuable office work, are now accessible at Sonnet-tier cost. This compresses the gap between frontier capability and practical deployment budgets for developers and enterprises.

Figma
Figma Launches Code to Canvas Workflow With Claude Code

Image Source: Figma Article
Here's the deal: Figma launched "Claude Code to Figma," a feature that captures live UI from a browser (production, staging, or localhost) and converts it into fully editable Figma frames. The tool bridges code-first prototyping with collaborative design exploration.
The Breakdown:
Developers can capture screens directly from Claude Code sessions and paste them into any Figma file as editable frames
Supports single screens and multi-screen flows, preserving sequence and context across captures
Captured UI can be duplicated, annotated, rearranged and iterated on without touching the codebase
Works alongside Figma Make (prompt-to-prototype) and Copy Design, offering multiple entry points into the same design workflow
Roundtrip capability exists through the Figma MCP server, letting developers bring Figma frames back into coding environments via a prompt and a link
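For teams wiring up that roundtrip, connecting Claude Code to a Figma MCP server is a short config step. A minimal sketch, with the caveat that the local endpoint URL and transport flag are assumptions based on Figma's Dev Mode MCP server; check Figma's and Anthropic's current docs for the exact values:

```shell
# Register Figma's local MCP server with Claude Code.
# Endpoint is an assumption: Figma's Dev Mode MCP server has
# historically listened on localhost:3845 over SSE.
claude mcp add --transport sse figma http://127.0.0.1:3845/sse

# Confirm the server is registered and reachable
claude mcp list
```

Once registered, pasting a Figma frame link into a Claude Code prompt lets the agent pull that frame back into the coding session.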
The bigger picture: This closes a real friction point between building and designing. Teams no longer need screenshots or local builds to get feedback on code-generated UI. For AI-assisted development workflows where interfaces are created rapidly through prompting, the bottleneck shifts from creation to evaluation, and this tool directly addresses that.

Manus
Manus Brings Its Full AI Agent to Telegram Chat

Image Source: Manus blog
Here's the deal: Manus launched "Manus Agents", letting users run the full Manus agent directly inside Telegram. The feature is available now across all subscription tiers, with more messaging platforms planned.
The Breakdown:
Users connect via QR code from the Manus workspace Agents tab. No API keys, CLI setup, or config files needed
The Telegram agent runs the same multi-step task execution as the web app, including research, code execution, document generation, and PDF delivery
Supports voice messages, images, and file inputs. The agent transcribes voice, interprets intent, and returns outputs in chat
Users can choose between Manus 1.6 Max (deeper reasoning, creative tasks) or Manus 1.6 Lite (faster, lightweight tasks)
Communication style is configurable: concise, structured or conversational
The agent only accesses messages sent directly to it. It cannot read other Telegram chats, groups or contacts
The bigger picture: This moves Manus from a browser-based tool to a persistent, message-native agent. For power users, it reduces friction between thinking of a task and executing it, especially for recurring workflows like meeting prep or content generation.
What else you need to know:
xAI quietly launched the Grok 4.2 public beta with a multi-agent "Heavy" mode, where specialized agents handle research, verification, and logic tasks across X, the web, iOS, and Android.
PolyAI raised $200M from Nvidia, Khosla Ventures and other top VCs to scale its voice AI agents that now handle over 500 million calls across 3,000 enterprise deployments including Marriott and PG&E.
Cohere Labs released Tiny Aya, a 3.35B-parameter open weight multilingual model family supporting 70+ languages with regional variants, designed to run locally on consumer hardware and phones.
ElevenLabs launched ElevenAgents for Support, a product designed to help customer support teams convert manual workflows into production ready AI agent systems.
Anthropic signed a three-year MOU with Rwanda's government to deploy Claude across health, education, and public sector systems, marking its first formal multi-sector government partnership in Africa.
That’s it for today’s edition of The AI Night.
Our goal is to cut through the noise, surface what actually changed, and explain why it matters.
If this was useful, you’ll get the same signal here tomorrow.

