The April 2026 Radar is explicitly a Radar about AI-assisted engineering, and its four themes map the operational tensions the TAB (Thoughtworks' Technology Advisory Board) is seeing in the field. The first theme is that evaluating technology is getting harder in an agentic world. Semantic diffusion (overlapping terms for similar things - spec-driven development, harness engineering, MCP-this, agent-that), single-contributor AI-built tooling that's only weeks old, and the impossibility of waiting for maturity without going stale are all compressing the Radar's traditional rhythm. Underneath this, they flag codebase cognitive debt - the accumulating gap between the code a team ships and its mental model of what that code does.

The second theme, “retaining principles, relinquishing patterns,” captures the pendulum. The Radar returns to pair programming, zero trust, mutation testing, DORA metrics, clean code, testability, accessibility, and even the command line as a first-class interface - not out of nostalgia but as a counterweight to the speed at which agents generate complexity. At the same time, they flag that team topologies themselves will have to evolve alongside agent topologies, and measurement of “developer” productivity needs a rewrite.

The third theme, securing permission-hungry agents, is the sharpest. The agents worth building need access to everything - OpenClaw, Claude Cowork, Gas Town agent swarms - and Simon Willison’s “lethal trifecta” (private data + untrusted content + external action) now describes most useful agents by default, not by misconfiguration. Prompt injection remains unsolved, and model behavior is inconsistent enough that a single successful run gives no guarantee at scale. The Radar’s bet: safe agent systems are pipelines of constrained agents, not monolithic ones, with Agent Skills emerging as a safer alternative to MCP, durable agents as a defense against instruction bloat, and strong monitoring and control as table stakes.
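The pipeline-of-constrained-agents bet can be made concrete with a small sketch. This is an illustrative Python example, not anything from the Radar itself: the stage names, capability labels, and `validate_pipeline` helper are all hypothetical. The idea is that a harness can statically reject any single agent that would hold the complete lethal trifecta, forcing the design into stages that each hold at most two of the three.

```python
# Hedged sketch: a validator that flags any single agent stage holding
# Simon Willison's full "lethal trifecta" - private data access, exposure
# to untrusted content, and the ability to take external action.
# All names here are illustrative assumptions, not Radar terminology.
from dataclasses import dataclass, field

TRIFECTA = {"private_data", "untrusted_content", "external_action"}

@dataclass
class AgentStage:
    name: str
    capabilities: set = field(default_factory=set)

def validate_pipeline(stages):
    """Return names of stages that hold the complete trifecta."""
    return [s.name for s in stages if TRIFECTA <= s.capabilities]

# A monolithic agent trips the check; the split pipeline does not.
monolith = [AgentStage("do-everything", set(TRIFECTA))]
pipeline = [
    AgentStage("reader", {"untrusted_content"}),    # parses inbound content
    AgentStage("planner", {"private_data"}),        # consults internal records
    AgentStage("actor", {"external_action"}),       # acts, ideally human-gated
]
print(validate_pipeline(monolith))  # the unsafe stage is named
print(validate_pipeline(pipeline))  # empty: no stage holds all three
```

The check is deliberately coarse: it doesn't make any stage safe on its own, it only guarantees that a successful prompt injection in one stage can't both read secrets and act on the outside world from the same place.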

The fourth theme, “putting coding agents on a leash,” splits the harness engineering landscape into feedforward controls (Agent Skills, Superpowers, plugin marketplaces, GitHub Spec-Kit, OpenSpec) that shape the agent before code is generated, and feedback controls (cargo-mutants, WuppieFuzz, CodeScene, deterministic quality gates wired as agent-queryable sensors) that observe behavior after the fact and drive self-correction before the human reviews.
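A minimal sketch of what "deterministic quality gates wired as agent-queryable sensors" might look like in practice - the `quality_gate` function and the command it runs are illustrative assumptions, not a tool the Radar names. The point is the shape: a deterministic check whose result comes back as structured data the agent loop can poll and self-correct against, rather than free text for a human.

```python
# Hedged sketch of a feedback control: a deterministic check exposed as
# an agent-queryable sensor. The function name and verdict schema are
# assumptions for illustration only.
import subprocess
import sys

def quality_gate(cmd):
    """Run a deterministic check and return a machine-readable verdict
    the agent harness can feed back as a correction prompt."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "check": " ".join(cmd),
        "passed": result.returncode == 0,
        "detail": (result.stdout + result.stderr)[-2000:],  # tail for context
    }

# A harness would call sensors like this (linters, mutation tests, type
# checkers) after each generation step, looping until every gate passes.
verdict = quality_gate([sys.executable, "-c", "print('ok')"])
print(verdict["passed"], verdict["check"])
```

Because the verdict is structured and reproducible, the same sensor serves both the agent's self-correction loop and the human reviewer's final gate.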

The blip-level picks worth noting: Adopt now includes Context engineering, Curated shared instructions, DORA metrics (with a new “rework rate” fifth metric), Passkeys (now AAL2-compliant per NIST SP 800-63-4), Structured output from LLMs, and Zero trust architecture, plus Claude Code, Cursor, mise, Apache Iceberg, React JS, React Native, Svelte, and Typer among tools and languages. Caution covers Agent instruction bloat, AI-accelerated shadow IT, Codebase cognitive debt, Coding agent swarms, Coding throughput as a productivity measure, Ignoring durability in agent workflows, MCP by default, Pixel-streamed development environments, and OpenClaw - that last one is the most striking: an autonomous personal-assistant agent flagged for significant concerns requiring careful evaluation. The Radar is telling you that 2026's problem is not building agents; it's containing them.

🤖 Thoughtworks Tech Radar Vol 34 Lands On Permission-Hungry Agents and Cognitive Debt As 2026’s Core Tensions - Vol 34’s four themes read as a punchlist for AI-era engineering: tech evaluation is harder under semantic diffusion, established principles (DORA, pair programming, zero trust, command line) are being reclaimed against agent-generated complexity, permission-hungry agents are structurally unsafe without pipelines of constrained sub-agents, and coding-agent harnesses split into feedforward (Agent Skills, spec-kits) and feedback (mutation testing, linters as sensors) controls. OpenClaw lands in Caution.