{
	"version": "https://jsonfeed.org/version/1",
	"title": "Coté",
	"icon": "https://avatars.micro.blog/avatars/2025/42/8457.jpg",
	"home_page_url": "https://cote.report/",
	"feed_url": "https://cote.report/feed.json",
	"items": [
			{
				"id": "http://cotereport.micro.blog/2026/04/26/patel-ais-unpopularity-isnt-a.html",
				"title": "Patel: AI's Unpopularity Isn't a Marketing Problem, It's \"Software Brain\"",
				"content_html": "<h2 id=\"beware-software-brainhttpswwwthevergecompodcast917029software-brain-ai-backlash-databases-automation\"><a href=\"https://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation\">BEWARE SOFTWARE BRAIN</a></h2>\n<p>Nilay Patel names the pattern behind the tech industry&rsquo;s bewilderment at AI&rsquo;s collapsing favorability: <strong>&ldquo;software brain,&rdquo; the worldview that treats everything as databases controllable by structured code.</strong> It is the same mindset Marc Andreessen codified in 2011, now turbocharged by AI.</p>\n<p>Polling is brutal. NBC rates AI below ICE; Quinnipiac finds over half of Americans think AI will do more harm than good; Gallup shows Gen Z hope dropping to 18 percent while anger climbs.</p>\n<p>Executives like Nadella and Altman read this as a communications failure. <strong>Patel&rsquo;s claim is that it is not. ChatGPT already has 900 million weekly users; people are reacting to their lived experience, not an ad gap.</strong> You cannot market people out of what they feel every day.</p>\n<p>The software-brain thesis scales into a critique of adjacent systems. <strong>Law looks like code - precedent, citations, statutes - but the law is constitutively ambiguous, and its ambiguity is the point.</strong> Attempts to force the legal system into deterministic computation (e.g., fully automated AI arbitration) mistake formality for determinism and hollow out the thing that makes law legitimate.</p>\n<p>Business, by contrast, is where software brain genuinely wins. <strong>Modern enterprises are already databases-plus-loops, which is why Anthropic and OpenAI are charging hard at the enterprise - the value is real where the terrain actually is software.</strong> Consulting decks, advertising, marketing automation - all up for grabs.</p>\n<p>The failure mode is exporting that logic to human life. <strong>The ask is no longer that computers adapt to people but that people make themselves legible to the machine: open your files, email, calendar, and messages so the AI becomes more valuable.</strong> A decade of smart-home flops already showed regular people don&rsquo;t want this; AI is not going to reverse that.</p>\n<p>Meanwhile the externalities - energy, emissions, RAM supply, data-center politics, even political violence aimed at executives - compound. <strong>The industry is simultaneously telling people their jobs will vanish, their lives should be instrumented, and the models might end the world, then wondering why nobody likes them.</strong> Patel&rsquo;s bottom line: this is not a haircut problem. It is a worldview problem.</p>\n<h2 id=\"links\">Links</h2>\n<p>🤖 <a href=\"https://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation\">BEWARE SOFTWARE BRAIN</a> - Nilay Patel argues AI&rsquo;s unpopularity isn&rsquo;t a marketing gap but a collision between tech&rsquo;s database-worldview (&ldquo;software brain&rdquo;) and human life, which refuses to flatten into loops. The industry is winning in business where the world is already software, and alienating everyone else.</p>\n<!--\n🤖 BEWARE SOFTWARE BRAIN\nhttps://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation\nNilay Patel argues AI's unpopularity isn't a marketing gap but a collision between tech's database-worldview (\"software brain\") and human life, which refuses to flatten into loops. The industry is winning in business where the world is already software, and alienating everyone else.\n-->\n",
				"date_published": "2026-04-26T15:51:17+02:00",
				"url": "https://cote.report/2026/04/26/patel-ais-unpopularity-isnt-a.html"
			},
			{
				"id": "http://cotereport.micro.blog/2026/04/26/thoughtworks-tech-radar-vol-lands.html",
				"title": "Thoughtworks Tech Radar Vol 34 Lands On Permission-Hungry Agents and Cognitive Debt As 2026's Core Tensions",
				"content_html": "<p>The April 2026 Radar is explicitly a Radar about AI-assisted engineering, and its four themes map the operational tensions the TAB is seeing in the field. The first is that <strong>evaluating technology is getting harder in an agentic world</strong>. Semantic diffusion (overlapping terms for similar things - spec-driven development, harness engineering, MCP-this, agent-that), single-contributor AI-built tooling that&rsquo;s weeks old, and the impossibility of waiting for maturity without going stale are all compressing the Radar&rsquo;s traditional rhythm. Underneath this, they flag <strong>codebase cognitive debt</strong> - the accumulating gap between code shipped and mental models of what it does.</p>\n<p>The second theme, <strong>&ldquo;retaining principles, relinquishing patterns,&rdquo;</strong> captures the pendulum. The Radar returns to pair programming, zero trust, mutation testing, DORA metrics, clean code, testability, accessibility, and even the command line as a first-class interface - not out of nostalgia but as a counterweight to the speed at which agents generate complexity. At the same time, they flag that team topologies themselves will have to evolve alongside <strong>agent topologies</strong>, and measurement of &ldquo;developer&rdquo; productivity needs a rewrite.</p>\n<p>The third theme, <strong>securing permission-hungry agents</strong>, is the sharpest. The agents worth building need access to everything - OpenClaw, Claude Cowork, Gas Town agent swarms - and Simon Willison&rsquo;s &ldquo;lethal trifecta&rdquo; (private data + untrusted content + external action) now describes most useful agents by default, not by misconfiguration. Prompt injection remains unsolved, and model behavior is inconsistent enough that a single successful run gives no guarantee at scale. The Radar&rsquo;s bet: <strong>safe agent systems are pipelines of constrained agents, not monolithic ones</strong>, with Agent Skills emerging as a safer alternative to MCP, durable agents as a defense against instruction bloat, and strong monitoring and control as table stakes.</p>\n<p>The fourth theme, <strong>&ldquo;putting coding agents on a leash,&rdquo;</strong> splits the harness engineering landscape into feedforward controls (<strong>Agent Skills, Superpowers, plugin marketplaces, GitHub Spec-Kit, OpenSpec</strong>) that shape the agent before code is generated, and feedback controls (<strong>cargo-mutants, WuppieFuzz, CodeScene, deterministic quality gates wired as agent-queryable sensors</strong>) that observe behavior after the fact and drive self-correction before the human reviews.</p>\n<p>The blip-level picks worth noting: <strong>Adopt now includes Context engineering, Curated shared instructions, DORA metrics (with a new &ldquo;rework rate&rdquo; fifth metric), Passkeys (now AAL2-compliant per NIST SP 800-63-4), Structured output from LLMs, and Zero trust architecture</strong>, plus Claude Code, Cursor, mise, Apache Iceberg, React JS, React Native, Svelte, and Typer in tools/languages. <strong>Caution covers Agent instruction bloat, AI-accelerated shadow IT, Codebase cognitive debt, Coding agent swarms, Coding throughput as a productivity measure, Ignoring durability in agent workflows, MCP by default, Pixel-streamed development environments, and OpenClaw</strong> - that last one is the most striking: an autonomous personal-assistant agent flagged for significant concerns requiring careful evaluation. The Radar is telling you that 2026&rsquo;s problem is not building agents; it&rsquo;s containing them.</p>\n<h2 id=\"links\">Links</h2>\n<p>🤖 <a href=\"https://www.thoughtworks.com/radar\">Thoughtworks Tech Radar Vol 34 Lands On Permission-Hungry Agents and Cognitive Debt As 2026&rsquo;s Core Tensions</a> - Vol 34&rsquo;s four themes read as a punchlist for AI-era engineering: tech evaluation is harder under semantic diffusion, established principles (DORA, pair programming, zero trust, command line) are being reclaimed against agent-generated complexity, permission-hungry agents are structurally unsafe without pipelines of constrained sub-agents, and coding-agent harnesses split into feedforward (Agent Skills, spec-kits) and feedback (mutation testing, linters as sensors) controls. OpenClaw lands in Caution.</p>\n<!--\n🤖 Thoughtworks Tech Radar Vol 34 Lands On Permission-Hungry Agents and Cognitive Debt As 2026's Core Tensions\nhttps://www.thoughtworks.com/radar\nVol 34's four themes read as a punchlist for AI-era engineering: tech evaluation is harder under semantic diffusion, established principles (DORA, pair programming, zero trust, command line) are being reclaimed against agent-generated complexity, permission-hungry agents are structurally unsafe without pipelines of constrained sub-agents, and coding-agent harnesses split into feedforward (Agent Skills, spec-kits) and feedback (mutation testing, linters as sensors) controls. OpenClaw lands in Caution.\n-->\n",
				"date_published": "2026-04-26T14:01:50+02:00",
				"url": "https://cote.report/2026/04/26/thoughtworks-tech-radar-vol-lands.html"
			}
	]
}
