<rss xmlns:source="http://source.scripting.com/" version="2.0">
  <channel>
    <title>Coté</title>
    <link>https://cote.report/</link>
    <description></description>
    
    <language>en</language>
    
    <lastBuildDate>Sun, 26 Apr 2026 15:51:17 +0200</lastBuildDate>
    <item>
      <title>Patel: AI&#39;&#39;s Unpopularity Isn&#39;&#39;t a Marketing Problem, It&#39;&#39;s &#34;Software Brain</title>
      <link>https://cote.report/2026/04/26/patel-ais-unpopularity-isnt-a.html</link>
      <pubDate>Sun, 26 Apr 2026 15:51:17 +0200</pubDate>
      
      <guid>http://cotereport.micro.blog/2026/04/26/patel-ais-unpopularity-isnt-a.html</guid>
      <description>&lt;h2 id=&#34;beware-software-brainhttpswwwthevergecompodcast917029software-brain-ai-backlash-databases-automation&#34;&gt;&lt;a href=&#34;https://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation&#34;&gt;BEWARE SOFTWARE BRAIN&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Nilay Patel names the pattern behind the tech industry&amp;rsquo;s bewilderment at AI&amp;rsquo;s collapsing favorability: &lt;strong&gt;&amp;ldquo;software brain,&amp;rdquo; the worldview that treats everything as databases controllable by structured code.&lt;/strong&gt; It is the same mindset Marc Andreessen codified in 2011, now turbocharged by AI.&lt;/p&gt;
&lt;p&gt;Polling is brutal. NBC rates AI below ICE; Quinnipiac finds over half of Americans think AI will do more harm than good; Gallup shows Gen Z hope dropping to 18 percent while anger climbs.&lt;/p&gt;
&lt;p&gt;Executives like Nadella and Altman read this as a communications failure. &lt;strong&gt;Patel&amp;rsquo;s claim is that it is not. ChatGPT already has 900 million weekly users; people are reacting to their lived experience, not an ad gap.&lt;/strong&gt; You cannot market people out of what they feel every day.&lt;/p&gt;
&lt;p&gt;The software-brain thesis scales into a critique of adjacent systems. &lt;strong&gt;Law looks like code - precedent, citations, statutes - but the law is constitutively ambiguous, and its ambiguity is the point.&lt;/strong&gt; Attempts to force the legal system into deterministic computation (e.g., fully automated AI arbitration) mistake formality for determinism and hollow out the thing that makes law legitimate.&lt;/p&gt;
&lt;p&gt;Business, by contrast, is where software brain genuinely wins. &lt;strong&gt;Modern enterprises are already databases-plus-loops, which is why Anthropic and OpenAI are charging hard at the enterprise - the value is real where the terrain actually is software.&lt;/strong&gt; Consulting decks, advertising, marketing automation - all up for grabs.&lt;/p&gt;
&lt;p&gt;The failure mode is exporting that logic to human life. &lt;strong&gt;The ask is no longer that computers adapt to people but that people make themselves legible to the machine: open your files, email, calendar, and messages so the AI becomes more valuable.&lt;/strong&gt; A decade of smart-home flops already showed regular people don&amp;rsquo;t want this; AI is not going to reverse that.&lt;/p&gt;
&lt;p&gt;Meanwhile the externalities - energy, emissions, RAM supply, data-center politics, even political violence aimed at executives - compound. &lt;strong&gt;The industry is simultaneously telling people their jobs will vanish, their lives should be instrumented, and the models might end the world, then wondering why nobody likes them.&lt;/strong&gt; Patel&amp;rsquo;s bottom line: this is not a haircut problem. It is a worldview problem.&lt;/p&gt;
&lt;h2 id=&#34;links&#34;&gt;Links&lt;/h2&gt;
&lt;p&gt;🤖 &lt;a href=&#34;https://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation&#34;&gt;BEWARE SOFTWARE BRAIN&lt;/a&gt; - Nilay Patel argues AI&amp;rsquo;s unpopularity isn&amp;rsquo;t a marketing gap but a collision between tech&amp;rsquo;s database-worldview (&amp;ldquo;software brain&amp;rdquo;) and human life, which refuses to flatten into loops. The industry is winning in business where the world is already software, and alienating everyone else.&lt;/p&gt;
&lt;!--
🤖 BEWARE SOFTWARE BRAIN
https://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation
Nilay Patel argues AI&#39;s unpopularity isn&#39;t a marketing gap but a collision between tech&#39;s database-worldview (&#34;software brain&#34;) and human life, which refuses to flatten into loops. The industry is winning in business where the world is already software, and alienating everyone else.
--&gt;
</description>
      <source:markdown>## [BEWARE SOFTWARE BRAIN](https://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation)

Nilay Patel names the pattern behind the tech industry&#39;s bewilderment at AI&#39;s collapsing favorability: **&#34;software brain,&#34; the worldview that treats everything as databases controllable by structured code.** It is the same mindset Marc Andreessen codified in 2011, now turbocharged by AI.

Polling is brutal. NBC rates AI below ICE; Quinnipiac finds over half of Americans think AI will do more harm than good; Gallup shows Gen Z hope dropping to 18 percent while anger climbs.

Executives like Nadella and Altman read this as a communications failure. **Patel&#39;s claim is that it is not. ChatGPT already has 900 million weekly users; people are reacting to their lived experience, not an ad gap.** You cannot market people out of what they feel every day.

The software-brain thesis scales into a critique of adjacent systems. **Law looks like code - precedent, citations, statutes - but the law is constitutively ambiguous, and its ambiguity is the point.** Attempts to force the legal system into deterministic computation (e.g., fully automated AI arbitration) mistake formality for determinism and hollow out the thing that makes law legitimate.

Business, by contrast, is where software brain genuinely wins. **Modern enterprises are already databases-plus-loops, which is why Anthropic and OpenAI are charging hard at the enterprise - the value is real where the terrain actually is software.** Consulting decks, advertising, marketing automation - all up for grabs.

The failure mode is exporting that logic to human life. **The ask is no longer that computers adapt to people but that people make themselves legible to the machine: open your files, email, calendar, and messages so the AI becomes more valuable.** A decade of smart-home flops already showed regular people don&#39;t want this; AI is not going to reverse that.

Meanwhile the externalities - energy, emissions, RAM supply, data-center politics, even political violence aimed at executives - compound. **The industry is simultaneously telling people their jobs will vanish, their lives should be instrumented, and the models might end the world, then wondering why nobody likes them.** Patel&#39;s bottom line: this is not a haircut problem. It is a worldview problem.

## Links

🤖 [BEWARE SOFTWARE BRAIN](https://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation) - Nilay Patel argues AI&#39;s unpopularity isn&#39;t a marketing gap but a collision between tech&#39;s database-worldview (&#34;software brain&#34;) and human life, which refuses to flatten into loops. The industry is winning in business where the world is already software, and alienating everyone else.

&lt;!--
🤖 BEWARE SOFTWARE BRAIN
https://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation
Nilay Patel argues AI&#39;s unpopularity isn&#39;t a marketing gap but a collision between tech&#39;s database-worldview (&#34;software brain&#34;) and human life, which refuses to flatten into loops. The industry is winning in business where the world is already software, and alienating everyone else.
--&gt;
</source:markdown>
    </item>
    
    <item>
      <title>Thoughtworks Tech Radar Vol 34 Lands On Permission-Hungry Agents and Cognitive</title>
      <link>https://cote.report/2026/04/26/thoughtworks-tech-radar-vol-lands.html</link>
      <pubDate>Sun, 26 Apr 2026 14:01:50 +0200</pubDate>
      
      <guid>http://cotereport.micro.blog/2026/04/26/thoughtworks-tech-radar-vol-lands.html</guid>
      <description>&lt;p&gt;The April 2026 Radar is explicitly a Radar about AI-assisted engineering, and its four themes map the operational tensions the TAB is seeing in the field. The first is that &lt;strong&gt;evaluating technology is getting harder in an agentic world&lt;/strong&gt;. Semantic diffusion (overlapping terms for similar things - spec-driven development, harness engineering, MCP-this, agent-that), single-contributor AI-built tooling that&amp;rsquo;s weeks old, and the impossibility of waiting for maturity without going stale are all compressing the Radar&amp;rsquo;s traditional rhythm. Underneath this, they flag &lt;strong&gt;codebase cognitive debt&lt;/strong&gt; - the accumulating gap between code shipped and mental models of what it does.&lt;/p&gt;
&lt;p&gt;The second theme, &lt;strong&gt;&amp;ldquo;retaining principles, relinquishing patterns,&amp;quot;&lt;/strong&gt; captures the pendulum. The Radar returns to pair programming, zero trust, mutation testing, DORA metrics, clean code, testability, accessibility, and even the command line as a first-class interface - not out of nostalgia but as a counterweight to the speed at which agents generate complexity. At the same time, they flag that team topologies themselves will have to evolve alongside &lt;strong&gt;agent topologies&lt;/strong&gt;, and measurement of &amp;ldquo;developer&amp;rdquo; productivity needs a rewrite.&lt;/p&gt;
&lt;p&gt;The third theme, &lt;strong&gt;securing permission-hungry agents&lt;/strong&gt;, is the sharpest. The agents worth building need access to everything - OpenClaw, Claude Cowork, Gas Town agent swarms - and Simon Willison&amp;rsquo;s &amp;ldquo;lethal trifecta&amp;rdquo; (private data + untrusted content + external action) now describes most useful agents by default, not by misconfiguration. Prompt injection remains unsolved, and model behavior is inconsistent enough that a single successful run gives no guarantee at scale. The Radar&amp;rsquo;s bet: &lt;strong&gt;safe agent systems are pipelines of constrained agents, not monolithic ones&lt;/strong&gt;, with Agent Skills emerging as a safer alternative to MCP, durable agents as a defense against instruction bloat, and strong monitoring and control as table stakes.&lt;/p&gt;
&lt;p&gt;The fourth theme, &lt;strong&gt;&amp;ldquo;putting coding agents on a leash,&amp;quot;&lt;/strong&gt; splits the harness engineering landscape into feedforward controls (&lt;strong&gt;Agent Skills, Superpowers, plugin marketplaces, GitHub Spec-Kit, OpenSpec&lt;/strong&gt;) that shape the agent before code is generated, and feedback controls (&lt;strong&gt;cargo-mutants, WuppieFuzz, CodeScene, deterministic quality gates wired as agent-queryable sensors&lt;/strong&gt;) that observe behavior after the fact and drive self-correction before the human reviews.&lt;/p&gt;
&lt;p&gt;The blip-level picks worth noting: &lt;strong&gt;Adopt now includes Context engineering, Curated shared instructions, DORA metrics (with a new &amp;ldquo;rework rate&amp;rdquo; fifth metric), Passkeys (now AAL2-compliant per NIST SP 800-63-4), Structured output from LLMs, and Zero trust architecture&lt;/strong&gt;, plus Claude Code, Cursor, mise, Apache Iceberg, React JS, React Native, Svelte, and Typer in tools/languages. &lt;strong&gt;Caution covers Agent instruction bloat, AI-accelerated shadow IT, Codebase cognitive debt, Coding agent swarms, Coding throughput as a productivity measure, Ignoring durability in agent workflows, MCP by default, Pixel-streamed development environments, and OpenClaw&lt;/strong&gt; - that last one is the most striking: an autonomous personal-assistant agent flagged for significant concerns requiring careful evaluation. The Radar is telling you that 2026&amp;rsquo;s problem is not building agents; it&amp;rsquo;s containing them.&lt;/p&gt;
&lt;h2 id=&#34;links&#34;&gt;Links&lt;/h2&gt;
&lt;p&gt;🤖 &lt;a href=&#34;https://www.thoughtworks.com/radar&#34;&gt;Thoughtworks Tech Radar Vol 34 Lands On Permission-Hungry Agents and Cognitive Debt As 2026&amp;rsquo;s Core Tensions&lt;/a&gt; - Vol 34&amp;rsquo;s four themes read as a punchlist for AI-era engineering: tech evaluation is harder under semantic diffusion, established principles (DORA, pair programming, zero trust, command line) are being reclaimed against agent-generated complexity, permission-hungry agents are structurally unsafe without pipelines of constrained sub-agents, and coding-agent harnesses split into feedforward (Agent Skills, spec-kits) and feedback (mutation testing, linters as sensors) controls. OpenClaw lands in Caution.&lt;/p&gt;
&lt;!--
🤖 Thoughtworks Tech Radar Vol 34 Lands On Permission-Hungry Agents and Cognitive Debt As 2026&#39;s Core Tensions
https://www.thoughtworks.com/radar
Vol 34&#39;s four themes read as a punchlist for AI-era engineering: tech evaluation is harder under semantic diffusion, established principles (DORA, pair programming, zero trust, command line) are being reclaimed against agent-generated complexity, permission-hungry agents are structurally unsafe without pipelines of constrained sub-agents, and coding-agent harnesses split into feedforward (Agent Skills, spec-kits) and feedback (mutation testing, linters as sensors) controls. OpenClaw lands in Caution.
--&gt;
</description>
      <source:markdown>The April 2026 Radar is explicitly a Radar about AI-assisted engineering, and its four themes map the operational tensions the TAB is seeing in the field. The first is that **evaluating technology is getting harder in an agentic world**. Semantic diffusion (overlapping terms for similar things - spec-driven development, harness engineering, MCP-this, agent-that), single-contributor AI-built tooling that&#39;s weeks old, and the impossibility of waiting for maturity without going stale are all compressing the Radar&#39;s traditional rhythm. Underneath this, they flag **codebase cognitive debt** - the accumulating gap between code shipped and mental models of what it does.

The second theme, **&#34;retaining principles, relinquishing patterns,&#34;** captures the pendulum. The Radar returns to pair programming, zero trust, mutation testing, DORA metrics, clean code, testability, accessibility, and even the command line as a first-class interface - not out of nostalgia but as a counterweight to the speed at which agents generate complexity. At the same time, they flag that team topologies themselves will have to evolve alongside **agent topologies**, and measurement of &#34;developer&#34; productivity needs a rewrite.

The third theme, **securing permission-hungry agents**, is the sharpest. The agents worth building need access to everything - OpenClaw, Claude Cowork, Gas Town agent swarms - and Simon Willison&#39;s &#34;lethal trifecta&#34; (private data + untrusted content + external action) now describes most useful agents by default, not by misconfiguration. Prompt injection remains unsolved, and model behavior is inconsistent enough that a single successful run gives no guarantee at scale. The Radar&#39;s bet: **safe agent systems are pipelines of constrained agents, not monolithic ones**, with Agent Skills emerging as a safer alternative to MCP, durable agents as a defense against instruction bloat, and strong monitoring and control as table stakes.

The fourth theme, **&#34;putting coding agents on a leash,&#34;** splits the harness engineering landscape into feedforward controls (**Agent Skills, Superpowers, plugin marketplaces, GitHub Spec-Kit, OpenSpec**) that shape the agent before code is generated, and feedback controls (**cargo-mutants, WuppieFuzz, CodeScene, deterministic quality gates wired as agent-queryable sensors**) that observe behavior after the fact and drive self-correction before the human reviews.

The blip-level picks worth noting: **Adopt now includes Context engineering, Curated shared instructions, DORA metrics (with a new &#34;rework rate&#34; fifth metric), Passkeys (now AAL2-compliant per NIST SP 800-63-4), Structured output from LLMs, and Zero trust architecture**, plus Claude Code, Cursor, mise, Apache Iceberg, React JS, React Native, Svelte, and Typer in tools/languages. **Caution covers Agent instruction bloat, AI-accelerated shadow IT, Codebase cognitive debt, Coding agent swarms, Coding throughput as a productivity measure, Ignoring durability in agent workflows, MCP by default, Pixel-streamed development environments, and OpenClaw** - that last one is the most striking: an autonomous personal-assistant agent flagged for significant concerns requiring careful evaluation. The Radar is telling you that 2026&#39;s problem is not building agents; it&#39;s containing them.

## Links

🤖 [Thoughtworks Tech Radar Vol 34 Lands On Permission-Hungry Agents and Cognitive Debt As 2026&#39;s Core Tensions](https://www.thoughtworks.com/radar) - Vol 34&#39;s four themes read as a punchlist for AI-era engineering: tech evaluation is harder under semantic diffusion, established principles (DORA, pair programming, zero trust, command line) are being reclaimed against agent-generated complexity, permission-hungry agents are structurally unsafe without pipelines of constrained sub-agents, and coding-agent harnesses split into feedforward (Agent Skills, spec-kits) and feedback (mutation testing, linters as sensors) controls. OpenClaw lands in Caution.

&lt;!--
🤖 Thoughtworks Tech Radar Vol 34 Lands On Permission-Hungry Agents and Cognitive Debt As 2026&#39;s Core Tensions
https://www.thoughtworks.com/radar
Vol 34&#39;s four themes read as a punchlist for AI-era engineering: tech evaluation is harder under semantic diffusion, established principles (DORA, pair programming, zero trust, command line) are being reclaimed against agent-generated complexity, permission-hungry agents are structurally unsafe without pipelines of constrained sub-agents, and coding-agent harnesses split into feedforward (Agent Skills, spec-kits) and feedback (mutation testing, linters as sensors) controls. OpenClaw lands in Caution.
--&gt;
</source:markdown>
    </item>
    
  </channel>
</rss>
