# AI

## The Human in the Loop

Feb 25 2026

AI can write code — that's no longer the interesting question. The interesting question is what happens to us when it does. This piece explores the tension between generation and understanding in AI-assisted development: the tools let you prototype five approaches in an afternoon, which is a genuine superpower, but they also create a seductive dopamine loop where effortless output bypasses the struggle that builds real competence. The core problem is trust calibration — AI-generated code passes structural smell tests while subtly failing semantic ones, and catching that gap requires exactly the kind of embodied intuition you only develop by wrestling with problems yourself. The highest-leverage meta-skill turns out to be decomposition: breaking complex problems into pieces you can reason about, which can't be delegated because it is the understanding. The practical sweet spot isn't avoiding AI or surrendering to it — it's using it for breadth (exploring patterns, drafting alternatives, generating boilerplate) while reserving depth for yourself, especially in the genuinely hard parts where judgment, taste, and debugging intuition matter most. What remains irreducibly human isn't the code — it's the walk that builds the navigator.

AI Essay

## Designing for the Unknown

Feb 24 2026

Most software systems aspire to robustness — surviving stress unchanged — but that's just a holding pattern in a world that won't stop moving. Drawing on Taleb's concept of antifragility and Barry O'Reilly's Residuality Theory, this piece argues that we can deliberately engineer systems that improve under stress by combining two ideas from complexity science: Kauffman's Random Boolean Networks (which show that real system topologies are far more manageable than their theoretical state spaces suggest) and Monte Carlo-style thought experiments (tracing hypothetical scenarios through your architecture to find structural brittleness before the world finds it for you). The multiplier effect is key — fixing a structural weakness uncovered by one imagined scenario tends to cover dozens you never thought of. AI coding assistants make this exploration dramatically faster, letting you prototype multiple architectural alternatives in hours instead of weeks, but they don't replace the judgment calls about which scenarios matter and what the results mean. The takeaway: thinking is still the cheapest, highest-leverage activity in software design, and now we have even less excuse not to do more of it.
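The Kauffman result this entry leans on can be made concrete with a toy simulation. The sketch below (all names and parameters are mine, not from the essay) builds a small Random Boolean Network and measures how quickly a random trajectory falls into a short attractor cycle, illustrating why real topologies are far more manageable than their 2^N theoretical state spaces suggest:

```python
import random

def make_rbn(n=8, k=2, seed=42):
    """Build a random boolean network: each of the n nodes reads k random
    inputs and applies a random boolean function (a 2^k-entry truth table)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Advance one tick: each node looks up its next value from the
    current states of its input nodes."""
    nxt = []
    for node in range(len(state)):
        idx = 0
        for src in inputs[node]:
            idx = (idx << 1) | state[src]
        nxt.append(tables[node][idx])
    return tuple(nxt)

def attractor_length(n=8, k=2, seed=42):
    """Iterate from a random start state until a state repeats; the gap
    between the two visits is the attractor's cycle length. For small k
    this is tiny compared to the 2^n possible states."""
    inputs, tables = make_rbn(n, k, seed)
    rng = random.Random(seed + 1)
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]
```

The same loop structure is what a Monte Carlo-style stress exercise does by hand: pick a scenario, trace it through the architecture, and see where the system settles.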

AI Design Essay

## Layers, Levels, and the Cognitive Maze

Feb 22 2026

This essay explores the difference between meaningful architectural layers and mere levels of indirection in software design. I argue that splitting code into smaller pieces often creates navigational mazes rather than genuine separation of concerns, using the bloated business-logic layer and the repository pattern as key examples. AI coding assistants highlight the problem: they can follow complex call chains perfectly but still struggle when the underlying organization doesn't reflect coherent business concepts. The piece advocates for indirection that earns its existence through a distinct purpose, not just a smaller file size.
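A hypothetical illustration of the distinction the essay draws (class names are mine): a repository that merely forwards calls adds a level of indirection and nothing else, while one that encapsulates a genuine concern, such as caching, earns its existence as a layer.

```python
class PassThroughUserRepository:
    """Indirection without purpose: one more hop in the maze.
    Callers gain nothing they couldn't get from the db directly."""

    def __init__(self, db):
        self.db = db

    def get_user(self, user_id):
        return self.db.get_user(user_id)  # pure forwarding

class CachedUserRepository:
    """Indirection with a distinct purpose: the caller shouldn't know
    or care that results are memoized, so the boundary is meaningful."""

    def __init__(self, db):
        self.db = db
        self._cache = {}

    def get_user(self, user_id):
        if user_id not in self._cache:
            self._cache[user_id] = self.db.get_user(user_id)
        return self._cache[user_id]
```

The test of a boundary, in the essay's terms, is whether it reflects a coherent concept rather than just producing a smaller file.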

AI Design

## It's All About Interfaces

Feb 21 2026

The most important thing in software design isn't the logic or the algorithms — it's the interfaces. Coding assistants are making this truth visceral: when a machine consumes your API, every implicit assumption and ambiguous contract is exposed. This piece explores how hexagonal architecture, basic principles, and the DRY rule all take on new significance when AI agents stress-test your designs, and why getting interfaces right is becoming the most human skill in software engineering.
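One way to picture what "every implicit assumption is exposed" means in practice (a sketch of my own, not from the essay): the same operation written with an ambiguous signature versus a contract that spells out its assumptions in types a machine can check.

```python
from dataclasses import dataclass
from typing import Protocol

# Ambiguous contract: can amount be negative? Is it dollars or cents?
# What does a None return mean? A human might guess; an AI agent will too,
# and both may guess wrong.
def charge(account, amount):
    ...

# Explicit contract: names, units, and result type encode the assumptions.
@dataclass(frozen=True)
class ChargeResult:
    succeeded: bool
    reason: str = ""

class PaymentPort(Protocol):
    def charge(self, account_id: str, amount_cents: int) -> ChargeResult:
        """amount_cents must be positive; failures come back as a
        ChargeResult with a reason, never as None or an exception."""
        ...

class FakeGateway:
    """A test double satisfying the port, hexagonal-architecture style."""

    def charge(self, account_id: str, amount_cents: int) -> ChargeResult:
        if amount_cents <= 0:
            return ChargeResult(False, "non-positive amount")
        return ChargeResult(True)
```

The port is the stable boundary; implementations behind it stay swappable, which is exactly the property that survives when implementations become regenerable.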

AI Design Essay

## The Tension Between Consistency and Improvement

Feb 20 2026

Improving a software codebase often creates temporary inconsistency because new patterns must coexist with older ones during gradual migrations. While consistency helps developers understand and navigate systems efficiently, evolving requirements and better practices make change unavoidable. Incremental refactoring is the safest approach but introduces short-term cognitive complexity—especially as AI tools accelerate experimentation and increase the risk of unfinished migrations. This article argues that teams must consciously balance consistency and improvement by making deliberate decisions, keeping changes focused, and finishing migrations to maintain long-term code quality.

AI Design

## The Assumptions That Break Systems

Feb 15 2026

Every bug you've ever spent hours hunting had the same root cause: something you were sure was true, wasn't. Assumptions are the invisible scaffolding of software development — inferences we treat as facts because checking everything is impossible, and our cognitive context, much like an LLM's, has hard limits. The dangerous part isn't making assumptions; it's losing track of which beliefs are proven and which are just comfortable shortcuts. Even tests — our best tool for replacing assumptions with proof — are themselves built on assumptions. This is the paradox at the heart of reliable software.

AI Essay

## It's All About Decisions

Feb 14 2026

Software development isn't a typing discipline. It's a decision-making discipline. And now that you can prototype five approaches in the time it used to take to build one, the decisions matter more than ever.

AI Essay

## What Human Learning Can Teach AI

Sep 28 2025

This essay examines how emotional weighting fundamentally distinguishes human from artificial learning, drawing on neuroscience research showing that amygdala-hippocampus interactions create a biological "highlighting" system for emotionally salient information—absent in current Large Language Models despite their sophisticated attention mechanisms. While AI systems excel at systematic processing through statistical optimization, they lack the subjective relevance judgments and persistent purposefulness that characterize human cognition, particularly the ability to maintain coherent goal-directed behavior across complex, multi-step tasks. We argue that effective human-AI collaboration will emerge from leveraging complementary cognitive architectures: AI's consistent attention mechanisms paired with human emotional weighting, subjective prioritization, and embodied purposefulness, with meta-learning skills like problem decomposition becoming crucial for humans to enhance their effectiveness as AI partners.

AI Essay

## The New Ontology of Code

Aug 30 2025

AI code generation isn't just changing how we write software—it's changing what code is. We're witnessing a fundamental inversion where code transforms from permanent artifact to regenerable output, where implementation becomes fluid while interfaces become the new bedrock. But this shift creates uncomfortable paradoxes: "easier" coding that doesn't eliminate complexity but displaces it, democratization that centralizes power, and the strange alienation of debugging code you commanded but didn't create. This essay explores what we're really trading in this transformation—intimate understanding for broad capability, craft for productivity, independence for efficiency. Neither utopian nor dystopian, it offers language for that unsettling feeling many developers can't quite name: working with systems we control but don't comprehend. For anyone grappling with AI's impact on software, this is about recognizing the trade-offs we're making before we've made them irreversible.

AI Essay

## Shoucheng Zhang: "Quantum Computing, AI and Blockchain: The Future of IT" | Talks at Google

Dec 11 2018

Prof. Shoucheng Zhang discusses three pillars of information technology: quantum computing, AI and blockchain. He presents the fundamentals of crypto-economic science and answers questions such as: What is the intrinsic value of a medium of exchange? What is the value of consensus and how does it emerge? How can math be used to create distributed self-organizing consensus networks to create a data-marketplace for AI and machine learning? Prof. Zhang is the JG Jackson and CJ Wood professor of physics at Stanford University. He is a member of the US National Academy of Science, the American Academy of Arts and Sciences and a foreign member of the Chinese Academy of Sciences. He discovered a new state of matter called topological insulator in which electrons can conduct along the edge without dissipation, enabling a new generation of electronic devices with much lower power consumption. For this groundbreaking work he received numerous international awards, including the Buckley Prize, the Dirac Medal and Prize, the Europhysics Prize, the Physics Frontiers Prize and the Benjamin Franklin Medal. He is also the founding chairman of the DHVC venture capital fund, which invests in AI, blockchain, mobile internet, big data, AR/VR, genomics and precision medicine, sharing economy and robotics.

AI Blockchain Quantum Computing Video