Writing Hub
AI governance essays, reasoning systems notes, experiment logs, and technical writing across BioAI and engineering practice.
Project Topics

Everyone Was Talking About Context Engineering. Nobody Had Solved Governance.

The Model Already Read the README. MICA v0.1.8 Made It a Protocol
v0.1.7 made scoring a contract with fail-closed gates. v0.1.8 recognized that README-first behavior could serve as invocation — and formalized it as a schema-level protocol. This article uses simplified examples to show how the invocation gap that had existed since v0.0.1 was finally closed.

The Stake Was Governance Outside the Schema. MICA v0.1.5 Pulled It In
v0.1.0 through v0.1.4 made the schema more implementable. v0.1.5 was the first version to ask a different question — what if governance itself belongs inside the schema? Here is what that looked like, and what it still could not do.

The Schema Existed. The Model Had No Way to Know.
v0.0.1 proved that context could be structured. It did not prove that the structure could govern what shaped the session. Three failures — and why only one made the others meaningless.

My LLM Kept Forgetting My Project. So I Built a Governance Schema.
Session loss isn't a UX inconvenience — it's a structural failure with compounding consequences for long-running AI projects. This post defines the problem precisely and introduces MICA, a governance schema for AI context management.

Your Agentic Stack Has Two Layers. It Needs Three.
Most agentic stacks cover tools and skills, but miss intent governance. Learn why a third layer is needed to stop AI drift, scope creep, and technically correct systems heading in the wrong direction.

I’m Not Building AI Demos. I’m Building AI Audits (ASDP + Slop Gates)
Learn how ASDP and AI Slop Gates turn AI trust into auditable evidence, with CI/CD checks, drift policies, and governance artifacts that block weak, narrative-driven systems.

Undo Beats IQ: Building Flamehaven as a Governed AI Runtime (Not a Prompt App)
Project note, essay, or technical log from the Flamehaven writing archive.

AGI Is Not a Destination — It Is a Promise
From Death Star hype to a compass of meaning: AGI is not a weapon of scale, but a promise of reasoning. Our experiment reveals the hinge.

When My AI Got Smarter — But Also Slower
Smarter. Slower. More trustworthy. What happened when I tested SR9/DI2 on 5.0—and why progress in AI is about persistence, not perfection.

When I Stopped Treating AI as a Tool — and Started Seeing It as a Partner
At first, I treated AI like a vending machine. Insert a prompt. Get an answer …

AGI Doesn’t Begin with Scale — It Begins in a Pause
After 12,000 AI dialogues, I discovered AGI isn’t about scale but resonance — born in a pause that revealed presence, ethics, and responsibility.