Writing Hub
AI governance essays, reasoning systems notes, experiment logs, and technical writing across Bio-AI and engineering practice.

I Audited 10 Open-Source Bio-AI Repos. Most Could Produce Outputs. Few Could Establish Trust.
I audited 10 prominent open-source repositories. Most could produce outputs. Very few could establish what those outputs meant.

Bio-AI Repository Audit 2026: A Technical Report on 10 Open-Source Systems
We audited 10 prominent open-source Bio-AI repositories using code inspection and STEM-AI trust scoring. 8 of 10 scored T0: trust not established. Here is what the code actually shows.

Medical AI Repositories Need More Than Benchmarks. We Built STEM-AI to Audit Trust
STEM-AI is a governance audit framework for public medical AI repositories. It scores README integrity, cross-platform consistency, and code infrastructure — because benchmarks alone don't tell you if a bio-AI tool is safe to trust.

How do you know when your entire AI pipeline is wrong — not just one model? (EXP-033)
EXP-033 shows how to validate an entire AI pipeline, not just one model, using five-gate checkpoints, reproducible PASS/BLOCK parity, AlphaGenome on/off testing, and fully traceable governance decisions.

What AI Changed About Research Code — and What It Didn’t
The old bottleneck was writing the code. The new bottleneck is proving that the code still means what the theory meant.

What an AI Reasoning Engine Built for Alzheimer's Metabolic Research: A Code Walkthrough
A code walkthrough of an AI reasoning engine for Alzheimer’s metabolic research, showing how literature ingestion, causal inference, and executable biomarker scaffolds generate falsifiable pre-validation hypotheses.

Chaos Engineering for AI: Validating a Fail-Closed Pipeline with Fake Data and Math
A case study in AI governance showing how synthetic invalid inputs, structural disagreement, SIDRCE ethics checks, and end-to-end reliability scoring triggered a safe BLOCK verdict in a biomedical pipeline.

When AI Models Fight, Truth Wins: The “Eureka” Moment for Tired Researchers
To the grad student staring at a pLDDT of 90 and wondering why the ligand won’t bind.

From 97% Model Accuracy to 74% Clinical Reliability: Building RSN-NNSL-GATE-001
Learn how RSN-NNSL-GATE-001 turns high model accuracy into system-level clinical reliability by blocking unsafe AI pipeline decisions, measuring end-to-end risk, and enforcing fail-closed governance.

When Adding Chai-1 and Boltz-2 Exposed Hidden Model Disagreement (Trinity Protocol Part)
See how adding Chai-1 and Boltz-2 to an AlphaFold workflow exposed hidden model disagreement, increased drift, and revealed why failed convergence can be the most valuable signal in computational biology.

Orchestrating AlphaFold 3 & 2 with Python: Handling AI Hallucinations Using the Adapter Pattern (Trinity Protocol Part 1)
Learn how to orchestrate AlphaFold 3 and AlphaFold 2 with Python using the Adapter Pattern to detect AI hallucinations, measure structural drift, and improve protein prediction reliability.

I Integrated AlphaFold3 & AlphaGenome. It Looked Perfect. Then It Failed the "Honesty Test."
A real-world experiment integrating AlphaFold3 and AlphaGenome revealed a critical lesson: AI predictions that look perfect can still fail the ‘honesty test.’ A deep dive into bioinformatics, model validation, and AI reliability in drug discovery.