About

The author, the stake, the journey from a 2024 AI protocol to a 2026 verification framework, and the limits of what this site claims.

Note

Disclosure first. This site is published by VALIS Systems. The author is the founder. VALIS builds AI verification infrastructure in the category this framework describes. That commercial interest is acknowledged up front; read everything on this site with it in mind.

The doctrine itself is independent of any specific product. The framework is published openly because the Zero Trust posture it advocates extends to the doctrine itself. You should not have to trust the publisher.

Why this framework exists

The arguments on this site come from a specific path. The three pieces that produced the framework, in order.

Yahoo, 2024: the protocol that was right for its moment

In 2024, I authored an AI usage protocol for Yahoo. It was scoped to what the AI conversation was actually about at the time: making sure that models did not fabricate facts, that citations grounded back to sources, that AI use was disclosed, and that human review was in the loop. It was a reasonable response to the AI landscape of 2024.

It was also the wrong frame for where things were going. The realization came over the following months as models improved on the hallucination axis faster than the protocols I had written assumed. The remaining failure mode was no longer getting the facts wrong. It was producing fluent, well-cited, hallucination-free output that still reasoned badly. The 2024 toolkit was solving the visible problem. The problem was changing underneath it.

2024 to 2026: building VALIS

From 2024 to 2026, I designed and built VALIS. The work was equal parts engineering and analysis. We ran real verification through the system. We saw which architectural commitments held under pressure and which were performative. We saw what verification at scale actually requires when you cannot quietly soften it for a deadline or a difficult client.

Three observations crystallized over those two years.

Verification cost is structural

Verification does not get cheaper at the rate generation does. It runs on a different cost curve. That asymmetry is the operational risk most organizations have not yet priced.

Trust models matter more than features

A product that asks the customer to trust the verifier is in a different category from one that produces independently checkable verification. The architectural commitments define the category.

The deficit was already there

AI did not create the verification deficit. AI made it impossible to ignore. The same gap had existed in human-produced analytical content for decades. We just could not see it.

2026: the realization that drives this framework

By early 2026, the central observation crystallized.

Tip

The verification problem was human to begin with. AI exposed it. Anything we build to address AI verification has to address the deeper deficit underneath it.

That observation reframed everything. The framework on this site is the distillation of that reframing: the doctrine, the architecture, the operational practice. Published openly because the doctrine should survive the publisher, the company, and the founder.

What this framework is, and is not

The framework is a directional reading of where AI verification is heading. It is not a guarantee, not legal advice, not investment guidance.

Not legal advice

The procurement and contracting recommendations on this site are framing, not legal counsel. Have your counsel review any specific contract language before signing.

Not investment advice

References to capital market signals in the Watchlist are framework-test signals, not investment recommendations. Apply your own diligence.

Not a guarantee

The framework predicts that a market correction is likely within 18 months. The Watchlist names the dated signals that will test the prediction. The prediction could be wrong, and the framework specifies how it would fail.

The framework is general. Your situation is specific. Use the doctrine, the buyer's checklist, and the lane discipline practices as inputs to your own thinking, not as a substitute for it.

What the framework owes the reader

The Zero Trust posture extends to the framework itself. Four commitments.

Verifiable

The source is public. The framework is published in AI-readable form (see llms.txt and llms-full.txt). Anyone can audit the arguments.

Forkable

Licensed under Creative Commons Attribution 4.0. Use it, adapt it, or build on it, with attribution. Implement the doctrine elsewhere if you want to.

Contestable

Substantive disagreements are welcome via issues and pull requests on the repository. The doctrine improves when it is contested.

Testable

The 2026 Watchlist specifies dated signals that will tell you (and me) whether the framework holds. A framework that does not specify how it could be wrong is not a framework.

Author

David Lundblad. Founder of VALIS Systems. Previously authored Yahoo's AI usage protocol (2024). Two years designing and building VALIS (2024-2026). Publishing this framework as the distillation of that work.

Where this goes next