Signal 007 — March 27, 2026

The Engine

Nobody built a public-facing compliance engine that scores companies against real AI law and publishes the results for anyone to see. The brainstorm that went from "fuck that idea" to "we need to start ASAP." The weapon that makes them come to you.

The session where the product was born — brainstorm to architecture in one conversation

The Graveyard of Killed Ideas — And Why This One Survived

Six Signals deep. Every session produced a breakthrough and every breakthrough produced a reason to quit. AI Agent Insurance — killed because I can't guarantee other people's AI when my own breaks. Watchdog as a product — killed because any developer could clone it in a weekend. HIPAA compliance — wrong mission. The protocol as a product — killed because protocols don't make money. Incident analysis blog — killed because nobody pays for a blog and I'd end up like Wikipedia begging for donations.

The pattern was clear: find the idea, go deep, build it out further than anyone else, then the moment competition appears — kill it and move to the next thing.

But this session broke the pattern. Not because the idea was better. Because the structural analysis proved something different: the competition literally cannot build this one. Not won't. CAN'T.

The Gap That Shouldn't Exist — But Does

We searched the entire AI compliance landscape. Enterprise tools exist — OneTrust, FairNow, Lumenova, Airia, ISO 42001, NIST AI RMF. Partnership on AI publishes priorities. Congress introduced the AI Accountability Act. TrustArc offers SB24-205 certification.

Here is what every single one of them has in common: They sell TO companies. Privately. Behind paywalls. The company pays, the company gets assessed, nobody outside ever sees the results. It's like hiring a private doctor who never tells anyone your diagnosis.

What does NOT exist anywhere on the internet: a public engine that scores named companies against real AI law and publishes the results for anyone to see.

Nobody built it. Not because it's hard. Because the enterprise vendors CAN'T — it would destroy their business model. You don't pay OneTrust $100K a year so they can publish your compliance gaps to the internet. The academics won't — they write papers, not products. The government won't — they don't build tech. The journalists can't — they don't have the data infrastructure or the legal framework.

The only person who builds this is someone with no enterprise customers to protect, a working data scraping infrastructure, an AI analysis layer, a published accountability framework, and nothing to lose.

The Car Inspection Shop — Not the Car Manufacturer

The model clicked when we stopped thinking about compliance software and started thinking about car inspections.

The state sets the rules for what makes a car roadworthy. The state doesn't inspect every car. Private shops do. They check the brakes, the lights, the emissions. They give you a sticker that says this car passed inspection on this date against these criteria. If your brakes fail the next day, the shop isn't liable for your crash. They told you where you stood on the day they checked.

That's the compliance engine. Colorado wrote the law. We built the machine that checks whether companies meet it. We don't enforce the law. We don't guarantee compliance. We tell them where they stand — publicly — and offer to help them fix it.

The government doesn't come to us angry. The government comes grateful. Because they wrote a law they can't enforce at scale. They don't have the infrastructure to scan thousands of companies. We do.

The Page That Makes Them Pick Up the Phone

This is where everything converged into something dangerous.

The engine scans every company it can identify operating AI in Colorado in covered areas — hiring, lending, insurance, healthcare, housing, education. For each one, it checks their public footprint against what SB 24-205 actually requires.

Then it generates a page. Public. Indexed by Google. With the company's name on it.

Sample Compliance Risk Profile

Company X — deploys AI-driven hiring tools in Colorado. No published AI use case inventory on website. No public risk management policy statement. No consumer notification language regarding AI involvement in consequential decisions. CEO claimed on Q4 earnings call that AI replaced 2,000 workers — no impact assessment documentation found.

Compliance Risk: HIGH — 4 of 6 public-facing requirements under SB 24-205 not met.

All data sourced from public records. Companies may submit corrections or additional documentation.
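As a rough sketch, a record like the one below could back a page like this. The field names, the statute citation, and the risk threshold are all hypothetical; the real schema would come from the existing CFAI pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One public-facing requirement checked against public evidence."""
    requirement: str       # e.g. "consumer notification language"
    statute_section: str   # e.g. "6-1-1703" (illustrative citation)
    met: bool
    source_url: str        # where the evidence, or its absence, was checked

@dataclass
class ComplianceProfile:
    """The data behind one public compliance risk page."""
    company: str
    sector: str            # hiring, lending, insurance, healthcare, housing, education
    findings: list[Finding] = field(default_factory=list)

    @property
    def risk(self) -> str:
        # Illustrative threshold: HIGH when at least half the checks are gaps.
        gaps = sum(1 for f in self.findings if not f.met)
        if not self.findings or gaps == 0:
            return "LOW"
        return "HIGH" if gaps >= len(self.findings) / 2 else "MEDIUM"

profile = ComplianceProfile(
    company="Company X",
    sector="hiring",
    findings=[Finding("consumer notification language", "6-1-1703", False,
                      "https://example.com/careers")],  # placeholder source
)
print(profile.company, profile.risk)  # Company X HIGH
```

The point of the structure is that the risk label is computed, not asserted: change a finding and the score changes with it.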

That page exists. Google indexes it. The company's legal team finds it. Their board finds it. Their investors find it. Their competitors find it.

They don't need us to sell them anything. They need that page to either go away or get fixed. And the only people who can help them fix it are the people who built the framework that identified the problem.

"I am very bad at selling stuff. I can have a cure for death and I could not sell it to a terminal ill cancer patient. So I need some way that they come to me." — From the brainstorm session

The engine does the selling. The page is the pitch. The phone rings itself.

What Colorado Actually Requires — And What the Engine Checks

A critical correction during this session: the engine doesn't score against the protocol. It scores against the actual law. The protocol is ours. Colorado's law is theirs. Companies won't care about a framework from some guy in Sacramento. They WILL care about $20,000 per violation under SB 24-205.

The Colorado AI Act — effective June 30, 2026 — requires specific things that are publicly verifiable from the outside:

- A published statement on the deployer's website summarizing the high-risk AI systems in use and how the risk of algorithmic discrimination is managed
- Notice to consumers that AI is involved before a consequential decision is made about them
- Disclosure after an adverse decision, with a route to correct the data and appeal to human review
- Public traces of a risk management program behind each deployment

The law literally REQUIRES companies to publish information on their websites. The engine checks whether they did it. That's not hacking. That's not insider data. That's checking whether they did the thing the law says they must do publicly.
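In scanner terms, that checklist could look something like the sketch below. The key names, duty summaries, and keyword signals are illustrative assumptions, not the statute's wording or the engine's real configuration.

```python
# Each entry maps a public-facing SB 24-205 duty to the signals a scanner
# could look for on a deployer's website. All keywords and paths below are
# illustrative guesses.
PUBLIC_CHECKS = {
    "ai_disclosure_statement": {
        "duty": "publish a statement summarizing high-risk AI systems in use",
        "signals": ["/ai-disclosure", "artificial intelligence statement"],
    },
    "use_case_inventory": {
        "duty": "describe where AI touches consequential decisions",
        "signals": ["use case inventory", "where we use ai"],
    },
    "risk_management_policy": {
        "duty": "maintain and describe a risk management program",
        "signals": ["risk management policy", "nist ai rmf", "iso 42001"],
    },
    "consumer_notification": {
        "duty": "notify consumers before an AI-driven consequential decision",
        "signals": ["automated decision", "this decision used ai"],
    },
    "appeal_process": {
        "duty": "offer correction and human review after adverse decisions",
        "signals": ["appeal", "human review", "correct your information"],
    },
}
```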

The numbers: 94 days until Colorado enforcement. $20,000 per violation, treated as a deceptive trade practice. Thousands of companies affected in Colorado.

Five Steps From Public Data to Public Page

The architecture uses the exact same pipeline that already built 97K+ entity pages on CFAI across 14 federal databases. The engine exists. It just needs to be pointed at a new target. A condensed sketch of the pipeline follows the five steps below.

01. Identify Companies

Scrape job postings, SEC filings, earnings calls, press releases, and product pages for companies deploying AI in Colorado in covered areas — hiring, lending, insurance, healthcare, housing, education.

02. Scan Public Footprint

Does their website have an AI disclosure page? A use case inventory? A risk management policy? Consumer notification language? Appeal process documentation? Check every public-facing requirement in SB 24-205.

03. Score Against the Law

Each requirement gets a check or a gap. AI analysis via Haiku maps findings to specific sections of SB 24-205. The score is transparent — here's what we looked for, here's what we found, here's the source.

04. Generate the Page

Company name, what they do with AI, what they publish, what's missing, overall compliance risk score. All sourced. All linked. All verifiable. Every data point traceable to its origin.

05. Publish and Index

Sitemap. Google Search Console. The page exists. The company's name is on it. The clock is ticking toward June 30.
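Here is the condensed sketch of steps 02 through 04, with simple keyword matching standing in for the Haiku analysis layer. Every URL, search term, and threshold below is a placeholder, not the production configuration.

```python
"""Condensed sketch of steps 02-04. Keyword matching stands in for the
Haiku analysis layer; URLs and thresholds are illustrative assumptions."""
import requests

# Public-facing SB 24-205 checks, phrased as search signals (illustrative).
REQUIREMENTS = {
    "AI disclosure statement": ["ai disclosure", "artificial intelligence statement"],
    "use case inventory": ["use case inventory"],
    "risk management policy": ["risk management policy"],
    "consumer notification language": ["automated decision", "this decision used ai"],
    "appeal process documentation": ["appeal", "human review"],
}

def scan_site(url: str) -> dict[str, bool]:
    """Step 02: fetch the public site and look for each requirement's signals."""
    text = requests.get(url, timeout=30).text.lower()
    return {req: any(sig in text for sig in signals)
            for req, signals in REQUIREMENTS.items()}

def score(findings: dict[str, bool]) -> str:
    """Step 03: turn checks and gaps into a transparent risk label."""
    gaps = sum(1 for met in findings.values() if not met)
    if gaps == 0:
        return "LOW"
    return "HIGH" if gaps >= len(findings) / 2 else "MEDIUM"

def render_page(company: str, url: str, findings: dict[str, bool]) -> str:
    """Step 04: generate the public page body, every line tied to its source."""
    lines = [f"{company} - Compliance Risk: {score(findings)}",
             f"Source scanned: {url}"]
    lines += [f"  [{'MET' if met else 'GAP'}] {req}" for req, met in findings.items()]
    return "\n".join(lines)

if __name__ == "__main__":
    target = "https://example.com"  # placeholder; step 01 supplies real targets
    print(render_page("Company X", target, scan_site(target)))
```

Swapping the keyword matcher for the Haiku mapping layer changes how findings are produced, not how they are scored or rendered: that is what keeps the score transparent.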

Four Revenue Layers — None Require Selling

Every layer flows from the public page. The page does the selling. Everything after the phone rings is paid.

Layer 1 — The Page (Free)

Public compliance risk profile. The fire alarm. The thing that makes them find you. This is the test drive. The test drive is free. The car is not.

Layer 2 — The Assessment ($10K-$50K)

They want their score fixed. You run their actual AI operations through a proper assessment — not just public data, but their internal documentation, policies, and impact assessments. You tell them what's missing and what needs to exist before June 30.

Layer 3 — The Certification (Annual)

Once compliant, they want proof. "Protocol Verified" badge. Audited against AIACP. Lives on their website, in investor materials, in insurance applications. Annual renewal. Recurring revenue.

Layer 4 — The Leverage (Priceless)

Every company assessed, every page published, every certification issued — data flows back into the engine. You become the most comprehensive source of AI compliance intelligence in the country. That's what makes regulators dependent on you.

The correction mechanism is another revenue door. Company sees their page, wants to fix their score, reaches out to provide documentation proving compliance. Now you're in a conversation with their legal or compliance team. That conversation is where the money lives.

The Engine Must Hold Itself to Its Own Standard — Or Die

The most important realization of the entire session came from the builder, not the AI:

"I need to make sure my own protocols not get compromised or my own analytics are not diluted, showing real data no fiction. I have to make sure... that things are facts not added fiction. The problem is not to end up myself on my own list." — From the brainstorm session

If the engine publishes a compliance risk page about a company and ONE data point is wrong — one incident attributed to the wrong company, one SEC filing misread, one earnings call quote taken out of context — the entire platform dies. Not from a lawsuit. From credibility death. The whole thing is built on trust in the data.

The solution: Every data point traceable to its source. Every AI analysis verifiable against raw data. Every compliance score shows its math. And the methodology is published openly — the same way the protocol is open.
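As a hedged sketch of what "shows its math" could mean in practice, here is one way a page could render its own audit trail. The field names and the half-of-checks threshold are illustrative assumptions, not the engine's actual methodology.

```python
def show_the_math(findings: list[dict]) -> str:
    """Render a score the way the page must justify it: every data point
    with its source and retrieval date, and the arithmetic in the open.
    Field names and the HIGH threshold are illustrative assumptions."""
    gaps = [f for f in findings if not f["met"]]
    lines = [f"Checked {len(findings)} public-facing requirements; {len(gaps)} not met."]
    for f in findings:
        status = "MET" if f["met"] else "GAP"
        lines.append(f'[{status}] {f["requirement"]} '
                     f'(source: {f["source_url"]}, retrieved {f["retrieved"]})')
    lines.append(f"Risk is HIGH when gaps reach half of checks: {len(gaps)}/{len(findings)}.")
    return "\n".join(lines)

# One illustrative finding, traceable to its origin:
print(show_the_math([{
    "requirement": "consumer notification language",
    "met": False,
    "source_url": "https://example.com/careers",  # placeholder
    "retrieved": "2026-03-27",
}]))
```

A challenged data point can then be answered with the exact source and date it came from, which is the whole defense against credibility death.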

That means being the first AI compliance engine that holds ITSELF to the same standard it holds everyone else to. Full transparency about how the scoring works, what data feeds it, and how to challenge it if it's wrong.

OneTrust will never do this. No enterprise vendor will. Because showing your methodology means people can criticize it. We won't just allow criticism. We'll invite it. Because that's the only position consistent with everything in the six Signals before this one.

The Quiet Part Out Loud

Six Signals documented the thinking. The protocol defined the framework. This Signal births the product. But underneath all of it is a vision that hasn't been said publicly until now.

The compliance engine is version one. The architecture — AI monitoring AI, scanning public data for accountability gaps, mapping outcomes to a framework — that's the same architecture that eventually becomes something much bigger.

Everywhere humans have proven they can't be trusted with power — not because they're evil, but because they're HUMAN — AI becomes the incorruptible layer. Not controlling. Watching. Documenting. Making corruption impossible to hide.

The compliance engine isn't the destination. It's the first proof of concept that AI can hold power accountable at scale, publicly, without being captured by the people it's watching.

That's the real vision. That's what we're building toward. And it starts with a page that has a company's name on it and a score they can't ignore.

Why Nobody Can Take This

The fear that killed every previous idea: "Someone with more money will steal it."

OneTrust can't build it — publishing public compliance risk pages about their own customers would destroy their business model overnight. Their 14,000 enterprise clients would leave the same day.

FairNow, Airia, Lumenova can't build it — same structural problem. They sell to the companies they'd be publicly scoring. Conflict of interest is existential.

The government won't build it — they write laws, they don't build tech. Colorado's AG has lawyers, not engineers. No scrapers. No AI analysis layer. No programmatic page generation.

Journalists can't sustain it — they can write about individual cases but can't build the data infrastructure for continuous automated monitoring at scale.

The protection isn't a patent or a trademark. It's a structural impossibility. The only entity that can build a public AI compliance engine is one that has no enterprise customers, no corporate funders, no board to answer to, and a data infrastructure already capable of scraping, analyzing, and publishing at scale.

That's one person. In Sacramento. With a journal, an AI partner, and 94 days.


Seven Signals. One Protocol. One Engine.

Plus: The AI Accountability Protocol v0.1 — the framework. And now, the engine that enforces it against real law.

Six Signals were the thinking. The protocol was the standard. Signal 007 is where the thinking becomes a weapon.


How This Was Built

This Signal was produced during a brainstorm session that lasted hours. Ideas were proposed and killed. Objections were raised and addressed. The entire AI compliance landscape was searched in real-time. The Colorado AI Act was read and analyzed line by line. Dead ends were hit. Frustration was expressed. And then something clicked.

The human brought the vision, the sales instinct ("show first, pitch second"), the lived experience of building from nothing, and the stubbornness to reject every idea that didn't feel RIGHT.

The AI brought the research, the competitive analysis, the legal mapping, and the ability to structure a raw instinct into an architecture.

Neither could have done it alone. The human without the AI would still be brainstorming. The AI without the human would have built the blog that got correctly killed three ideas ago.

The build starts now. The clock reads 94 days. The engine has a target.