The Room Where It Happens
There’s a song in Hamilton about the dinner where the deal got cut, the one that traded the United States’ capital for the assumption of state debts. Aaron Burr isn’t there. He spends the whole number outside the door, listening, furious, repeating the line that gives the song its name:
I wanna be in the room where it happens.
Burr’s complaint isn’t really about prestige. It’s about information. Decisions made in the room have context that decisions reported out of the room don’t. By the time the press release goes out, the interesting part is over.
This is the situation almost every AI security tool is in right now.
The trust problem nobody is solving
An AI agent is the strangest thing your security stack has ever had to reason about. It logs in as a person but acts like a process. It inherits a user’s authority and spends it autonomously, hundreds of decisions deep, on machines and files the user never named. Every existing control was built for one of those halves. None of them was built for both.
And the agent isn’t acting blindly. It’s acting on a worldview the user can’t see. Tool outputs, file contents, instructions hidden in a comment somewhere: all of it is quietly reshaping what the agent thinks the situation is. From inside that worldview, what it’s about to do may make perfect sense. From the user’s, the same action might look like a betrayal.
The question that matters is one of intent: why is the agent doing this, and does the why hold up when you can see what the agent can see? That’s a judgment you can only make from inside the context that produced it.
Most of the security industry is standing in the hallway.
The hallway
Gateways see encrypted traffic. Identity providers see a delegation that happened once, hours ago. SIEMs see logs of things that already finished. Model-vendor classifiers see what’s happening inside their own product, and the user opens a different IDE tomorrow.
Each of these is a real tool solving a real problem, and good security has always been layered. Gateways, IAM, DLP, and audit logging will keep doing what they do. The point isn’t that they fail; it’s that they cannot answer the question that AI agents have introduced. None of them sees the agent’s worldview. They see actions stripped of the situation that produced them. You’re left grading behavior without motive, every judgment shallow by construction.
There’s a second problem, and it’s structural rather than perceptual. Most hallway controls assume a cooperating user. They sit between the user and the model, or between the user and the network, and they only see what the user routes through them. That assumption was reasonable when the tools in question were stable and few. It is not reasonable now. A developer switching models, swapping IDEs, pointing a binary at a different endpoint, or running something locally instead of in the cloud isn’t evading anything. It’s a normal Tuesday. Any control one export ANTHROPIC_BASE_URL= away from blind is not really a control. It’s a suggestion.
You cannot make a real trust decision from the hallway. And you cannot rely on a control the user can walk around without trying.
What’s actually in the room
The room is the compute. It’s the endpoint where the agent actually runs. Where Claude Code, OpenClaw, Cursor, or whatever else is executing on real hardware against real files. That’s where decisions actually happen, and that’s where the agent’s worldview is being assembled from everything it reads.
Being on the endpoint is not valuable for its own sake. Placement is a means, not an end. The reason placement matters is because the room is where the worldview is built, and where it can be inspected against the user’s.
In the room, you see the inputs reshaping the agent’s picture of the situation as it ingests them. You see the action about to land before it lands: the file about to be touched, the call about to go out. And critically, you can stop, prompt the human, and wait. Hallway tools can flag and alert. Only something in the room can hold the door.
This is what we mean by context. It’s not telemetry. Telemetry is what you ship to the SIEM. Context is the agent’s situational picture, available to you in the moment of decision, alongside the user’s.
From context to judgment
Context alone isn’t a security control. A perfect log of every agent action is a forensic artifact, not a runtime defense. The thing that turns context into a trust decision in real time is a model of intent: the bridge between two worldviews.
Most of the time, the user’s worldview and the agent’s track each other. The interesting failures all happen when they diverge.
- Prompt injection is a divergence: the agent’s view has been quietly rewritten by something the user never saw.
- Scope drift is a divergence: the agent has updated its sense of the task in a direction the user didn’t authorize.
- Role violation is a divergence: the agent is operating with a sense of its own scope that the user’s view excluded.
You cannot detect any of these from outside. Holding both worldviews at once, and noticing when they pull apart, is the whole game.
The third worldview
There’s one worldview we haven’t named yet: the organization’s. The user has theirs. The agent has theirs. The org has its own. It has decided what agents should and shouldn’t do, regardless of how reasonable any individual user request or agent decision sounds. That’s policy. Declared intent at the organizational level, not negotiated per-action.
Policy and intent need each other. Policy without intent-judgment is the brittle deny-list every security tool already has. Fine for the obvious. Blind to the subtle. Intent-judgment without policy means re-litigating every action on its merits, which doesn’t scale. Being in the room is what lets you do both: the org’s hard lines enforced where the action actually happens, and the gray area between them judged with the full context it was made in.
Trust is a runtime property
The deepest mistake in the current AI security category is treating trust as something you establish at the boundary. At login. At the gateway. At the policy gate. As if the trust decision happens once, and then propagates inward.
Trust does not propagate. It decays the moment a delegated agent starts acting, because every input it ingests is quietly reshaping the worldview it’s reasoning from. The agent at minute forty may be coherent, capable, and acting in good faith inside a worldview that no longer matches the user’s at all.
Real trust is a runtime property. It has to be re-earned continuously, and it can only be re-earned by something watching the worldview form, in the room, in the moment.
Why we built this way
Certiv exists because we believe the next decade of enterprise security will be defined by who is in the room when AI agents act, and who is left in the hallway watching the door.
Being on the endpoint is hard. Doing it without breaking the developer experience, without becoming a new attack surface in its own right, without becoming a bottleneck, all of that is harder still. We’ve spent a long time on that engineering, and we won’t be writing about it here. What we’ll say is what the post argues for: we run where the agent runs, in the path of every action, on the endpoint itself. The implementation is the product. The thesis is what it’s for.
The thesis is simple. You have to be in the room. You have to see the agent’s worldview as it’s being assembled, alongside the user’s. And you have to turn that into judgment, before the action lands.
Everyone else is in the hallway.
We’d rather be in the room where it happens.
If this resonates and you’re navigating the same problem, book a conversation with our team and we’ll show you what being in the room actually looks like.
— Daniel, Chief AI Officer, Certiv