Hello World from Certiv
When you get the opportunity to start a company, the right place to begin is with gratitude.
Throughout my career I’ve been lucky to work with incredible people and build systems that mattered to real customers. Starting Certiv with my co-founders Paul and Dan is another one of those moments I don’t take lightly. Building something meaningful alongside people you trust and respect is a privilege.
Certiv is the result of a journey that spans cloud security, automation, and now AI. But more than anything, it comes from lessons learned along the way about what it actually takes for organizations to trust automation and autonomous systems.
Today we’re excited to finally say hello.
Where the story started
For Paul and me, part of this story actually started at our last company. We spent years building automation systems that could detect and fix security issues across cloud infrastructure. That work eventually became part of VMware, where our platform was (and as of this writing, still is) foundational to their cloud management and security offerings.
Customers loved the idea of automation. They wanted systems that could automatically fix issues across their infrastructure.
It sounded great… in theory.
When it came time to give that remediation system permission to act, things got complicated.
Security teams would pause and ask the hard questions.
What if the automation breaks something?
How can I guarantee what code will run?
Do I have to elevate permissions to get this to work?
These were deterministic systems making predictable changes, and even then trust was difficult.
Over time we learned that organizations don’t adopt automation simply because it works. They adopt it when they understand it, feel confident in how it behaves, and have the controls they need in place. Those are the requirements for building trust in a system.
To get there, we had to design a system that showed customers what the automation was doing and why, gave admins the right controls, and baked security in. As people gained confidence in the system, those early “no’s” turned into “yes’s.”
That experience stuck with us.
Because if trust was difficult for deterministic automation, what would happen when the actor was AI?
The shift from automation to agents
Fast forward to today and we’re entering a world where AI agents are becoming one of the most powerful productivity multipliers we’ve seen in generations. Developers are using tools like OpenClaw, Claude Code, and Codex. Knowledge workers are automating real workflows with agents that read files, write code, call APIs, and interact with production systems. And here’s the catch: these agents act, and appear to every system they touch, as if they were the person on the machine.
These systems don’t just execute scripts — they act like a human. They reason. They chain together actions. They adapt to achieve a goal.
That power is exciting, but it also introduces something new into enterprise environments. For the first time we have non-deterministic software acting autonomously across systems, often with access to sensitive data. Protecting your enterprise is no longer just an outside-in game. Threat vectors will now come from inside, too.
Traditional security models were never designed for this kind of actor.
How Certiv came together
While thinking through all of this we connected with Dan, who had been working on AI systems for years, first in research and later at Microsoft, where he helped bring early language models into real-world developer workflows. He was already deep in the emerging world of agentic systems. Meanwhile I was spending a lot of time trying to understand what this new wave of automation would mean for infrastructure, access, and security.
As we started experimenting with agents ourselves, we kept running into the same issues.
Every useful system eventually hit the same bottlenecks around access, security, and trust.
How much access should an agent have?
How do you know it’s behaving the way you expect?
How do organizations safely allow these systems to operate across their environments?
It quickly became clear that solving this problem required multiple perspectives. Dan brought deep experience with the models themselves and what it takes to build real agentic systems. Paul had spent years building large-scale security and automation platforms. And I had lived the challenges of building systems customers needed to trust in production.
Bringing those experiences together felt important. That’s when the idea behind Certiv really began to take shape.
“Is this allowed?” vs “Should this be happening?”
One realization we had early on is that most existing security models answer a simple question:
“Is this action allowed?”
That works when systems behave predictably.
But agents introduce a much more important question:
“Should this action be happening?”
Answering that requires context. Why is the agent doing this? What goal is it trying to achieve? What chain of actions led here? What might it do next?
Permissions alone can’t answer those questions. Traditional security models assume that controlling identity, or gating network activity at a gateway, is enough. AI agents change that equation.
The most useful agents operate with the same access as the people they assist, working directly inside developer and knowledge-worker environments. That’s what makes these systems powerful, and also what introduces risk.
Even if permissions are carefully scoped in one system, the intersection of access across many systems can still create unexpected outcomes. Increasingly, that work is happening on endpoints. Tools like OpenClaw and Claude Code run directly on developer machines with access to local files, repositories, credentials, and development environments. If agents run where the context lives, then the controls that govern them have to live there too.
To truly secure agents, you have to observe them where they actually work. The center of gravity for this new kind of automation isn’t the cloud. It’s the endpoint.
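To make the distinction concrete, here’s a minimal sketch in Python of the gap between the two questions. Everything in it is hypothetical and deliberately simplified; the names, rules, and structure are illustrative, not how Certiv actually works. A static check answers “is this allowed?”, while a contextual guard also weighs the agent’s stated goal and the chain of actions that led to the request.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: the names, rules, and API here are illustrative,
# not Certiv's implementation.

ALLOWED_ACTIONS = {"read_file", "run_tests", "open_pr"}

def is_allowed(action: str) -> bool:
    """The traditional question: is this action on the allow-list?"""
    return action in ALLOWED_ACTIONS

@dataclass
class AgentContext:
    goal: str                                          # the agent's stated objective
    history: list[str] = field(default_factory=list)   # chain of prior actions

def should_happen(action: str, target: str, ctx: AgentContext) -> bool:
    """The harder question: given the goal and the action chain, should this happen?"""
    if not is_allowed(action):
        return False
    # A contextual rule: reading credentials may be "allowed" in isolation,
    # but it is suspicious when nothing about the agent's goal explains
    # why it would need them.
    touches_secrets = target.endswith((".env", ".pem", "credentials.json"))
    goal_explains_it = any(w in ctx.goal.lower() for w in ("credential", "secret"))
    return not (touches_secrets and not goal_explains_it)

ctx = AgentContext(goal="fix the failing unit tests",
                   history=["read_file:tests/test_api.py"])
print(is_allowed("read_file"))                       # True: permitted in isolation
print(should_happen("read_file", "prod/.env", ctx))  # False: allowed, but shouldn't happen here
```

A real assurance layer needs far richer signals than string matching on a stated goal, but the shape of the decision is the point: the answer depends on intent and history, not just the action name, and it has to be evaluated on the endpoint where that context actually lives.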
From security to assurance
As we thought about the problem more deeply, we realized organizations don’t just need security for AI agents. They need assurance.
Security is part of that, but it’s only one piece. Teams need to see what agents are doing and understand their actions. They need governance and guardrails that align with how their organization operates. They need the ability to detect harmful or unexpected behavior and intervene when necessary. They need confidence that the work being done by these systems is aligned with their intent.
Most importantly, assurance should enable productivity.
This idea became central to how we think about Certiv.
We’re building what we call a Runtime Assurance Layer for AI agents — technology designed to observe, govern, and secure agent activity wherever it runs so organizations can actually trust these systems.
Guiding principles
As we’ve worked on this problem, a few guiding principles have shaped how we think about building Certiv:
- Trust comes from visibility. Organizations need to see what their agents are doing and understand why actions are happening.
- Context matters more than permissions. Security decisions can’t rely on static rules alone — they require an understanding of intent as well as behavior.
- Governance should enable rather than block. The goal isn’t to slow down AI adoption, but to make it safe enough for organizations to move faster.
- Humans remain in control. Autonomous systems should augment people, not remove accountability or oversight.
Why “Certiv”
Somewhere in all of these conversations about trust and visibility we kept coming back to a simple idea.
Organizations need certainty about what AI agents are doing.
No guesses. No black boxes. No blind trust.
Certainty.
That idea is where the name Certiv came from. If you want to get a little geeky about it, the Latin word certius means “more certain.”
That’s exactly what we want to bring to the world of AI agents.
More certainty about what autonomous systems are doing. More certainty about the actions they take. More certainty that the work happening across your systems aligns with your policies and intent.
When organizations have that certainty, they can move faster with automation instead of slowing down because of risk.
The future: managing teams of agents
We also believe something bigger is happening.
In the near future, most professionals won’t just use software — they’ll manage teams of AI agents. Developers will orchestrate fleets of coding agents. Analysts will delegate research and reporting workflows. Operators will automate complex operational tasks.
In many ways, we’ll all become managers of autonomous systems.
For that future to work, organizations need a foundation they can trust. They need a way to enable agents without losing governance, visibility, or control.
That’s the mission behind Certiv.
What we’re building
We’re building a Runtime Assurance Layer for AI agents that helps organizations safely enable autonomous systems while maintaining the confidence to operate them in real environments.
We’re still early in this journey, and this space is evolving incredibly fast. One saying I’ve always loved is that “the answer is not in the building.” Theories only get you so far — the real answers come from working with customers and learning from the real world.
So that’s what we’re doing. Talking to teams, learning from builders, and creating something genuinely useful.
Starting a company is both exciting and humbling. We’re grateful for the investors, advisors, and early partners who believe in what we’re building. Most of all, we’re grateful for the opportunity to work together on a problem we think will shape the future of how software operates.
If you’re building with AI agents — or thinking about how to safely enable them inside your organization — we’d love to talk.
The journey is just beginning.
— Jason, Co-founder & CEO, Certiv