Manifold, a startup founded by the former team behind Laiyer AI, has raised $8 million in seed funding to secure autonomous AI agents at the endpoint — the actual machines where agents execute their tasks.
The raise adds to a funding surge in agentic security: Manifold’s announcement came the same day five other companies launched agent security products, and one day after Kai and Surf AI raised a combined $182 million ($125M and $57M, respectively).
The Endpoint Problem
Most agent security approaches focus on the network layer (firewalls), the identity layer (permissions), or the application layer (prompt filtering). Manifold targets a gap that sits beneath all of these: what happens when an agent actually runs on a machine.
Enterprise AI agents don’t just send API calls — they execute code, access file systems, spawn processes, and interact with operating system resources. An agent with proper API credentials and approved network access can still cause damage at the endpoint level through:
- Unintended file system modifications
- Process spawning that escapes sandbox boundaries
- Memory access patterns that expose sensitive data
- Resource consumption that impacts other workloads
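To make the first of these concrete, a minimal endpoint-side check might snapshot a directory tree before and after an agent task and diff the results. This is an illustrative sketch of the general idea, not Manifold's product or mechanism:

```python
import hashlib
import os

def snapshot(root):
    """Map every file under root to a hash of its contents."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def diff(before, after):
    """Report files the agent created, deleted, or modified."""
    created = sorted(set(after) - set(before))
    deleted = sorted(set(before) - set(after))
    modified = sorted(p for p in before if p in after and before[p] != after[p])
    return created, deleted, modified
```

Snapshotting before the agent runs and diffing afterward surfaces file system changes the agent made that its task description never called for.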
Runtime Protection, Not Prompt Filtering
Manifold’s approach is runtime-focused: monitoring and constraining agent behavior at the OS level rather than at the prompt or API layer. This means protection applies regardless of which LLM powers the agent, which framework orchestrates it, or which prompt template generated the instructions.
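One way to picture LLM-agnostic runtime enforcement is CPython's audit-hook mechanism, which fires on operations like file opens and process spawns no matter what code, prompt, or framework triggered them. This is a stdlib sketch of the concept, not Manifold's implementation; the deny-list is hypothetical:

```python
import sys

BLOCKED_PREFIXES = ("/etc/shadow",)  # hypothetical deny-list for illustration
events = []  # runtime event log

def guard(event, args):
    """Observe what agent code actually does at runtime, regardless of
    which LLM or framework generated it."""
    if event == "open":
        path = str(args[0])
        events.append(("open", path))
        if path.startswith(BLOCKED_PREFIXES):
            # Raising from an audit hook aborts the operation itself.
            raise PermissionError(f"agent blocked from opening {path}")
    elif event.startswith("subprocess."):
        events.append((event, str(args)))

sys.addaudithook(guard)
```

Because the hook sits at the runtime level rather than the prompt level, swapping the underlying model or orchestration framework changes nothing about what gets observed or blocked.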
The positioning echoes what we’ve seen from TrojAI’s Agent Runtime Intelligence and Singulr’s Agent Pulse, but at a lower level of the stack — the actual endpoint where agent code executes.
Why It Matters for OpenClaw
OpenClaw agents run on user-controlled hardware by design. That means endpoint security is especially relevant — there’s no cloud provider managing the execution environment. An OpenClaw agent on a Mac Mini, Raspberry Pi, or VPS has direct access to the host system’s resources.
Projects like Nvidia’s OpenShell address this with containerized sandboxing. Manifold approaches it from the security monitoring side: even inside a sandbox, what is the agent actually doing?
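As a toy illustration of constraining agent execution at the OS level on user-controlled hardware, one can cap a child process's CPU time and address space before it runs. This is a Unix-only sketch under assumed limits, not how OpenShell or Manifold actually work:

```python
import resource
import subprocess

def run_agent_step(cmd, cpu_seconds=5, memory_bytes=1024 ** 3):
    """Run one agent command in a child process with hard OS-level caps.

    Unix-only: preexec_fn runs in the child between fork and exec, so the
    limits apply before any agent code executes.
    """
    def apply_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))
    return subprocess.run(cmd, preexec_fn=apply_limits,
                          capture_output=True, text=True)
```

A step that tries to allocate past the cap fails inside the child with a nonzero exit code, while other workloads on the same host stay unaffected.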
The Laiyer AI Connection
The founding team’s background at Laiyer AI, which focused on AI model security, explains the trajectory: they’ve moved from protecting models to protecting the agents that use them. The shift mirrors the broader industry evolution, from “is the model safe?” to “is the agent safe?”
With $8M in seed funding, Manifold joins a growing cohort of startups attacking agent security at different layers of the stack. The endpoint layer — where agents meet the operating system — may be the most underserved and most consequential.