We built this because we needed it ourselves.

Agent FP started when a fraud team we know watched their false negative rate triple in six months — not because fraudsters got smarter, but because AI agents got cheaper.

The problem we're solving

Traditional bot detection was built to catch scrapers, credential stuffers, and click farms. Those threats haven't disappeared — but they've been joined by something harder to detect: AI agents operating with human-level behavioral variability.

Modern AI agents can navigate complex forms, solve CAPTCHAs, rotate between personas, and mimic the timing entropy of real users. They're built on the same browser automation that security teams have spent years training classifiers against — and they're getting better every month.

Your fraud model sees an IP with reasonable velocity, a browser fingerprint that matches Chrome on macOS, mouse movements that follow a natural-looking distribution, and a form submission that took 45 seconds. It marks the request human. It isn't.

What makes Agent FP different

We focus exclusively on the boundary between AI-agent behavior and human behavior. We're not trying to block all bots — we're trying to answer one specific question with high precision: is this request coming from an AI agent?

That focus lets us go deep on signals that general-purpose bot detection ignores: the micro-timing of LLM token generation reflected in keystroke patterns, the characteristic HTTP/2 multiplexing behavior of Python async libraries, and the specific set of WebGL extensions that headless Chromium exposes versus real Chrome.
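To make the WebGL signal concrete, here is a minimal set-difference sketch. The extension names are placeholders, not a real fingerprint, and a real classifier would treat a mismatch as one weak signal among many:

```typescript
// Return baseline extensions that the observed browser did not report.
// Extension names here are illustrative placeholders, not real profiles.
function missingExtensions(observed: string[], expected: string[]): string[] {
  const seen = new Set(observed);
  return expected.filter((ext) => !seen.has(ext));
}

// Hypothetical baseline for desktop Chrome; real sets vary by GPU and driver.
const chromeBaseline = ["EXT_example_a", "EXT_example_b", "EXT_example_c"];
const headlessObserved = ["EXT_example_a"];

// Extensions missing relative to the baseline hint at a headless environment.
missingExtensions(headlessObserved, chromeBaseline); // ["EXT_example_b", "EXT_example_c"]
```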

Our approach

We maintain a continuously updated library of fingerprints for 340+ AI agent frameworks, including LangChain, AutoGPT, CrewAI, AgentKit, and all major browser automation tools. When a new framework ships, we fingerprint it within 24 hours and update the model.

Detection happens at the edge — under 2ms added latency, no round-trips required. The confidence score is available as a JavaScript property, a webhook payload, or a simple API response.
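A sketch of how a caller might act on the confidence score. The response shape and field names below are assumptions for illustration, not the documented Agent FP API:

```typescript
// Hypothetical response shape — field names are illustrative assumptions.
interface AgentFpResult {
  agentConfidence: number; // 0.0 (likely human) to 1.0 (likely AI agent)
  frameworkGuess: string | null;
}

// Precision-first gating: only escalate above a high confidence threshold,
// so borderline scores never penalize real users.
function shouldChallenge(result: AgentFpResult, threshold = 0.9): boolean {
  return result.agentConfidence >= threshold;
}

const sample: AgentFpResult = { agentConfidence: 0.97, frameworkGuess: "playwright" };
shouldChallenge(sample); // true — escalate this request
```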

Our principles

Precision over recall

False positives hurt real users. We optimize for low false positive rates first.

Minimal data

We collect the least data necessary to make an accurate classification. No PII without consent.

Transparency

We publish our methodology, false positive rates, and known limitations openly.

Speed matters

Detection that adds perceptible latency will get turned off. We run at the edge.

Contact

We're a small team. We read every email. [email protected]

Security disclosures: [email protected]