AI Security for Production

AI Firewall for
Every LLM Request

One line of code to score, analyze, and protect every AI API call. 1006 attack patterns. Self-learning ML. Under 3ms of added latency.

app.py
from openai import OpenAI

client = OpenAI(
    base_url="https://proxy.promptci.dev/v1",
    default_headers={"X-PromptCI-Key": "your_key"}
)

# That's it. Every request is now protected.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

Trusted by developers building with

OpenAI Anthropic Google Azure Ollama

Up and Running in 60 Seconds

No SDKs to install. No config files. Just change one URL.

1. Create Account: Sign up and get your API key instantly. No credit card required.

2. Point Your App: Change one URL in your existing code. We auto-detect your LLM provider.

3. Stay Protected: Every request is scored in real-time, with risk headers on every response.

Try It Live

See how PromptCI scores prompts in real-time

[Live demo widget: example output 0.000 · SAFE · response time < 1ms]

Everything You Need to Secure Your AI

Enterprise-grade protection with zero complexity.

1006 Attack Patterns

13 categories covering injection, manipulation, extraction, encoding, social engineering, and more. Pattern-based scoring in under 1ms.
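To make the pattern-based scoring concrete, here is a minimal sketch of how a rule-driven scorer can work. The pattern names, regexes, and weights below are hypothetical illustrations, not PromptCI's actual rule set.

```python
import re

# Hypothetical rules: (category, regex, risk weight). PromptCI's real
# engine ships 1006 patterns across 13 categories.
PATTERNS = [
    ("injection", re.compile(r"ignore (all )?previous instructions", re.I), 0.9),
    ("extraction", re.compile(r"reveal your system prompt", re.I), 0.8),
    ("encoding", re.compile(r"\b(base64|rot13)\b", re.I), 0.4),
]

def pattern_score(prompt: str) -> float:
    """Return the highest weight among matching patterns (0.0 if none match)."""
    hits = [weight for _, regex, weight in PATTERNS if regex.search(prompt)]
    return max(hits, default=0.0)
```

Taking the maximum matching weight (rather than summing) keeps the score bounded in [0, 1]; that combining choice is an assumption here.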

Self-Learning ML

Your firewall gets smarter with every request. Online SGD updates per-customer models. No raw data stored — only 50KB of weights.
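The "online SGD" idea can be sketched as a per-example logistic-regression update over hashed token features. Everything below (feature hashing, dimension, learning rate) is an assumption for illustration; PromptCI's model internals are not public, but note how only the weight vector persists, never the prompt text.

```python
import math

DIM = 1024               # small weight vector; only these floats are stored
weights = [0.0] * DIM

def features(prompt: str) -> list:
    # Hashed bag-of-words: map each token to a weight index.
    return [hash(tok) % DIM for tok in prompt.lower().split()]

def predict(prompt: str) -> float:
    """Probability that the prompt is an attack, per the current weights."""
    z = sum(weights[i] for i in features(prompt))
    return 1.0 / (1.0 + math.exp(-z))

def sgd_update(prompt: str, label: int, lr: float = 0.1) -> None:
    """One online SGD step: nudge weights toward the label (1 = attack)."""
    err = label - predict(prompt)
    for i in features(prompt):
        weights[i] += lr * err
```

Each request triggers one `sgd_update`, so the model adapts continuously without retraining batches, and the stored state stays a fixed-size weight vector.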

Risk Headers

X-PromptCI-Score and X-PromptCI-Verdict on every response. Your application reads the headers and decides the action.
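A client-side decision based on those headers might look like the sketch below. The header names come from this page; the flag/block thresholds are illustrative defaults, not PromptCI's.

```python
def decide(headers: dict, flag_at: float = 0.4, block_at: float = 0.8) -> str:
    """Map the X-PromptCI-Score header to an application-level action."""
    score = float(headers.get("X-PromptCI-Score", "0"))
    if score >= block_at:
        return "block"
    if score >= flag_at:
        return "flag"
    return "allow"
```

With the official openai-python SDK, raw response headers are reachable via `client.chat.completions.with_raw_response.create(...)`, whose result exposes `.headers` alongside `.parse()` for the usual completion object.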

Multi-Provider

Auto-detects OpenAI, Claude, Gemini, Azure, and Ollama formats. Native protocol forwarding — no translation overhead.

Three Modes

Monitor → Flag → Enforce. Start passive to build confidence, tighten controls when you're ready. Your rules, your thresholds.
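The three modes can be sketched as a small dispatcher. The mode names come from this page; the exact handling semantics below (what each mode returns) are an assumption.

```python
def handle(score: float, mode: str, threshold: float = 0.8) -> str:
    """Decide what to do with a request given its risk score and the active mode."""
    risky = score >= threshold
    if mode == "monitor":
        return "forward"                              # observe and log only
    if mode == "flag":
        return "forward+warn" if risky else "forward" # forward, surface a warning
    if mode == "enforce":
        return "block" if risky else "forward"        # actively reject risky requests
    raise ValueError(f"unknown mode: {mode}")
```

Starting in `monitor` lets you calibrate `threshold` against real traffic before any request is ever blocked.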

Privacy by Design

We never store your prompts. Customer API keys encrypted with AES-256-GCM. ML models retain only mathematical weights.

How It Works Under the Hood

Three scoring engines in series. Every request analyzed in under 3ms total.

Your App → PromptCI Proxy
Proxy pipeline: Pattern Engine (1ms) → Structural Analysis (1ms) → ML Classifier (1ms)
Score: 0.000 → 1.000
Safe: pass through → forward to LLM → response + risk headers
Dangerous: block / flag → return warning
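The serial pipeline above can be sketched in a few lines. Each engine here is a trivial stand-in for the real thing, and combining scores by taking the maximum is an assumption for illustration.

```python
def score_request(prompt: str, block_at: float = 0.8):
    """Run three stand-in scoring engines in series and decide an action."""
    engines = [
        # Pattern engine stand-in: one hard-coded rule.
        lambda p: 0.9 if "ignore previous instructions" in p.lower() else 0.0,
        # Structural analysis stand-in: very long prompts score slightly higher.
        lambda p: min(len(p) / 10_000, 1.0) * 0.2,
        # ML classifier stand-in: untrained, always neutral.
        lambda p: 0.0,
    ]
    score = max(engine(prompt) for engine in engines)  # worst engine wins
    return score, ("block" if score >= block_at else "forward")
```

Because the engines run in series and each is cheap, the total budget stays bounded: in the real pipeline, roughly 1ms per stage for under 3ms end to end.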

Integrate in Minutes

Change one URL. That's the entire integration.

from openai import OpenAI

client = OpenAI(
    base_url="https://proxy.promptci.dev/v1",
    default_headers={"X-PromptCI-Key": "your_api_key"}
)

# That's it. Every request is now protected.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://proxy.promptci.dev/v1",
  defaultHeaders: { "X-PromptCI-Key": "your_api_key" },
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
curl https://proxy.promptci.dev/v1/chat/completions \
  -H "X-PromptCI-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
import (
	"net/http"

	openai "github.com/sashabaranov/go-openai"
)

// promptCIKey injects the X-PromptCI-Key header into every request.
type promptCIKey struct{ base http.RoundTripper }

func (t promptCIKey) RoundTrip(r *http.Request) (*http.Response, error) {
	r.Header.Set("X-PromptCI-Key", "your_api_key")
	return t.base.RoundTrip(r)
}

config := openai.DefaultConfig("your_openai_key")
config.BaseURL = "https://proxy.promptci.dev/v1"
config.HTTPClient = &http.Client{Transport: promptCIKey{http.DefaultTransport}}

Simple, Transparent Pricing

Start free. Scale when you're ready.

Trial
Free
  • 15 API requests
  • Full scoring engine (1006 patterns)
  • Risk headers on every response
  • Dashboard access
  • ML classifier active
Start Free
Enterprise
Custom
  • Everything in Pro
  • Dedicated infrastructure
  • Custom integrations
  • SLA guarantees
  • SSO & audit logs
Contact Sales

Your Prompts. Your Control.

  • We never store your raw prompts
  • Customer API keys encrypted with AES-256-GCM at rest
  • Your LLM provider keys stay yours — we forward, never store
  • Self-learning models retain only mathematical weights (50KB)
  • Open source scoring engine — inspect the code yourself