Logwick Documentation
Audit logging for AI agents
Everything you need to log, search, and understand what your AI agents are doing in production. One line of code. Full visibility.
Overview
Logwick is an audit log for AI agents. After each AI call in your code, you send one POST request to Logwick with the input, output, agent, and status. Logwick stores it, indexes it, and makes it searchable from your dashboard.
Use it to debug production issues, monitor costs and error rates, meet compliance requirements, and understand what your AI agents are actually doing.
REST API
POST logs from any language or framework
Node.js SDK
npm install logwick
Python SDK
pip install logwick
Claude MCP
Ask Claude about your logs in plain English
Dashboard
Search, filter, export, and set webhooks
Webhooks
Get alerted when error rates spike
Quick start
Get up and running in under 3 minutes. No SDK required — just a fetch call.
1. Get your API key
Sign up at logwick.io and copy your API key from the dashboard. Keys start with sk-lw-.
2. Add one call after your AI request
JavaScript — add after any AI call
const start = Date.now()
const result = await openai.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: userPrompt }]
})
// Add this — fire and forget, never blocks your code
fetch('https://logwick.io/api/v1/logs', {
method: 'POST',
headers: {
'Authorization': 'Bearer sk-lw-your-key',
'Content-Type': 'application/json'
},
body: JSON.stringify({
agent: 'gpt-4o',
action: 'email_draft',
status: 'success',
input: userPrompt,
output: result.choices[0].message.content,
tokens: result.usage.total_tokens,
latency_ms: Date.now() - start,
user: req.user.email,
})
}).catch(() => {}) // never throws
3. Check your dashboard
Open logwick.io/dashboard — your log appears instantly. That's it.
Node.js SDK
Install
npm install logwick
Basic usage
JavaScript
import { LogwickClient } from 'logwick'
const logwick = new LogwickClient({ apiKey: 'sk-lw-your-key' })
// Fire and forget
logwick.fire({
agent: 'gpt-4o',
action: 'email_draft',
status: 'success',
input: userPrompt,
output: result.choices[0].message.content,
tokens: result.usage.total_tokens,
user: currentUser.email,
})
Client options
JavaScript
const logwick = new LogwickClient({
apiKey: 'sk-lw-your-key', // required
silent: true, // suppress console warnings (default: true)
tags: ['production'], // default tags on every log
})
Python SDK
Install
pip install logwick
Basic usage
Python
import logwick
logwick.init(api_key="sk-lw-your-key")
# Fire and forget
logwick.fire({
"agent": "gpt-4o",
"action": "email_draft",
"status": "success",
"input": user_prompt,
"output": result,
"tokens": 312,
"user": user_email,
})
Using the client directly
Python
from logwick import LogwickClient
lw = LogwickClient(
api_key="sk-lw-your-key",
silent=False, # print warnings
tags=["production"], # default tags
)
OpenAI integration
Wrap your OpenAI call and Logwick automatically captures input, output, tokens, cost, and latency.
Node.js
JavaScript
const result = await logwick.openai(
() => openai.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: prompt }]
}),
{ action: 'email_draft', user: req.user.email }
)
// result is the normal OpenAI response — nothing changes
Python
Python
result = lw.openai(
lambda: client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": prompt}]
),
{"action": "email_draft", "user": user_email}
)
Anthropic / Claude integration
Wrap your Anthropic messages call and Logwick captures everything automatically.
Node.js
JavaScript
const result = await logwick.anthropic(
() => anthropic.messages.create({
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [{ role: 'user', content: prompt }]
}),
{ action: 'document_review', user: req.user.email }
)
Python
Python
result = lw.anthropic(
lambda: client.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
messages=[{"role": "user", "content": prompt}]
),
{"action": "document_review", "user": user_email}
)
Google Gemini integration
Node.js
JavaScript
const result = await logwick.gemini(
() => model.generateContent(prompt),
{ action: 'data_analysis', user: req.user.email }
)
Python
Python
result = lw.gemini(
lambda: model.generate_content(prompt),
{"action": "data_analysis", "user": user_email}
)
LangChain integration
Add one callback handler and every LLM call in your chain is logged automatically — no per-call code needed.
Node.js
JavaScript
import { LogwickCallbackHandler } from 'logwick'
const handler = new LogwickCallbackHandler(logwick, {
user: 'ops@acme.com'
})
const chain = new LLMChain({
llm,
prompt,
callbacks: [handler] // every call in the chain is now logged
})
Python
Python
handler = lw.langchain_handler(user="ops@acme.com")
chain = LLMChain(
llm=llm,
prompt=prompt,
callbacks=[handler] # every call in the chain is now logged
)
Claude MCP integration
Connect Logwick to Claude Desktop and ask questions about your logs in plain English. This is the fastest way to investigate AI agent incidents.
"Show me my last 10 error logs"
"What was my success rate this week?"
"How many tokens did I spend yesterday?"
"Find all failed email_draft actions"
"Log this GPT-4o call for me"
Setup with Claude Desktop
Add this to your claude_desktop_config.json and restart Claude Desktop.
Mac: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
claude_desktop_config.json
{
"mcpServers": {
"logwick": {
"command": "npx",
"args": ["-y", "@logwick/mcp"],
"env": {
"LOGWICK_API_KEY": "sk-lw-your-key"
}
}
}
}
Available tools
ingest_log — Write a log entry to Logwick
query_logs — Search logs by status, agent, action, date, or keyword
get_stats — Get usage stats — success rate, tokens, cost
get_log — Get full details of a single log entry by ID
delete_log — Delete a log entry
Adding Logwick to a project via Claude
Once the MCP server is connected, you can ask Claude to add Logwick to any project:
Say this to Claude
"Go to https://logwick.io/docs and add Logwick logging
to my project. My API key is sk-lw-your-key."
Claude will read this documentation, install the appropriate SDK, add your API key to your environment variables, and wire up logging to your existing AI calls.
REST API
All endpoints require an API key in the Authorization header.
Authentication
Authorization: Bearer sk-lw-your-key
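In Python, the header can be built once and reused across every request. A minimal sketch; the LOGWICK_API_KEY environment variable name is our convention, not something the docs mandate:

```python
import os

# Build the auth headers once and reuse them for every Logwick request.
# The LOGWICK_API_KEY env var name is an assumed convention; the docs
# only specify the Authorization header format.
api_key = os.environ.get("LOGWICK_API_KEY", "sk-lw-your-key")
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```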
POST /api/v1/logs — Ingest a log
Request
curl -X POST https://logwick.io/api/v1/logs \
-H "Authorization: Bearer sk-lw-your-key" \
-H "Content-Type: application/json" \
-d '{
"agent": "gpt-4o",
"action": "email_draft",
"status": "success",
"input": "Draft a follow-up email",
"output": "Subject: Following up...",
"tokens": 312,
"latency_ms": 1842,
"user": "customer@acme.com"
}'
Response
{
"id": "fcf559c2-a3cb-4d48-999c-b606f1440472",
"timestamp": "2026-04-19T13:44:52.241063+00:00",
"status": "ingested"
}
GET /api/v1/logs — Query logs
Request
# All logs
curl https://logwick.io/api/v1/logs \
-H "Authorization: Bearer sk-lw-your-key"
# Filter by status
curl "https://logwick.io/api/v1/logs?status=error" \
-H "Authorization: Bearer sk-lw-your-key"
# Filter by agent
curl "https://logwick.io/api/v1/logs?agent=gpt-4o" \
-H "Authorization: Bearer sk-lw-your-key"
# Search
curl "https://logwick.io/api/v1/logs?search=email" \
-H "Authorization: Bearer sk-lw-your-key"
# Export as CSV
curl "https://logwick.io/api/v1/logs?format=csv" \
-H "Authorization: Bearer sk-lw-your-key"
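The same filters can be combined programmatically. A minimal Python sketch (not an official client) that builds query URLs from the documented parameters:

```python
from urllib.parse import urlencode

BASE_URL = "https://logwick.io/api/v1/logs"

def build_logs_url(**filters):
    """Build a GET /api/v1/logs URL from the documented filter
    parameters (status, agent, search, format)."""
    return f"{BASE_URL}?{urlencode(filters)}" if filters else BASE_URL

# Example: all error logs from gpt-4o
url = build_logs_url(status="error", agent="gpt-4o")
# Fetch it with any HTTP client, sending the same Authorization
# header as the curl examples above.
```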
GET /api/v1/stats — Usage statistics
Request
curl "https://logwick.io/api/v1/stats?days=30" \
-H "Authorization: Bearer sk-lw-your-key"
Response
{
"total": 1842,
"success": 1756,
"error": 86,
"success_rate": 95.3,
"error_rate": 4.7,
"avg_latency": 1204,
"total_tokens": 284921,
"total_cost": 4.27,
"period_days": 30
}
x402 — Pay per log
Logwick supports x402 — an open payment protocol that lets AI agents pay per log with USDC on Base. No account, no API key, no signup required. The agent pays and logs in a single request.
Endpoint
POST https://logwick.io/api/v1/agent-log
Price: $0.001 USDC per log · Network: Base (eip155:8453) · No API key required
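The $0.001 price maps to x402's maxAmountRequired field, which is denominated in USDC's atomic units (USDC uses 6 decimal places). A quick sanity check:

```python
# maxAmountRequired in an x402 discovery response is in atomic units.
USDC_DECIMALS = 6
max_amount_required = 1000  # value from the discovery response below

price_usd = max_amount_required / 10 ** USDC_DECIMALS
# 1000 atomic units == the advertised $0.001 per log
```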
How it works
The agent calls the endpoint. Logwick responds with HTTP 402 and payment requirements. The agent pays $0.001 USDC, includes the payment proof in the request header, and Logwick stores the log.
Discover pricing — GET request
curl https://logwick.io/api/v1/agent-log
# Returns:
# {
# "x402Version": 1,
# "accepts": [{
# "scheme": "exact",
# "network": "eip155:8453",
# "maxAmountRequired": "1000",
# "payTo": "0x...",
# "description": "Ingest one AI agent audit log entry"
# }]
# }
Log a paid entry
POST with x402 payment header
curl -X POST https://logwick.io/api/v1/agent-log -H "Content-Type: application/json" -H "X-Payment: <signed-payment-proof>" -d '{
"agent": "gpt-4o",
"action": "email_draft",
"status": "success",
"input": "Draft a follow-up email",
"output": "Subject: Following up...",
"tokens": 312,
"latency_ms": 1842
}'
Using the Coinbase AgentKit
If your agent uses Coinbase AgentKit, x402 payments are handled automatically — the agent discovers the price, pays, and logs in one step.
JavaScript — AgentKit with x402
import { CdpWalletProvider } from '@coinbase/agentkit'
import { x402Fetch } from '@coinbase/x402-fetch'
const wallet = await CdpWalletProvider.configureWithWallet({ /* config */ })
// x402Fetch automatically handles payment — no manual signing needed
const response = await x402Fetch('https://logwick.io/api/v1/agent-log', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
agent: 'gpt-4o',
action: 'email_draft',
status: 'success',
input: prompt,
output: result,
tokens: 312,
}),
wallet,
})
Log fields
Same fields as the standard REST API — see the Log fields reference below. The only difference is authentication: x402 payment replaces the API key.
✓ No account required — payment is authentication
✓ $0.001 per log — fractions of a cent, agent-friendly pricing
✓ Base mainnet — real USDC, instant settlement
✓ Listed on agentic.market — discoverable by any x402-compatible agent
Log fields reference
All fields accepted by the ingest endpoint. Fields marked with * are required.
Field (type) — Description
agent* (string) — The AI model or agent name (e.g. gpt-4o, claude-3-5-sonnet, gemini-1.5-pro)
action* (string) — The action or task type (e.g. email_draft, data_analysis, code_generation)
status (string) — success, error, or pending (default: success)
input (string) — The prompt or input sent to the AI agent
output (string) — The response or output from the AI agent
user (string) — The user or system that triggered this action
tokens (number) — Total tokens used (input + output)
latency_ms (number) — Response time in milliseconds
cost_usd (number) — Estimated cost in USD
tags (array) — Array of strings for categorization (e.g. ['production', 'v2'])
metadata (object) — Any additional key-value data you want to store
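Putting it together, here is a payload exercising every field above. Values are illustrative (the cost estimate and metadata keys are arbitrary examples); only agent and action are required:

```python
import json

log_entry = {
    "agent": "gpt-4o",        # required
    "action": "email_draft",  # required
    "status": "success",
    "input": "Draft a follow-up email",
    "output": "Subject: Following up...",
    "user": "customer@acme.com",
    "tokens": 312,
    "latency_ms": 1842,
    "cost_usd": 0.004,        # illustrative estimate
    "tags": ["production", "v2"],
    "metadata": {"thread_id": "abc123"},  # any key-value data
}

body = json.dumps(log_entry)  # POST this to /api/v1/logs
```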