Behind the build · May 2026
From 32 to 92: How We Built One of the Most Agent-Ready Developer Tools on the Internet
Ora.run scans websites and scores how ready they are for AI agents — things like MCP servers, OpenAPI specs, x402 payments, llms.txt files, and structured data. We started at 32/100. We just hit 92/100, putting Logwick in the top 0.4% of 8,400 sites scanned. Here's exactly what we built, why it matters, and what we learned.
#2 overall out of 8,400 sites
Tags: AI agents · MCP · x402 · llms.txt · OpenAPI · Agent infrastructure
Score progression
32: Starting point — bare site, no agent infrastructure
60: Added llms.txt, OpenAPI spec, robots.txt
65: Added public API discovery endpoint
67: Added blog post targeting "audit logging for AI agents"
73: Added agent-card.json, MCP server card, OAuth metadata
82: Split JSON-LD into separate blocks, Ed25519 key, og:image
87: Added llms-full.txt, pricing.md, index.md, modular llms.txt
88: Added markdown links to llms.txt, PostalAddress schema
92: Blog post made content citable, OpenAPI 429 errors, guest token
What is ora.run and why does it matter?
Ora.run evaluates how ready a website or product is to be used by AI agents — not just humans. It scores across five layers: Discovery (can agents find you?), Identity (do agents understand you?), Auth & Access (can agents authenticate?), Agent Integration (have you built the plumbing?), and User Experience (can users interact with you through agents?).
This matters because AI agents are increasingly the interface between users and software. When a developer tells Claude to "add logging to my project," Claude needs to be able to discover Logwick, understand what it does, authenticate, and integrate it — all without human intervention. A high ora.run score means Logwick works natively in the agentic world.
Most developer tools score in the 40-60 range. Scoring 92 puts Logwick alongside companies that have dedicated engineering teams working on agent infrastructure. We did it in a single focused sprint.
What we actually built
Here's every file and endpoint we added, organized by impact:
Agent discovery files
/llms.txt: Plain-text description of Logwick for AI agents — the AI equivalent of robots.txt. Covers when to use it, constraints, the API reference, and authentication.
/llms-full.txt: Complete product documentation in a single file, so agents ingest everything in one request without crawling.
/pricing.md: Machine-readable pricing so agents can compare plans and make recommendations.
/index.md: Markdown version of the homepage for agents that prefer markdown over HTML.
/api/llms.txt and /docs/llms.txt: Per-section context files so agents can fetch scoped documentation.
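For readers who haven't seen one, a llms.txt conventionally starts with an H1 title, a blockquote summary, and H2 sections of annotated links. The contents below are a simplified sketch, not Logwick's actual file:

```markdown
# Logwick

> Audit logging for developers and AI agents. One API call per log entry.

## When to use
- You need an audit trail of what an AI agent did in your project

## Docs
- [Full documentation](https://logwick.io/llms-full.txt): everything in one file
- [API reference](https://logwick.io/api/llms.txt): scoped API context
```

The link annotations matter: agents use them to decide which file to fetch next without crawling the whole site.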
Well-known endpoints
/.well-known/api-catalog: RFC 9727 API catalog in linkset format, advertising all agent-accessible APIs.
/.well-known/agent-card.json: A2A agent card describing Logwick's capabilities and endpoints.
/.well-known/mcp/server-card.json: MCP server card so agents can preview the MCP server before connecting.
/.well-known/oauth-authorization-server: RFC 8414 OAuth metadata for agent authentication discovery.
/.well-known/oauth-protected-resource: RFC 9728 protected-resource metadata.
/.well-known/http-message-signatures-directory: Ed25519 public-key directory for RFC 9421 request signing.
/.well-known/openapi.json: OpenAPI spec at the standard well-known path.
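To make the catalog concrete, here is a sketch of the linkset shape that RFC 9727 builds on (RFC 9264). The link relations are standard ones; the hrefs are illustrative, not copied from Logwick's live file:

```json
{
  "linkset": [
    {
      "anchor": "https://logwick.io/api/v1",
      "service-desc": [
        { "href": "https://logwick.io/.well-known/openapi.json",
          "type": "application/json" }
      ],
      "service-doc": [
        { "href": "https://logwick.io/docs", "type": "text/html" }
      ]
    }
  ]
}
```

An agent that fetches /.well-known/api-catalog can follow service-desc to the machine-readable spec and service-doc to the human docs, without guessing at URLs.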
x402 pay-per-log
/api/v1/agent-log: AI agents pay $0.001 USDC on Base mainnet per log entry. No account required — payment is authentication.
/api/v1/agent-logs: Agents query their own logs by signing a message with their wallet — no account needed.
/api/v1/agent-stats: Stats by wallet address, using the same wallet-signature pattern.
/api/discovery/resources: x402 discovery endpoint listing all payment-gated resources.
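For context, an x402-gated endpoint answers an unpaid request with HTTP 402 and a JSON body describing the payments it accepts. The shape below follows our reading of the x402 spec; the addresses are placeholders, not Logwick's real values:

```json
{
  "x402Version": 1,
  "accepts": [
    {
      "scheme": "exact",
      "network": "base",
      "maxAmountRequired": "1000",
      "resource": "https://logwick.io/api/v1/agent-log",
      "description": "Write one log entry",
      "payTo": "0xPLACEHOLDER_RECEIVING_ADDRESS",
      "asset": "0xPLACEHOLDER_USDC_CONTRACT",
      "maxTimeoutSeconds": 60
    }
  ]
}
```

Amounts are in the asset's atomic units, so "1000" is $0.001 for six-decimal USDC. The agent pays, retries the request with a payment proof header, and gets the resource.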
API improvements
/api/v1 (public): Public API discovery endpoint — no auth required, returns the full API surface.
/api/v1/logs/stream: Server-Sent Events streaming endpoint for real-time log consumption.
/api/v1/guest-token: Guest API key with a 1-hour expiry and a 10-log limit — agents can test without signing up.
JSON 404 handlers: Catch-all routes returning structured JSON instead of HTML error pages.
Rate-limit headers: X-RateLimit-Limit, X-RateLimit-Window, and Retry-After on all API routes.
429 + 5xx in OpenAPI: Documented error responses so agents know how to handle rate limits and failures.
Structured data
5 separate JSON-LD blocks: SoftwareApplication, Organization, FAQPage, WebSite with speakable, and BreadcrumbList — each in its own script tag so scanners detect every type.
sameAs entity linking: GitHub, npm, PyPI, Twitter, LinkedIn — agents can disambiguate Logwick from other entities.
PostalAddress in Organization: Required for the Organization schema completeness score.
Canonical URL + og:image: All four metadata signals for AI entity resolution.
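As a sketch of that split — values abbreviated, and the sameAs URL a placeholder — each schema type gets its own script tag rather than sharing one array:

```html
<!-- One script tag per schema type, so scanners credit every type -->
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "SoftwareApplication",
 "name": "Logwick", "applicationCategory": "DeveloperApplication"}
</script>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Organization",
 "name": "Logwick", "sameAs": ["https://github.com/logwick"]}
</script>
```

The combined form — one tag holding an array of all five objects — is valid JSON-LD, but as noted below in "What we learned," the scanner only credited one type until we split them.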
Content
Blog post on audit logging for AI agents: Targets the exact use-case query agents search for, with full code examples for OpenAI, Anthropic, LangChain, and MCP.
Blog post on how to log OpenAI API calls: Targets high-intent developer searches — drove a significant citability-score improvement.
/about, /contact, /compare: Trust-anchor pages agents check to verify business legitimacy.
/status: Operational status page for agent error recovery.
Semantic H2 headings: Five H2s added to the homepage for better vector embedding and semantic retrieval.
What we learned
The biggest quick wins were the discovery files
Going from 32 to 60 in one push — nearly doubling the score — came entirely from adding llms.txt, openapi.json, and robots.txt. These took less than an hour to write and had the highest ROI of anything we built. If you do nothing else, add these three files.
JSON-LD needs to be separate script blocks
We had all our schema types in a single array inside one script tag. The scanner only detected one type. Splitting into 5 separate script blocks — one per schema type — immediately gave us credit for all of them. This is not documented anywhere; we discovered it by watching the score change.
Content makes you citable
The AI citability score jumped from partial to full only after we published the second blog post. The scanner checks whether LLMs would actually cite your content when answering questions. Technical documentation alone wasn't enough — it needed real articles with examples, use cases, and structure.
x402 is genuinely novel but hard to detect
We implemented x402 pay-per-log correctly — payment-required headers, discovery endpoint, Base mainnet support — but the scanner still doesn't detect it. This is an early protocol and scanners haven't caught up. The implementation is real and working even if it doesn't score yet.
Most gaps above 88 require platform approval or weeks of SEO
Above 88, the remaining points come from verified AI platform integrations (requires approval from Anthropic/OpenAI), brand authority in training data (requires press coverage and time), and streamable HTTP MCP (requires significant engineering). These aren't quick fixes.
Why this matters for Logwick users
Logwick is a tool for AI agents — it logs what AI agents do. It makes sense that Logwick itself should be fully accessible to AI agents. The infrastructure we built isn't just for a benchmark score. It means:
✓ Claude can set up Logwick autonomously
Tell Claude to add logging to your project. It reads llms.txt, understands the API, installs the SDK, and wires it up. No human steps required beyond saying what you want.
✓ AI agents can log themselves without accounts
Via x402 pay-per-log on Base mainnet, any autonomous agent with a crypto wallet can start logging at $0.001 per entry — no signup, no API key, no human in the loop.
✓ Any AI assistant can recommend Logwick accurately
With 5 JSON-LD schema types, comprehensive llms.txt, and consistent metadata, ChatGPT, Claude, Gemini, and DeepSeek can all accurately describe what Logwick does and recommend it appropriately.
✓ Agents can query logs without human accounts
Via wallet-based identity — agents prove ownership of their wallet address and query only their own logs. No login, no password, just cryptographic proof.
What's next
The remaining points on the score require either platform approvals or significant engineering. The three items on our roadmap:
Streamable HTTP MCP transport
Upgrading the MCP server from stdio to HTTP transport so agents can connect without installing anything locally. This would unlock six or more bonus points and, more importantly, make the MCP integration more useful.
Press coverage and community presence
The brand authority gap is pure distribution — we need third-party mentions, Reddit discussions, and developer community presence. This is marketing work, not engineering.
CLI tool
A command-line tool that lets developers and agents script interactions with Logwick without building API integrations from scratch.
Verified by ora.run

Try the most agent-ready logging tool available
Free tier includes 5,000 logs/month. No credit card required.
Or just tell Claude: "Here are the Logwick docs: [paste from logwick.io/docs] — add Logwick to my project"