AI Developer Tools
Practical tools for working with AI APIs and models every day. Compare pricing and specs across providers, count tokens before you hit send, test prompt variations, validate function schemas, and build function-calling contracts — all without leaving your browser. Everything runs client-side, so your data stays private.
63 tools
Agent Skill Validator
Validate skill definitions across OpenClaw, Claude, Codex, and MCP with portability scoring and exact fixes
SkillSpec Converter
Convert one canonical skill definition into OpenClaw SKILL.md, Claude blocks, Codex scaffolds, and MCP manifest snippets
Skill Regression Suite Builder
Build deterministic regression suites for skill updates with risk-weighted pass-rate gates and CI-ready case definitions
Skill Scope Collision Detector
Detect cross-scope skill version and enabled-state collisions across global, user, project, and local configuration layers
Skill Payload Budget Optimizer
Optimize skill pack token and byte budgets against context windows with deterministic compress, defer, and keep recommendations
Tool Approval Matrix Compiler
Compile cross-platform allow, ask, and deny decisions for tool capabilities across Codex, Claude, and managed MCP policies
Skill Release Canary Planner
Generate staged canary rollout plans for skill updates with deterministic stop conditions and rollback checklists
Trace Failure Classifier
Classify failed trace events into root-cause buckets and output deterministic remediation guidance for agent incident triage
LLM Crawl Policy Validator
Validate robots.txt and llms.txt files, detect conflicts, simulate AI bot access, and export corrected policies
MCP Governance Composer
Compose managed MCP governance packs with allow/deny lists, approval boundaries, and operator rollout checklists
MCP Tool Search Budget Simulator
Simulate context-window usage for full MCP tool injection versus search-first retrieval strategies
Claude Settings Scope Diff
Diff managed, user, project, and local settings scopes and compute the effective merged Claude configuration
Claude Hook Policy Simulator
Simulate Claude hook decisions for pre/post tool events and validate policy rule coverage before rollout
OpenClaw Skill Trust Scanner
Scan SKILL.md instructions for destructive command patterns, missing safety boundaries, and trust posture
Agent Tool Blast Radius Mapper
Map tool capability blast radius, score operational risk, and produce least-privilege policy buckets
AI Model Comparison
Compare AI models: pricing, context windows, benchmarks, and specs side by side
MCP Server Directory
Curated directory of Model Context Protocol servers with install commands and categories
AI Agent Framework Comparison
Compare AI agent frameworks: LangChain, CrewAI, AutoGen, Mastra, and more side by side
AI Pricing Calculator
Compare AI API costs across providers for your specific workload
LLM Token Counter
Count tokens and estimate model costs across GPT, Claude, Gemini, Llama, and more — with optional free API access for apps and agents
CLAUDE.md / Rules File Generator
Generate CLAUDE.md, .cursorrules, and copilot-instructions.md files for your project with templates
AI Model Picker Quiz
Answer 7 questions and get personalized AI model recommendations — compare GPT, Claude, Gemini, Llama, and more
System Prompt Library
Curated collection of system prompts for coding, writing, analysis, and more
MCP Server Config Generator
Generate MCP server configurations for Claude Desktop, Cursor, and Windsurf with visual editor and presets
JSON Schema Generator
Generate production-ready JSON Schemas from examples for function calling, structured output, and agent tool contracts
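A function-calling tool contract of the kind this generator produces is usually a JSON Schema object describing the tool's parameters. A minimal sketch, assuming an OpenAI-style function-calling shape; the `get_weather` name and fields are hypothetical, not output from the tool:

```python
# Hypothetical tool contract in OpenAI-style function-calling shape.
# The name, description, and fields are illustrative assumptions.
get_weather_schema = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        # Only "city" is mandatory; "unit" is optional.
        "required": ["city"],
    },
}
```

The same object structure maps onto Anthropic and MCP tool definitions with minor key renames, which is why one canonical schema can target several providers.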
Prompt Template Builder
Build AI prompt templates with variables, live preview, and export to JSON/YAML
AI Cost Estimator
Estimate total AI API costs for real-world workloads across all major providers
System Prompt Editor
Write and analyze AI system prompts with live token counting, variable detection, and XML highlighting
LLM Output Diff Tool
Compare outputs from different AI models side-by-side with diff highlighting
AI Context Window Visualizer
Visualize how your AI model's context window is allocated across system prompt, tools, conversation, and RAG
AI Prompt Tester & Comparator
Compare AI prompt variations side-by-side with token counting, diff highlighting, and variable tracking — test prompts before deployment
Prompt Token Budget Planner
Plan your AI system prompt token budget visually across sections against model context limits
AI Tool Schema Builder
Visual schema builder for AI tool definitions with export to OpenAI, Anthropic, MCP, and JSON Schema formats
LLM Structured Output Validator
Validate structured outputs and tool schemas for OpenAI, Anthropic, MCP, and JSON Schema with detailed errors and fix guidance
Markdown Memory File Builder
Create structured markdown memory files for AI agents — SOUL.md, USER.md, AGENTS.md, daily logs, decision records, and more
Embedding Similarity Calculator
Calculate cosine similarity, dot product, and distance between embedding vectors from OpenAI, Cohere, and more
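The metrics this calculator reports reduce to a few lines of arithmetic. A minimal stdlib sketch of cosine similarity, not the tool's actual implementation:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```

Dot product is the numerator alone, and Euclidean distance is the square root of the summed squared differences; all three are cheap enough to run client-side on pasted vectors.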
RAG Chunk Size Calculator
Calculate optimal chunk size and overlap for RAG pipelines based on document type and embedding model
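Chunking with overlap, the quantity this calculator tunes, can be sketched as a sliding token window. An illustrative splitter under assumed inputs, not the tool's actual algorithm:

```python
def chunk_tokens(tokens, chunk_size, overlap):
    # Slide a window of chunk_size tokens, stepping by chunk_size - overlap,
    # so each chunk repeats the last `overlap` tokens of the previous one.
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break
    return chunks

# 10 tokens, window of 4, overlap of 1 -> 3 chunks sharing boundary tokens.
print(chunk_tokens(list(range(10)), chunk_size=4, overlap=1))
```

Larger overlap improves retrieval recall at chunk boundaries but inflates the total token count embedded and stored, which is the trade-off the calculator quantifies.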
Agent Trace Viewer
Visualize AI agent execution traces with timeline, table, and detail views for debugging LangChain and OpenAI agents
Prompt Version Diff
Compare AI prompt versions with semantic diff — track variable changes, instruction modifications, and token deltas
AI Guardrail Rule Tester
Build and test AI guardrail rules with instant feedback — preset PII, injection, and safety patterns
Function Call Flow Simulator
Simulate AI function calling and tool use flows without API calls — test multi-step agent conversations
AI API Key Tester
Validate AI API keys from Anthropic, OpenAI, OpenRouter, Groq, and more — detect provider, check format, get config snippets
AI Token + Pricing Calculator
Paste text, count tokens, and compare LLM API costs across GPT, Claude, Gemini, and more with batch estimation and CSV export
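The cost arithmetic behind a calculator like this is simple: tokens divided by one million, times the per-million-token price, summed over input and output. A minimal sketch; the prices below are illustrative placeholders, not real provider rates:

```python
def estimate_cost(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    # cost = (tokens / 1M) * price-per-million-tokens, input and output priced separately
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# Hypothetical rates: $3.00/M input, $15.00/M output.
print(estimate_cost(10_000, 2_000, 3.00, 15.00))  # roughly $0.06
```

Batch estimation is just this function summed over rows, which is why a CSV of per-request token counts is enough to project a full workload's spend.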
WebMCP Playground
Test and validate WebMCP tool definitions — paste manifest JSON, simulate agent tool calls, validate against the W3C spec, preview tool discovery.
MCP Server Starter Generator
Generate complete MCP server projects with tools, resources, and auth — TypeScript, Python, or Go
Cursor Rules Generator
Generate .cursorrules and .windsurfrules files for AI coding assistants
AI Response Comparator
Compare model outputs side-by-side with diff and analysis modes
System Prompt Analyzer
Analyze system prompts for clarity, structure, and token posture
MCP Tool Tester
Validate and test MCP/WebMCP tool definitions
Web-to-Markdown Converter
Convert webpage HTML to markdown and estimate token savings
AI Code Smell Detector
Detect AI-generated code anti-patterns — hallucinated imports, over-abstraction, verbose error handling, redundant logic, and AI tells. Instant scored analysis.
LLM Workflow Cost Calculator
Model multi-step AI pipelines — embed, retrieve, generate, validate — and see total cost per execution across providers with caching simulation.
Codebase Context Packer
Pack code files into optimized LLM context with smart truncation, token budgeting, and XML/markdown/plain output formats. Ready to paste into Claude or GPT.
AI Prompt Injection Tester
Test system prompts against 50+ known injection attack patterns — role hijacking, instruction override, delimiter abuse, encoding attacks, and jailbreaks.
AI API Error Decoder
Paste an error response from the Claude, OpenAI, Gemini, or Mistral API — get a human-readable explanation, a fix with a code snippet, and a retry strategy.
AI Agent Cost Simulator
Configure multi-agent architectures and visualize cost explosion curves. Compare multi-agent vs single-agent with context growth modeling and monthly projections.
AI Rules Linter
Lint CLAUDE.md, .cursorrules, and copilot-instructions files for redundancy, conflicting instructions, missing sections, and token efficiency.
Git Diff Token Counter
Paste a git diff and see token counts per file, cost across AI models for code review, and chunking suggestions when diffs exceed context limits.
LLM Latency Estimator
Estimate time-to-first-token, generation time, and total latency for any AI model. Get UX recommendations for spinners, streaming, and background jobs.
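A first-order latency model of the kind such an estimator might use is time-to-first-token plus output tokens divided by generation throughput. A sketch under assumed numbers; the values below are illustrative, not measured:

```python
def estimate_latency(ttft_s, output_tokens, tokens_per_second):
    # total latency ~= time-to-first-token + generation time
    return ttft_s + output_tokens / tokens_per_second

# Hypothetical model: 0.5 s TTFT, 500 output tokens at 50 tok/s.
print(estimate_latency(0.5, 500, 50.0))  # 10.5 seconds
```

The split matters for UX: TTFT governs when a streaming response starts appearing, while total latency governs whether the call belongs behind a spinner or in a background job.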
Prompt A/B Test Designer
Design rigorous prompt experiments with sample size calculation, statistical significance, cost estimation, and evaluation framework export.
MCP Permission Auditor
Audit MCP server configurations for security risks — permission surface analysis, risk scoring, dangerous capability combinations, and least-privilege recommendations.
AI Doc Readability Scorer
Score documentation for human and AI-agent readability across 6 dimensions — structure, code examples, API discoverability, schema coverage, LLM parseability.
AI Model Sunset Tracker
Track AI model deprecation dates, migration paths, and breaking changes across OpenAI, Anthropic, Google, Meta, and Mistral. Filterable and searchable.
Category facts
This category, AI Tools, currently contains 63 tools. For machine-readable discovery, see /api/tools-by-category.json.
How do I browse similar tools? Use category pages and related links on each tool page to navigate adjacent workflows quickly.
How often are counts updated? Counts are generated from the canonical tools metadata at build time.