# MCP Server
The IF Craft Corpus includes a Model Context Protocol (MCP) server built with FastMCP 2. This allows AI tools like Claude Code and Cursor to search the corpus directly.
## Installation

### Adding to Claude Code
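With the Claude Code CLI, the server can be registered in one command. The server name `if-craft-corpus` here is an illustrative choice; the `ifcraftcorpus-mcp` entry point is the one shown under Running Standalone below:

```shell
# Register the stdio server with Claude Code (name is illustrative)
claude mcp add if-craft-corpus -- ifcraftcorpus-mcp
```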
Or manually add to your MCP configuration:
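A minimal configuration entry might look like the following sketch. The `if-craft-corpus` key is an illustrative name, and the exact configuration file location depends on the client (Claude Desktop, Cursor, etc.):

```json
{
  "mcpServers": {
    "if-craft-corpus": {
      "command": "ifcraftcorpus-mcp"
    }
  }
}
```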
## Available Tools
### search_corpus

Search the corpus for craft guidance.

```python
search_corpus(
    query: str,       # Search query
    cluster: str,     # Optional cluster filter
    limit: int = 5,   # Max results (1-20)
)
```
Example prompts:
- "Search the IF craft corpus for dialogue subtext techniques"
- "Find guidance on branching narrative structure"
- "Look up horror atmosphere in the craft corpus"
### get_document

Get a specific document by name.

### list_documents

List all available documents, optionally filtered by cluster.

### list_clusters

List all topic clusters with document counts.

### corpus_stats

Get corpus statistics.

### embeddings_status

Check embedding provider and index status.

Returns information about available providers (Ollama, OpenAI, SentenceTransformers) and whether embeddings are loaded.
### build_embeddings

Build or rebuild the semantic search embedding index.

```python
build_embeddings(
    provider: str | None = None,  # "ollama", "openai", or "sentence_transformers"
    force: bool = False,          # Rebuild even if the index exists
)
```
## Running Standalone

For development or testing:

```shell
# Run with stdio transport (default)
ifcraftcorpus-mcp

# Or with Python
python -m ifcraftcorpus.mcp_server
```
### Verbose Logging

Set `LOG_LEVEL` (for example `INFO` or `DEBUG`) or the convenience flag `VERBOSE=1` before launching the server to emit structured logs to stderr. Logging never touches stdout, so the stdio transport remains compatible with Claude Desktop and other MCP clients.
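For example, either of the following launches the standalone server with debug logging; the `2> server.log` redirection is just an illustration of capturing the stderr stream:

```shell
# Explicit log level
LOG_LEVEL=DEBUG ifcraftcorpus-mcp

# Convenience flag, capturing stderr to a file
VERBOSE=1 ifcraftcorpus-mcp 2> server.log
```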
## HTTP Transport

For remote deployment:

```python
from ifcraftcorpus.mcp_server import run_server

run_server(transport="http", host="0.0.0.0", port=8000)
```
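Once running over HTTP, the server can be exercised from Python with the FastMCP client. This is a minimal sketch: it assumes the server above is listening on `localhost:8000`, that the `fastmcp` package is installed, and that the server uses FastMCP's default `/mcp` endpoint path:

```python
import asyncio

from fastmcp import Client


async def main() -> None:
    # Connect to the HTTP transport started by run_server().
    async with Client("http://localhost:8000/mcp") as client:
        # List the tools exposed by the server.
        tools = await client.list_tools()
        print([tool.name for tool in tools])

        # Call the search tool with a small result limit.
        result = await client.call_tool(
            "search_corpus",
            {"query": "dialogue subtext techniques", "limit": 3},
        )
        print(result)


asyncio.run(main())
```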
## Docker Deployment

The MCP server is available as a Docker image:

```shell
# Pull from GitHub Container Registry
docker pull ghcr.io/pvliesdonk/if-craft-corpus:latest

# Run with default settings (keyword search only)
docker run -p 8000:8000 ghcr.io/pvliesdonk/if-craft-corpus

# Run with Ollama for semantic search
docker run -p 8000:8000 \
  -e OLLAMA_HOST=http://host.docker.internal:11434 \
  ghcr.io/pvliesdonk/if-craft-corpus

# Run with OpenAI
docker run -p 8000:8000 \
  -e OPENAI_API_KEY=sk-... \
  ghcr.io/pvliesdonk/if-craft-corpus
```
### Pre-building Embeddings

For faster startup, you can pre-build embeddings and mount them into the container:

```shell
# Build embeddings locally
ifcraftcorpus embeddings build --provider ollama -o ./embeddings

# Mount into the container
docker run -p 8000:8000 \
  -v ./embeddings:/app/embeddings:ro \
  -e EMBEDDINGS_PATH=/app/embeddings \
  ghcr.io/pvliesdonk/if-craft-corpus
```
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `MCP_HOST` | `0.0.0.0` | Host to bind to |
| `MCP_PORT` | `8000` | Port to listen on |
| `OLLAMA_HOST` | `http://localhost:11434` | Ollama server URL |
| `OPENAI_API_KEY` | (none) | OpenAI API key |
| `EMBEDDINGS_PATH` | `embeddings` | Path to embeddings directory |
| `LOG_LEVEL` | (none) | Optional log level (`DEBUG`, `INFO`, etc.); logs go to stderr |
| `VERBOSE` | (none) | Convenience flag; set to `1`/`true` to enable debug logging |
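The Docker options above can be combined in a compose file. A hypothetical `docker-compose.yml` sketch, assuming an Ollama instance reachable from the container and pre-built embeddings in a local `./embeddings` directory:

```yaml
services:
  if-craft-corpus:
    image: ghcr.io/pvliesdonk/if-craft-corpus:latest
    ports:
      - "8000:8000"
    environment:
      OLLAMA_HOST: http://host.docker.internal:11434
      EMBEDDINGS_PATH: /app/embeddings
      LOG_LEVEL: INFO
    volumes:
      # Optional: mount pre-built embeddings (read-only)
      - ./embeddings:/app/embeddings:ro
```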