
Quick Start

from ifcraftcorpus import Corpus

# Initialize corpus
corpus = Corpus()

# Search for craft guidance
results = corpus.search("dialogue subtext techniques")

for result in results:
    print(f"Source: {result.source}")
    print(f"Title: {result.title}")
    print(f"Cluster: {result.cluster}")
    print(f"Score: {result.score:.2f}")
    print(f"Content: {result.content[:200]}...")
    print()

Filter by Cluster

Focus on specific topic areas:

# Search only in prose-and-language cluster
results = corpus.search(
    "character voice",
    cluster="prose-and-language",
    limit=5
)

List Available Documents

# See all documents
for doc in corpus.list_documents():
    print(f"{doc['name']}: {doc['title']} ({doc['cluster']})")

# See available clusters
for cluster in corpus.list_clusters():
    print(cluster)
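Because `list_documents()` returns plain dicts, standard Python can reshape the metadata; for example, grouping document names by cluster. This is a sketch that assumes only the `name` and `cluster` keys shown above (the sample data below is illustrative, not real corpus content):

```python
from collections import defaultdict

def docs_by_cluster(documents):
    """Group document names by cluster, using the dict keys shown above."""
    grouped = defaultdict(list)
    for doc in documents:
        grouped[doc["cluster"]].append(doc["name"])
    return dict(grouped)

# Illustrative data shaped like corpus.list_documents() output
docs = [
    {"name": "dialogue_craft", "title": "Dialogue Craft", "cluster": "prose-and-language"},
    {"name": "pacing", "title": "Pacing", "cluster": "structure"},
]
print(docs_by_cluster(docs))
```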

Get Full Document

# Retrieve a specific document
doc = corpus.get_document("dialogue_craft")

if doc:
    print(f"Title: {doc['title']}")
    print(f"Summary: {doc['summary']}")
    print(f"Topics: {', '.join(doc['topics'])}")

    for section in doc['sections']:
        print(f"\n## {section['heading']}")
        print(section['content'][:500])
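The nested document structure flattens naturally into a single string, which is handy for piping into other tools. A sketch assuming only the `title`, `summary`, `topics`, and `sections` keys shown above (the sample dict is illustrative):

```python
def render_document(doc):
    """Flatten a document dict (shape shown above) into one markdown-style string."""
    lines = [
        f"# {doc['title']}",
        "",
        doc["summary"],
        "",
        "Topics: " + ", ".join(doc["topics"]),
    ]
    for section in doc["sections"]:
        lines.append("")
        lines.append(f"## {section['heading']}")
        lines.append(section["content"])
    return "\n".join(lines)

# Illustrative document dict
doc = {
    "title": "Dialogue Craft",
    "summary": "Techniques for subtext.",
    "topics": ["dialogue", "subtext"],
    "sections": [{"heading": "Subtext", "content": "Say less."}],
}
print(render_document(doc))
```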

Context Manager

# Automatically close resources
with Corpus() as corpus:
    results = corpus.search("branching narrative")
    # ... process results

Search Modes

The library supports three search modes:

# Keyword search (default) - fast, exact matching
results = corpus.search("horror atmosphere", mode="keyword")

# Semantic search (requires embeddings) - meaning-based
results = corpus.search("scary mood setting", mode="semantic")

# Hybrid (both modes combined)
results = corpus.search("tension building", mode="hybrid")
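If embeddings may not have been built yet, semantic search can fail; a small wrapper can fall back to keyword mode, which needs no embeddings. This is a sketch, not part of the library's API, and since the library's exception type isn't documented here it catches broadly:

```python
def search_with_fallback(corpus, query):
    """Prefer semantic search; fall back to keyword mode if it fails."""
    try:
        return corpus.search(query, mode="semantic")
    except Exception:
        # Embeddings missing or provider unavailable -- keyword mode always works.
        return corpus.search(query, mode="keyword")
```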

Note

Semantic search requires building embeddings first; see Building Embeddings below.

Building Embeddings

To enable semantic search, first build embeddings with one of the available providers.

Using the CLI

# Auto-detect provider (checks Ollama, OpenAI, SentenceTransformers)
ifcraftcorpus embeddings build

# Use specific provider
ifcraftcorpus embeddings build --provider ollama
ifcraftcorpus embeddings build --provider openai
ifcraftcorpus embeddings build --provider sentence-transformers

# Check provider status
ifcraftcorpus embeddings status

Using the Library

from pathlib import Path
from ifcraftcorpus import Corpus
from ifcraftcorpus.providers import get_embedding_provider

# Auto-detect best available provider
provider = get_embedding_provider()

# Or use a specific provider
from ifcraftcorpus.providers import OllamaEmbeddings, OpenAIEmbeddings
provider = OllamaEmbeddings()  # Uses OLLAMA_HOST env var
provider = OpenAIEmbeddings()  # Uses OPENAI_API_KEY env var

# Build embeddings
corpus = Corpus(
    embeddings_path=Path("embeddings/"),
    embedding_provider=provider
)
corpus.build_embeddings()

# Now semantic search works
results = corpus.search("creating tension", mode="semantic")

Using Docker/MCP

The MCP server can build embeddings on demand:

# Via MCP tool
embeddings_status()  # Check current status
build_embeddings(provider="ollama")  # Build with Ollama

Configure the provider via environment variables:

  • OLLAMA_HOST: Ollama server URL (default: http://localhost:11434)
  • OPENAI_API_KEY: OpenAI API key
  • EMBEDDINGS_PATH: Where to save embeddings (default: ./embeddings)
  • LOG_LEVEL / VERBOSE: Optional logging controls; set LOG_LEVEL=DEBUG or VERBOSE=1 to stream detailed MCP logs to stderr.
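Putting the variables together, a typical environment for the server might look like this (the values are the documented defaults plus debug logging; adjust for your setup):

```shell
# Point the server at a local Ollama instance (documented default)
export OLLAMA_HOST="http://localhost:11434"
# Where embeddings are written (documented default)
export EMBEDDINGS_PATH="./embeddings"
# Stream detailed MCP logs to stderr
export LOG_LEVEL=DEBUG
```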