# Quick Start

Get up and running with the LLM package in 5 minutes.
## Installation

```bash
pip install dataknobs-llm
```

Or with specific LLM provider support:
```bash
# OpenAI
pip install dataknobs-llm[openai]

# Anthropic
pip install dataknobs-llm[anthropic]

# All providers
pip install dataknobs-llm[all]
```
## Basic Usage

### 1. Create an LLM Provider

```python
from dataknobs_llm import create_llm_provider, LLMConfig

# OpenAI
config = LLMConfig(provider="openai", api_key="your-api-key")
llm = create_llm_provider(config)

# Anthropic
config = LLMConfig(provider="anthropic", api_key="your-api-key")
llm = create_llm_provider(config)

# With custom configuration
config = LLMConfig(
    provider="openai",
    api_key="your-api-key",
    model="gpt-4",
    temperature=0.7,
)
llm = create_llm_provider(config)
```
### 2. Simple Completion

```python
# Asynchronous (recommended)
response = await llm.complete("What is Python?")
print(response.content)

# Override config per-request (model, temperature, max_tokens, etc.)
response = await llm.complete(
    "Write a creative story",
    config_overrides={"model": "gpt-4-turbo", "temperature": 1.2},
)
```
### 3. Structured Prompts

```python
from pathlib import Path

from dataknobs_llm.prompts import FileSystemPromptLibrary, AsyncPromptBuilder

# Load prompts from filesystem
library = FileSystemPromptLibrary(prompt_dir=Path("prompts/"))
builder = AsyncPromptBuilder(library=library)

# Render and use prompts
prompt = await builder.render_user_prompt(
    "code_review",
    params={"language": "python", "code": "def foo(): pass"},
)
response = await llm.complete(prompt)
```
### 4. Conversations

```python
from dataknobs_llm.conversations import (
    ConversationManager,
    DataknobsConversationStorage,
)
from dataknobs_data.backends import AsyncMemoryDatabase

# Create storage
db = AsyncMemoryDatabase()
storage = DataknobsConversationStorage(db)

# Create conversation
manager = await ConversationManager.create(
    llm=llm,
    prompt_builder=builder,
    storage=storage,
)

# Add user message
await manager.add_message(
    role="user",
    prompt_name="greeting",
    params={"name": "Alice"},
)

# Get assistant response
response = await manager.complete()
print(response.content)

# Continue conversation
await manager.add_message(
    role="user",
    content="Tell me more about Python decorators",
)
response = await manager.complete()
```
## Common Patterns

### RAG (Retrieval-Augmented Generation)

RAG is configured in prompt templates using YAML:

```yaml
# prompts/user/code_question.yaml
template: |
  Answer this {{language}} question:
  {{question}}

  Relevant documentation:
  {{RAG_DOCS}}

rag_configs:
  - adapter_name: docs
    query_template: "{{language}} {{question}}"
    k: 3
    placeholder: "RAG_DOCS"
```
Then use resource adapters to provide the data:
```python
from dataknobs_llm.prompts import InMemoryAdapter

# Create resource adapter with documents
adapter = InMemoryAdapter(
    documents=[
        {"id": "1", "content": "Python is a programming language"},
        {"id": "2", "content": "Python supports decorators"},
    ]
)

# Use in prompt builder
builder = AsyncPromptBuilder(
    library=library,
    adapters={"docs": adapter},
)

# RAG automatically retrieves and injects relevant docs
result = await builder.render_user_prompt(
    "code_question",
    params={"language": "python", "question": "What are decorators?"},
)
```
### A/B Testing

```python
from dataknobs_llm.prompts import (
    VersionManager,
    ABTestManager,
    PromptVariant,
)

# Create versions
vm = VersionManager()
v1 = await vm.create_version(
    name="greeting",
    prompt_type="system",
    template="Hello {{name}}!",
    version="1.0.0",
)
v2 = await vm.create_version(
    name="greeting",
    prompt_type="system",
    template="Hi {{name}}, welcome!",
)

# Create A/B test
ab = ABTestManager()
exp = await ab.create_experiment(
    name="greeting",
    prompt_type="system",
    variants=[
        PromptVariant("1.0.0", 0.5, "Control"),
        PromptVariant("1.0.1", 0.5, "Treatment"),
    ],
)

# Get variant for user (sticky)
variant = await ab.get_variant_for_user(exp.experiment_id, "user123")
```
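"Sticky" means a given user always receives the same variant for a given experiment. The library's actual bucketing isn't shown here, but the usual technique hashes the experiment and user ids to a fixed point in `[0, 1]` and maps it onto the cumulative variant weights — a minimal sketch, not the library's implementation:

```python
import hashlib

def sticky_variant(experiment_id: str, user_id: str, weights: dict[str, float]) -> str:
    """Deterministically bucket a user into a weighted variant."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    cumulative = 0.0
    for version, weight in weights.items():
        cumulative += weight
        if point <= cumulative:
            return version
    return version  # guard against floating-point rounding

weights = {"1.0.0": 0.5, "1.0.1": 0.5}
# Repeated calls for the same user always return the same variant
print(sticky_variant("greeting-exp", "user123", weights))
```

Because the assignment is a pure function of the ids, no per-user state needs to be stored to keep the experience consistent.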
## Next Steps

### Learn More
- Prompt Engineering Guide - Master prompt templates
- Conversation Management - Multi-turn conversations
- Config Overrides - Per-request configuration
- Versioning & A/B Testing - Track and test prompts
- Performance & Benchmarking - Optimize your application
### Examples
- Basic Usage Examples - Common use cases
- Advanced Prompting - Complex templates
- Conversation Flows - FSM-based workflows
- A/B Testing - Running experiments
### API Reference
- LLM API - LLM provider interface
- Prompts API - Prompt library and builders
- Conversations API - Conversation management
- Versioning API - Version and experiment management
## Configuration

### Environment Variables

```bash
# OpenAI
export OPENAI_API_KEY=your-key

# Anthropic
export ANTHROPIC_API_KEY=your-key

# Prompt directory
export PROMPT_DIR=/path/to/prompts
```
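When loading keys from the environment, failing fast with a clear message beats a provider error deep inside a request. The `require_env` helper below is hypothetical, not part of the package:

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value or fail with an actionable message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; export it or pass api_key to LLMConfig explicitly"
        )
    return value

# Example: api_key = require_env("OPENAI_API_KEY")
```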
### File Structure

Organize your prompts in a directory structure:

```
prompts/
├── system/
│   ├── greeting.yaml
│   └── code_reviewer.yaml
└── user/
    ├── code_question.yaml
    └── general_question.yaml
```
Example prompt file (`system/greeting.yaml`):
```yaml
template: |
  You are a friendly assistant.
  Greet the user named {{name}}.

defaults:
  name: User

validation:
  required:
    - name
```
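The `defaults` and `validation.required` sections interact in a simple way: caller-supplied params are merged over the defaults, then rendering fails if a required key is still missing. A rough sketch of that resolution (illustrative only, not the library's implementation):

```python
def resolve_params(params: dict, defaults: dict, required: list[str]) -> dict:
    """Merge caller params over defaults, then enforce required keys."""
    merged = {**defaults, **params}
    missing = [key for key in required if key not in merged]
    if missing:
        raise ValueError(f"Missing required params: {missing}")
    return merged

# `name` is required but has a default, so an empty call still succeeds
print(resolve_params({}, defaults={"name": "User"}, required=["name"]))  # → {'name': 'User'}
```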
## Troubleshooting

### Common Issues
**Import Error**: Make sure you've installed the package and any required extras:
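```bash
pip install dataknobs-llm[openai]   # or [anthropic] / [all]
```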
**API Key Not Found**: Set environment variables or pass explicitly:
```python
from dataknobs_llm import create_llm_provider, LLMConfig

config = LLMConfig(provider="openai", api_key="your-key")
llm = create_llm_provider(config)
```
**Template Not Found**: Check your prompt directory path:
### Getting Help
- Documentation: Full Documentation
- GitHub: Issues
- Examples: See the examples directory
## What's Next?
Now that you have the basics, explore:
- Advanced Prompting: Learn Jinja2 templating, RAG integration, and conditional logic
- Conversation Trees: Branch conversations and explore alternatives
- Performance Optimization: Use benchmarking and caching for production
- A/B Testing: Run experiments to find the best prompts