ReAct Agent Example¶
A tool-using agent built on the Reasoning + Acting (ReAct) pattern.
Overview¶
This example demonstrates:
- Custom tool creation
- ReAct reasoning strategy
- Tool execution and reasoning trace
- Configuration-driven tool loading
Prerequisites¶
# Install Ollama: https://ollama.ai/
# Pull a tool-capable model (required for ReAct with tools)
ollama pull qwen3:8b
# Install dataknobs-bots
pip install dataknobs-bots
What is ReAct?¶
ReAct (Reasoning + Acting) is a prompting pattern where the LLM:
- Thinks - Reasons about what to do next
- Acts - Decides to use a tool
- Observes - Sees the tool result
- Repeats - Until the answer is found
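The loop above can be sketched in a few lines of Python. This is a toy illustration with a scripted stand-in for the LLM, not DynaBot's actual implementation:

```python
from typing import Callable, Dict

def react_loop(
    question: str,
    llm: Callable[[str], dict],
    tools: Dict[str, Callable],
    max_iterations: int = 5,
) -> str:
    """Think -> Act -> Observe, repeated until a final answer (or the cap)."""
    transcript = question
    for _ in range(max_iterations):
        step = llm(transcript)                  # Think: decide the next step
        if step["type"] == "final":             # No tool needed
            return step["answer"]
        tool = tools[step["tool"]]              # Act: call the chosen tool
        observation = tool(**step["args"])
        transcript += f"\nObservation: {observation}"  # Observe, then repeat
    return "Stopped: max_iterations reached"

# Scripted stand-in for the LLM: first requests the calculator, then answers.
def scripted_llm(transcript: str) -> dict:
    if "Observation:" not in transcript:
        return {"type": "act", "tool": "calculator",
                "args": {"a": 15, "b": 24}}
    return {"type": "final", "answer": transcript.split("Observation: ")[-1]}

tools = {"calculator": lambda a, b: a * b}
print(react_loop("What is 15 * 24?", scripted_llm, tools))  # prints: 360
```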
ReAct Flow¶
graph TD
A[User Question] --> B[Think]
B --> C{Need Tool?}
C -->|Yes| D[Act: Use Tool]
C -->|No| E[Final Answer]
D --> F[Observe Result]
F --> B
Configuration¶
Add reasoning and tools sections:
config = {
"llm": {
"provider": "ollama",
"model": "qwen3:8b" # Must support tool/function calling
},
"conversation_storage": {
"backend": "memory"
},
"reasoning": {
"strategy": "react",
"max_iterations": 5, # Maximum reasoning steps
"verbose": True # Show reasoning trace
},
"tools": [
{
"class": "examples.tools.CalculatorTool",
"params": {"precision": 2}
}
]
}
Important: The ReAct strategy with tools requires a model that supports function calling. Models like qwen3:8b, llama3.1:8b, and mistral:7b support this. Models like gemma3 do not support tool calling and will raise a ToolsNotSupportedError.
Creating a Custom Tool¶
Tools implement the Tool interface:
from dataknobs_llm.tools import Tool
from typing import Dict, Any
class CalculatorTool(Tool):
def __init__(self, precision: int = 2):
super().__init__(
name="calculator",
description="Performs basic arithmetic operations"
)
self.precision = precision
@property
def schema(self) -> Dict[str, Any]:
"""JSON schema defining tool parameters."""
return {
"type": "object",
"properties": {
"operation": {
"type": "string",
"enum": ["add", "subtract", "multiply", "divide"]
},
"a": {"type": "number"},
"b": {"type": "number"}
},
"required": ["operation", "a", "b"]
}
async def execute(
self,
operation: str,
a: float,
b: float,
**kwargs
) -> float:
"""Execute the calculation."""
if operation == "add":
return round(a + b, self.precision)
# ... other operations
Complete Code¶
"""ReAct agent example.
This example demonstrates:
- ReAct (Reasoning + Acting) strategy
- Tool definition and registration
- Multi-step problem solving
- Reasoning trace storage
- Verbose logging
Required Ollama model:
    ollama pull qwen3:8b
"""
import asyncio
from typing import Any, Dict
from dataknobs_bots import BotContext, DynaBot
from dataknobs_llm.tools import Tool
# Define custom tools for the agent
class CalculatorTool(Tool):
"""Tool for performing basic arithmetic operations."""
def __init__(self):
super().__init__(
name="calculator",
description="Performs basic arithmetic operations (add, subtract, multiply, divide)",
)
@property
def schema(self) -> Dict[str, Any]:
return {
"type": "object",
"properties": {
"operation": {
"type": "string",
"enum": ["add", "subtract", "multiply", "divide"],
"description": "The arithmetic operation to perform",
},
"a": {
"type": "number",
"description": "First number",
},
"b": {
"type": "number",
"description": "Second number",
},
},
"required": ["operation", "a", "b"],
}
async def execute(self, operation: str, a: float, b: float) -> float:
"""Execute the calculation."""
if operation == "add":
result = a + b
elif operation == "subtract":
result = a - b
elif operation == "multiply":
result = a * b
elif operation == "divide":
if b == 0:
raise ValueError("Cannot divide by zero")
result = a / b
else:
raise ValueError(f"Unknown operation: {operation}")
print(f" → Calculator: {a} {operation} {b} = {result}")
return result
class WeatherTool(Tool):
"""Mock tool for getting weather information."""
def __init__(self):
super().__init__(
name="get_weather",
description="Get current weather information for a location",
)
@property
def schema(self) -> Dict[str, Any]:
return {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name or location",
},
},
"required": ["location"],
}
async def execute(self, location: str) -> Dict[str, Any]:
"""Get mock weather data."""
# Mock weather data
mock_weather = {
"location": location,
"temperature": 72,
"condition": "Partly cloudy",
"humidity": 65,
"wind_speed": 8,
}
print(f" → Weather: {location} is {mock_weather['condition']}, {mock_weather['temperature']}°F")
return mock_weather
class TimeTool(Tool):
"""Tool for getting current time information."""
def __init__(self):
super().__init__(
name="get_time",
description="Get current time in a specific timezone",
)
@property
def schema(self) -> Dict[str, Any]:
return {
"type": "object",
"properties": {
"timezone": {
"type": "string",
"description": "Timezone (e.g., 'UTC', 'America/New_York', 'Europe/London')",
"default": "UTC",
},
},
}
async def execute(self, timezone: str = "UTC") -> str:
"""Get current time (mocked)."""
import datetime
# Simple mock - just return UTC time with timezone label
now = datetime.datetime.now(datetime.timezone.utc)
time_str = now.strftime("%Y-%m-%d %H:%M:%S")
print(f" → Time: {time_str} {timezone}")
return f"{time_str} {timezone}"
async def main():
"""Run a ReAct agent conversation."""
print("=" * 60)
print("ReAct Agent Example")
print("=" * 60)
print()
print("This example shows an agent using ReAct reasoning with tools.")
    print("Required: ollama pull qwen3:8b")
print()
# Configuration with ReAct reasoning
config = {
"llm": {
"provider": "ollama",
            "model": "qwen3:8b",  # must support tool/function calling
"temperature": 0.7,
"max_tokens": 1000,
},
"conversation_storage": {
"backend": "memory",
},
"reasoning": {
"strategy": "react",
"max_iterations": 5,
"verbose": True, # Enable debug logging
"store_trace": True, # Store reasoning trace in metadata
},
"prompts": {
"agent_system": "You are a helpful AI agent with access to tools. "
"When you need to perform calculations, get weather, or check time, "
"use the appropriate tools. Think step by step and explain your reasoning."
},
"system_prompt": {
"name": "agent_system",
},
}
print("Creating ReAct agent with tools...")
bot = await DynaBot.from_config(config)
# Register tools
calculator = CalculatorTool()
weather = WeatherTool()
time_tool = TimeTool()
bot.tool_registry.register_tool(calculator)
bot.tool_registry.register_tool(weather)
bot.tool_registry.register_tool(time_tool)
print("✓ Bot created successfully")
print(f"✓ Reasoning: ReAct (max {config['reasoning']['max_iterations']} iterations)")
print("✓ Tools registered:")
print(f" - {calculator.name}: {calculator.description}")
print(f" - {weather.name}: {weather.description}")
print(f" - {time_tool.name}: {time_tool.description}")
print()
# Create context for this conversation
context = BotContext(
conversation_id="react-agent-001",
client_id="example-client",
user_id="demo-user",
)
# Tasks that require tool use
tasks = [
"What is 15 multiplied by 24?",
"What's the weather like in San Francisco?",
"What time is it in UTC?",
"Calculate 100 divided by 4, then multiply the result by 3",
]
for i, task in enumerate(tasks, 1):
print(f"[Task {i}] User: {task}")
print()
response = await bot.chat(
message=task,
context=context,
)
print(f"[Task {i}] Agent: {response}")
print()
print("-" * 60)
print()
# Add a small delay between tasks
if i < len(tasks):
await asyncio.sleep(2)
print("=" * 60)
print("ReAct agent demonstration complete!")
print()
print("Notice how the agent:")
print("- Identified which tools to use for each task")
print("- Called tools with appropriate parameters")
print("- Reasoned through multi-step problems")
print("- Provided final answers based on tool results")
print()
print("The reasoning trace is stored in conversation metadata")
print("and can be retrieved for audit/debugging purposes.")
if __name__ == "__main__":
asyncio.run(main())
Running the Example¶
Save the complete code above to a Python file (e.g. react_agent.py) and run it with Python. An Ollama server with the qwen3:8b model pulled must be running locally.
Expected Output¶
With verbose: True, you'll see the reasoning trace:
User: What is 15 multiplied by 24?
Thought: I need to use the calculator tool to multiply these numbers.
Action: calculator
Action Input: {"operation": "multiply", "a": 15, "b": 24}
Observation: 360.0
Thought: I now have the answer.
Final Answer: 15 multiplied by 24 equals 360.
Bot: 15 multiplied by 24 equals 360.
Tool Configuration¶
Direct Class Instantiation¶
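Tools can be loaded directly from a fully qualified class path with constructor parameters, as in the configuration shown earlier:

```python
"tools": [
    {
        "class": "examples.tools.CalculatorTool",  # import path to the Tool subclass
        "params": {"precision": 2}                 # keyword arguments for __init__
    }
]
```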
XRef Pattern¶
Define tools once, reuse across configurations:
# config.yaml
tools:
calculator:
class: my_tools.CalculatorTool
params:
precision: 2
bots:
math_bot:
tools:
- xref:tools[calculator]
Built-in Tools¶
KnowledgeSearchTool¶
Automatically available when a knowledge base is enabled. The bot can then search its knowledge base using the knowledge_search tool.
Reasoning Strategies¶
Simple Reasoning (Default)¶
Direct LLM response with no tools. This is what you get when the reasoning section is omitted from the configuration.
ReAct Reasoning¶
Reasoning + Acting with tools:
"reasoning": {
"strategy": "react",
"max_iterations": 5,
"verbose": True,
"store_trace": True # Save reasoning trace
}
Multiple Tools¶
Agents can use multiple tools:
"tools": [
{"class": "tools.CalculatorTool", "params": {}},
{"class": "tools.WeatherTool", "params": {}},
{"class": "tools.WebSearchTool", "params": {}}
]
Best Practices¶
Tool Design¶
- Clear Names - Use descriptive tool names
- Good Descriptions - Help LLM know when to use the tool
- Typed Parameters - Use proper JSON schema
- Error Handling - Handle errors gracefully
- Documentation - Document what the tool does
Reasoning Configuration¶
| max_iterations | Use Case |
|---|---|
| 3 | Simple tools |
| 5 | Standard agents |
| 10 | Complex multi-step tasks |
Model Selection¶
ReAct with tools requires a model that supports function calling:
| Model | Tool Calling | Use Case |
|---|---|---|
| qwen3:8b | Yes | Default — chat, tools, reasoning |
| llama3.1:8b | Yes | Complex reasoning with tools |
| mistral:7b | Yes | General purpose with tools |
| command-r:latest | Yes | Tool use, RAG |
| gemma3:4b | No | Chat only (no tools) |
| gemma3:1b | No | Simple conversations (no tools) |
Key Takeaways¶
- ✅ Tool Integration - Extend bot capabilities
- ✅ ReAct Pattern - Systematic reasoning
- ✅ Configuration-Driven - Load tools from config
- ✅ Visible Reasoning - See how the bot thinks
Common Issues¶
Tool Not Called¶
Problem: Agent doesn't use the tool
Solutions:
- Verify the model supports function calling (use qwen3:8b, llama3.1:8b, or mistral:7b)
- If you see ToolsNotSupportedError, the model cannot do tool calling — switch to a tool-capable model
- Improve tool description
- Increase max_iterations
- Enable verbose mode to see reasoning
Invalid Tool Parameters¶
Problem: LLM provides wrong parameters
Solutions:
- Improve JSON schema descriptions
- Add parameter examples in the tool description
- Validate parameters in execute()
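For the last point, a small standalone validator can reject bad arguments with an actionable message the agent can observe and recover from. This is a sketch; the function name is illustrative and the keys mirror the CalculatorTool schema above:

```python
from typing import Any, Dict

def validate_calculator_args(args: Dict[str, Any]) -> None:
    """Raise ValueError with an actionable message if arguments are invalid."""
    allowed = {"add", "subtract", "multiply", "divide"}
    if args.get("operation") not in allowed:
        raise ValueError(f"operation must be one of {sorted(allowed)}")
    for key in ("a", "b"):
        value = args.get(key)
        # bool is an int subclass in Python; exclude it explicitly
        if isinstance(value, bool) or not isinstance(value, (int, float)):
            raise ValueError(f"{key} must be a number")
```

Call a validator like this at the top of execute() before doing any work.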
What's Next?¶
To set up multi-tenant bots, see the Multi-Tenant Bot Example.
Related Examples¶
- RAG Chatbot - Knowledge base integration
- Custom Tools - Configuration patterns
- Multi-Tenant Bot - Multiple clients