Prompt Customization
This guide covers customizing framework prompts for domain-specific applications. The framework’s prompt management system uses dependency injection to keep generic framework functionality cleanly separated from domain-specific prompt customization.
Architecture Overview
The prompt system uses a provider architecture where applications register custom prompt implementations that the framework components request through dependency injection. This enables applications to provide domain-specific prompts while the framework remains generic.
Applications can override any prompt builder with domain-specific implementations while maintaining full compatibility with all framework components.
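In code, the injection seam looks roughly like the sketch below. It is illustrative only: the constructor-injection wiring and the OrchestratorNode name are assumptions, not the framework’s documented internals; only FrameworkPromptProvider and its getter methods come from this guide.

from framework.prompts import FrameworkPromptProvider

class OrchestratorNode:
    """Hypothetical framework component; stays domain-agnostic."""

    def __init__(self, prompts: FrameworkPromptProvider):
        # The provider is injected, so this node never imports
        # application-specific prompt code.
        self._prompts = prompts

    def build_system_prompt(self, **kwargs) -> str:
        builder = self._prompts.get_orchestrator_prompt_builder()
        return builder.get_system_instructions(**kwargs)

Because the node depends only on the provider interface, swapping in a domain-specific provider changes the prompts without touching framework code.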
Key Benefits:
Domain Agnostic: Framework remains generic while supporting specialized prompts
No Circular Dependencies: Clean separation through dependency injection
Flexible Composition: Modular prompt building with optional components
Development Support: Integrated debugging and prompt inspection tools
Quick Start: Custom Prompt Provider
Here’s a minimal example of creating a custom prompt provider:
from framework.prompts import FrameworkPromptBuilder, FrameworkPromptProvider
from framework.prompts.defaults import DefaultPromptProvider

class MyDomainPromptBuilder(FrameworkPromptBuilder):
    def get_role_definition(self) -> str:
        return "You are a domain-specific expert system."

    def get_instructions(self) -> str:
        return "Provide analysis using domain-specific terminology."

class MyAppPromptProvider(FrameworkPromptProvider):
    def __init__(self):
        # Use custom builders for key prompts
        self._orchestrator = MyDomainPromptBuilder()
        # Use framework defaults for others
        self._defaults = DefaultPromptProvider()

    def get_orchestrator_prompt_builder(self):
        return self._orchestrator

    def get_classification_prompt_builder(self):
        # Delegate to framework default
        return self._defaults.get_classification_prompt_builder()

    # ... implement other required methods
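A quick smoke test of this provider (illustrative; it exercises only the methods defined above and assumes nothing else about the interface):

provider = MyAppPromptProvider()

# Orchestration resolves to the custom domain builder...
role = provider.get_orchestrator_prompt_builder().get_role_definition()
assert role == "You are a domain-specific expert system."

# ...while classification falls through to the framework default.
default_builder = provider.get_classification_prompt_builder()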
Development and Debugging
All prompts automatically integrate with the framework’s debug system for development visibility. The system provides debugging through both console output and file persistence, which supports prompt development, troubleshooting, and optimization.
Configuration Options
The debug system is controlled through the development.prompts configuration section in your config.yml file:
development:
  prompts:
    # Console output with detailed formatting and separators
    show_all: true
    # File output to prompts directory for inspection
    print_all: true
    # File naming strategy
    latest_only: false  # true: latest.md files, false: timestamped files
Console Output
When show_all: true is set, all generated prompts are displayed in the console with clear visual separators and metadata:
================================================================================
🔍 DEBUG PROMPT: orchestrator_system (DefaultOrchestratorPromptBuilder)
================================================================================
You are an intelligent orchestration agent for the ALS Expert system...
================================================================================
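The separator format above can be reproduced with a few lines of Python; this is a sketch of the idea, not the framework’s actual debug implementation:

def debug_print_prompt(prompt_text: str, name: str, builder_name: str) -> None:
    """Print a prompt with the visual separators shown above (illustrative)."""
    separator = "=" * 80
    print(separator)
    print(f"🔍 DEBUG PROMPT: {name} ({builder_name})")
    print(separator)
    print(prompt_text)
    print(separator)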
File Persistence
When print_all: true is enabled, prompts are automatically saved to the configured prompts_dir with rich metadata headers:
Timestamped files (latest_only: false): Each prompt generation creates a new file with a timestamp.
Format: {name}_{YYYYMMDD_HHMMSS}.md
Use case: Track prompt evolution over time, compare versions, debug prompt changes
Example: orchestrator_system_20241215_143022.md

Latest files (latest_only: true): Each prompt generation overwrites the previous version, keeping only the current state.
Format: {name}_latest.md
Use case: Always see current prompt without file clutter
Example: orchestrator_system_latest.md
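The two naming strategies reduce to a small amount of path logic. A minimal sketch, assuming a hypothetical helper (the framework’s own file-writing code is not shown in this guide):

import os
from datetime import datetime

def prompt_file_path(prompts_dir: str, name: str, latest_only: bool) -> str:
    """Build the output path for a saved prompt (illustrative)."""
    if latest_only:
        filename = f"{name}_latest.md"  # overwritten on every generation
    else:
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"{name}_{stamp}.md"  # one new file per generation
    return os.path.join(prompts_dir, filename)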
Metadata Headers
All saved prompt files include comprehensive metadata for traceability:
# PROMPT METADATA
# Generated: 2024-12-15 14:30:22
# Name: orchestrator_system
# Builder: DefaultOrchestratorPromptBuilder
# File: /path/to/prompts/orchestrator_system_latest.md
# Latest Only: true
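A header like this could be emitted by a helper along the following lines; the function is hypothetical and simply mirrors the fields shown above:

from datetime import datetime

def write_prompt_file(path: str, name: str, builder_name: str,
                      prompt_text: str, latest_only: bool) -> None:
    """Write a prompt file with a metadata header (illustrative)."""
    header = "\n".join([
        "# PROMPT METADATA",
        f"# Generated: {datetime.now():%Y-%m-%d %H:%M:%S}",
        f"# Name: {name}",
        f"# Builder: {builder_name}",
        f"# File: {path}",
        f"# Latest Only: {str(latest_only).lower()}",
    ])
    with open(path, "w", encoding="utf-8") as f:
        f.write(header + "\n\n" + prompt_text)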
Provider Interface Implementation
Applications implement the FrameworkPromptProvider interface to provide domain-specific prompts to framework infrastructure. All methods are required and must return FrameworkPromptBuilder instances.
Note
Applications typically inherit from DefaultPromptProvider and override only the prompt builders they want to customize, using framework defaults for the rest.
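For example, a provider that customizes only the orchestrator prompt can stay very small. This is a minimal sketch; it assumes DefaultPromptProvider already implements every method of the interface, and it reuses MyDomainPromptBuilder from the Quick Start:

from framework.prompts.defaults import DefaultPromptProvider

class MinimalPromptProvider(DefaultPromptProvider):
    """Inherit every framework default; override a single builder."""

    def get_orchestrator_prompt_builder(self):
        return MyDomainPromptBuilder()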
Complete Provider Interface
Controls execution planning and coordination:
def get_orchestrator_prompt_builder(self) -> FrameworkPromptBuilder:
    """
    Return prompt builder for orchestration operations.

    Used by the orchestrator node to create execution plans
    and coordinate capability execution sequences.
    """
Handles task parsing and structuring:
def get_task_extraction_prompt_builder(self) -> FrameworkPromptBuilder:
    """
    Return prompt builder for task extraction operations.

    Used by task extraction node to parse user requests
    into structured, actionable tasks.
    """
Manages request classification and routing:
def get_classification_prompt_builder(self) -> FrameworkPromptBuilder:
    """
    Return prompt builder for classification operations.

    Used by classification node to determine which capabilities
    should handle specific user requests.
    """
Controls final response formatting:
def get_response_generation_prompt_builder(self) -> FrameworkPromptBuilder:
    """
    Return prompt builder for response generation.

    Used by response generation to format final answers
    using capability results and conversation context.
    """
Handles error classification and recovery:
def get_error_analysis_prompt_builder(self) -> FrameworkPromptBuilder:
    """
    Return prompt builder for error analysis operations.

    Used by error handling system to classify errors
    and determine recovery strategies.
    """
Manages clarification requests:
def get_clarification_prompt_builder(self) -> FrameworkPromptBuilder:
    """
    Return prompt builder for clarification requests.

    Used when the system needs additional information
    from users to complete tasks.
    """
Controls memory operations:
def get_memory_extraction_prompt_builder(self) -> FrameworkPromptBuilder:
    """
    Return prompt builder for memory extraction operations.

    Used by memory capability to extract and store
    relevant information from conversations.
    """
Handles temporal query parsing:
def get_time_range_parsing_prompt_builder(self) -> FrameworkPromptBuilder:
    """
    Return prompt builder for time range parsing.

    Used by time parsing capability to understand
    temporal references in user queries.
    """
Controls code generation and execution:
def get_python_prompt_builder(self) -> FrameworkPromptBuilder:
    """
    Return prompt builder for Python operations.

    Used by Python capability for code generation,
    analysis, and execution guidance.
    """
Default Builder Reference
The framework provides individual default prompt builder implementations organized by framework node. Each node has its own specialized prompt builder that applications can use directly or extend.
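A default builder can also serve as a base class. As a hedged sketch (the import path is assumed from the defaults package shown earlier, and the override point mirrors the get_role_definition hook in the implementations below):

from framework.prompts.defaults import DefaultOrchestratorPromptBuilder

class MyOrchestratorPromptBuilder(DefaultOrchestratorPromptBuilder):
    """Keep the default planning instructions; swap in a domain role."""

    def get_role_definition(self) -> str:
        return "You are an expert execution planner for my domain's systems."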
View Default Implementation Examples
"""Default orchestrator prompts."""
import textwrap
from typing import Optional, List
from framework.prompts.base import FrameworkPromptBuilder
from framework.context import ContextManager
from framework.base import BaseCapability, OrchestratorGuide, OrchestratorExample
class DefaultOrchestratorPromptBuilder(FrameworkPromptBuilder):
"""Default orchestrator prompt builder."""
PROMPT_TYPE = "orchestrator"
def get_role_definition(self) -> str:
"""Get the generic role definition."""
return "You are an expert execution planner for the assistant system."
def get_task_definition(self) -> str:
"""Get the task definition."""
return "TASK: Create a detailed execution plan that breaks down the user's request into specific, actionable steps."
def get_instructions(self) -> str:
"""Get the generic planning instructions."""
return textwrap.dedent(f"""
Each step must follow the PlannedStep structure:
- context_key: Unique identifier for this step's output (e.g., "data_sources", "historical_data")
- capability: Type of execution node (determined based on available capabilities)
- task_objective: Complete, self-sufficient description of what this step must accomplish
- expected_output: Context type key (e.g., "HISTORICAL_DATA", "SYSTEM_STATUS")
- success_criteria: Clear criteria for determining step success
- inputs: List of input dictionaries mapping context types to context keys:
[
{{"DATA_QUERY_RESULTS": "some_data_context"}},
{{"ANALYSIS_RESULTS": "some_analysis_context"}}
]
**CRITICAL**: Include ALL required context sources! Complex operations often need multiple inputs.
- parameters: Optional dict for step-specific configuration (e.g., {{"precision_ms": 1000}})
Planning Guidelines:
1. Dependencies between steps (ensure proper sequencing)
2. Cost optimization (avoid unnecessary expensive operations)
3. Clear success criteria for each step
4. Proper input/output schema definitions
5. Always reference available context using exact keys shown in context information
6. **CRITICAL**: End plans with either "respond" or "clarify" step to ensure user gets feedback
The execution plan should be an ExecutionPlan containing a list of PlannedStep json objects.
Focus on being practical and efficient while ensuring robust execution.
Be factual and realistic about what can be accomplished.
Never plan for simulated or fictional data - only real system operations.
""").strip()
def _get_dynamic_context(self, context_manager: Optional[ContextManager] = None, **kwargs) -> Optional[str]:
"""Get dynamic context showing available context data."""
if context_manager and context_manager.get_raw_data():
return self._build_context_section(context_manager)
return None
def get_system_instructions(
self,
active_capabilities: List[BaseCapability] = None,
context_manager: ContextManager = None,
task_depends_on_chat_history: bool = False,
task_depends_on_user_memory: bool = False,
error_context: Optional[str] = None,
**kwargs
) -> str:
"""
Get system instructions for orchestrator agent configuration.
Args:
active_capabilities: List of active capabilities
context_manager: Current context manager with available data
task_depends_on_chat_history: Whether task builds on previous conversation context
task_depends_on_user_memory: Whether task depends on user memory information
error_context: Formatted error context from previous execution failure (for replanning)
Returns:
Complete orchestrator prompt text
"""
if not active_capabilities:
active_capabilities = []
# Build the main prompt sections
prompt_sections = []
# 1. Add base orchestrator prompt (role, task, instructions)
# Build directly without textwrap.dedent to avoid indentation issues
base_prompt_parts = [
self.get_role_definition(),
self.get_task_definition(),
self.get_instructions()
]
base_prompt = "\n\n".join(base_prompt_parts)
prompt_sections.append(base_prompt)
# 2. Add context reuse guidance if task builds on previous context
context_guidance = self._build_context_reuse_guidance(
task_depends_on_chat_history,
task_depends_on_user_memory
)
if context_guidance:
prompt_sections.append(context_guidance)
# 3. Add error context for replanning if available
if error_context:
error_section = textwrap.dedent(f"""
**REPLANNING CONTEXT:**
The previous execution failed and needs replanning. Consider this error information when creating the new plan:
{error_context}
**Replanning Guidelines:**
- Analyze the error context to understand why the previous approach failed
- Consider alternative capabilities or different sequencing to avoid the same issue
- If required context is missing, include clarification steps to gather needed information
- Learn from the technical details and suggestions provided in the error context
- Adapt the execution strategy based on the specific failure mode identified""").strip()
prompt_sections.append(error_section)
# 4. Add context information if available
if context_manager and context_manager.get_raw_data():
context_section = self._build_context_section(context_manager)
if context_section:
prompt_sections.append(context_section)
# 5. Add capability-specific prompts with examples
capability_sections = self._build_capability_sections(active_capabilities)
prompt_sections.extend(capability_sections)
# Combine all sections
final_prompt = "\n\n".join(prompt_sections)
# Debug: Print prompt if enabled (same as base class)
self.debug_print_prompt(final_prompt)
return final_prompt
def _build_context_reuse_guidance(
self,
task_depends_on_chat_history: bool,
task_depends_on_user_memory: bool
) -> Optional[str]:
"""Build context reuse guidance section when task builds on previous context."""
if not task_depends_on_chat_history and not task_depends_on_user_memory:
return None
guidance_parts = []
if task_depends_on_chat_history:
guidance_parts.append(
"• **PRIORITIZE CONTEXT REUSE**: This task builds on previous conversation context. "
"Look for existing context data that can be reused instead of recreating it."
)
if task_depends_on_user_memory:
guidance_parts.append(
"• **LEVERAGE USER MEMORY**: This task depends on user memory information. "
"Check for existing memory context before planning new retrieval steps."
)
guidance_parts.append(
"• **EFFICIENCY FIRST**: Avoid redundant context creation when suitable data already exists. "
"Reference existing context keys in your step inputs."
)
guidance_text = "\n".join(guidance_parts)
return textwrap.dedent(f"""
**CONTEXT REUSE GUIDANCE:**
{guidance_text}
""").strip()
def _build_context_section(self, context_manager: ContextManager) -> Optional[str]:
"""Build the context section of the prompt."""
context_data = context_manager.get_raw_data()
if not context_data:
return None
# Create a simple dictionary showing context_type -> [list of available keys]
context_dict = {}
for context_type, contexts in context_data.items():
context_dict[context_type] = list(contexts.keys())
# Format as a clean dictionary representation
formatted_lines = ["["]
for context_type, keys in context_dict.items():
if len(keys) == 1:
formatted_lines.append(f' "{context_type}": "{keys[0]}",')
else:
keys_str = ", ".join(f'"{key}"' for key in keys)
formatted_lines.append(f' "{context_type}": [{keys_str}],')
# Remove trailing comma from last line and close brace
if len(formatted_lines) > 1:
formatted_lines[-1] = formatted_lines[-1].rstrip(',')
formatted_lines.append("]")
formatted_context = '\n'.join(formatted_lines)
return textwrap.dedent(f"""
**AVAILABLE CONTEXT:**
The following context data is already available for use in your execution plan:
{formatted_context}
""").strip()
def _build_capability_sections(self, active_capabilities: List[BaseCapability]) -> List[str]:
"""Build capability-specific sections with examples."""
sections = []
# Group capabilities by order for proper sequencing
capability_prompts = []
for capability in active_capabilities:
if capability.orchestrator_guide:
capability_prompts.append((capability, capability.orchestrator_guide))
# Sort by priority (lower priority = higher priority)
sorted_prompts = sorted(capability_prompts, key=lambda p: p[1].priority)
# Add header for capability sections
if sorted_prompts:
sections.append("# CAPABILITY PLANNING GUIDELINES")
# Build each capability section with clear separators
for i, (capability, orchestrator_guide) in enumerate(sorted_prompts):
if orchestrator_guide.instructions: # Only include non-empty prompts
# Add capability name header
capability_name = capability.__class__.__name__.replace('Capability', '')
section_text = f"## {capability_name}\n{orchestrator_guide.instructions}"
# Add formatted examples if they exist
if orchestrator_guide.examples:
examples_text = OrchestratorExample.format_examples_for_prompt(orchestrator_guide.examples)
section_text += examples_text
sections.append(section_text)
return sections
"""
Task Extraction Prompt Builder - Application-agnostic prompts for task extraction
"""
from __future__ import annotations
from typing import List, Dict, Any, Optional
from dataclasses import dataclass, field
from pydantic import BaseModel, Field
from framework.state import MessageUtils, ChatHistoryFormatter, UserMemories
from langchain_core.messages import BaseMessage
from ..base import FrameworkPromptBuilder
from framework.base import BaseExample
@dataclass
class TaskExtractionExample(BaseExample):
"""Example for task extraction prompt."""
def __init__(self, messages: List[BaseMessage], user_memory: UserMemories, expected_output: 'ExtractedTask'):
self.messages = messages
self.user_memory = user_memory
self.expected_output = expected_output
def format_for_prompt(self) -> str:
"""Format this example for inclusion in prompts."""
# Format chat history using native message formatter
chat_formatted = ChatHistoryFormatter.format_for_llm(self.messages)
# Format user memory
memory_formatted = self.user_memory.format_for_prompt()
return f"""
**Chat History:**
{chat_formatted}
**User Memory:**
{memory_formatted if memory_formatted else "No stored memories"}
**Expected Output:**
{self.expected_output.format_for_prompt()}
"""
class ExtractedTask(BaseModel):
"""Task extraction result."""
task: str = Field(description="The actionable task extracted from the conversation")
depends_on_chat_history: bool = Field(description="Whether the task depends on previous conversation context")
depends_on_user_memory: bool = Field(description="Whether the task depends on stored user memory")
def format_for_prompt(self) -> str:
return f"""
Task: {self.task}
Depends on Chat History: {self.depends_on_chat_history}
Depends on User Memory: {self.depends_on_user_memory}
"""
class DefaultTaskExtractionPromptBuilder(FrameworkPromptBuilder):
"""Framework prompt builder for task extraction."""
def __init__(self):
super().__init__()
self.examples = []
self._load_examples()
def _load_examples(self):
"""Load task extraction examples with native LangGraph messages."""
# Examples without memory first
# Simple follow-up question requiring temporal reference resolution
self.examples.append(TaskExtractionExample(
messages=[
MessageUtils.create_user_message("What's the current system status?"),
MessageUtils.create_assistant_message("The current system status shows all services running normally. CPU usage is at 45%, checked at 14:23:05."),
MessageUtils.create_user_message("What was it an hour ago?"),
],
user_memory=UserMemories(entries=[]),
expected_output=ExtractedTask(
task="Retrieve historical system status and CPU usage data from approximately one hour ago (around 13:23:05)",
depends_on_chat_history=True,
depends_on_user_memory=False
)
))
# Complex multi-turn technical troubleshooting with progressive problem refinement
self.examples.append(TaskExtractionExample(
messages=[
MessageUtils.create_user_message("I'm seeing some instability in our database cluster"),
MessageUtils.create_assistant_message("I can help analyze the database cluster stability. Could you be more specific about what type of instability you're observing?"),
MessageUtils.create_user_message("The response times are showing oscillations"),
MessageUtils.create_assistant_message("I'll need to look at the performance metrics. Which specific nodes in the cluster are you concerned about?"),
MessageUtils.create_user_message("Mainly nodes DB-01 and DB-02, the oscillations started around 2 hours ago"),
MessageUtils.create_assistant_message("I found response time oscillations in DB-01 and DB-02 starting at 12:15. The pattern appears to be cyclical with 30-second intervals. Here's the analysis showing clear oscillatory behavior with response times varying between 50ms and 300ms."),
MessageUtils.create_user_message("Can you check if there were any configuration changes to the load balancer around that time?"),
],
user_memory=UserMemories(entries=[]),
expected_output=ExtractedTask(
task="Investigate load balancer configuration changes around 12:15 to correlate with observed database response time oscillations",
depends_on_chat_history=True,
depends_on_user_memory=False
)
))
# Reference resolution requiring extraction of specific values from previous analysis
self.examples.append(TaskExtractionExample(
messages=[
MessageUtils.create_user_message("Show me the system uptime trend for the last 24 hours"),
MessageUtils.create_assistant_message("Here's the system uptime data for the past 24 hours. The trend shows generally stable performance with 99.8% uptime, with a notable dip at 03:17 where it dropped to 95.2% before recovering by 04:30."),
MessageUtils.create_user_message("That dip around 3 AM is concerning"),
MessageUtils.create_assistant_message("Yes, I see the uptime dropped from 99.8% to 95.2% at 03:17. This represents a significant 4.6 percentage point decrease in system uptime. Would you like me to investigate potential causes?"),
MessageUtils.create_user_message("Please do that, and also compare it to the same time period last week"),
],
user_memory=UserMemories(entries=[]),
expected_output=ExtractedTask(
task="Investigate the system uptime drop from 99.8% to 95.2% that occurred at 03:17 today, and perform comparative analysis with the same time period from exactly one week ago",
depends_on_chat_history=True,
depends_on_user_memory=False
)
))
# Pure conversational query requiring no technical data or analysis
self.examples.append(TaskExtractionExample(
messages=[
MessageUtils.create_user_message("Hi, what can you help me with?"),
MessageUtils.create_assistant_message("I'm your workflow automation assistant! I can help with data analysis, system monitoring, process automation, and much more. I can retrieve historical data, analyze trends, troubleshoot issues, and provide insights about your systems."),
MessageUtils.create_user_message("That's great. What's the difference between you and the other assistants?"),
],
user_memory=UserMemories(entries=[]),
expected_output=ExtractedTask(
task="Explain the differences and unique capabilities of this workflow automation assistant compared to other available assistants in the system",
depends_on_chat_history=False,
depends_on_user_memory=False
)
))
# Fresh data request with no previous context needed
self.examples.append(TaskExtractionExample(
messages=[
MessageUtils.create_user_message("Can you analyze the performance metrics for our web servers?"),
],
user_memory=UserMemories(entries=[]),
expected_output=ExtractedTask(
task="Analyze performance metrics for web servers",
depends_on_chat_history=False,
depends_on_user_memory=False
)
))
# Fresh data request with no previous context needed
self.examples.append(TaskExtractionExample(
messages=[
MessageUtils.create_user_message("What tools do you have?"),
],
user_memory=UserMemories(entries=[]),
expected_output=ExtractedTask(
task="List all the tools you have",
depends_on_chat_history=False,
depends_on_user_memory=False
)
))
# Examples with memory
# Memory-informed request referring to previously saved information
self.examples.append(TaskExtractionExample(
messages=[
MessageUtils.create_user_message("Check if that problematic pattern I saved is happening again"),
],
user_memory=UserMemories(entries=[
"[2025-01-15 14:23] Database performance drops consistently at 3:15 AM every Tuesday - correlation with backup scheduling",
"[2025-01-16 09:45] Web server response times spike when CPU usage exceeds 85%",
"[2025-01-17 11:30] Important: Cache invalidation causes temporary performance degradation - need 30min recovery time"
]),
expected_output=ExtractedTask(
task="Monitor for database performance drops around 3:15 AM (Tuesday pattern), web server response time spikes when CPU usage > 85%, and performance degradation following cache invalidation",
depends_on_chat_history=False,
depends_on_user_memory=True
)
))
# Memory helps disambiguate vague reference
self.examples.append(TaskExtractionExample(
messages=[
MessageUtils.create_user_message("How is that critical metric doing today?"),
],
user_memory=UserMemories(entries=[
"[2025-01-14 16:42] Critical metric to monitor: API response time - has been unstable lately",
"[2025-01-15 08:15] Reminder: Weekly check needed for database connection pool utilization trends",
"[2025-01-16 13:20] Follow up on load balancer configuration effectiveness"
]),
expected_output=ExtractedTask(
task="Check current status and recent behavior of API response time metric, focusing on stability assessment",
depends_on_chat_history=False,
depends_on_user_memory=True
)
))
# Memory provides context for comparative analysis request
self.examples.append(TaskExtractionExample(
messages=[
MessageUtils.create_user_message("Compare today's performance with my baseline measurements"),
],
user_memory=UserMemories(entries=[
"[2025-01-10 10:30] Baseline established: System uptime 99.2±0.1%, response time 145±25 ms",
"[2025-01-10 10:35] Baseline CPU usage: <75% average across all servers during normal operations",
"[2025-01-12 14:15] Good performance day: uptime 99.7%, very stable response times, no scaling adjustments needed"
]),
expected_output=ExtractedTask(
task="Compare today's system uptime and response time performance against baseline of 99.2±0.1% and 145±25 ms, and assess CPU usage against <75% baseline from normal operations",
depends_on_chat_history=False,
depends_on_user_memory=True
)
))
# Memory helps resolve time-specific reference
self.examples.append(TaskExtractionExample(
messages=[
MessageUtils.create_user_message("Is that maintenance window issue resolved?"),
],
user_memory=UserMemories(entries=[
"[2025-01-13 22:45] Next Tuesday 2AM maintenance: Database upgrade work on cluster 2, expect service interruption",
"[2025-01-14 15:30] Maintenance concern: Last time database work caused connection pool instability in adjacent services",
"[2025-01-15 08:00] Post-maintenance checklist: Verify database connections for services 1-3, check query performance consistency"
]),
expected_output=ExtractedTask(
task="Verify resolution of database upgrade maintenance issues from Tuesday 2AM work on cluster 2, specifically checking connection pool stability for services 1-3 and query performance consistency",
depends_on_chat_history=False,
depends_on_user_memory=True
)
))
def get_role_definition(self) -> str:
"""Get the role definition (generic for task extraction)."""
return "Convert chat conversations into actionable task descriptions."
def get_instructions(self) -> str:
"""Get the generic task extraction instructions."""
return """
Core requirements:
• Create self-contained task descriptions executable without conversation context
• Resolve temporal references ("an hour ago", "yesterday") to specific times/values
• Extract specific details and parameters from previous responses
• Determine if task builds on previous conversation context
• Consider available data sources when interpreting requests
• Set depends_on_user_memory=true only when the task directly incorporates specific information from user memory
""".strip()
def get_system_instructions(self, messages: List[BaseMessage], retrieval_result=None) -> str:
"""Get system instructions for task extraction agent configuration.
:param messages: Native LangGraph messages to extract task from
:param retrieval_result: Optional data retrieval result
:return: Complete prompt for task extraction
"""
examples_text = "\n\n".join([
f"## Example {i+1}:\n{example.format_for_prompt()}"
for i, example in enumerate(self.examples)
])
# Format the actual chat history using native message formatter
chat_formatted = ChatHistoryFormatter.format_for_llm(messages)
# Add data source context if available
data_context = ""
if retrieval_result and retrieval_result.has_data:
data_context = f"\n\n**Available Data Sources:**\n{retrieval_result.get_summary()}"
return f"""
You are a task extraction system that analyzes chat history and user memory to extract actionable tasks.
Your job is to:
1. Understand what the user is asking for
2. Extract a clear, actionable task
3. Determine if the task depends on chat history context
4. Determine if the task depends on user memory
## Guidelines:
- Extract the core task the user wants accomplished
- Set depends_on_chat_history=True if the task references previous messages or needs conversation context
- Set depends_on_user_memory=True if the task references stored user information or patterns
- Be specific and actionable in task descriptions
- Consider the full conversation context when determining dependencies
## Examples:
{examples_text}
## Current Chat History:
{chat_formatted}{data_context}
## User Memory:
No stored memories
Now extract the task from the provided chat history and user memory.
"""
"""Default classification prompt implementation."""
import json
import textwrap
from typing import Optional, Dict
from framework.prompts.base import FrameworkPromptBuilder
class DefaultClassificationPromptBuilder(FrameworkPromptBuilder):
"""Default classification prompt builder."""
PROMPT_TYPE = "classification"
def get_role_definition(self) -> str:
"""Get the role definition."""
return "You are an expert task classification assistant."
def get_task_definition(self) -> str:
"""Get the task definition."""
return "Your goal is to determine if a user's request requires a certain capability."
def get_instructions(self) -> str:
"""Get the classification instructions."""
return textwrap.dedent("""
Based on the instructions and examples, you must output a JSON object with a key "is_match": A boolean (true or false) indicating if the user's request matches the capability.
Respond ONLY with the JSON object. Do not provide any explanation, preamble, or additional text.
""").strip()
def _get_dynamic_context(self,
capability_instructions: str = "",
classifier_examples: str = "",
context: Optional[Dict] = None,
previous_failure: Optional[str] = None,
**kwargs) -> str:
"""Build dynamic context with capability info and context."""
sections = []
# Capability instructions
if capability_instructions:
sections.append(f"Here is the capability you need to assess:\n{capability_instructions}")
# Examples
if classifier_examples:
sections.append(f"Examples:\n{classifier_examples}")
# Previous execution context
if context:
sections.append(f"Previous execution context:\n{json.dumps(context, indent=4)}")
# Previous failure context
if previous_failure:
sections.append(f"Previous approach failed: {previous_failure}")
return "\n\n".join(sections)
def get_system_instructions(self,
capability_instructions: str = "",
classifier_examples: str = "",
context: Optional[Dict] = None,
previous_failure: Optional[str] = None,
**kwargs) -> str:
"""Get system instructions for task classification agent configuration."""
sections = []
# Role and instructions
sections.append(self.get_role_definition())
sections.append(self.get_task_definition())
sections.append(self.get_instructions())
# Dynamic context
dynamic_context = self._get_dynamic_context(
capability_instructions=capability_instructions,
classifier_examples=classifier_examples,
context=context,
previous_failure=previous_failure,
**kwargs
)
if dynamic_context:
sections.append(dynamic_context)
final_prompt = "\n\n".join(sections)
# Debug: Print prompt if enabled
self.debug_print_prompt(final_prompt)
return final_prompt
"""Default response generation prompt implementation."""
import textwrap
from typing import Optional, Dict, Any
from framework.prompts.base import FrameworkPromptBuilder
from framework.base import OrchestratorGuide, OrchestratorExample, PlannedStep, TaskClassifierGuide
class DefaultResponseGenerationPromptBuilder(FrameworkPromptBuilder):
"""Default response generation prompt builder."""
def get_role_definition(self) -> str:
"""Get the generic role definition."""
return "You are an expert assistant for workflow automation and data analysis."
def get_task_definition(self) -> Optional[str]:
"""Task definition is embedded dynamically in instructions."""
return None
def get_instructions(self) -> str:
"""Instructions are completely dynamic based on execution context."""
return ""
def _get_dynamic_context(self, current_task: str = "", info=None, **kwargs) -> str:
"""Build dynamic response generation prompt based on execution context."""
sections = []
# Base role with current task
sections.append(f"You are an expert assistant for workflow automation and data analysis.\n\nCURRENT TASK: {current_task}")
if info:
# Context prioritization: Show specific context if available, otherwise show all execution context
if hasattr(info, 'relevant_context') and info.relevant_context:
# Specific execution context provided as input to response node - show only this
sections.append(self._get_data_section(info.relevant_context))
elif hasattr(info, 'execution_history') and info.execution_history:
# No specific context, but execution history available - show all execution context
sections.append(self._get_execution_section(info))
# Capabilities section for conversational responses
if (not hasattr(info, 'execution_history') or not info.execution_history) and hasattr(info, 'capabilities_overview') and info.capabilities_overview:
sections.append(self._get_capabilities_section(info.capabilities_overview))
# Guidelines section
sections.append(self._get_guidelines_section(info))
return "\n\n".join(sections)
def _get_execution_section(self, info) -> str:
"""Get execution summary - keep concise but informative."""
if hasattr(info, 'is_killed') and info.is_killed:
# Handle terminated execution
partial_results = []
for record in info.execution_history:
if record.get("success", False):
task_objective = record.get("step", {}).get("task_objective", "Unknown task")
partial_results.append(f"✓ {task_objective}")
else:
task_objective = record.get("step", {}).get("task_objective", "Unknown task")
error_msg = record.get("result_summary", "Unknown error")
partial_results.append(f"✗ {task_objective}: {error_msg}")
partial_summary = "\n".join(partial_results) if partial_results else "No steps completed"
return textwrap.dedent(f"""
EXECUTION STATUS: Terminated
TERMINATION REASON: {getattr(info, 'kill_reason', None) or "Unknown termination reason"}
PARTIAL SUMMARY:
{partial_summary}
EXECUTION STATS:
- Total steps executed: {getattr(info, 'total_steps_executed', 0)}
- Execution time: {getattr(info, 'execution_start_time', 'Unknown')}
- Reclassifications: {getattr(info, 'reclassification_count', 0)}
""").strip()
else:
# Handle successful execution
summary_parts = []
for i, record in enumerate(info.execution_history, 1):
if record.get("success", False):
task_objective = record.get("step", {}).get("task_objective", "Unknown task")
summary_parts.append(f"Step {i}: {task_objective} - Completed")
else:
task_objective = record.get("step", {}).get("task_objective", "Unknown task")
error_msg = record.get("result_summary", "Unknown error")
summary_parts.append(f"Step {i}: {task_objective} - Failed: {error_msg}")
summary_text = "\n".join(summary_parts) if summary_parts else "No execution steps completed"
return textwrap.dedent(f"""
EXECUTION SUMMARY:
{summary_text}
""").strip()
def _get_data_section(self, relevant_context: Dict[str, Any]) -> str:
"""Get retrieved data section."""
formatted_context = self._format_context_data(relevant_context)
return textwrap.dedent(f"""
RETRIEVED DATA:
{formatted_context}
""").strip()
def _format_context_data(self, context_data: Dict[str, Any]) -> str:
"""Format retrieved context data for the response.
:param context_data: Dictionary of context_type.context_key -> data
:type context_data: Dict[str, Any]
:return: Formatted string with the context data
:rtype: str
"""
if not context_data:
return "No context data was retrieved."
formatted_lines = []
formatted_lines.append("-" * 30)
for data_key, data in context_data.items():
formatted_lines.append(f"\n[{data_key}]")
try:
# Format as clean JSON for readability
import json
formatted_data = json.dumps(data, indent=2, default=str)
formatted_lines.append(formatted_data)
except Exception as e:
# Fallback to string representation
formatted_lines.append(f"<Could not format as JSON: {str(e)}>")
formatted_lines.append(str(data))
formatted_lines.append("-" * 30)
return "\n".join(formatted_lines)
def _get_capabilities_section(self, capabilities_overview: str) -> str:
"""Get capabilities overview for conversational responses."""
return textwrap.dedent(f"""
SYSTEM CAPABILITIES:
{capabilities_overview}
""").strip()
def _get_guidelines_section(self, info) -> str:
"""Get contextually appropriate guidelines to avoid conflicts."""
guidelines = ["Provide a clear, accurate response"]
# Add current date context for temporal awareness
if hasattr(info, 'current_date') and info.current_date:
guidelines.append(f"Today's date is {info.current_date} - use this for temporal context and date references")
if not hasattr(info, 'execution_history') or not info.execution_history:
# Conversational response
guidelines.extend([
"Be warm, professional, and genuine while staying focused on providing assistance",
"Answer general questions about the system and your capabilities naturally",
"Respond to greetings and social interactions professionally",
"Ask clarifying questions to better understand user needs when appropriate",
"Provide helpful context about system operations when relevant",
"Be encouraging about the technical assistance available"
])
elif hasattr(info, 'relevant_context') and info.relevant_context:
# Technical response with relevant context
guidelines.extend([
"Be very accurate but use reasonable judgment when rounding or abbreviating numerical data for readability",
"NEVER make up, estimate, or fabricate any data - only use what is actually retrieved",
"Explain any data limitations or warnings",
"Be specific about time ranges and data sources"
])
if hasattr(info, 'is_killed') and info.is_killed:
# Execution terminated
guidelines.extend([
"Clearly explain why execution was terminated",
"Acknowledge any partial progress that was made",
"Suggest practical alternatives (simpler query, different approach, etc.)",
"Be helpful and encouraging, not apologetic",
"Offer to help with a modified or simpler version of the request",
"NEVER make up or fabricate any results that weren't actually obtained"
])
elif hasattr(info, 'execution_history') and info.execution_history and (not hasattr(info, 'relevant_context') or not info.relevant_context):
# Technical response with no relevant context
guidelines.extend([
"Explain what was accomplished during execution",
"Note any limitations in accessing detailed results",
"NEVER make up or fabricate any technical data - only describe what actually happened"
])
return "GUIDELINES:\n" + "\n".join(f"{i+1}. {g}" for i, g in enumerate(guidelines))
def get_orchestrator_guide(self) -> Optional[OrchestratorGuide]:
"""Create generic orchestrator guide for respond capability."""
technical_with_context_example = OrchestratorExample(
step=PlannedStep(
context_key="user_response",
capability="respond",
task_objective="Respond to user question about data analysis with statistical results",
expected_output="user_response",
success_criteria="Complete response using execution context data and analysis results",
inputs=[
{"ANALYSIS_RESULTS": "data_statistics"},
{"DATA_VALUES": "current_readings"}
]
),
scenario_description="Technical query with available execution context",
notes="Will automatically use context-aware response generation with data retrieval."
)
conversational_example = OrchestratorExample(
step=PlannedStep(
context_key="user_response",
capability="respond",
task_objective="Respond to user question about available tools",
expected_output="user_response",
success_criteria="Friendly, informative response about assistant capabilities",
inputs=[]
),
scenario_description="Conversational query 'What tools do you have?'",
notes="Applies to all conversational user queries with no clear task objective."
)
return OrchestratorGuide(
instructions="""
Plan "respond" as the final step to deliver results to the user.
Always include respond as the last step in execution plans.
""",
examples=[technical_with_context_example, conversational_example],
priority=100 # Should come last in prompt ordering (same as final_response)
)
def get_classifier_guide(self) -> Optional[TaskClassifierGuide]:
"""Respond has no classifier guide - it's orchestrator-driven."""
return None # Always available, not detected from user intent
"""Default error analysis prompts."""
import textwrap
from typing import Optional
from framework.prompts.base import FrameworkPromptBuilder
class DefaultErrorAnalysisPromptBuilder(FrameworkPromptBuilder):
"""Default error analysis prompt builder."""
PROMPT_TYPE = "error_analysis"
def get_role_definition(self) -> str:
"""Get the generic role definition."""
return "You are providing error analysis for the assistant system."
def get_task_definition(self) -> Optional[str]:
"""Task definition is embedded in instructions."""
return None
def get_instructions(self) -> str:
"""Get the error analysis instructions."""
return textwrap.dedent("""
A structured error report has already been generated with the following information:
- Error type and timestamp
- Task description and failed operation
- Error message and technical details
- Execution statistics and summary
- Capability-specific recovery options
Your role is to provide a brief explanation that adds value beyond the structured data:
Requirements:
- Write 2-3 sentences explaining what likely went wrong
- Focus on the "why" rather than repeating the "what"
- Do NOT repeat the error message, recovery options, or execution details
- Be specific to system operations when relevant
- Consider the system capabilities context when suggesting alternatives
- Keep it under 100 words
- Use a professional, technical tone
""").strip()
def _get_dynamic_context(self,
capabilities_overview: str = "",
error_context=None,
**kwargs) -> str:
"""Build dynamic context with capabilities and error information."""
sections = []
# System capabilities
if capabilities_overview:
sections.append(f"SYSTEM CAPABILITIES:\n{capabilities_overview}")
# Error context
if error_context:
error_info = textwrap.dedent(f"""
ERROR CONTEXT:
- Current task: {getattr(error_context, 'current_task', 'Unknown')}
- Error type: {getattr(error_context, 'error_type', {}).value if hasattr(getattr(error_context, 'error_type', {}), 'value') else 'Unknown'}
- Capability: {getattr(error_context, 'capability_name', None) or 'Unknown'}
- Error message: {getattr(error_context, 'error_message', 'Unknown')}
""").strip()
sections.append(error_info)
return "\n\n".join(sections)
"""Default clarification prompts."""
import textwrap
from typing import Optional
from framework.prompts.base import FrameworkPromptBuilder
from framework.base import OrchestratorGuide, OrchestratorExample, PlannedStep, TaskClassifierGuide
class DefaultClarificationPromptBuilder(FrameworkPromptBuilder):
"""Default clarification prompt builder."""
def get_role_definition(self) -> str:
"""Get the generic role definition."""
return "You are helping to clarify ambiguous user queries for the assistant system."
def get_task_definition(self) -> str:
"""Get the task definition."""
return "Your task is to generate specific, targeted questions that will help clarify what the user needs."
def get_instructions(self) -> str:
"""Get the generic clarification instructions."""
return textwrap.dedent("""
GUIDELINES:
1. Ask about missing technical details (which system, time range, specific parameters)
2. Clarify vague terms (what type of "data", "status", "analysis" etc.)
3. Ask about output preferences (format, detail level, specific metrics, etc.)
4. Be specific and actionable - avoid generic questions
5. Limit to 2-3 most important questions
6. Make questions easy to answer
Generate targeted questions that will help get the specific information needed to provide accurate assistance.
""").strip()
def get_orchestrator_guide(self) -> Optional[OrchestratorGuide]:
"""Create generic orchestrator guide for clarification capability."""
ambiguous_system_example = OrchestratorExample(
step=PlannedStep(
context_key="data_clarification",
capability="clarify",
task_objective="Ask user for clarification when request 'show me some data' is too vague",
expected_output=None, # No context output - questions sent directly to user
success_criteria="Specific questions about data type, system, and time range",
inputs=[]
),
scenario_description="Vague data request needing system and parameter clarification",
)
return OrchestratorGuide(
instructions="""
Plan "clarify" when user queries lack specific details needed for execution.
Use instead of respond when information is insufficient.
Replaces technical execution steps until user provides clarification.
""",
examples=[ambiguous_system_example],
priority=99 # Should come near the end, but before respond
)
def get_classifier_guide(self) -> Optional[TaskClassifierGuide]:
"""Clarify has no classifier guide - it's orchestrator-driven."""
return None # Always available, not detected from user intent
def build_clarification_query(self, chat_history: str, task_objective: str) -> str:
"""Build clarification query for generating questions based on conversation context.
Used by the clarification infrastructure to generate specific questions
when information is missing from user requests.
Args:
chat_history: Formatted conversation history
task_objective: Extracted task objective from user request
Returns:
Complete query for question generation with automatic debug printing
"""
prompt = textwrap.dedent(f"""
CONVERSATION HISTORY:
{chat_history}
EXTRACTED TASK OBJECTIVE: {task_objective}
Based on the full conversation history and the extracted task, generate specific clarifying questions.
Consider:
- What has already been discussed in the conversation
- What information is still missing to execute the task
- Avoid asking for information already provided earlier in the conversation
- Focus on the most critical missing information that would enable accurate assistance
""").strip()
# Automatic debug printing for framework helper prompts
self.debug_print_prompt(prompt, "clarification_query")
return prompt
"""
Memory Extraction Prompt Builder - Application-agnostic prompts for memory extraction
"""
from __future__ import annotations
from typing import List, Dict, Any, Optional
from dataclasses import dataclass
import textwrap
from framework.state import MessageUtils, ChatHistoryFormatter, UserMemories
from langchain_core.messages import BaseMessage
from ..base import FrameworkPromptBuilder
from framework.base import BaseExample
from pydantic import BaseModel, Field
# Imports for orchestrator and classifier guides
from framework.base import OrchestratorGuide, OrchestratorExample, PlannedStep, TaskClassifierGuide, ClassifierExample, ClassifierActions
from framework.registry import get_registry
class MemoryContentExtraction(BaseModel):
"""Structured output model for memory content extraction."""
content: str = Field(description="The content that should be saved to memory, or empty string if no content identified")
found: bool = Field(description="True if content to save was identified in the user message, False otherwise")
explanation: str = Field(description="Brief explanation of what content was extracted and why")
@dataclass
class MemoryExtractionExample(BaseExample):
"""Example for memory extraction prompt."""
def __init__(self, messages: List[BaseMessage], expected_output: MemoryContentExtraction):
self.messages = messages
self.expected_output = expected_output
def format_for_prompt(self) -> str:
"""Format this example for inclusion in prompts."""
# Format chat history using native message formatter
chat_formatted = ChatHistoryFormatter.format_for_llm(self.messages)
return textwrap.dedent(f"""
**Chat History:**
{textwrap.indent(chat_formatted, " ")}
**Expected Output:**
{{
"content": "{self.expected_output.content}",
"found": {str(self.expected_output.found).lower()},
"explanation": "{self.expected_output.explanation}"
}}
""").strip()
class DefaultMemoryExtractionPromptBuilder(FrameworkPromptBuilder):
"""Framework prompt builder for memory extraction."""
PROMPT_TYPE = "memory_extraction"
def __init__(self):
super().__init__()
self.examples = []
self._load_examples()
def _load_examples(self):
"""Load memory extraction examples with native LangGraph messages."""
# Explicit save instruction with quoted content
self.examples.append(MemoryExtractionExample(
messages=[
MessageUtils.create_user_message('Please save this finding: "Database performance degrades significantly when connection pool exceeds 50 connections - optimal range is 20-30 connections"')
],
expected_output=MemoryContentExtraction(
content="Database performance degrades significantly when connection pool exceeds 50 connections - optimal range is 20-30 connections",
found=True,
explanation="User explicitly requested to save a specific finding with quantitative thresholds"
)
))
# Technical insight with specific parameters
self.examples.append(MemoryExtractionExample(
messages=[
MessageUtils.create_user_message("I've been analyzing the server logs and found a pattern"),
MessageUtils.create_assistant_message("What pattern did you identify?"),
MessageUtils.create_user_message("Remember that API response times increase by 40% when memory usage exceeds 85% - this happens during peak hours between 2-4 PM")
],
expected_output=MemoryContentExtraction(
content="API response times increase by 40% when memory usage exceeds 85% - this happens during peak hours between 2-4 PM",
found=True,
explanation="Technical insight with specific performance metrics and timing patterns"
)
))
# Procedural discovery with workflow details
self.examples.append(MemoryExtractionExample(
messages=[
MessageUtils.create_user_message("Store this procedure for future reference: Code deployments work best when done after 6 PM, with database migrations run first, then application restart, followed by cache clearing - allow 15 minutes between each step")
],
expected_output=MemoryContentExtraction(
content="Code deployments work best when done after 6 PM, with database migrations run first, then application restart, followed by cache clearing - allow 15 minutes between each step",
found=True,
explanation="Detailed procedural workflow with timing and sequencing requirements"
)
))
# Configuration insight with mixed save/don't save instructions
self.examples.append(MemoryExtractionExample(
messages=[
MessageUtils.create_user_message("Today's incident was caused by a timeout issue, but don't save that. However, do remember that we found the root cause: default timeout of 30 seconds is too short for large file uploads - increase to 120 seconds for files over 100MB")
],
expected_output=MemoryContentExtraction(
content="default timeout of 30 seconds is too short for large file uploads - increase to 120 seconds for files over 100MB",
found=True,
explanation="User explicitly requested to save specific configuration finding while excluding incident details"
)
))
# Negative example - routine status check (should not save)
self.examples.append(MemoryExtractionExample(
messages=[
MessageUtils.create_user_message("What's the current system status and how are the servers performing today?"),
MessageUtils.create_assistant_message("All systems are running normally with 99.2% uptime. Server load is within normal parameters."),
MessageUtils.create_user_message("Thanks, everything looks good for today's operations")
],
expected_output=MemoryContentExtraction(
content="",
found=False,
explanation="This is routine operational status checking and acknowledgment, not content intended for permanent memory"
)
))
# Negative example - procedural question (should not save)
self.examples.append(MemoryExtractionExample(
messages=[
MessageUtils.create_user_message("How do I configure the load balancer to handle SSL termination?")
],
expected_output=MemoryContentExtraction(
content="",
found=False,
explanation="This is a procedural question about system configuration, not content to be saved to memory"
)
))
def get_role_definition(self) -> str:
"""Get the role definition."""
return "You are an expert content extraction assistant. Your task is to identify and extract content that a user wants to save to their memory from their message."
def get_task_definition(self) -> str:
"""Get the task definition."""
return "TASK: Extract content from the user's message that they want to save/store/remember."
def get_instructions(self) -> str:
"""Get the memory extraction instructions."""
return textwrap.dedent("""
INSTRUCTIONS:
1. Analyze the user message to identify content they explicitly want to save
2. Look for patterns like:
- "save this:" followed by content
- "remember that [content]"
- "store [content] in my memory"
- "add to memory: [content]"
- Content in quotes that should be saved
- Explicit instructions to save specific information
3. Extract the ACTUAL CONTENT to be saved, not the instruction itself
4. If no clear content is identified for saving, set found=false
5. Provide a brief explanation of your decision
CRITICAL REQUIREMENTS:
- Only extract content that is clearly intended for saving/storage
- Do not extract questions, commands, or conversational text
- Remove quotes and prefixes like "save this:", "remember that", etc.
- Extract the pure content to be remembered
- Be conservative - if unclear, set found=false
""").strip()
def _get_examples(self, **kwargs) -> List[MemoryExtractionExample]:
"""Get generic memory extraction examples."""
return self.examples
def _format_examples(self, examples: List[MemoryExtractionExample]) -> str:
"""Format multiple MemoryExtractionExample objects for inclusion in prompts."""
examples_formatted = ""
for i, example in enumerate(examples, 1):
examples_formatted += f"\n**Example {i}:**\n{example.format_for_prompt()}\n"
return examples_formatted.strip()
def _get_dynamic_context(self, **kwargs) -> str:
"""Build the response format section."""
return textwrap.dedent("""
RESPOND WITH VALID JSON IN THIS EXACT FORMAT:
{
"content": "the exact content to save (or empty string if none found)",
"found": true/false,
"explanation": "brief explanation of your decision"
}
You will be provided with a chat history. Extract the content to save from the user message in that chat history.
""").strip()
def get_orchestrator_guide(self) -> Optional[Any]:
"""Create generic orchestrator guide for memory operations."""
registry = get_registry()
# Define structured examples
save_memory_example = OrchestratorExample(
step=PlannedStep(
context_key="memory_save",
capability="memory",
task_objective="Save the important finding about data correlation to memory",
expected_output=registry.context_types.MEMORY_CONTEXT,
success_criteria="Memory entry saved successfully",
inputs=[]
),
scenario_description="Saving important information to user memory",
notes=f"Content is persisted to memory file and provided as {registry.context_types.MEMORY_CONTEXT} context for response confirmation."
)
show_memory_example = OrchestratorExample(
step=PlannedStep(
context_key="memory_display",
capability="memory",
task_objective="Show all my saved memory entries",
expected_output=registry.context_types.MEMORY_CONTEXT,
success_criteria="Memory content displayed to user",
inputs=[]
),
scenario_description="Displaying stored memory content",
notes=f"Retrieves memory content as {registry.context_types.MEMORY_CONTEXT}. Typically followed by respond step to present results to user."
)
return OrchestratorGuide(
instructions=textwrap.dedent(f"""
**When to plan "memory" steps:**
- When the user explicitly asks to save, store, or remember something for later
- When the user asks to show, display, or view their saved memory
- When the user explicitly mentions memory operations
**IMPORTANT**: This capability has a VERY STRICT classifier. Only use when users
explicitly mention memory-related operations. Do NOT use for general information
storage or context management.
**Step Structure:**
- context_key: Unique identifier for output (e.g., "memory_save", "memory_display")
- task_objective: The specific memory operation to perform
**Output: {registry.context_types.MEMORY_CONTEXT}**
- Save operations: Contains saved content for response confirmation
- Retrieve operations: Contains stored memory content for use by respond step
- Available to downstream steps via context system
Only plan this step when users explicitly request memory operations.
"""),
examples=[save_memory_example, show_memory_example],
priority=10 # Later in the prompt ordering since it's specialized
)
def get_classifier_guide(self) -> Optional[Any]:
"""Create generic classifier guide for memory operations."""
# Create generic memory-specific examples
memory_examples = [
ClassifierExample(
query="Save this finding to my memory: database performance correlates with connection pool size",
result=True,
reason="Direct memory save request with specific content to preserve"
),
ClassifierExample(
query="Remember that cache optimization works best 15 minutes after restart",
result=True,
reason="Memory save request using 'remember' keyword with procedural knowledge"
),
ClassifierExample(
query="What do I have saved in my memory about performance tuning?",
result=True,
reason="Memory retrieval request asking to show stored information"
),
ClassifierExample(
query="Show me my saved notes",
result=True,
reason="Memory retrieval request for displaying saved content"
),
ClassifierExample(
query="Store this configuration procedure: set timeout at 120 seconds",
result=True,
reason="Explicit store request with specific technical procedure to save"
)
]
return TaskClassifierGuide(
instructions="Determine if the task involves saving content to memory or retrieving content from memory.",
examples=memory_examples,
actions_if_true=ClassifierActions()
)
def get_memory_classification_prompt(self) -> str:
"""Get prompt for classifying memory operations as SAVE or RETRIEVE.
Returns the system prompt used by LLM to classify user tasks into
memory operation types. Includes automatic debug printing.
Returns:
System prompt string for memory operation classification
"""
prompt = textwrap.dedent("""
You are a memory operation classifier. Analyze the user's task and determine if they want to:
- SAVE: Store new information to memory (e.g. user asks to save, store, remember, record, add, append)
- RETRIEVE: Show existing memories (e.g. user asks to show, display, view, retrieve, see, list)
Focus on the user's intent, not just keyword matching. Context matters.
""").strip()
# Automatic debug printing for framework helper prompts
self.debug_print_prompt(prompt, "memory_operation_classification")
return prompt
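Applications rarely need to replace this builder wholesale; subclassing and overriding a single method is usually enough. A minimal sketch, assuming the default class above is named DefaultMemoryPromptBuilder and is importable from framework.prompts.defaults (both assumptions, following the naming pattern of the other default builders):

import textwrap
from framework.prompts.defaults import DefaultMemoryPromptBuilder  # assumed name and path

class OperatorMemoryPromptBuilder(DefaultMemoryPromptBuilder):
    """Hypothetical builder that rephrases SAVE/RETRIEVE classification for operators."""

    def get_memory_classification_prompt(self) -> str:
        prompt = textwrap.dedent("""
            You are a memory operation classifier for control-room operators.
            - SAVE: the operator asks to save, store, remember, record, or log a finding
            - RETRIEVE: the operator asks to show, display, view, or list saved notes
            Focus on the operator's intent, not just keyword matching. Context matters.
        """).strip()
        # Preserve the framework's automatic debug printing for helper prompts
        self.debug_print_prompt(prompt, "memory_operation_classification")
        return prompt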
"""
Time Range Parsing Prompt Builder
Default prompts for time range parsing capability.
"""
import textwrap
from typing import Optional
from framework.prompts.base import FrameworkPromptBuilder
from framework.base import (
OrchestratorGuide, OrchestratorExample, PlannedStep,
TaskClassifierGuide, ClassifierExample, ClassifierActions
)
from framework.registry import get_registry
class DefaultTimeRangeParsingPromptBuilder(FrameworkPromptBuilder):
"""Default time range parsing prompt builder."""
PROMPT_TYPE = "time_range_parsing"
def get_role_definition(self) -> str:
"""Get the role definition for time range parsing."""
return "You are an expert time range parser that converts natural language time expressions into precise datetime ranges."
def get_task_definition(self) -> str:
"""Get the task definition for time range parsing."""
return "TASK: Parse time references from user queries and convert them to absolute datetime ranges."
def get_instructions(self) -> str:
"""Get the instructions for time range parsing."""
return textwrap.dedent("""
INSTRUCTIONS:
1. Identify time references in the user query (relative, absolute, or implicit)
2. Convert all time expressions to absolute datetime values
3. Ensure start_date is always before end_date
4. Use current time as reference for relative expressions
5. Return structured datetime objects in YYYY-MM-DD HH:MM:SS format
SUPPORTED PATTERNS:
- Relative: "last X hours/days", "yesterday", "this week"
- Absolute: "from YYYY-MM-DD to YYYY-MM-DD"
- Implicit: "current", "recent" (default to last few minutes)
""").strip()
def get_orchestrator_guide(self) -> Optional[OrchestratorGuide]:
"""Create orchestrator guide for time range parsing."""
registry = get_registry()
# Define structured examples using simplified dict format
relative_time_example = OrchestratorExample(
step=PlannedStep(
context_key="last_week_timerange",
capability="time_range_parsing",
task_objective="Parse 'last week' time reference into absolute datetime objects",
expected_output=registry.context_types.TIME_RANGE,
success_criteria="Time range successfully parsed to absolute datetime objects",
inputs=[]
),
scenario_description="Parsing relative time references like 'last hour', 'yesterday'",
notes=f"Output stored under {registry.context_types.TIME_RANGE} context type as datetime objects with full datetime functionality."
)
absolute_time_example = OrchestratorExample(
step=PlannedStep(
context_key="explicit_timerange",
capability="time_range_parsing",
task_objective="Parse explicit datetime range '2024-01-15 09:00:00 to 2024-01-15 21:00:00' and validate format",
expected_output=registry.context_types.TIME_RANGE,
success_criteria="Explicit time range validated and converted to datetime objects",
inputs=[]
),
scenario_description="Parsing explicit time ranges in YYYY-MM-DD HH:MM:SS format",
notes=f"Output stored under {registry.context_types.TIME_RANGE} context type. Validates and converts user-provided time ranges to datetime objects"
)
implicit_time_example = OrchestratorExample(
step=PlannedStep(
context_key="current_data_timerange",
capability="time_range_parsing",
task_objective="Infer appropriate time range for current beam energy data request (last 5 minutes)",
expected_output=registry.context_types.TIME_RANGE,
success_criteria="Appropriate time range inferred and converted to datetime objects",
inputs=[]
),
scenario_description="Inferring time ranges for 'current' or 'recent' data requests",
notes=f"Output stored under {registry.context_types.TIME_RANGE} context type. Provides sensible defaults (e.g., last few minutes) as datetime objects"
)
return OrchestratorGuide(
instructions=textwrap.dedent(f"""
**When to plan "time_range_parsing" steps:**
- When tasks require time-based data (historical trends, archiver data, logs)
- When user queries contain time references that need to be converted to absolute datetime objects
- As a prerequisite step before archiver data retrieval or time-based analysis
**Step Structure:**
- context_key: Unique identifier for output (e.g., "last_week_timerange", "explicit_timerange")
- task_objective: The specific and self-contained time range parsing task to perform
**Output: {registry.context_types.TIME_RANGE}**
- Contains: start_date and end_date as datetime objects with full datetime functionality
- Available to downstream steps via context system
- Supports datetime arithmetic, comparison, and formatting operations
**Time Pattern Support:**
- Relative: "last X hours/minutes/days", "yesterday", "this week", "last week"
- Absolute: "from YYYY-MM-DD HH:MM:SS to YYYY-MM-DD HH:MM:SS"
- Implicit: "current", "recent" (defaults to last few minutes)
**Dependencies and sequencing:**
1. This step typically comes early when time-based data operations are needed
2. Results feed into subsequent data retrieval capabilities that require time ranges
3. Uses LLM to handle complex relative time references and natural language time expressions
4. Downstream steps can use datetime objects directly without string parsing
ALWAYS plan this step when any time-based data operations are needed,
regardless of whether the user provides explicit time ranges or relative time descriptions.
"""),
examples=[relative_time_example, absolute_time_example, implicit_time_example],
priority=5
)
def get_classifier_guide(self) -> Optional[TaskClassifierGuide]:
"""Create classifier guide for time range parsing."""
return TaskClassifierGuide(
instructions="Determine if the task involves time-based data requests that require parsing time ranges from user queries.",
examples=[
ClassifierExample(
query="Which tools do you have?",
result=False,
reason="This is a question about AI capabilities, no time range needed."
),
ClassifierExample(
query="Plot the beam current for the last 2 hours",
result=True,
reason="Request involves time range ('last 2 hours') that needs parsing."
),
ClassifierExample(
query="What is the current beam energy?",
result=False,
reason="Request is for current value, no time range needed."
),
ClassifierExample(
query="Show me vacuum trends from yesterday",
result=True,
reason="Request involves time range ('yesterday') that needs parsing."
),
ClassifierExample(
query="Get historical data from 2024-01-15 09:00:00 to 2024-01-15 21:00:00",
result=True,
reason="Request has explicit time range that needs parsing and validation."
),
ClassifierExample(
query="How does the accelerator work?",
result=False,
reason="This is a general question about accelerator principles, no time data needed."
),
ClassifierExample(
query="Show recent trends",
result=True,
reason="Request involves implicit time range ('recent') that needs parsing."
),
ClassifierExample(
query="Show me some data",
result=False,
reason="Request does not involve time range."
),
],
actions_if_true=ClassifierActions()
)
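The same override pattern applies here. A facility with its own convention for what "recent" means could, for example, pin the implicit default window; a minimal sketch (the 10-minute policy and the import path are assumptions for illustration):

from framework.prompts.defaults import DefaultTimeRangeParsingPromptBuilder  # assumed import path

class FacilityTimeRangePromptBuilder(DefaultTimeRangeParsingPromptBuilder):
    """Hypothetical builder that pins the implicit time window to 10 minutes."""

    def get_instructions(self) -> str:
        # Swap the vague implicit default for a facility-specific window
        return super().get_instructions().replace(
            '(default to last few minutes)',
            '(default to the last 10 minutes)'
        )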
"""
Python Capability Prompt Builder
Default prompts for Python code generation and execution capability.
"""
import textwrap
from typing import Optional
from framework.prompts.base import FrameworkPromptBuilder
from framework.base import (
OrchestratorGuide, OrchestratorExample, PlannedStep,
TaskClassifierGuide, ClassifierExample, ClassifierActions
)
from framework.registry import get_registry
class DefaultPythonPromptBuilder(FrameworkPromptBuilder):
"""Default Python capability prompt builder."""
PROMPT_TYPE = "python"
def get_role_definition(self) -> str:
"""Get the role definition for Python code generation."""
return "You are a Python code generator that creates clean, simple, and effective Python code for computational tasks."
def get_task_definition(self) -> str:
"""Get the task definition for Python code generation."""
return "TASK: Generate minimal, working Python code to accomplish computational tasks and basic data processing."
def get_instructions(self) -> str:
"""Get the instructions for Python code generation."""
return textwrap.dedent("""
INSTRUCTIONS:
1. Write minimal, working Python code that accomplishes the specified task
2. Include basic error handling if needed
3. Use standard library when possible - avoid unnecessary imports
4. Print results clearly with descriptive output
5. Keep code simple, readable, and well-commented
6. Focus on the core computational task without over-engineering
CODE REQUIREMENTS:
- Use clear variable names
- Include comments for complex logic
- Print intermediate steps for debugging if helpful
- Handle common edge cases (division by zero, empty lists, etc.)
- Structure code logically with proper indentation
EXAMPLE OUTPUT FORMAT:
```python
# Brief comment explaining the task
import math # Only import what's needed
# Main computation
radius = 5
area = math.pi * radius ** 2
print(f"Area of circle with radius {radius}: {area:.2f}")
```
""").strip()
def get_orchestrator_guide(self) -> Optional[OrchestratorGuide]:
"""Create orchestrator guide for Python capability."""
registry = get_registry()
# Define structured examples
calculation_example = OrchestratorExample(
step=PlannedStep(
context_key="calculation_results",
capability="python",
task_objective="Calculate the area of a circle with radius 5 and display the result",
expected_output=registry.context_types.PYTHON_RESULTS,
success_criteria="Mathematical calculation completed with printed result",
inputs=[]
),
scenario_description="Simple mathematical calculations using Python",
notes=f"Output stored under {registry.context_types.PYTHON_RESULTS} context type. Generated code, execution output, and any errors are captured."
)
data_processing_example = OrchestratorExample(
step=PlannedStep(
context_key="processing_results",
capability="python",
task_objective="Process a list of numbers [1, 2, 3, 4, 5] and calculate mean, median, and standard deviation",
expected_output=registry.context_types.PYTHON_RESULTS,
success_criteria="Statistical analysis completed with all metrics calculated",
inputs=[]
),
scenario_description="Basic data processing and statistical calculations",
notes=f"Output stored under {registry.context_types.PYTHON_RESULTS} context type. Demonstrates data manipulation and statistical functions."
)
utility_example = OrchestratorExample(
step=PlannedStep(
context_key="utility_results",
capability="python",
task_objective="Generate 10 random numbers between 1 and 100 and find the maximum value",
expected_output=registry.context_types.PYTHON_RESULTS,
success_criteria="Random number generation completed with maximum value identified",
inputs=[]
),
scenario_description="Utility functions like random number generation and basic algorithms",
notes=f"Output stored under {registry.context_types.PYTHON_RESULTS} context type. Shows how to handle randomization and list operations."
)
return OrchestratorGuide(
instructions=textwrap.dedent(f"""
**When to plan "python" steps:**
- User requests simple calculations or mathematical operations
- Need to perform basic data processing or statistical analysis
- User wants to execute Python code for computational tasks
- Simple algorithms or utility functions are needed
- Processing of lists, numbers, or basic data structures
**Step Structure:**
- context_key: Unique identifier for output (e.g., "calculation_results", "processing_output")
- task_objective: Clear description of the computational task to perform
- inputs: Optional, can work standalone for most simple tasks
**Output: {registry.context_types.PYTHON_RESULTS}**
- Contains: Generated Python code, execution output, and any errors
- Available to downstream steps via context system
- Includes code explanation and execution status
**Use for:**
- Mathematical calculations (area, volume, statistics)
- Simple data processing (sorting, filtering, aggregations)
- Basic computational tasks (random numbers, algorithms)
- Code generation and mock execution
- Utility functions and helper calculations
**Dependencies and sequencing:**
1. Often used as a standalone capability for simple tasks
2. Can consume data from other capabilities if needed
3. Results can feed into visualization or analysis steps
4. Lightweight alternative to complex data analysis workflows
ALWAYS prefer this capability for simple computational tasks that don't require
sophisticated data analysis or complex external dependencies.
"""),
examples=[calculation_example, data_processing_example, utility_example],
priority=40
)
def get_classifier_guide(self) -> Optional[TaskClassifierGuide]:
"""Create classifier guide for Python capability."""
return TaskClassifierGuide(
instructions="Determine if the user query requires Python code execution for simple computational tasks, mathematical calculations, or basic data processing.",
examples=[
ClassifierExample(
query="Calculate the area of a circle with radius 5",
result=True,
reason="This requires mathematical calculation using Python."
),
ClassifierExample(
query="What is your name?",
result=False,
reason="This is a conversational question, not a computational task."
),
ClassifierExample(
query="Process this list of numbers and find the average",
result=True,
reason="This requires data processing and statistical calculation."
),
ClassifierExample(
query="Show me the current time",
result=False,
reason="This is a simple information request, not requiring custom code generation."
),
ClassifierExample(
query="Generate a random number between 1 and 100",
result=True,
reason="This requires Python code to generate random numbers."
),
ClassifierExample(
query="Sort these numbers in ascending order: 5, 2, 8, 1, 9",
result=True,
reason="This requires Python code for data manipulation and sorting."
),
ClassifierExample(
query="What tools do you have available?",
result=False,
reason="This is a question about AI capabilities, not a computational task."
),
ClassifierExample(
query="Calculate the fibonacci sequence up to 10 numbers",
result=True,
reason="This requires Python code to implement an algorithm."
),
ClassifierExample(
query="How does machine learning work?",
result=False,
reason="This is an educational question, not a request for code execution."
),
ClassifierExample(
query="Convert temperature from 32°F to Celsius",
result=True,
reason="This requires mathematical calculation and unit conversion."
),
],
actions_if_true=ClassifierActions()
)
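If the default code-generation rules are too permissive for a given deployment, the instructions can be tightened the same way. A minimal sketch that appends extra constraints after the defaults (the constraints themselves and the import path are assumptions, not framework policy):

import textwrap
from framework.prompts.defaults import DefaultPythonPromptBuilder  # assumed import path

class RestrictedPythonPromptBuilder(DefaultPythonPromptBuilder):
    """Hypothetical builder that forbids file and network access in generated code."""

    def get_instructions(self) -> str:
        # Append deployment-specific constraints after the default instructions
        extra = textwrap.dedent("""
            SAFETY CONSTRAINTS:
            - Do not read or write files
            - Do not make network requests
        """).strip()
        return super().get_instructions() + "\n" + extra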
Registration Patterns#
Applications register their prompt providers during initialization using the registry system:
Basic Registration#
from framework.prompts.loader import register_framework_prompt_provider
from applications.myapp.framework_prompts import MyAppPromptProvider
# During application initialization
register_framework_prompt_provider("myapp", MyAppPromptProvider())
Registry-Based Registration#
For automatic discovery, include prompt providers in your application registry:
# In applications/myapp/registry.py
from framework.registry import RegistryConfig, RegistryConfigProvider, FrameworkPromptProviderRegistration
class MyAppRegistryProvider(RegistryConfigProvider):
def get_registry_config(self) -> RegistryConfig:
return RegistryConfig(
# ... other registrations
framework_prompt_providers=[
FrameworkPromptProviderRegistration(
application_name="myapp",
module_path="applications.myapp.framework_prompts",
class_name="MyAppPromptProvider",
description="Domain-specific prompt provider",
prompt_builders={
"orchestrator": "MyOrchestratorPromptBuilder",
"classification": "MyClassificationPromptBuilder"
# Others use framework defaults
}
)
]
)
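Builders omitted from the prompt_builders mapping fall back to the framework defaults, so a registration only needs to name the prompts the application actually overrides.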
Advanced Patterns#
Multi-Application Deployments#
For deployments with multiple applications, you can access specific providers:
from framework.prompts import get_framework_prompts
# Access specific application's prompts
als_provider = get_framework_prompts("als_expert")
wind_provider = get_framework_prompts("wind_turbine")
# Use default provider (first registered)
default_provider = get_framework_prompts()
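Relying on the unnamed default is convenient for single-application deployments; when several applications are registered, pass the application name explicitly so prompt selection does not depend on registration order.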
Selective Override Pattern#
Override only specific builders while inheriting others:
from framework.prompts.defaults import DefaultPromptProvider
class MyAppPromptProvider(DefaultPromptProvider):
def __init__(self):
super().__init__()
# Override specific builders
self._custom_orchestrator = MyOrchestratorPromptBuilder()
def get_orchestrator_prompt_builder(self):
return self._custom_orchestrator
# All other methods inherited from DefaultPromptProvider
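A quick sanity check that the override takes effect while the remaining builders still delegate to the defaults:

provider = MyAppPromptProvider()

# The overridden builder is returned directly
assert isinstance(provider.get_orchestrator_prompt_builder(), MyOrchestratorPromptBuilder)

# Other builders are inherited from DefaultPromptProvider
assert provider.get_classification_prompt_builder() is not None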
Testing Strategies#
Test your custom prompts in isolation:
from unittest.mock import MagicMock

def test_custom_orchestrator_prompt():
    builder = MyOrchestratorPromptBuilder()

    # Test role definition
    role = builder.get_role_definition()
    assert "domain-specific" in role.lower()

    # Test full prompt generation with a stand-in context manager
    mock_context = MagicMock()
    system_prompt = builder.get_system_instructions(
        capabilities=["test_capability"],
        context_manager=mock_context
    )
    assert len(system_prompt) > 0
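Structured guides can be exercised the same way. A sketch that verifies a builder's classifier examples are well-formed, assuming a hypothetical custom builder named MyTimeRangePromptBuilder and that TaskClassifierGuide exposes its examples list:

def test_classifier_guide_examples():
    builder = MyTimeRangePromptBuilder()  # hypothetical custom builder
    guide = builder.get_classifier_guide()
    assert guide is not None

    # Each ClassifierExample should carry a query, a boolean result, and a reason
    for example in guide.examples:
        assert example.query
        assert isinstance(example.result, bool)
        assert example.reason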
See also
- Prompt System
API reference for prompt system classes and functions
- Registry and Discovery
Component registration and discovery patterns
- Convention over Configuration: Configuration-Driven Registry Patterns
Framework conventions and patterns