Understanding the Framework#
The Alpha Berkeley Framework is a production-ready conversational agentic system built on LangGraph's StateGraph foundation. Its distinctive architecture centers on Classification and Orchestration: capability selection followed by upfront execution planning. This design delivers reliable multi-step operations with human oversight and scales well to a large number of domain-specific tools.

Framework Architecture Overview#
The framework processes every conversation through a structured pipeline that transforms natural language into reliable, orchestrated execution plans:
1. Unified Entry Point: All user interactions flow through a single Gateway that normalizes input from CLI, web interfaces, and external integrations.
2. Comprehensive Task Extraction: Transforms arbitrarily long chat history and external data sources into a well-defined current task with actionable requirements and context.
3. Task Classification System: LLM-powered classification for each capability ensures only relevant prompts are used in downstream processes, providing efficient prompt management.
4. Upfront Orchestration: Complete execution plans are generated before any capability execution begins, preventing hallucination and ensuring reliable outcomes.
5. Controlled Execution: Plans execute step by step with checkpoints, human approval workflows, and comprehensive error handling.
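For orientation, here is a minimal sketch of how such a five-stage pipeline could be wired as a LangGraph StateGraph. The node names, the `AgentState` fields, and the stub bodies are illustrative assumptions, not the framework's actual modules:

```python
# Illustrative wiring of the five pipeline stages as a LangGraph StateGraph.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict, total=False):
    messages: list
    current_task: str
    active_capabilities: list
    execution_plan: list

# Stub nodes standing in for the real pipeline stages.
def gateway(state: AgentState) -> dict:        # 1. unified entry point
    return {}

def extract_task(state: AgentState) -> dict:   # 2. task extraction
    return {}

def classify(state: AgentState) -> dict:       # 3. capability classification
    return {}

def orchestrate(state: AgentState) -> dict:    # 4. upfront planning
    return {}

def execute_plan(state: AgentState) -> dict:   # 5. controlled execution
    return {}

builder = StateGraph(AgentState)
builder.add_node("gateway", gateway)
builder.add_node("extract_task", extract_task)
builder.add_node("classify", classify)
builder.add_node("orchestrate", orchestrate)
builder.add_node("execute_plan", execute_plan)

# Every conversation flows through the same fixed sequence of stages.
builder.add_edge(START, "gateway")
builder.add_edge("gateway", "extract_task")
builder.add_edge("extract_task", "classify")
builder.add_edge("classify", "orchestrate")
builder.add_edge("orchestrate", "execute_plan")
builder.add_edge("execute_plan", END)

graph = builder.compile()
```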
Framework Functions#
Task extraction converts conversational input into structured, actionable tasks:
```python
# Chat history becomes focused task
current_task = await _extract_task(
    messages=state["messages"],
    retrieval_result=data_manager.retrieve_all_context(request),
)
```
- Context Compression: Refines lengthy conversations into precise, actionable tasks
- Datasource Integration: Enhances tasks with structured data from external sources
- Self-Contained Output: Produces tasks that are executable without relying on prior conversation history
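To make "self-contained" concrete, here is an illustrative sketch; the `ExtractedTask` dataclass and the sample conversation are assumptions for illustration, not the framework's actual data model:

```python
# Illustrative only: a possible shape for an extracted, self-contained task.
from dataclasses import dataclass, field

@dataclass
class ExtractedTask:
    task: str                   # actionable description, no pronouns left to resolve
    context_sources: list[str] = field(default_factory=list)  # external data folded in

messages = [
    {"role": "user", "content": "What's the beam current right now?"},
    {"role": "assistant", "content": "It is currently 499.7 mA."},
    {"role": "user", "content": "Plot it over the last hour."},
]

# The extractor resolves "it" against the earlier turns, so the resulting task
# can be executed without the conversation history:
current_task = ExtractedTask(
    task="Plot the storage-ring beam current over the last hour.",
    context_sources=["archiver"],
)
```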
Task classification determines which capabilities are relevant:
```python
# Each capability gets yes/no decision
active_capabilities = await classify_task(
    task=state.current_task,
    available_capabilities=registry.get_all_capabilities()
)
```
- Binary Decisions: Yes/no for each capability
- Prompt Efficiency: Only relevant capabilities loaded
- Parallel Processing: Independent classification decisions
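A minimal sketch of what binary, parallel classification could look like; `llm_yes_no` and the string-based capabilities are placeholders for illustration, not the framework's real interfaces:

```python
# Sketch: one independent yes/no decision per capability, issued concurrently.
import asyncio

async def llm_yes_no(task: str, capability: str) -> bool:
    # Placeholder for the real LLM call, which would prompt the model with the
    # task and the capability's description and parse a yes/no answer.
    return capability.replace("_", " ") in task.lower()

async def classify_task(task: str, available_capabilities: list[str]) -> list[str]:
    decisions = await asyncio.gather(
        *(llm_yes_no(task, cap) for cap in available_capabilities)
    )
    # Keep only the capabilities judged relevant; everything else stays unloaded.
    return [cap for cap, keep in zip(available_capabilities, decisions) if keep]

active = asyncio.run(
    classify_task("plot archiver data for the last hour", ["archiver_data", "pv_write"])
)
```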
The orchestrator creates complete execution plans before any capability runs:
```python
# Generate validated execution plan
execution_plan = await create_execution_plan(
    task=state.current_task,
    capabilities=state.active_capabilities
)
```
- Upfront Planning: Complete plans before execution
- Plan Validation: Prevents capability hallucination
- Deterministic Execution: Router follows predetermined steps
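As an illustration of how validation can prevent capability hallucination, the sketch below rejects any plan step that names an unregistered capability; `PlannedStep` and `validate_plan` are hypothetical names, not the framework's actual API:

```python
# Sketch: every planned step must reference a registered, active capability,
# so the router never executes a step the LLM invented.
from dataclasses import dataclass

@dataclass
class PlannedStep:
    capability: str        # which capability runs this step
    task_objective: str    # what the step should accomplish

def validate_plan(plan: list[PlannedStep], active_capabilities: set[str]) -> list[PlannedStep]:
    unknown = [step.capability for step in plan if step.capability not in active_capabilities]
    if unknown:
        # Reject the whole plan and ask the orchestrator to re-plan rather than
        # letting execution drift onto capabilities that do not exist.
        raise ValueError(f"Plan references unknown capabilities: {unknown}")
    return plan
```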
The state manager maintains conversation context and execution state:
```python
# Persistent context across conversations
StateManager.store_context(
    state, "PV_ADDRESSES", context_key, pv_data
)
```
- Conversation Persistence: Context survives restarts
- Execution Tracking: Current step and progress
- Context Isolation: Capability-specific data storage
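A minimal sketch of capability-isolated context storage; the nested dict layout and helper names mirror the `store_context` call above but are assumptions, not the framework's actual schema:

```python
# Sketch: context is stored per context type and key inside the agent state,
# so each capability reads and writes only its own slice of data.
def store_context(state: dict, context_type: str, key: str, value) -> None:
    state.setdefault("capability_context_data", {}).setdefault(context_type, {})[key] = value

def get_context(state: dict, context_type: str, key: str):
    return state.get("capability_context_data", {}).get(context_type, {}).get(key)

state: dict = {}
store_context(state, "PV_ADDRESSES", "beam_current", ["SR01:BEAM:CURRENT"])  # illustrative PV name
assert get_context(state, "PV_ADDRESSES", "beam_current") == ["SR01:BEAM:CURRENT"]
```

Because the framework checkpoints state through LangGraph, context stored this way can survive restarts and be reused in later conversation turns.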
Approval workflows provide human oversight through LangGraph interrupts:
```python
# Request human approval
if requires_approval:
    interrupt(approval_data)
    # Execution pauses for human decision
```
- Planning Approval: Review execution plans before running
- Code Approval: Human review of generated code
- Native Integration: Built on LangGraph interrupts
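A small sketch of what an approval gate built on LangGraph's native `interrupt` can look like; the node, payload, and resume value are illustrative assumptions, while `interrupt` and `Command(resume=...)` are LangGraph primitives and require a checkpointer to be configured on the graph:

```python
# Sketch: the node pauses at interrupt() and resumes with the reviewer's decision.
from langgraph.types import interrupt, Command

def plan_approval_node(state: dict) -> dict:
    decision = interrupt({
        "type": "planning_approval",
        "execution_plan": state["execution_plan"],
    })
    # Nothing below runs until a human resumes the graph with a decision payload.
    if not decision.get("approved"):
        return {"execution_plan": [], "status": "rejected"}
    return {"status": "approved"}

# After a reviewer inspects the surfaced plan, execution resumes like this:
# graph.invoke(Command(resume={"approved": True}), config=thread_config)
```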
🚀 Next Steps
Now that you understand the framework’s core concepts, explore the architectural principles that make it production-ready and scalable:
- Gateway-driven pipeline, component coordination, and the three-pillar processing architecture
- Configuration-driven component loading, decorator-based registration, and eliminating boilerplate
- StateGraph workflows, native checkpointing, interrupts, and streaming support
- Why upfront planning outperforms reactive tool calling and improves reliability