Container Deployment#
What you'll learn: How to deploy and manage containerized services using the Alpha Berkeley Framework's container management system

What You'll Learn
Key Concepts:
- Using `container_manager.py` for service deployment and orchestration
- Configuring hierarchical services (`framework.*` and `applications.*`)
- Managing Jinja2 template rendering with `docker-compose.yml.j2` files
- Understanding build directory management and source code copying
- Implementing development vs. production deployment patterns
Prerequisites: Understanding of Docker/container concepts and the Configuration System
Time Investment: 30-45 minutes for complete understanding
Overview#
The Alpha Berkeley Framework provides a container management system for deploying framework and application services. The system handles service discovery, Docker Compose template rendering, and container orchestration through Podman Compose.
Core Features:
- Hierarchical Service Discovery: Framework services (`framework.*`) and application services (`applications.{app}.*`)
- Template Rendering: Jinja2 processing of Docker Compose templates with configuration context
- Build Management: Automated build directory creation with source code and configuration copying
- Container Orchestration: Podman Compose integration for service deployment
Architecture#
The container management system supports two service categories:
- Framework Services: Core infrastructure services defined in `src/framework/config.yml`:
  - `jupyter`: Python execution environment with EPICS support
  - `open-webui`: Web interface for agent interaction
  - `pipelines`: Processing pipeline infrastructure
  - `langfuse`: Observability and tracing
- Application Services: Domain-specific services defined in `src/applications/{app}/config.yml`:
  - `mongo`: Database services
  - `neo4j`: Graph databases
  - `qdrant`: Vector databases
  - `pv_finder`: EPICS Process Variable discovery
  - `logbook`: Electronic logbook integration
Service Configuration#
Configure services in your configuration files using the hierarchical structure.
Framework Services Configuration#
Framework services are configured in `src/framework/config.yml`:
```yaml
# Framework service deployment control
deployed_services:
  - framework.jupyter
  - framework.pipelines
  - framework.open-webui

framework:
  services:
    jupyter:
      path: ./services/framework/jupyter
      containers:
        read:
          name: jupyter-read
          port_host: 8088
          port_container: 8088
        write:
          name: jupyter-write
          port_host: 8089
          port_container: 8088
      copy_src: true
      render_kernel_templates: true
    pipelines:
      path: ./services/framework/pipelines
      port_host: 9099
      port_container: 9099
      copy_src: true
      additional_dirs:
        - interfaces
```
Application Services Configuration#
Application services are configured in `src/applications/{app}/config.yml`:
```yaml
# ALS Expert service deployment control
deployed_services:
  - applications.als_expert.mongo
  - applications.als_expert.pv_finder

services:
  mongo:
    name: mongo
    path: ./services/applications/als_expert/mongo
    port_host: 27017
    port_container: 27017
    copy_src: true
  pv_finder:
    path: ./services/applications/als_expert/pv_finder
    name: pv-finder
    port_host: 8051
    port_container: 8051
    copy_src: true
```
Configuration Options:
- `path`: Directory containing the service's Docker Compose template
- `name`: Container name for the service
- `port_host` / `port_container`: Port mapping between host and container
- `copy_src`: Whether to copy source code into the build directory
- `additional_dirs`: Extra directories to copy into the build environment
- `render_kernel_templates`: Process Jupyter kernel templates (for Jupyter services)
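The options above can be sanity-checked programmatically. A minimal sketch of such a check (the `validate_service` helper is hypothetical, not part of the framework):

```python
# Hypothetical validator for one service entry; key names mirror the
# documented options, but the framework's actual validation may differ.
def validate_service(name: str, cfg: dict) -> list:
    """Return a list of problems found in one service configuration."""
    problems = []
    if "path" not in cfg:
        problems.append(f"{name}: missing required 'path'")
    # Ports must come in host/container pairs (unless defined per-container)
    if ("port_host" in cfg) != ("port_container" in cfg):
        problems.append(f"{name}: port_host and port_container must be set together")
    if not isinstance(cfg.get("copy_src", False), bool):
        problems.append(f"{name}: copy_src must be a boolean")
    return problems

# Example: the pipelines service from the framework configuration above
pipelines = {
    "path": "./services/framework/pipelines",
    "port_host": 9099,
    "port_container": 9099,
    "copy_src": True,
    "additional_dirs": ["interfaces"],
}
print(validate_service("pipelines", pipelines))  # → []
```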
Deployment Control#
Control which services are deployed using the `deployed_services` configuration. The main `config.yml` can override framework and application settings:
```yaml
# Main config.yml - override deployed services
deployed_services:
  # Framework services
  - framework.jupyter
  - framework.pipelines
  # Application services
  - applications.als_expert.mongo
  - applications.als_expert.pv_finder
```
Service Naming Patterns:
- Framework services: `framework.{service_name}` or short name `{service_name}`
- Application services: `applications.{app}.{service_name}` (full path required)
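The naming rules above can be sketched as a small parser (illustrative only; `container_manager.py` may resolve names differently):

```python
# Hypothetical helper illustrating the documented naming patterns.
def parse_service_name(entry: str):
    """Split a deployed_services entry into (category, app, service)."""
    parts = entry.split(".")
    if parts[0] == "framework" and len(parts) == 2:
        return ("framework", None, parts[1])
    if parts[0] == "applications" and len(parts) == 3:
        return ("applications", parts[1], parts[2])
    if len(parts) == 1:  # short name resolves to a framework service
        return ("framework", None, parts[0])
    raise ValueError(f"Unrecognized service entry: {entry!r}")

print(parse_service_name("framework.jupyter"))
# → ('framework', None, 'jupyter')
print(parse_service_name("applications.als_expert.mongo"))
# → ('applications', 'als_expert', 'mongo')
```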
Deployment Workflow#
The container management system supports both development and production deployment patterns.
Development Pattern#
For development and debugging, start services incrementally:
1. Configure services incrementally in `config.yml`:

   ```yaml
   deployed_services:
     - framework.pipelines  # Start with one service
   ```

2. Start in non-detached mode to monitor logs:

   ```shell
   python3 deployment/container_manager.py config.yml up
   ```

3. Add additional services after verifying each one works correctly.
Production Pattern#
For production deployment:
1. Configure all required services in `config.yml`:

   ```yaml
   deployed_services:
     - framework.jupyter
     - framework.open-webui
     - framework.pipelines
     - applications.als_expert.mongo
   ```

2. Start all services in detached mode:

   ```shell
   python3 deployment/container_manager.py config.yml up -d
   ```

3. Verify services are running:

   ```shell
   podman ps
   ```
Docker Compose Templates#
Services use Jinja2 templates for Docker Compose file generation.
Template Structure#
Templates are located at `{service_path}/docker-compose.yml.j2` and have access to the complete configuration context:
```yaml
# services/framework/jupyter/docker-compose.yml.j2
services:
  jupyter-read:
    container_name: jupyter-read
    build:
      context: ./framework/jupyter
      dockerfile: Dockerfile
    ports:
      - "{{framework.services.jupyter.containers.read.port_host}}:{{framework.services.jupyter.containers.read.port_container}}"
    volumes:
      - {{project_root}}/{{file_paths.agent_data_dir}}/{{file_paths.executed_python_scripts_dir}}:/home/jovyan/work/executed_scripts/
    environment:
      - PYTHONPATH=/jupyter/repo_src
      - HTTP_PROXY=${HTTP_PROXY}
    networks:
      - alpha-berkeley-network
```
Template Features:
- Configuration Access: Full configuration available as Jinja2 variables
- Environment Variables: Access to environment variables via `${VAR_NAME}`
- Networking: Automatic network configuration
- Volume Management: Dynamic volume mounting based on configuration
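To illustrate how the dotted configuration paths in the template resolve during rendering, here is a stdlib-only stand-in for the lookup Jinja2 performs; the framework itself uses real Jinja2 templates, and this toy renderer ignores filters, defaults, and error handling:

```python
import re

# Toy renderer: substitute {{dotted.path}} placeholders from a nested
# config dict, the way Jinja2 resolves attribute lookups.
def render(template: str, context: dict) -> str:
    def lookup(match):
        value = context
        for key in match.group(1).strip().split("."):
            value = value[key]  # walk the nested configuration
        return str(value)
    return re.sub(r"\{\{([^}]+)\}\}", lookup, template)

context = {
    "framework": {"services": {"jupyter": {"containers": {
        "read": {"port_host": 8088, "port_container": 8088}}}}},
}
line = '- "{{framework.services.jupyter.containers.read.port_host}}:{{framework.services.jupyter.containers.read.port_container}}"'
print(render(line, context))  # → - "8088:8088"
```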
Container Manager Usage#
Deploy services using the container manager script.
Basic Commands#
```shell
# Generate compose files only (for review)
python3 deployment/container_manager.py config.yml

# Start services in foreground
python3 deployment/container_manager.py config.yml up

# Start services in background
python3 deployment/container_manager.py config.yml up -d

# Stop services
python3 deployment/container_manager.py config.yml down
```
Deployment Workflow#
The container manager follows this workflow:
1. Configuration Loading: Load and merge configuration files with imports
2. Service Discovery: Process the `deployed_services` list to identify active services
3. Template Processing: Render Jinja2 templates with configuration context
4. Build Directory Setup: Create build directories and copy necessary files
5. Container Orchestration: Execute Podman Compose with generated files
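Step 1 amounts to a recursive merge where later files (the main `config.yml`) override earlier ones (framework and application configs). A sketch under that assumption; `deep_merge` is illustrative, not the framework's actual function, and the exact merge semantics (e.g. list handling) may differ:

```python
# Recursively merge two config dicts; values from `override` win.
def deep_merge(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value  # scalars and lists are replaced outright
    return merged

framework_cfg = {
    "deployed_services": ["framework.jupyter"],
    "framework": {"services": {"jupyter": {"port_host": 8088}}},
}
main_cfg = {"deployed_services": ["framework.jupyter", "framework.pipelines"]}

merged = deep_merge(framework_cfg, main_cfg)
print(merged["deployed_services"])
# → ['framework.jupyter', 'framework.pipelines']
```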
Generated Files:
```text
build/services/
├── docker-compose.yml              # Root network configuration
├── framework/
│   └── jupyter/
│       ├── docker-compose.yml      # Jupyter service
│       ├── repo_src/               # Copied source code
│       └── config.yml              # Flattened configuration
└── applications/
    └── als_expert/
        └── mongo/
            ├── docker-compose.yml  # MongoDB service
            └── repo_src/           # Copied source code
```
Container Networking#
Service Communication#
Services communicate through container networks using service names as hostnames:
- OpenWebUI to Pipelines: `http://pipelines:9099`
- Framework to Databases: `mongodb://mongo:27017`, `http://neo4j:7474`
- Host to Services: `http://localhost:<mapped_port>`
Host Access from Containers#
For containers to access services running on the host (like Ollama):
- Use `host.containers.internal` instead of `localhost`
- Example: `http://host.containers.internal:11434` for Ollama
Port Mapping#
Services expose ports to the host system:
- OpenWebUI: `8080:8080`
- Jupyter: `8088:8088` (read-only), `8089:8088` (write access)
- Pipelines: `9099:9099`
Check your service configurations for specific port mappings.
Advanced Configuration#
Environment Variables#
The container manager automatically loads environment variables from `.env`:
```shell
# .env file - Services will have access to these variables
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
```
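A minimal sketch of what such a loader does (`parse_dotenv` is hypothetical; the container manager's real loader may handle quoting and interpolation differently):

```python
# Parse KEY=VALUE lines from a .env-style file into a dict.
def parse_dotenv(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

sample = """
# .env file
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY="quoted_value"
"""
print(parse_dotenv(sample))
# → {'OPENAI_API_KEY': 'your_key_here', 'ANTHROPIC_API_KEY': 'quoted_value'}
```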
Build Directory Customization#
Generated files are placed in the `build/` directory by default. This can be configured:

```yaml
build_dir: "./custom_build"
```
Source Code Integration#
Services can be configured to include source code:
```yaml
framework:
  services:
    pipelines:
      copy_src: true  # Copies src/ to repo_src/ in container
```
Additional Directories#
Services can copy additional directories into containers:
```yaml
framework:
  services:
    jupyter:
      additional_dirs:
        - src_dir: "_agent_data"
          dest_dir: "agent_data"
        - docs  # Simple directory copy
```
Build Directory Management#
The container manager creates complete build environments for each service.
Build Process#
For each deployed service:
1. Clean Build Directory: Remove the existing build directory for a clean deployment
2. Render Templates: Process the Docker Compose template with configuration context
3. Copy Service Files: Copy all service files except templates
4. Copy Source Code: Copy the `src/` directory if `copy_src: true`
5. Copy Additional Directories: Copy directories specified in `additional_dirs`
6. Create Flattened Configuration: Generate a merged configuration file for containers
7. Process Kernel Templates: Render Jupyter kernel configurations if enabled
Source Code Handling:
- Source code is copied to `repo_src/` in the build directory
- The global `requirements.txt` is automatically copied to `repo_src/requirements.txt`
- `PYTHONPATH` is configured to include the copied source code
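The copy steps above can be sketched with the standard library (`prepare_build_dir` is illustrative; the paths and behavior are assumptions, not the framework's actual API):

```python
import shutil
import tempfile
from pathlib import Path

# Sketch of the build step for one service: clean the build directory,
# copy service files (skipping .j2 templates), then copy source code
# into repo_src/ as described above.
def prepare_build_dir(service_dir: Path, src_dir: Path, build_dir: Path,
                      copy_src: bool = True) -> Path:
    if build_dir.exists():
        shutil.rmtree(build_dir)  # clean build for a fresh deployment
    shutil.copytree(service_dir, build_dir,
                    ignore=shutil.ignore_patterns("*.j2"))  # templates are rendered, not copied
    if copy_src:
        shutil.copytree(src_dir, build_dir / "repo_src")
    return build_dir

# Demonstration with throwaway directories
root = Path(tempfile.mkdtemp())
(root / "service").mkdir()
(root / "service" / "Dockerfile").write_text("FROM python")
(root / "service" / "docker-compose.yml.j2").write_text("services: {}")
(root / "src").mkdir()
(root / "src" / "main.py").write_text("print('hi')")

out = prepare_build_dir(root / "service", root / "src", root / "build")
print(sorted(p.name for p in out.iterdir()))  # → ['Dockerfile', 'repo_src']
```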
Working Examples#
Deploy Jupyter Development Environment#
Configure and deploy Jupyter service:
```yaml
# config.yml
deployed_services:
  - framework.jupyter
```

```shell
python3 deployment/container_manager.py config.yml up -d
# Access at http://localhost:8088 (read-only) or http://localhost:8089 (write access)
```
Deploy Application Services#
Configure and deploy application stack:
```yaml
# config.yml
deployed_services:
  - applications.als_expert.mongo
  - applications.als_expert.pv_finder
  - applications.als_expert.qdrant
```

```shell
python3 deployment/container_manager.py config.yml up -d
# Services available at: MongoDB (27017), PV Finder (8051), Qdrant (6333)
```
Troubleshooting#
Common Issues#
Services fail to start:
- Check individual service logs: `podman logs <container_name>`
- Verify configuration syntax in `config.yml`
- Ensure required environment variables are set in `.env`
- Try starting services individually to isolate issues
Port conflicts:
- Check for processes using required ports: `lsof -i :8080`
- Update port mappings in service configurations
- Ensure no other containers are using the same ports
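The `lsof` check can also be done programmatically: try to bind the port and treat failure as a conflict (a quick diagnostic sketch, not part of the framework):

```python
import socket

# Returns True if something on the host already holds the port.
def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))  # bind fails with EADDRINUSE on conflict
        except OSError:
            return True
        return False

# Occupy a port, then confirm the check detects the conflict
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))        # let the OS pick a free port
busy_port = holder.getsockname()[1]
holder.listen(1)
print(port_in_use(busy_port))  # → True
holder.close()
```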
Container networking issues:
- Verify service names match configuration
- Use container network names (e.g., `pipelines`), not `localhost`
- Check firewall settings if accessing from external systems
Template rendering errors:
- Verify Jinja2 syntax in template files
- Check that all required configuration values are provided
- Review template paths in error messages
Service not found in configuration:
- Verify the service is defined in the appropriate config file
- Check service naming (framework vs. application services)
- Ensure `deployed_services` includes the service
Template file not found:
- Verify `docker-compose.yml.j2` exists in the service path
- Check that the service `path` configuration is correct
Debugging Commands#
List running containers:

```shell
podman ps
```

View container logs:

```shell
podman logs <container_name>
podman logs -f <container_name>  # Follow logs
```

Inspect container configuration:

```shell
podman inspect <container_name>
```

Network inspection:

```shell
podman network ls
podman network inspect <network_name>
```

Generate compose files without starting:

```shell
python3 deployment/container_manager.py config.yml
```

This generates files in `build/` for manual inspection.

Check for port conflicts:

```shell
lsof -i :8080                # Check specific port
netstat -tulpn | grep :8080  # Alternative method
```

Test network connectivity:

```shell
podman exec <container_name> ping <other_container>
podman exec <container_name> curl http://other_container:port
```
System Capabilities#
Current Features:
- Service discovery and template rendering
- Docker Compose orchestration
- Build directory management
- Configuration flattening

Production Considerations:
- Health monitoring and automated recovery
- Rolling deployments or blue-green deployments
- Service dependency management beyond Docker Compose
- Production monitoring and alerting
- Automated scaling or load balancing
For production deployments, consider implementing additional monitoring and management tooling.
Best Practices#
Development#
- Start with minimal service configurations
- Use non-detached mode during development
- Test services individually before deploying together
- Keep the build directory in `.gitignore`
- Use meaningful service names in logs
Production#
- Use detached mode for production deployments
- Monitor container resource usage
- Implement health checks for critical services
- Plan for service restart policies
- Back up data volumes regularly
Configuration#
- Keep sensitive data in `.env` files
- Use meaningful names for custom networks
- Document any custom template modifications
- Version control configuration files
- Test configuration changes in development first
Next Steps#
After setting up container deployment:
Configuration System - Advanced configuration patterns
Related API Reference:
- Container Management - Container management API
- Configuration System - Configuration system reference