Container Management#

Container orchestration and deployment system for managing framework and application services.

Note

For implementation guides and examples, see Container Deployment.

Core Modules#

container_manager

Container Management and Service Orchestration System.

loader

Configuration Parameter Loading and Management System.

Container Orchestration#

Container Management and Service Orchestration System.

This module provides comprehensive container orchestration capabilities for the deployment framework, handling service discovery, template rendering, build directory management, and Podman Compose integration. The system supports hierarchical service configurations with framework and application-specific services that can be independently deployed and managed.

The container manager implements a sophisticated template processing pipeline that converts Jinja2 templates into Docker Compose files, copies necessary source code and configuration files, and orchestrates multi-service deployments through Podman Compose with proper networking and dependency management.

Key Features:
  • Hierarchical service discovery (framework.service, applications.app.service)

  • Jinja2 template rendering with configuration context

  • Intelligent build directory management with selective file copying

  • Environment variable expansion and configuration flattening

  • Podman Compose orchestration with multi-file support

  • Kernel template processing for Jupyter notebook environments

Architecture:

The system supports two service categories:

  1. Framework Services: Core infrastructure services like databases, web interfaces, and development tools (jupyter, open-webui, pipelines)

  2. Application Services: Domain-specific services tied to particular applications (als_expert.mongo, als_expert.pv_finder)

Examples

Basic service deployment:

$ python container_manager.py config.yml up -d
# Deploys all services listed in deployed_services configuration

Service discovery patterns:

# Framework service (short name)
deployed_services: ["jupyter", "pipelines"]

# Framework service (full path)
deployed_services: ["framework.jupyter", "framework.pipelines"]

# Application service (full path required)
deployed_services: ["applications.als_expert.mongo"]

Template rendering workflow:

1. Load configuration with imports and merging
2. Discover services listed in deployed_services
3. Process Jinja2 templates with configuration context
4. Copy source code and additional directories as specified
5. Flatten configuration files for container consumption
6. Execute Podman Compose with generated files
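Step 6 above can be sketched as command assembly: one `podman-compose` invocation that passes every rendered compose file with its own `-f` flag. This is an illustrative sketch, not the module's actual code; the helper name `build_compose_command` is hypothetical.

```python
def build_compose_command(compose_files, command="up", detached=False):
    """Assemble a multi-file Podman Compose command line."""
    cmd = ["podman-compose"]
    for path in compose_files:
        cmd += ["-f", path]          # one -f flag per rendered compose file
    cmd.append(command)
    if detached and command == "up":
        cmd.append("-d")             # detached mode only applies to 'up'
    return cmd
```

The resulting list can be handed to `subprocess.run` to launch the deployment.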

See also

deployment.loader : Configuration loading system used by this module
configs.config.ConfigBuilder : Configuration management
find_service_config() : Service discovery implementation
render_template() : Template processing engine

deployment.container_manager.find_service_config(config, service_name)[source]#

Locate service configuration and template path for deployment.

This function implements the service discovery logic for the container management system, supporting both hierarchical service naming (full paths) and legacy short names for backward compatibility. The system searches through framework services and application-specific services to find the requested service configuration.

Service naming supports three patterns:

  1. Framework services: “framework.service_name” or just “service_name”

  2. Application services: “applications.app_name.service_name”

  3. Legacy services: “service_name” (deprecated, for backward compatibility)

The function returns both the service configuration object and the path to the Docker Compose template, enabling the caller to access service settings and initiate template rendering.

Parameters:
  • config (dict) – Configuration containing service definitions

  • service_name (str) – Service identifier (short name or full dotted path)

Returns:

Tuple containing service configuration and template path, or (None, None) if service not found

Return type:

tuple[dict, str] or tuple[None, None]

Examples

Framework service discovery:

>>> config = {'framework': {'services': {'jupyter': {'path': 'services/framework/jupyter'}}}}
>>> service_config, template_path = find_service_config(config, 'framework.jupyter')
>>> print(template_path)  # 'services/framework/jupyter/docker-compose.yml.j2'

Application service discovery:

>>> config = {'applications': {'als_expert': {'services': {'mongo': {'path': 'services/applications/als_expert/mongo'}}}}}
>>> service_config, template_path = find_service_config(config, 'applications.als_expert.mongo')
>>> print(template_path)  # 'services/applications/als_expert/mongo/docker-compose.yml.j2'

Legacy service discovery:

>>> config = {'services': {'legacy_service': {'path': 'services/legacy'}}}
>>> service_config, template_path = find_service_config(config, 'legacy_service')
>>> print(template_path)  # 'services/legacy/docker-compose.yml.j2'

Note

Legacy service support (services.* configuration) is deprecated and will be removed in future versions. Use framework.* or applications.* naming patterns for new services.
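The lookup behaviour documented above can be condensed into a minimal sketch. This mirrors the three naming patterns and the doctest examples in this section; the real implementation may differ in detail.

```python
TEMPLATE_FILENAME = "docker-compose.yml.j2"

def find_service_config(config, service_name):
    """Resolve a service name to (service_config, template_path)."""
    parts = service_name.split(".")
    if len(parts) == 3 and parts[0] == "applications":
        svc = (config.get("applications", {})
                     .get(parts[1], {})
                     .get("services", {})
                     .get(parts[2]))
    elif len(parts) == 2 and parts[0] == "framework":
        svc = config.get("framework", {}).get("services", {}).get(parts[1])
    else:
        # Short name: try framework services first, then deprecated legacy services.
        svc = (config.get("framework", {}).get("services", {}).get(service_name)
               or config.get("services", {}).get(service_name))
    if svc is None:
        return None, None
    return svc, f"{svc['path']}/{TEMPLATE_FILENAME}"
```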

See also

get_templates() : Uses this function to build template lists
setup_build_dir() : Processes discovered services for deployment

deployment.container_manager.get_templates(config)[source]#

Collect template paths for all deployed services in the configuration.

This function builds a comprehensive list of Docker Compose template paths based on the services specified in the deployed_services configuration. It processes both the root services template and individual service templates, providing the complete set of templates needed for deployment.

The function always includes the root services template (services/docker-compose.yml.j2) which defines the shared network configuration and other global service settings. Individual service templates are then discovered through the service discovery system and added to the template list.

Parameters:

config (dict) – Configuration containing deployed_services list

Returns:

List of template file paths for processing

Return type:

list[str]

Raises:

Warning – Prints a warning (no exception is raised) if deployed_services is not configured

Examples

Template collection for mixed services:

>>> config = {
...     'deployed_services': ['framework.jupyter', 'applications.als_expert.mongo'],
...     'framework': {'services': {'jupyter': {'path': 'services/framework/jupyter'}}},
...     'applications': {'als_expert': {'services': {'mongo': {'path': 'services/applications/als_expert/mongo'}}}}
... }
>>> templates = get_templates(config)
>>> print(templates)
['services/docker-compose.yml.j2',
 'services/framework/jupyter/docker-compose.yml.j2',
 'services/applications/als_expert/mongo/docker-compose.yml.j2']

Warning

If deployed_services is not configured or empty, only the root services template will be returned, which may not provide functional services.

See also

find_service_config() : Service discovery used by this function
render_template() : Processes the templates returned by this function

deployment.container_manager.render_template(template_path, config, out_dir)[source]#

Render Jinja2 template with configuration context to output directory.

This function processes Jinja2 templates using the configuration as context, generating concrete configuration files for container deployment. The system supports multiple template types including Docker Compose files and Jupyter kernel configurations, with intelligent output filename detection.

Template rendering uses the complete configuration dictionary as Jinja2 context, enabling templates to access any configuration value including environment variables, service settings, and application-specific parameters. Environment variables can be referenced directly in templates using ${VAR_NAME} syntax for deployment-specific configurations like proxy settings. The output directory is created automatically if it doesn’t exist.

Parameters:
  • template_path (str) – Path to the Jinja2 template file to render

  • config (dict) – Configuration dictionary to use as template context

  • out_dir (str) – Output directory for the rendered file

Returns:

Full path to the rendered output file

Return type:

str

Examples

Docker Compose template rendering:

>>> config = {'database': {'host': 'localhost', 'port': 5432}}
>>> output_path = render_template(
...     'services/mongo/docker-compose.yml.j2',
...     config,
...     'build/services/mongo'
... )
>>> print(output_path)  # 'build/services/mongo/docker-compose.yml'

Jupyter kernel template rendering:

>>> config = {'project_root': '/home/user/project'}
>>> output_path = render_template(
...     'services/jupyter/python3-epics/kernel.json.j2',
...     config,
...     'build/services/jupyter/python3-epics'
... )
>>> print(output_path)  # 'build/services/jupyter/python3-epics/kernel.json'

Note

The function automatically determines output filenames based on template naming conventions: .j2 extension is removed, and specific patterns like docker-compose.yml.j2 and kernel.json.j2 are recognized.
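The filename convention in the note can be sketched as a single rule: strip the trailing `.j2` suffix, so `docker-compose.yml.j2` becomes `docker-compose.yml` and `kernel.json.j2` becomes `kernel.json`. The helper name `output_filename` is hypothetical.

```python
import os

def output_filename(template_path):
    """Derive the rendered output name from a Jinja2 template path."""
    name = os.path.basename(template_path)
    # Dropping the ".j2" suffix recovers the target filename.
    return name[:-3] if name.endswith(".j2") else name
```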

See also

setup_build_dir() : Uses this function for service template processing
render_kernel_templates() : Batch processing of kernel templates

deployment.container_manager.render_kernel_templates(source_dir, config, out_dir)[source]#

Process all Jupyter kernel templates in a service directory.

This function provides batch processing for Jupyter kernel configuration templates, automatically discovering all kernel.json.j2 files within a service directory and rendering them with the current configuration context. This is particularly useful for Jupyter services that provide multiple kernel environments with different configurations.

The function recursively searches the source directory for kernel template files and processes each one, maintaining the relative directory structure in the output. This ensures that kernel configurations are placed in the correct locations for Jupyter to discover them.
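The recursive discovery described above can be sketched with `pathlib`: find every `kernel.json.j2` under the service directory and pair it with its relative output location, so the rendered files mirror the source layout. The helper name `iter_kernel_templates` is hypothetical; the real function renders each template in place of yielding it.

```python
from pathlib import Path

def iter_kernel_templates(source_dir):
    """Yield (template_path, relative_output_dir) for each kernel template."""
    root = Path(source_dir)
    for tpl in sorted(root.rglob("kernel.json.j2")):
        # The relative parent directory preserves the kernel's layout
        # (e.g. python3-epics-readonly/) in the output tree.
        yield str(tpl), str(tpl.parent.relative_to(root))
```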

Parameters:
  • source_dir (str) – Source directory to search for kernel templates

  • config (dict) – Configuration dictionary for template rendering

  • out_dir (str) – Base output directory for rendered kernel files

Examples

Kernel template processing for Jupyter service:

>>> # Source structure:
>>> # services/jupyter/
>>> #   ├── python3-epics-readonly/kernel.json.j2
>>> #   └── python3-epics-write/kernel.json.j2
>>>
>>> render_kernel_templates(
...     'services/jupyter',
...     {'project_root': '/home/user/project'},
...     'build/services/jupyter'
... )
>>> # Output structure:
>>> # build/services/jupyter/
>>> #   ├── python3-epics-readonly/kernel.json
>>> #   └── python3-epics-write/kernel.json

Note

This function is typically called automatically by setup_build_dir when a service configuration includes ‘render_kernel_templates: true’.

See also

render_template() : Core template rendering used by this function
setup_build_dir() : Calls this function for kernel template processing

deployment.container_manager.setup_build_dir(template_path, config, container_cfg)[source]#

Create complete build environment for service deployment.

This function orchestrates the complete build directory setup process for a service, including template rendering, source code copying, configuration flattening, and additional directory management. It creates a self-contained build environment that contains everything needed for container deployment.

The build process follows these steps:

  1. Create clean build directory for the service

  2. Render the Docker Compose template with configuration context

  3. Copy service-specific files (excluding templates)

  4. Copy source code if requested (copy_src: true)

  5. Copy additional directories as specified

  6. Create flattened configuration file for container use

  7. Process kernel templates if specified

Source code copying includes intelligent handling of requirements files, automatically copying global requirements.txt to the container source directory to ensure dependency management works correctly in containers.
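Steps 1 and 3 of the build process can be sketched with `shutil`: recreate a clean build directory, then copy the service's files while excluding the Jinja2 templates, which are rendered separately. The helper name `prepare_build_dir` is hypothetical.

```python
import shutil
from pathlib import Path

def prepare_build_dir(service_dir, build_dir):
    """Recreate build_dir with the service's files, templates excluded."""
    build = Path(build_dir)
    if build.exists():
        shutil.rmtree(build)  # destructive: guarantees a clean build
    shutil.copytree(
        service_dir, build,
        ignore=shutil.ignore_patterns("*.j2"),  # templates rendered separately
    )
    return build
```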

Parameters:
  • template_path (str) – Path to the service’s Docker Compose template

  • config (dict) – Complete configuration dictionary for template rendering

  • container_cfg (dict) – Service-specific configuration settings

Returns:

Path to the rendered Docker Compose file

Return type:

str

Examples

Basic service build directory setup:

>>> container_cfg = {
...     'copy_src': True,
...     'additional_dirs': ['docs', 'scripts'],
...     'render_kernel_templates': False
... }
>>> compose_path = setup_build_dir(
...     'services/framework/jupyter/docker-compose.yml.j2',
...     config,
...     container_cfg
... )
>>> print(compose_path)  # 'build/services/framework/jupyter/docker-compose.yml'

Advanced service with custom directory mapping:

>>> container_cfg = {
...     'copy_src': True,
...     'additional_dirs': [
...         'docs',  # Simple directory copy
...         {'src': 'external_data', 'dst': 'data'}  # Custom mapping
...     ],
...     'render_kernel_templates': True
... }
>>> compose_path = setup_build_dir(template_path, config, container_cfg)

Note

The function automatically handles build directory cleanup, removing existing directories to ensure clean builds. Global requirements.txt is automatically copied to container source directories when present.

Warning

This function performs destructive operations on build directories. Ensure build_dir is properly configured to avoid data loss.

See also

render_template() : Template rendering used by this function
render_kernel_templates() : Kernel template processing
configs.config.ConfigBuilder : Configuration flattening

deployment.container_manager.parse_args()[source]#

Parse command-line arguments for container management operations.

This function defines and processes the command-line interface for the container management system, supporting configuration file specification, deployment commands (up, down, clean, rebuild), and operational flags like detached mode.

The argument parser enforces logical constraints, such as requiring the ‘up’ command when using detached mode, and provides clear error messages for invalid argument combinations.
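A minimal sketch of this CLI contract using `argparse` is shown below. The command set and the detached-mode constraint are inferred from the usage examples in this documentation (which include clean and rebuild); the actual parser may differ.

```python
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Container management")
    parser.add_argument("config", help="Path to the configuration file")
    parser.add_argument("command", nargs="?",
                        choices=["up", "down", "clean", "rebuild"],
                        help="Deployment command (omit to only render files)")
    parser.add_argument("-d", "--detached", action="store_true",
                        help="Run in detached mode")
    args = parser.parse_args(argv)
    # Enforce the logical constraint: detached mode only makes sense
    # for commands that start services.
    if args.detached and args.command not in ("up", "rebuild"):
        parser.error("--detached requires the 'up' or 'rebuild' command")
    return args
```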

Returns:

Parsed command-line arguments

Return type:

argparse.Namespace

Raises:

SystemExit – If invalid argument combinations are provided

Command-line Interface:

python container_manager.py CONFIG [COMMAND] [OPTIONS]

Positional Arguments:

CONFIG: Path to the configuration file (required)
COMMAND: Deployment command - ‘up’, ‘down’, ‘clean’, or ‘rebuild’ (optional)

Options:

-d, --detached: Run in detached mode (only with ‘up’ or ‘rebuild’)

Examples

Generate compose files only:

$ python container_manager.py config.yml
# Creates build directory and compose files without deployment

Deploy services in foreground:

$ python container_manager.py config.yml up
# Deploys services and shows output

Deploy services in background:

$ python container_manager.py config.yml up -d
# Deploys services in detached mode

Stop services:

$ python container_manager.py config.yml down
# Stops and removes deployed services

Clean deployment (remove images/volumes):

$ python container_manager.py config.yml clean
# Removes containers, images, volumes, and networks

Rebuild from scratch:

$ python container_manager.py config.yml rebuild -d
# Clean + rebuild + start in detached mode

See also

main execution block : Uses parsed arguments for deployment operations

deployment.container_manager.clean_deployment(compose_files)[source]#

Clean up containers, images, volumes, and networks for a fresh deployment.

This function provides comprehensive cleanup capabilities for container deployments, removing containers, images, volumes, and networks to enable fresh rebuilds. It’s particularly useful when configuration changes require complete environment reconstruction.

Parameters:

compose_files (list[str]) – List of Docker Compose file paths for the deployment
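The cleanup can be sketched as a compose-style teardown with volume and image removal flags (`-v`, `--rmi all` in the docker-compose convention, which podman-compose mirrors). The exact flags the real function uses are not shown in this documentation, and the helper name `build_clean_command` is hypothetical.

```python
def build_clean_command(compose_files):
    """Assemble a teardown command that also removes volumes and images."""
    cmd = ["podman-compose"]
    for path in compose_files:
        cmd += ["-f", path]
    # Assumed flags: -v drops named volumes, --rmi all removes built images.
    cmd += ["down", "-v", "--rmi", "all"]
    return cmd
```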

Configuration Loading#

Configuration Parameter Loading and Management System.

This module provides a sophisticated parameter loading system that supports YAML configuration files with import directives, hierarchical parameter access, and environment variable expansion. The system is designed for flexible deployment configurations where complex service arrangements need robust parameter management.

The module implements a type-safe parameter access pattern through the Params class, which provides dot-notation access to nested configuration data while maintaining validation and error handling. Invalid parameter access returns InvalidParam objects rather than raising exceptions, enabling graceful degradation in configuration scenarios.

Key Features:
  • Recursive YAML file imports with circular dependency detection

  • Environment variable expansion in string values

  • Type-safe parameter access with validation

  • Graceful error handling for missing configuration keys

  • Deep dictionary merging for configuration composition
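The deep-merge behaviour listed above can be sketched in a few lines: nested dictionaries from an imported base file are merged key by key, with the importing file's values taking precedence. The helper name `deep_merge` is hypothetical.

```python
def deep_merge(base, override):
    """Return a new dict merging override into base, recursing into dicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)  # merge nested dicts
        else:
            merged[key] = value                           # override wins
    return merged
```

Applied to the import example below, the base file's `database.host` survives while the importing file contributes `database.name`.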

Examples

Basic parameter loading:

>>> params = load_params('config.yml')
>>> database_host = params.database.host
>>> timeout = params.services.timeout

Configuration with imports:

# base_config.yml
database:
  host: localhost
  port: 5432

# app_config.yml
import: base_config.yml
database:
  name: myapp  # Merged with base config
services:
  timeout: 30

>>> params = load_params('app_config.yml')
>>> print(params.database.host)  # 'localhost' from base
>>> print(params.database.name)  # 'myapp' from app config

Environment variable expansion:

# config.yml
project_root: ${PROJECT_ROOT}
data_dir: ${PROJECT_ROOT}/data

>>> os.environ['PROJECT_ROOT'] = '/home/user/project'
>>> params = load_params('config.yml')
>>> print(params.data_dir)  # '/home/user/project/data'

See also

Params : Main parameter container with hierarchical access
InvalidParam : Error handling for missing configuration keys
_load_yaml() : Core YAML loading with import processing
deployment.container_manager : Uses this system for service configuration

deployment.loader.load_params(file_path)[source]#

Load configuration parameters from YAML file into a Params object.

This is the main entry point for the configuration loading system. It loads a YAML configuration file (processing any import directives) and wraps the resulting data in a Params object that provides type-safe, hierarchical access to configuration values.

The function handles the complete configuration loading pipeline including import processing, environment variable expansion, and parameter object creation. The resulting Params object provides dot-notation access to nested configuration data with built-in validation and error handling.

Parameters:

file_path (str) – Path to the YAML configuration file to load

Raises:
  • FileNotFoundError – If the configuration file cannot be found

  • yaml.YAMLError – If YAML parsing fails

  • ValueError – If circular imports are detected

Returns:

Parameter container with hierarchical access to configuration data

Return type:

Params

Examples

Basic configuration loading:

# config.yml
database:
  host: localhost
  port: 5432
services:
  timeout: 30
  retry_count: 3

>>> params = load_params('config.yml')
>>> db_host = params.database.host  # 'localhost'
>>> timeout = params.services.timeout  # 30
>>> invalid = params.nonexistent.key  # InvalidParam, not exception

Environment variable expansion:

# config.yml
project_root: ${PROJECT_ROOT}
data_directory: ${PROJECT_ROOT}/data

>>> import os
>>> os.environ['PROJECT_ROOT'] = '/home/user/project'
>>> params = load_params('config.yml')
>>> print(params.data_directory)  # '/home/user/project/data'

See also

_load_yaml() : Core YAML loading implementation
Params : Return type providing parameter access
InvalidParam : Error handling for missing configuration keys
deployment.container_manager : Primary consumer of this functionality

class deployment.loader.AbstractParam(name, parent=None)[source]#

Bases: object

__init__(name, parent=None)[source]#
get_path()[source]#
copy()[source]#
is_valid()[source]#
class deployment.loader.InvalidParam(name, parent=None)[source]#

Bases: AbstractParam

Parameter object representing missing or invalid configuration data.

This class provides graceful error handling for configuration access patterns where requested parameters don’t exist. Instead of raising exceptions immediately, the system returns InvalidParam objects that maintain the access chain and provide meaningful error messages when finally used.

InvalidParam objects support continued dot-notation access, allowing code to chain parameter lookups naturally even when intermediate parameters are missing. The error is only raised when the parameter is actually used (e.g., in a boolean context or when converted to a string).

This approach enables defensive programming patterns where configuration access can be attempted optimistically, with errors handled at the point of actual use.
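A condensed sketch of this pattern is shown below; the real class also builds full path strings through its parent links for error messages, which this toy version omits.

```python
class InvalidParam:
    """Falsy placeholder that survives chained attribute access."""

    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

    def __getattr__(self, key):
        # Chained access stays invalid instead of raising immediately.
        return InvalidParam(key, parent=self)

    def __bool__(self):
        return False  # falsy, so `if param:` falls through to the default

    def is_valid(self):
        return False
```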

Parameters:
  • name (str) – Name of the missing parameter

  • parent (AbstractParam, optional) – Parent parameter object in the access chain

Examples

Graceful error handling:

>>> params = load_params('config.yml')  # Missing 'database.timeout'
>>> timeout = params.database.timeout  # Returns InvalidParam, no exception
>>> if timeout:  # Now evaluates to False
...     print(f"Timeout: {timeout}")
... else:
...     print("Using default timeout")

Error chain preservation:

>>> missing = params.nonexistent.deeply.nested.value
>>> print(missing)  # Shows path to first missing parameter
<InvalidParam: 'root.nonexistent'>

See also

Params : Valid parameter container that returns InvalidParam for missing keys
AbstractParam.get_path() : Path construction used in error messages

__init__(name, parent=None)[source]#
is_valid()[source]#

Check if this parameter is valid.

Returns:

Always False for InvalidParam objects

Return type:

bool

__getattr__(key)[source]#

Support continued dot-notation access on invalid parameters.

Allows chaining of parameter access even when intermediate parameters are missing, maintaining the error state through the access chain.

Parameters:

key (str) – Attribute name being accessed

Returns:

New InvalidParam representing the continued invalid access

Return type:

InvalidParam

Examples

Continued access on missing parameters:

>>> invalid = params.missing.parameter
>>> still_invalid = invalid.more.nested.access
>>> print(still_invalid.is_valid())  # False
__getitem__(key)[source]#

Raise error when bracket notation is used on invalid parameters.

Parameters:

key (str or int) – Key being accessed

Raises:

TypeError – Always raises with error message showing the invalid path

Note

Unlike dot notation (__getattr__), bracket notation immediately raises an error to provide clear feedback about the invalid parameter access.

__bool__()[source]#

Evaluate InvalidParam objects as False in boolean contexts.

Returns:

Always False

Return type:

bool

Examples

Boolean evaluation for error handling:

>>> param = params.possibly.missing.value
>>> if param:
...     process_value(param)
... else:
...     use_default_value()
__repr__()[source]#

Provide clear error message showing the invalid parameter path.

Traces back through the InvalidParam chain to find the first missing parameter and displays its full path for debugging.

Returns:

String representation showing the invalid parameter path

Return type:

str

Examples

Error message generation:

>>> missing = params.database.missing.timeout
>>> print(missing)
<InvalidParam: 'root.database.missing'>
class deployment.loader.Params(data, name, parent=None)[source]#

Bases: AbstractParam

Primary parameter container providing hierarchical access to configuration data.

This class wraps configuration data (dictionaries, lists, or scalar values) and provides type-safe, hierarchical access through dot notation and bracket notation. The class handles environment variable expansion, supports deep copying, and provides graceful error handling through InvalidParam objects.

Params objects automatically detect the data type (dict, list, or scalar) and provide appropriate access methods. Nested structures are recursively wrapped in Params objects, creating a complete hierarchy that maintains parent-child relationships for path tracking and error reporting.

Environment variable expansion is performed on string values using os.path.expandvars, allowing configuration files to reference environment variables with ${VAR} syntax.
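The expansion rule can be sketched as a single guard: string values pass through `os.path.expandvars` on access, while non-string values are returned unchanged. The helper name `expand_value` is hypothetical.

```python
import os

def expand_value(value):
    """Expand ${VAR} references in string values; pass other types through."""
    return os.path.expandvars(value) if isinstance(value, str) else value
```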

Parameters:
  • data (dict or list or Any) – Configuration data to wrap (dict, list, or scalar value)

  • name (str) – Name of this parameter within its parent container

  • parent (AbstractParam, optional) – Parent parameter object, None for root parameters

Examples

Dictionary access patterns:

>>> config_data = {'database': {'host': 'localhost', 'port': 5432}}
>>> params = Params(config_data, 'root')
>>> host = params.database.host  # 'localhost'
>>> port = params.database['port']  # 5432
>>> missing = params.cache.timeout  # InvalidParam, not exception

List access patterns:

>>> config_data = {'servers': ['web1', 'web2', 'api1']}
>>> params = Params(config_data, 'root')
>>> first_server = params.servers[0]  # 'web1'
>>> server_count = len(params.servers)  # 3

Environment variable expansion:

>>> config_data = {'path': '${HOME}/data', 'url': '${API_HOST}:${API_PORT}'}
>>> params = Params(config_data, 'root')
>>> # Environment variables are expanded when values are accessed
>>> data_path = params.path  # '/home/user/data' (if HOME is set)

See also

InvalidParam : Error handling for missing configuration keys
AbstractParam : Base class defining the parameter interface
load_params() : Main entry point for creating Params from YAML files

__init__(data, name, parent=None)[source]#
is_valid()[source]#
keys()[source]#
values()[source]#
items()[source]#
get(key, default=None)[source]#
__deepcopy__(memo)[source]#

Copies the data into its native format, without Param objects.

Module Constants#

deployment.container_manager.SERVICES_DIR#

Directory name for service configurations.

Type:

str

Value:

“services”

deployment.container_manager.TEMPLATE_FILENAME#

Standard filename for Docker Compose templates.

Type:

str

Value:

“docker-compose.yml.j2”

deployment.container_manager.COMPOSE_FILE_NAME#

Standard filename for rendered Docker Compose files.

Type:

str

Value:

“docker-compose.yml”

See also

Container Deployment

Complete guide to container deployment patterns

Configuration System

Framework configuration system integration