Installation & Setup#
Alpha-Berkeley - Getting Started Notes#
Prerequisites#
Install Podman
Podman is a daemonless container engine that serves as an alternative to Docker. Unlike Docker, which requires a privileged daemon running as root, Podman can run containers without a daemon and supports true rootless operation. This architecture provides several security advantages: reduced attack surface, better privilege separation, and no need for a constantly running root process. While Docker can be complex to secure in enterprise environments, Podman’s design makes it inherently more secure and easier to integrate into systems with strict security requirements. In a nutshell, if you want to make your sysadmin happy(ier!), use Podman for security reasons.
To get the latest installation instructions for Podman, visit the official Podman installation guide.
After you finish the installation, check if your Podman version is at least 5.0.0 and run the “hello-world” Podman container:
podman --version
podman run hello-world
If the “hello-world” container runs successfully and displays a welcome message, you can proceed to the next step.
Running Podman Machine (macOS/Windows only)
After a successful installation, macOS and Windows users need to initialize and start the Podman machine:
podman machine init
podman machine start
Note: Linux users can skip this step as Podman runs natively on Linux.
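To confirm the machine is up before continuing, you can list the configured machines and check that yours is shown as running:
podman machine list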
Environment Setup
Python 3.11 Requirement
This framework requires Python 3.11. Verify you have the correct version:
python3.11 --version
Virtual Environment Setup
To avoid conflicts with your system Python packages, create a virtual environment with Python 3.11:
python3.11 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
Installing Dependencies
After creating and activating the virtual environment, install the required packages:
# Upgrade pip to latest version
pip install --upgrade pip
# Install main framework requirements (includes LangGraph, AI/ML tools, scientific computing)
pip install -r requirements.txt
# Optional: Install documentation requirements (only needed if building docs)
# pip install -r docs/requirements.txt
The main requirements.txt file includes the LangGraph framework, AI/ML tooling, and scientific computing packages.
Configuration#
Update config.yml
Modify the following settings in config.yml:
Agent Data Root Path: Update agent_data_root_path to your actual cloned repository path. You can use pwd to see the current folder path.
Ollama Base URL: Set the base URL for Ollama:
- For direct host access: localhost:11434
- For container-based agents (like OpenWebUI pipelines): host.containers.internal:11434
- See Ollama Connection for OpenWebUI-specific configuration
Deployed Services: In the deployed services section, ensure the following are uncommented:
- framework-jupyter - this environment gives users the capability to edit and run the Alpha-Berkeley-generated code
- framework.open_webui - this is the entry point for the user, where you communicate interactively through OpenWebUI, a convenient web-based chat interface for LLMs
- framework.pipelines - this is the core environment
API URL: If you are using CBORG as your model provider (LBNL internal only), set the CBORG API URL to either:
- Global API URL: https://api.cborg.lbl.gov/v1
- Local API URL: https://api-local.cborg.lbl.gov/v1 (requires local network connection)
In ./config.yml, update:
api:
  providers:
    cborg:
      base_url: https://api-local.cborg.lbl.gov/v1
For External Users (Non-LBNL): If you don’t have access to CBORG, you’ll need to configure alternative model providers in config.yml and src/framework/config.yml. Update the provider fields under the models section to use providers like openai, anthropic, ollama, or others you have access to. Ensure the corresponding API keys are set in your .env file.
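As an optional sanity check before starting any services, you can grep config.yml for the settings described above and compare the data root path against your current directory (the key names are taken from this section; adjust if your file differs):
grep -n "agent_data_root_path" config.yml
grep -n "base_url" config.yml
pwd   # should match the agent_data_root_path value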
Environment Variables
Create a .env file with API keys:
cp env.example .env
Edit the .env file and provide the API keys for the model providers you are using.
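A quick optional check that the file exists and that you have actually filled in values (the specific variable names come from env.example, so they are not repeated here):
test -f .env && echo ".env present"
grep -v '^#' .env | grep -v '^$'   # show the non-comment, non-empty lines you have set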
Documentation#
Compile Documentation (Optional)
If you want to build and serve the documentation locally:
# Install documentation dependencies
pip install -r docs/requirements.txt
# Build and serve documentation
cd docs/
python launch_docs.py
Once running, you can view the documentation at http://localhost:8082
Building and Running#
Once you have installed everything (and, optionally, compiled the documentation), you can execute the build and run script. This will download all the necessary container images, run the services as Podman containers, and secure the communication between them.
Start Services
The framework uses a container manager to orchestrate all services. For detailed information about all container management options, see Container Deployment.
For initial setup and debugging, start services one by one in non-detached mode:
- Comment out all services except one in your config.yml under deployed_services
- Start the first service:
python3 ./deployment/container_manager.py config.yml up
- Monitor the logs to ensure it starts correctly
- Once stable, stop with Ctrl+C and uncomment the next service
- Repeat until all services are working
This approach helps identify issues early and ensures each service is properly configured before proceeding.
Once all services are tested individually, start everything together in detached mode:
python3 ./deployment/container_manager.py config.yml up -d
This runs all services in the background, suitable for production deployments where you don’t need to monitor individual service logs.
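Even in detached mode you can still follow the logs of an individual service, for example:
podman ps --format "{{.Names}}"   # list the running container names
podman logs -f <container_name>   # stream logs for one service (Ctrl+C stops following, not the container)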
Verify Services are Running
Check that services are running properly:
podman ps
Access OpenWebUI
Once services are running, access the web interface at:
OpenWebUI: http://localhost:8080
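If the page does not load in your browser, a quick reachability check from the host helps narrow the problem down:
curl -fsS http://localhost:8080 > /dev/null && echo "OpenWebUI is reachable"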
OpenWebUI Configuration#
OpenWebUI is a feature-rich, self-hosted web interface for Large Language Models that provides a ChatGPT-like experience with extensive customization options.
Ollama Connection:
For Ollama running on localhost, use http://host.containers.internal:11434 instead of http://localhost:11434, because Podman containers cannot access the host’s localhost directly. This should match the Ollama base URL setting in your config.yml (see the Configuration section above).
Once the correct URL is configured and Ollama is serving, OpenWebUI will automatically discover all models currently available in your Ollama installation.
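To double-check that containers can actually reach Ollama on the host, you can query the Ollama API from a throwaway container (this assumes Ollama is listening on port 11434; curlimages/curl is simply a small image that ships with curl):
podman run --rm curlimages/curl -fsS http://host.containers.internal:11434/api/tags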
Pipeline Connection:
The Alpha-Berkeley framework provides a pipeline connection to the OpenWebUI service.
Understanding Pipelines
OpenWebUI Pipelines are a powerful extensibility system that allows you to customize and extend OpenWebUI’s functionality. Think of pipelines as plugins that can:
Filter: Process user messages before they reach the LLM and modify responses after they return
Pipe: Create custom “models” that integrate external APIs, build workflows, or implement RAG systems
Integrate: Connect with external services, databases, or specialized AI providers
Pipelines appear as models with an “External” designation in your model selector and enable advanced functionality like real-time data retrieval, custom processing workflows, and integration with external AI services.
Go to Admin Panel → Settings (upper panel) → Connections (left panel)
Click the (+) button in Manage OpenAI API Connections
Configure the pipeline connection with these details:
- URL: http://pipelines:9099 (if using default configuration)
- API Key: Found in services/framework/pipelines/docker-compose.yml.j2 under PIPELINES_API_KEY (default 0p3n-w3bu!)
Note: The URL uses pipelines:9099 instead of localhost:9099 because OpenWebUI runs inside a container and communicates with the pipelines service through the container network.
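Optionally, you can confirm that the pipelines service accepts the configured API key. The sketch below assumes port 9099 is also published to the host by the default compose template and that the service exposes an OpenAI-compatible /v1/models endpoint; if the port is not published, run the same request from inside the OpenWebUI container instead:
curl -fsS -H 'Authorization: Bearer 0p3n-w3bu!' http://localhost:9099/v1/models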
Additional OpenWebUI Configuration:
For optimal performance and user experience, consider these additional configuration settings:
OpenWebUI automatically generates titles and tags for conversations, which can interfere with your main agent’s processing. It’s recommended to use a dedicated local model for this:
Go to Admin Panel → Settings → Interface
Find Task Model setting
Change from Current Model to any local Ollama model (e.g., mistral:7b, llama3:8b)
This prevents title generation from consuming your main agent’s resources
Deactivating Unused Models:
Deactivate unused (Ollama-)models in Admin Panel → Settings → Models to reduce clutter
This helps keep your model selection interface clean and focused on the models you actually use
You can always reactivate models later if needed
Adding Custom Function Buttons:
OpenWebUI allows you to add custom function buttons to enhance the user interface. For comprehensive information about functions, see the official OpenWebUI functions documentation.
Installing Functions:
Navigate to Admin Panel → Functions
Add a function using the plus sign (UI details may vary between OpenWebUI versions)
Copy and paste function code from our repository’s pre-built functions
Available Functions in Repository:
The framework includes several pre-built functions located in services/framework/open-webui/functions/:
- execution_history_button.py - View and manage execution history
- agent_context_button.py - Access agent context information
- memory_button.py - Memory management functionality
- execution_plan_editor.py - Edit and manage execution plans
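To confirm these files are present in your clone before copying them into OpenWebUI:
ls services/framework/open-webui/functions/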
Activation Requirements:
After adding a function:
Enable the function - Activate it in the functions interface
Enable globally - Use additional options to enable the function globally
Refresh the page - The button should appear on your OpenWebUI interface after refresh
These buttons provide quick access to advanced agent capabilities and debugging tools.
Customizing Default Prompt Suggestions:
OpenWebUI provides default prompt suggestions that you can customize for your specific use case:
Accessing Default Prompts:
Go to Admin Panel → Settings → Interface
Scroll down to find Default Prompt Suggestions section
Here you can see the built-in OpenWebUI prompt suggestions
Customizing Prompts:
Remove Default Prompts: Clear the existing default prompts if they don’t fit your workflow
Add Custom Prompts: Replace them with prompts tailored to your agent’s capabilities
Use Cases:
Production: Set prompts that guide users toward your agent’s core functionalities
R&D Testing: Create prompts that help test specific features or edge cases
Domain-Specific: Add prompts relevant to your application domain (e.g., ALS operations, data analysis)
Example Custom Prompts:
“Analyze the recent beam performance data from the storage ring”
“Find PV addresses related to beam position monitors”
“Generate a summary of today’s logbook entries”
“Help me troubleshoot insertion device issues”
Benefits:
Guides users toward productive interactions with your agent
Reduces cognitive load for new users
Enables consistent testing scenarios during development
Improves user adoption by showcasing agent capabilities
Troubleshooting#
Common Issues:
- If you encounter connection issues with Ollama, ensure you’re using host.containers.internal instead of localhost when connecting from containers
- Verify that all required services are uncommented in config.yml
- Check that API keys are properly set in the .env file
- Ensure the Podman machine is running before starting services (macOS/Windows)
- If containers fail to start, check logs with: podman logs <container_name>
Verification Steps:
- Check Python version: python --version (should be 3.11.x)
- Check Podman version: podman --version (should be 5.0.0+)
- Verify the virtual environment is active (you should see (venv) in your prompt)
- Test core framework imports: python -c "import langgraph; print('LangGraph installed successfully')"
- Test container connectivity: podman run --rm alpine ping -c 1 host.containers.internal
- Check service status: podman ps
Common Installation Issues:
- Python version mismatch: Ensure you’re using Python 3.11 with python3.11 -m venv venv
- Package conflicts: If you get dependency conflicts, try creating a fresh virtual environment
- Missing dependencies: The main requirements.txt should install everything needed; avoid mixing with system packages