framework.models.get_model
- framework.models.get_model(provider=None, model_config=None, model_id=None, api_key=None, base_url=None, timeout=None, max_tokens=100000)[source]
Create a configured LLM model instance for structured generation with PydanticAI.
This factory function creates and configures LLM model instances optimized for structured generation workflows using PydanticAI agents. It handles provider-specific initialization, credential validation, HTTP client configuration, and proxy setup automatically based on environment variables and configuration files.
The function supports flexible configuration through multiple approaches:

- Direct parameter specification for programmatic use
- Model configuration dictionaries from YAML files
- Automatic credential loading from the configuration system (see the sketch below)
- Environment-based HTTP proxy detection and configuration
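For instance, when an API key has already been registered with the configuration system, only the provider and model need to be named. A minimal sketch, assuming Anthropic credentials are available through the config system:

>>> from framework.models import get_model
>>> # api_key is omitted here and auto-loaded from the configuration system
>>> model = get_model(provider="anthropic", model_id="claude-3-sonnet-20240229")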
Provider-specific behavior (a brief sketch follows the list):

- Anthropic: requires API key and model ID; supports HTTP proxy
- Google: requires API key and model ID; supports HTTP proxy
- OpenAI: requires API key and model ID; supports HTTP proxy and custom base URLs
- Ollama: requires model ID and base URL; no API key needed; no proxy support
- CBORG: requires API key, model ID, and base URL; supports HTTP proxy
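As a sketch of these per-provider requirements, a Google model needs only an API key and model ID (the model identifier below is illustrative, not a confirmed default):

>>> model = get_model(
...     provider="google",
...     model_id="gemini-1.5-flash",  # illustrative model ID
...     api_key="your-google-api-key"
... )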
- Parameters:
provider (str, optional) – AI provider name (‘anthropic’, ‘google’, ‘openai’, ‘ollama’, ‘cborg’)
model_config (dict, optional) – Configuration dictionary with provider, model_id, and other settings
model_id (str, optional) – Specific model identifier recognized by the provider
api_key (str, optional) – API authentication key, auto-loaded from config if not provided
base_url (str, optional) – Custom API endpoint URL, required for Ollama and CBORG
timeout (float, optional) – Request timeout in seconds, defaults to provider configuration
max_tokens (int) – Maximum tokens for generation, defaults to 100000
- Raises:
ValueError – If a required provider, model_id, api_key, or base_url is missing
ValueError – If provider is not supported
- Returns:
Configured model instance ready for PydanticAI agent integration
- Return type:
Union[OpenAIModel, AnthropicModel, GeminiModel]
Note
HTTP proxy configuration is automatically detected from the HTTP_PROXY environment variable for supported providers. Timeout and connection pooling are managed through shared HTTP clients when proxies are enabled.
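For example, routing a supported provider through a proxy should only require that HTTP_PROXY is set before the model is created. A minimal sketch, assuming the variable is read at creation time (the proxy URL and model ID are hypothetical):

>>> import os
>>> os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"  # hypothetical proxy
>>> model = get_model(
...     provider="openai",
...     model_id="gpt-4o",  # illustrative model ID
...     api_key="your-api-key"
... )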
Warning
API keys and base URLs are validated before model creation. Ensure proper configuration is available through the config system or direct parameter specification.
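Because validation happens before model creation, missing configuration surfaces as a ValueError immediately rather than on the first request. A minimal sketch:

>>> try:
...     model = get_model(provider="ollama", model_id="llama3.1:8b")  # base_url missing
... except ValueError as exc:
...     print(f"Model configuration error: {exc}")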
Examples
Basic model creation with direct parameters:
>>> from framework.models import get_model
>>> model = get_model(
...     provider="anthropic",
...     model_id="claude-3-sonnet-20240229",
...     api_key="your-api-key"
... )
>>> # Use with PydanticAI Agent
>>> agent = Agent(model=model, output_type=YourModel)
Using configuration dictionary from YAML:
>>> model_config = {
...     "provider": "cborg",
...     "model_id": "anthropic/claude-sonnet",
...     "max_tokens": 4096,
...     "timeout": 30.0
... }
>>> model = get_model(model_config=model_config)
Ollama local model setup:
>>> model = get_model(
...     provider="ollama",
...     model_id="llama3.1:8b",
...     base_url="http://localhost:11434"
... )
See also
get_chat_completion()
: Direct chat completion without structured output
configs.config.get_provider_config()
: Provider configuration loading
pydantic_ai.Agent
: PydanticAI agent that uses these models
Convention over Configuration: Configuration-Driven Registry Patterns
: Complete model setup guide