# Deployments
Launch, monitor, and manage model endpoints for real-time inference.
A deployment hosts a model (base or fine-tuned) on dedicated compute infrastructure, making it available for API requests, agent usage, or application integration.
## How deployments work
Deployments provision model endpoints with specified compute resources. Once active, deployed models serve inference requests through the SeekrFlow API. Deployments can be paused to stop serving traffic or deleted to free resources.
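As a minimal sketch of requesting a deployment, assuming a REST API with a `POST /deployments` route and bearer-token authentication (the base URL, route, and field names here are illustrative, not taken from SeekrFlow's published reference):

```python
import os
import requests

# Illustrative base URL and auth; check the SeekrFlow API reference
# for the real host, routes, and authentication scheme.
BASE_URL = "https://api.seekrflow.example/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['SEEKR_API_KEY']}"}

# Ask the platform to provision an endpoint for a model.
resp = requests.post(
    f"{BASE_URL}/deployments",
    headers=HEADERS,
    json={"name": "demo-endpoint", "model": "my-fine-tuned-model"},
    timeout=30,
)
resp.raise_for_status()
deployment = resp.json()
print(deployment["id"], deployment["status"])  # e.g. "Pending" while provisioning
```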
## Deployment configuration
When creating a deployment, you configure the following (a sample payload follows this list):
- Model selection – Choose a base model or fine-tuned model checkpoint
- Compute resources – Specify instance count and hardware allocation
- Endpoint details – Define deployment name and description for identification
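To make those three areas concrete, a hypothetical request body might group its fields like this (every key is illustrative; consult the API reference for the actual schema):

```python
deployment_config = {
    # Model selection: a base model or a fine-tuned checkpoint
    "model": "my-fine-tuned-checkpoint",
    # Compute resources: instance count and hardware allocation
    "instance_count": 2,
    "hardware": "gpu-small",
    # Endpoint details: name and description for identification
    "name": "support-assistant",
    "description": "Handles customer support queries",
}
```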
## Deployment status
Each deployment has a status indicating its current state:
| Status | Description |
|---|---|
| Pending | Deployment requested, provisioning infrastructure |
| Active | Serving inference traffic |
| Inactive | Paused, not serving requests |
| Failed | Error during startup or runtime |
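Since provisioning takes time, clients typically poll a deployment until it leaves Pending. A minimal sketch, reusing the same hypothetical routes and credentials as above:

```python
import os
import time
import requests

BASE_URL = "https://api.seekrflow.example/v1"  # illustrative, as above
HEADERS = {"Authorization": f"Bearer {os.environ['SEEKR_API_KEY']}"}

def wait_until_active(deployment_id: str, interval: float = 10.0) -> None:
    """Poll a deployment until it is Active; raise if it has Failed."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/deployments/{deployment_id}",
            headers=HEADERS, timeout=30,
        )
        resp.raise_for_status()
        status = resp.json()["status"]
        if status == "Active":
            return
        if status == "Failed":
            raise RuntimeError("deployment failed during startup")
        time.sleep(interval)  # still Pending (or Inactive); check again shortly
```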
## Using deployed models
Active deployments can be accessed in two ways:
- Agent integration – Reference deployed models in agent configurations to give agents access to specific model capabilities
- API endpoints – Make direct inference calls via deployment endpoints to integrate models into custom applications or workflows, as sketched below
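As a sketch of a direct inference call, assuming the deployment exposes an OpenAI-style chat completions route (a common convention for hosted model endpoints; confirm the actual path and payload shape in SeekrFlow's API reference):

```python
import os
import requests

BASE_URL = "https://api.seekrflow.example/v1"  # illustrative
HEADERS = {"Authorization": f"Bearer {os.environ['SEEKR_API_KEY']}"}

# Hypothetical chat-completions-style route served by an Active deployment.
resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers=HEADERS,
    json={
        "model": "support-assistant",  # the deployed endpoint's model name
        "messages": [{"role": "user", "content": "Summarize my last ticket."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```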
## Deployment management
Deployments support the following lifecycle operations (sketched in code after this list):
- Pause – Stop serving traffic while preserving the endpoint
- Resume – Reactivate an inactive deployment
- Delete – Remove the deployment and free allocated resources
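A sketch of those lifecycle calls, again with hypothetical routes (`POST .../pause`, `POST .../resume`, `DELETE /deployments/{id}`):

```python
import os
import requests

BASE_URL = "https://api.seekrflow.example/v1"  # illustrative
HEADERS = {"Authorization": f"Bearer {os.environ['SEEKR_API_KEY']}"}
deployment_id = "dep_123"  # hypothetical ID returned at creation time

# Pause: stop serving traffic while keeping the endpoint definition.
requests.post(f"{BASE_URL}/deployments/{deployment_id}/pause",
              headers=HEADERS, timeout=30).raise_for_status()

# Resume: reactivate an Inactive deployment.
requests.post(f"{BASE_URL}/deployments/{deployment_id}/resume",
              headers=HEADERS, timeout=30).raise_for_status()

# Delete: remove the deployment and free its allocated compute.
requests.delete(f"{BASE_URL}/deployments/{deployment_id}",
                headers=HEADERS, timeout=30).raise_for_status()
```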
## Monitoring
Track deployment performance through the following (a retrieval sketch follows the list):
- Token usage metrics (input and output)
- Active hours and uptime
- Event timeline showing deployment lifecycle events
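A sketch of pulling those metrics, assuming hypothetical `/metrics` and `/events` sub-routes (all paths and field names are illustrative):

```python
import os
import requests

BASE_URL = "https://api.seekrflow.example/v1"  # illustrative
HEADERS = {"Authorization": f"Bearer {os.environ['SEEKR_API_KEY']}"}
deployment_id = "dep_123"  # hypothetical

# Token usage and uptime (field names are illustrative).
metrics = requests.get(f"{BASE_URL}/deployments/{deployment_id}/metrics",
                       headers=HEADERS, timeout=30).json()
print(metrics.get("input_tokens"), metrics.get("output_tokens"),
      metrics.get("active_hours"))

# Lifecycle event timeline.
events = requests.get(f"{BASE_URL}/deployments/{deployment_id}/events",
                      headers=HEADERS, timeout=30).json()
for event in events:
    print(event)
```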
