Release Notes | December 2025
December 31st, 2025 by Brandi Hopkins
LoRA fine-tuning
Feature | Seekr-managed instance
Supported on
UI
API
SDK
SeekrFlow now supports low-rank adaptation (LoRA) for parameter-efficient fine-tuning. LoRA trains small adapter layers while keeping the base model frozen, enabling faster training cycles and reduced computational costs compared to full fine-tuning.
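As a rough sketch of the idea (standard LoRA mechanics, not SeekrFlow internals): instead of updating a large frozen weight matrix W, LoRA learns two small matrices B and A whose product, scaled by alpha / rank, is added to W. The dimensions below are illustrative.

```python
import numpy as np

d, k = 1024, 1024   # dimensions of a frozen base weight matrix W (illustrative)
r = 16              # LoRA rank, much smaller than d and k
alpha = 32          # scaling factor applied to the adapter update

W = np.zeros((d, k))                # frozen base weights (never updated)
A = np.random.randn(r, k) * 0.01    # trainable adapter, r x k
B = np.zeros((d, r))                # trainable adapter, d x r (zero-init, so the
                                    # adapter starts as a no-op)

# Effective weight used in the forward pass: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * (B @ A)

# Only A and B are trained, which is why memory and compute drop so sharply.
base_params = W.size
adapter_params = A.size + B.size
print(f"trainable parameters: {adapter_params} of {base_params}")
```

With rank 16 on a 1024 x 1024 matrix, the trainable adapter is under 1% the size of the base weights, which is where the reduced GPU memory requirement comes from.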
Capabilities:
- Fine-tune models using LoRA with SFT and GRPO workflows
- Lower GPU memory requirements enable fine-tuning of larger models
- Reduced risk of catastrophic forgetting
- Configure via the lora_config parameter in TrainingConfig (includes rank, alpha, and advanced fields)
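A minimal sketch of what a lora_config block inside TrainingConfig might look like. The field names rank and alpha come from the release notes; the surrounding structure and the training_method value are assumptions for illustration, not the actual SeekrFlow SDK schema.

```python
# Hypothetical shape of a TrainingConfig payload with LoRA enabled.
# "rank" and "alpha" are named in the release notes; everything else
# here is illustrative and may differ from the real SDK.
training_config = {
    "training_method": "sft",   # LoRA is supported with SFT and GRPO workflows
    "lora_config": {
        "rank": 16,    # dimension of the low-rank adapter matrices
        "alpha": 32,   # scaling factor applied to the adapter update
    },
}

def validate_lora_config(cfg: dict) -> bool:
    """Basic sanity checks on an illustrative lora_config block."""
    lora = cfg.get("lora_config")
    if not isinstance(lora, dict):
        return False
    rank, alpha = lora.get("rank"), lora.get("alpha")
    return isinstance(rank, int) and rank > 0 and isinstance(alpha, (int, float)) and alpha > 0

print(validate_lora_config(training_config))
```

Consult the SDK reference for the authoritative list of advanced fields before relying on any of these names.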
User interface parity with Seekr-managed instance
Improvement | Self-hosted instance
Self-hosted instances now include all features and improvements released to Seekr-managed instances through September 2025. This release brings self-hosted deployments to feature parity with the Seekr-managed interface, including SeekrFlow Explainability, Agent Observability Platform, Docstring Validation, AI Ready Data Engine quality enhancements, and SeekrFlow Helm Chart v2.0.
