Release Notes | September 2025

🟢

New Feature

SeekrFlow Explainability

SeekrFlow now brings transparency to model outputs by surfacing the five most influential Q&A pairs that shaped each response. Generated by the AI-Ready Data Engine, these pairs can be traced directly without relying on file downloads or custom scripts. This gives users immediate visibility into the training data behind their fine-tuned models, supporting validation, debugging, and trust in model behavior. The feature is available in the SDK through get_influential_finetuning_data() and in Model Chat within the SeekrFlow UI.
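
As a rough sketch of what this feature surfaces, the top-five view can be pictured as ranking generated Q&A pairs by an influence score. The data structure and helper below are illustrative assumptions, not the SeekrFlow SDK itself:

```python
# Conceptual sketch only: the real data comes back from the SDK's
# get_influential_finetuning_data(); the shape and scoring here are
# assumptions for illustration.
from dataclasses import dataclass

@dataclass
class InfluentialPair:
    question: str
    answer: str
    influence: float  # higher = more influence on the model's response

def top_influential_pairs(pairs, k=5):
    """Return the k pairs with the highest influence scores,
    mirroring the top-five view shown in Model Chat."""
    return sorted(pairs, key=lambda p: p.influence, reverse=True)[:k]

# Example: rank a small set of generated Q&A pairs.
pairs = [
    InfluentialPair("What is CUI?", "Controlled Unclassified Information is...", 0.42),
    InfluentialPair("Define IL5.", "Impact Level 5 is a DoD data category...", 0.87),
    InfluentialPair("What is vLLM?", "A high-throughput inference engine...", 0.65),
]
top = top_influential_pairs(pairs, k=2)
```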

View documentation here

Agent Observability Platform

SeekrFlow now includes an observability platform that gives developers full visibility into agent execution. Through detailed traces, spans, and performance metrics, users can see how agents process inputs, make decisions, and produce outputs—step by step.

Key Capabilities

  • Automatic trace and span collection for all agent operations.
  • Capture of tool calls, LLM inferences, and execution flows.
  • SDK access for querying and analyzing agent runs.
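
To picture what trace and span collection records, here is a minimal, self-contained tracer sketch. It illustrates the concept of nested spans under one trace, and is not the SeekrFlow SDK, whose client and method names may differ:

```python
# Minimal sketch of trace/span collection: nested agent steps
# (tool calls, LLM inferences) recorded as timed spans.
import time
from contextlib import contextmanager

class Tracer:
    def __init__(self):
        self.spans = []   # completed spans: (name, duration_s, depth)
        self._depth = 0

    @contextmanager
    def span(self, name):
        start = time.perf_counter()
        self._depth += 1
        try:
            yield
        finally:
            self._depth -= 1
            self.spans.append((name, time.perf_counter() - start, self._depth))

tracer = Tracer()
with tracer.span("agent_run"):
    with tracer.span("tool_call:web_search"):
        pass  # agent invokes a tool here
    with tracer.span("llm_inference"):
        pass  # agent calls the model here
```

Spans complete innermost-first, so the two child steps appear before the enclosing `agent_run` span, with depth recording the nesting.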

This update removes the “black box” from agent behavior, enabling validation, auditability, and performance optimization. With complete observability, teams can deploy Seekr agents in production with confidence and transparency.

Docstring Validation for Custom Functions

This update enhances the performance and reliability of SeekrFlow Agent Custom Tools by introducing docstring validation, a system that automatically checks function documentation before deployment.

Key Capabilities

  • Validates Python function docstrings against the Google format during create() or update().
  • Ensures all parameters, types, and return values are correctly defined.
  • Consolidates documentation into a single source of truth, removing duplicate or conflicting descriptions.

Docstring validation improves both development clarity and runtime performance. Clear, validated function definitions help agents determine when to invoke tools, accurately map parameters, and reduce execution errors—resulting in faster, more predictable performance and a smoother user experience.
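
For illustration, here is a Google-style docstring that would satisfy this kind of check, alongside a simplified stand-in validator that compares the Args: section against the function signature. The real validation runs during create()/update() and is more thorough; the helper names here are hypothetical:

```python
# Sketch of the kind of check docstring validation performs: confirm
# every signature parameter is documented in a Google-style Args: section.
import inspect
import re

def get_weather(city: str, units: str = "metric") -> str:
    """Fetch the current weather for a city.

    Args:
        city: Name of the city to look up.
        units: Measurement system, "metric" or "imperial".

    Returns:
        A one-line weather summary.
    """
    return f"Weather for {city} in {units} units"

def documented_args(func):
    """Extract parameter names from the Args: section of a Google-style docstring."""
    doc = inspect.getdoc(func) or ""
    match = re.search(r"Args:\n((?:[ \t]+.+\n?)+)", doc)
    if not match:
        return set()
    return {re.match(r"\s*(\w+)", line).group(1)
            for line in match.group(1).splitlines() if ":" in line}

def docstring_matches_signature(func):
    """True when the documented parameters exactly match the signature."""
    return documented_args(func) == set(inspect.signature(func).parameters)
```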

View documentation here

SeekrFlow Now Available in AWS GovCloud (US)

SeekrFlow™ is now available in AWS GovCloud (US), joining its existing AWS Marketplace listing to give federal agencies and regulated customers one-click, subscription-based access to agentic AI in a secure, sovereign cloud environment.

This deployment enables organizations, including the U.S. Department of War, to run AI workflows that protect Controlled Unclassified Information (CUI) and other high-value assets while accelerating modernization and mission-critical outcomes.

Federal customers can now procure SeekrFlow faster and deploy it for FedRAMP High and DoW SRG IL5 workloads, all while leveraging existing AWS Enterprise Discount Program (EDP) credits. This simplifies procurement, optimizes costs, and allows agencies to adopt AI with full confidence in compliance, data protection, and security.

Learn more here

🔵

Improvements & Bug Fixes

SeekrFlow Helm Chart v2.0

This release introduces major upgrades to the SeekrFlow Helm Chart, bringing unified MLOps deployment, expanded environment support, and stronger compliance for enterprise and government users.

Supported Environments

  • AWS Commercial
  • AWS GovCloud

Key Capabilities

  • Unified serving, ingestion, and training in one deployment.
  • Built-in tools for Web Search, File/Vector Search, and Code Execution.
  • vLLM-powered model serving for faster inference and better GPU efficiency.
  • FIPS-enabled containers, hardened security baselines, and upgraded secrets management.
  • Simplified configuration via refactored values.yaml and improved observability across all components.
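
As a purely hypothetical sketch of what a simplified values.yaml might cover, with invented keys (consult the chart's actual values.yaml for the real schema):

```yaml
# Hypothetical fragment; key names and defaults are illustrative only.
serving:
  engine: vllm          # vLLM-powered model serving
tools:
  webSearch:
    enabled: true
  vectorSearch:
    enabled: true
  codeExecution:
    enabled: false
security:
  fips: true            # FIPS-enabled containers
```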

This update strengthens deployment flexibility and compliance posture, especially for customers operating in regulated and government environments, while simplifying operations and improving performance across the entire MLOps stack.

AI-Ready Data Engine Quality Enhancements

We’ve made several improvements to question-generation and evaluation to increase data diversity and overall fine-tuning quality.

Key Enhancements

  • Expanded question coverage by generating from summaries, extracted facts, and source text.
  • Introduced question evolution, producing multiple forms of similar questions for richer variation.
  • Implemented explicit instruction-based evaluation for each generated pair.
  • Shifted from a fixed number of questions per node to dynamic generation determined at runtime.

These updates produce more accurate and contextually varied Q&A pairs, improving both data quality and model alignment for fine-tuning workflows.
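
To make the last two bullets concrete, here is a toy sketch, not Seekr's implementation, of question evolution and a runtime-determined question budget per node:

```python
# Toy illustration of two ideas from this update: "question evolution"
# (multiple forms of a similar question) and a dynamic per-node question
# count that scales with how much source text a node carries.

def evolve_question(question):
    """Produce simple rephrased variants of a seed question."""
    base = question.rstrip("?")
    return [
        question,
        f"Can you explain: {base.lower()}?",
        f"In one sentence, {base.lower()}?",
    ]

def question_budget(node_text, per_chars=200, minimum=1, maximum=10):
    """Decide at runtime how many questions a node deserves,
    rather than using a fixed number per node."""
    return max(minimum, min(maximum, len(node_text) // per_chars))
```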