🟢

New Feature

SeekrFlow Explainability

SeekrFlow now brings transparency to model outputs by surfacing the five most influential Q&A pairs that shaped each response. Generated by the AI-Ready Data Engine, these pairs can be traced directly without relying on file downloads or custom scripts. This gives users immediate visibility into the training data behind their fine-tuned models, supporting validation, debugging, and trust in model behavior. The feature is available in the SDK through get_influential_finetuning_data() and in Model Chat within the SeekrFlow UI.
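The selection step itself is simple to picture: rank candidate pairs by an influence score and keep the top five. A toy illustration of that ranking step (the pair layout and the influence_score field are invented for this sketch, not the SDK's actual schema):

```python
def top_influential_pairs(pairs, k=5):
    """Return the k Q&A pairs with the highest influence score, best first."""
    return sorted(pairs, key=lambda p: p["influence_score"], reverse=True)[:k]

# Illustrative data only; real scores come from the AI-Ready Data Engine.
pairs = [
    {"question": f"Q{i}", "answer": f"A{i}", "influence_score": s}
    for i, s in enumerate([0.12, 0.91, 0.34, 0.88, 0.05, 0.77, 0.63])
]
print([p["question"] for p in top_influential_pairs(pairs)])
# → ['Q1', 'Q3', 'Q5', 'Q6', 'Q2']
```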

View documentation here

Agent Observability Platform

SeekrFlow now includes an observability platform that gives developers full visibility into agent execution. Through detailed traces, spans, and performance metrics, users can see how agents process inputs, make decisions, and produce outputs—step by step.

Key Capabilities

  • Automatic trace and span collection for all agent operations.
  • Capture of tool calls, LLM inferences, and execution flows.
  • SDK access for querying and analyzing agent runs.

This update removes the “black box” from agent behavior, enabling validation, auditability, and performance optimization. With complete observability, teams can deploy Seekr agents in production with confidence and transparency.
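To make traces and spans concrete, here is a minimal, self-contained tracer in the spirit of this feature (the span fields are illustrative and do not mirror SeekrFlow's actual trace schema):

```python
import time
from contextlib import contextmanager

TRACE = []  # collected spans for one agent run

@contextmanager
def span(name, **attributes):
    """Record a named span with start/end timestamps and free-form attributes."""
    record = {"name": name, "attributes": attributes, "start": time.time()}
    TRACE.append(record)
    try:
        yield record
    finally:
        record["end"] = time.time()

# Simulated agent run: an LLM inference span wrapping a tool-call span.
with span("llm_inference", model="demo-model"):
    with span("tool_call", tool="web_search", query="example"):
        pass

print([s["name"] for s in TRACE])  # → ['llm_inference', 'tool_call']
```

A real observability backend would also assign trace and span IDs and parent links; this sketch only shows the capture pattern.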

Docstring Validation for Custom Functions

This update enhances the performance and reliability of SeekrFlow Agent Custom Tools by introducing docstring validation, a system that automatically checks function documentation before deployment.

Key capabilities

  • Validates Python function docstrings against the Google format during create() or update().
  • Ensures all parameters, types, and return values are correctly defined.
  • Consolidates documentation into a single source of truth, removing duplicate or conflicting descriptions.

Docstring validation improves both development clarity and runtime performance. Clear, validated function definitions help agents determine when to invoke tools, accurately map parameters, and reduce execution errors—resulting in faster, more predictable performance and a smoother user experience.
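The core of such a check can be sketched with the standard library alone: compare the function's signature against the parameter names documented in a Google-style Args: section. This is an illustrative simplification, not the validator SeekrFlow ships:

```python
import inspect
import re

def check_google_docstring(func):
    """Return signature parameters missing from the docstring's Args: section."""
    doc = inspect.getdoc(func) or ""
    # Match lines like "name (type): description" or "name: description".
    documented = set(re.findall(r"^\s*(\w+)(?:\s*\([^)]*\))?:", doc, re.MULTILINE))
    params = [p for p in inspect.signature(func).parameters if p != "self"]
    return [p for p in params if p not in documented]

def convert(amount: float, currency: str) -> float:
    """Convert an amount to USD.

    Args:
        amount (float): Value to convert.

    Returns:
        float: Converted value.
    """
    return amount

print(check_google_docstring(convert))  # → ['currency'], an undocumented parameter
```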

View documentation here

SeekrFlow Now Available in AWS GovCloud (US)

SeekrFlow™ is now available in AWS GovCloud (US), joining its existing AWS Marketplace listing to give federal agencies and regulated customers one-click, subscription-based access to agentic AI in a secure, sovereign cloud environment.

This deployment enables organizations including the U.S. Department of War to run AI workflows that protect Controlled Unclassified Information (CUI) and other high-value assets while accelerating modernization and mission-critical outcomes.

Federal customers can now procure SeekrFlow faster and deploy it for FedRAMP High and DoW SRG IL5 workloads, all while leveraging existing AWS Enterprise Discount Program (EDP) credits. This simplifies procurement, optimizes costs, and allows agencies to adopt AI with full confidence in compliance, data protection, and security.

Learn more here

🔵

Improvements & Bug Fixes

SeekrFlow Helm Chart v2.0

This release introduces major upgrades to the SeekrFlow Helm Chart, bringing unified MLOps deployment, expanded environment support, and stronger compliance for enterprise and government users.

Supported Environments

  • AWS Commercial
  • AWS GovCloud

Key Capabilities

  • Unified serving, ingestion, and training in one deployment.
  • Built-in tools for Web Search, File/Vector Search, and Code Execution.
  • vLLM-powered model serving for faster inference and better GPU efficiency.
  • FIPS-enabled containers, hardened security baselines, and upgraded secrets management.
  • Simplified configuration via refactored values.yaml and improved observability across all components.

This update strengthens deployment flexibility and compliance posture, especially for customers operating in regulated and government environments, while simplifying operations and improving performance across the entire MLOps stack.

AI-Ready Data Engine Quality Enhancements

We’ve made several improvements to question-generation and evaluation to increase data diversity and overall fine-tuning quality.

Key enhancements

  • Expanded question coverage by generating from summaries, extracted facts, and source text.
  • Introduced question evolution, producing multiple forms of similar questions for richer variation.
  • Implemented explicit instruction-based evaluation for each generated pair.
  • Shifted from a fixed number of questions per node to dynamic generation determined at runtime.

These updates produce more accurate and contextually varied Q&A pairs, improving both data quality and model alignment for fine-tuning workflows.

🟢

New Feature

GRPO Fine-Tuning

We’ve introduced Group Relative Policy Optimization (GRPO) Fine-Tuning in SeekrFlow, a powerful reinforcement learning technique that unlocks advanced reasoning capabilities in Large Language Models (LLMs). GRPO fine-tuning transforms models from passive information retrievers into active problem-solvers, capable of handling complex, verifiable tasks with greater precision and reliability.

This method is effective in domains where answers can be definitively validated, such as mathematics, coding, and other structured problem-solving scenarios. GRPO training follows a similar process to standard fine-tuning, with a few targeted modifications, and is now fully supported in SeekrFlow.
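GRPO depends on verifiable rewards: each completion in a sampled group is scored, and advantages are computed relative to the group. A toy version of that scoring step for a math task (the answer-extraction rule and function names are assumptions for illustration; real GRPO also normalizes advantages by the group's standard deviation, omitted here for brevity):

```python
import re

def reward(completion: str, ground_truth: str) -> float:
    """Score 1.0 if the last number in the completion matches the answer."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return 1.0 if numbers and numbers[-1] == ground_truth else 0.0

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Center each reward on the group mean, as GRPO does per prompt group."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

group = ["The answer is 42.", "I think it is 41.", "So we get 42", "No idea."]
rewards = [reward(c, "42") for c in group]
print(rewards)                             # → [1.0, 0.0, 1.0, 0.0]
print(group_relative_advantages(rewards))  # → [0.5, -0.5, 0.5, -0.5]
```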

Read the blog here

View Documentation here

Custom Tools for SeekrFlow Agents

SeekrFlow Agents now support Custom Tools, enabling developers to extend agent capabilities with purpose-built logic tailored to their specific workflows. Custom Tools are developer-defined Python functions that integrate seamlessly alongside built-in tools like Web Search or File Search. Agents reason over tool descriptions, their instructions, and user input to decide when to invoke a tool, then automatically map context to parameters, execute the function, and return the results.

Key capabilities:

  • Custom business logic: Implement organization-specific workflows and operations.
  • Data transformation: Clean, format, and enrich text, numerical, or structured data.
  • External integrations: Connect agents to APIs, databases, Slack, Outlook, and other business systems.
  • Proprietary data access: Leverage internal knowledge stores and company-specific repositories.
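Because agents reason over a tool's documentation to decide when to call it, a custom tool is essentially a well-documented Python function. A hypothetical example (the function and its use case are invented; see the documentation for how tools are registered on an agent):

```python
def convert_currency(amount: float, rate: float) -> float:
    """Convert a monetary amount using a fixed exchange rate.

    Args:
        amount (float): The amount in the source currency.
        rate (float): Units of target currency per unit of source currency.

    Returns:
        float: The converted amount, rounded to 2 decimal places.
    """
    return round(amount * rate, 2)

# The agent maps user context ("convert 100 EUR at a rate of 1.08") to these
# parameters, executes the function, and returns the result.
print(convert_currency(100.0, 1.08))  # → 108.0
```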

View documentation here


🔵

Improvements & Bug Fixes

Model Library Super-Size

The SeekrFlow Model Library just got a major expansion: 16 new models are now available for use.

This update significantly broadens your model options, offering a mix of small, mid, and large-scale LLMs across multiple providers. Whether you’re experimenting with reasoning, vision tasks, or enterprise-scale workloads, SeekrFlow now gives you more flexibility to choose the right model for the job.

New Models Added:

  • meta-llama/Llama-3.2-90B-Vision-Instruct
  • NousResearch/Yarn-Mistral-7B-128k
  • mistralai/Mistral-7B-Instruct-v0.2
  • mistralai/Mistral-Small-24B-Instruct-2501
  • google/gemma-2b
  • google/gemma-2-9b
  • google/gemma-3-27b-it
  • Qwen/Qwen3-8B-FP8
  • Qwen/Qwen3-32B-FP8
  • Qwen/Qwen3-30B-A3B-FP8
  • Qwen/Qwen3-235B-A22B-FP8
  • Qwen/Qwen2-72B
  • mistralai/Mamba-Codestral-7B-v0.1
  • microsoft/Phi-3-mini-4k-instruct
  • meta-llama/Llama-4-Scout-17B-16E
  • meta-llama/Llama-4-Scout-17B-16E-Instruct

Refreshed Models:

  • meta-llama/Llama-3.2-1B-Instruct
  • meta-llama/Llama-3.2-3B-Instruct

Where to use them:

  • API / SDK: Call the new models directly for inference.
  • Playground > Model Chat: Pick any of the new additions and start chatting immediately.

Content Moderation Models in SeekrFlow

SeekrFlow now includes content moderation models to help teams build safety and scoring pipelines directly into their applications.

Available Models:

Seekr ContentGuard

  • Purpose-built for podcast moderation, using the same scoring system that powered SeekrAlign.
  • Includes GARM category classification and the Seekr Civility Score™.
  • Ideal for scoring podcast transcripts, detecting attacks, labeling tone, and surfacing ad risk.

Learn more here

Meta Llama Guard 3

  • General-purpose moderation model that classifies unsafe content across 14 MLCommons taxonomy categories.
  • Best for moderating LLM outputs, chat messages, or any user-generated text.

Learn more here

With these models, users can design their own safety and scoring apps or pipelines that can:

  • Detect unsafe or brand-unsafe content.
  • Score tone and civility.
  • Add guardrails for LLM agents and assistants.
  • Apply moderation filters at scale.
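As a sketch of what such a pipeline can look like, here is a minimal guardrail that routes text through a classifier and blocks anything labeled unsafe. The keyword classifier is a stand-in; a real pipeline would call Seekr ContentGuard or Llama Guard for the classification step:

```python
def moderate(text: str, classify) -> dict:
    """Run a classifier over text and decide whether to allow it."""
    result = classify(text)
    return {"allowed": result["label"] == "safe", "label": result["label"]}

# Stand-in classifier for illustration; swap in a moderation model in practice.
def keyword_classifier(text: str) -> dict:
    unsafe_terms = {"attack", "threat"}
    label = "unsafe" if any(t in text.lower() for t in unsafe_terms) else "safe"
    return {"label": label}

print(moderate("Great episode about cooking.", keyword_classifier))
# → {'allowed': True, 'label': 'safe'}
print(moderate("This is a personal attack.", keyword_classifier))
# → {'allowed': False, 'label': 'unsafe'}
```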

This update expands SeekrFlow into media, safety, guardrails, and governance use cases—broadening how teams can ensure responsible, brand-safe, and transparent AI deployments.

View documentation here

Built-in Tools for SeekrFlow Agents (Now in the SeekrFlow UI)

You can now configure agent tools directly from the SeekrFlow UI, making it easier to extend agent capabilities.

Available Tools:

  • File Search: Let agents retrieve and reason over files ingested into SeekrFlow.
  • Web Search: Enable agents to pull in fresh information from the web to complement their knowledge base.

This update gives you more control and flexibility in building agents straight from the browser. With built-in File Search and Web Search, teams can quickly prototype, evaluate, and operationalize agents that connect to both internal and external sources.

AI-Ready Data Engine: New Ingestion & Chunking Methods

This release introduces powerful new ingestion and chunking options, giving users more control over speed, accuracy, and structure when preparing data for AI workflows. Available in UI, API, and SDK.

Ingestion Methods:

Accuracy-Optimized (API/SDK default)

  • Highest-fidelity output with exact hierarchy and table preservation.
  • Best for long technical docs, contracts, research papers, and compliance data where precision is critical.
  • May take more time to process very large files (100+ pages).

Speed-Optimized (SaaS UI default)

  • Delivers fast results with strong accuracy—ideal for everyday files, batch runs, or time-sensitive workflows.
  • Handles large files efficiently while maintaining usability.

View documentation here

Chunking Methods:

Manual Window Chunking (API/SDK)

  • Fixed-size sliding window with overlap and support for ---DOCUMENT_BREAK--- markers.
  • Predictable, user-controlled splits for more consistent chunking.
  • Ideal for resumes, multi-doc compilations, unstructured PDFs, narratives, or evaluation workflows.
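The mechanics of a fixed-size sliding window with overlap and document-break handling can be sketched as follows. Sizes here are measured in characters for simplicity; consult the documentation for the actual units and parameter names:

```python
def window_chunks(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed windows, respecting document breaks."""
    chunks = []
    step = size - overlap
    for doc in text.split("---DOCUMENT_BREAK---"):
        doc = doc.strip()
        for start in range(0, max(len(doc), 1), step):
            chunk = doc[start:start + size]
            if chunk:
                chunks.append(chunk)
    return chunks

docs = "A" * 300 + "---DOCUMENT_BREAK---" + "B" * 120
print([len(c) for c in window_chunks(docs, size=200, overlap=50)])
# → [200, 150, 120]  (windows never cross the document break)
```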

View documentation here

With these new options, you can adapt ingestion to the needs of each workflow:

  • Speed-Optimized mode processes massive batches of files, RFPs, or discovery documents in just minutes, perfect when turnaround time matters most.
  • Accuracy-Optimized mode preserves every detail in contracts, compliance records, or research papers, ensuring the cleanest structure and hierarchy extraction.
  • Manual Chunking gives you complete control over splitting less structured content, including resumes, multi-document compilations, and narratives. This ensures consistent chunks for vector databases and downstream workflows.

Minor Bug Fixes & Stability Improvements

This release includes a series of behind-the-scenes updates to improve overall platform performance, usability, and reliability. From small interface refinements to enhanced error handling and support resources, these updates ensure a smoother and more consistent experience across SeekrFlow.

🟢

New Feature

Agent Structured Outputs

You can now define structured response formats for SeekrFlow™ agents, available through the API and SDK. This capability allows developers to enforce specific schemas for agent outputs, ensuring integration-ready responses from the first generation. By eliminating the need for post-processing or custom parsers, structured outputs simplify development and enable more reliable downstream workflows.
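Conceptually, a structured output is a response constrained to a schema, so downstream code can trust its shape. A minimal standard-library sketch of that contract (the schema and field names are invented; see the documentation for how to attach a response format to an agent):

```python
import json

SCHEMA = {"ticket_id": str, "priority": str, "summary": str}  # illustrative

def validate(response_text: str, schema: dict) -> dict:
    """Parse an agent response and verify it matches the expected fields."""
    data = json.loads(response_text)
    for field, expected_type in schema.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

raw = '{"ticket_id": "T-123", "priority": "high", "summary": "Login fails"}'
print(validate(raw, SCHEMA)["priority"])  # → high
```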

View Documentation here

🔵

Improvements & Bug Fixes

New SeekrFlow UI

We’ve launched one of the biggest SeekrFlow updates to date: a redesigned UI that makes agent creation and experimentation more accessible and intuitive.

  • Agents for Everyone: You can now create, configure, and chat with agents directly in the UI.
    • Guided forms simplify model, tool, and policy selection.
    • Built-in conversation scaffolding means no manual thread setup.
    • Test agents instantly in the new Agent Chat sandbox.

New SeekrFlow Dashboard:

  • Quick-Start tiles let you jump directly into creation: AI-Ready Data, Fine-Tune, Create an Agent, and Playground.
  • Tabbed Recent Work view for Fine-Tuning and Agents, complete with status badges.
  • Deployment snapshot panel showing current deployment counts and live token usage meters.

UI Enhancements:

  • A complete visual reskin for a modern, polished look.
  • Reorganized left navigation aligned to real workflows:
    • Data Engine: New Storage section for managing files and vector stores
    • Development: Fine-Tuning, Agents, Deployments, and Model Library
    • Playground
  • Refreshed layout and graphics to improve clarity and usability.
  • Inline “Learn More” links for quick, contextual guidance.

This update makes SeekrFlow more accessible to both technical and non-technical users, enabling fast agent prototyping and evaluation straight from the browser. The new dashboard supports faster onboarding, agent-first workflows, and one-screen deployment health checks. It also lays the foundation for future platform capabilities.


Minor Bug Fixes & Stability Improvements

This release includes a number of behind-the-scenes updates, bug fixes, performance tuning, and workflow stability improvements to ensure a smoother, more reliable experience across SeekrFlow.

🟢

New Feature

SeekrFlow™ Agents

Today, we’re launching SeekrFlow™ Agents, a full-stack solution for building and deploying intelligent agents that operate autonomously, securely, and at scale. Enterprises can deploy agents that are secure, explainable, and optimized for their specific tasks and industry, with the flexibility to run them across cloud and on-premises environments, delivering the infrastructure and control needed to embed agentic AI into real-world workflows.

Read the full blog here
View documentation here

Seekr partners with Oracle Cloud Infrastructure

Seekr has signed a multi-year agreement with Oracle Cloud Infrastructure (OCI) to accelerate AI deployments and support the development of next-gen models and agents. This collaboration brings together Seekr’s secure AI platform, OCI’s high-performance infrastructure, and AMD Instinct™ MI300X GPUs to enable faster model training, scalable deployment, and support for mission-critical environments.

Learn more here

🟢

New Features

SeekrFlow™ Available in AWS Marketplace

We’re excited to announce that SeekrFlow™ is now available on AWS Marketplace. This listing gives AWS customers a streamlined way to deploy SeekrFlow directly within their environments, accelerating AI adoption while maintaining full control over security, compliance, and procurement.

Read the launch blog here


🔵

Improvements & Bug Fixes

General Improvements and Workflow Stability

We’ve made several updates to improve platform reliability and user experience. This includes enhanced model cards that now display model-specific information, and updated links to our Privacy Policy and Terms & Conditions to ensure greater transparency. These refinements, along with ongoing stability and performance enhancements, help ensure a smoother and more trustworthy workflow across SeekrFlow.

🔵

Improvements & Bug Fixes

Multi-Factor Authentication (MFA)

Seekr now supports MFA for all user sign-ins. After entering their credentials, users are guided to select and complete a second-factor verification via:

  • TOTP (Time-based one-time password) through authenticator apps
  • SMS one-time codes
  • Email one-time codes

This adaptive login flow automatically directs users to their configured method (or lets them choose if multiple factors are set up), ensuring a seamless yet secure sign-in experience.
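For reference, TOTP codes are derived from a shared secret and the current time (RFC 6238). A compact standard-library sketch; real verifiers also handle base32-encoded secrets and clock drift:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Generate a TOTP code for the given time using HMAC-SHA1 (RFC 6238)."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238's test secret at time 59 yields its documented 6-digit value.
print(totp(b"12345678901234567890", 59))  # → 287082
```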


In-App Help & Support Access

We have made it easier for users to access help right when they need it. A new “Help” label now appears next to the question icon in the top navigation bar.


Editable Project Titles and Descriptions

We have introduced inline editing for project titles and descriptions on the Project Details page, making it faster and easier to manage key project information.

  • Titles can now be updated inline and are limited to 100 characters.
  • Descriptions support multiline editing and are capped at 1,000 characters.

These updates provide a more flexible and user-friendly way to keep project details accurate and up to date.

🟢

New Feature

Multi-file Ingestion

Today, we’re introducing multi-file ingestion, a new feature within SeekrFlow™. This capability simplifies and accelerates the automatic generation of a fine-tuning dataset. With multi-file ingestion, you can now upload a set of files across multiple formats in one step. SeekrFlow automatically converts, merges, and structures the content into AI-ready markdown (.md), eliminating the need for manual formatting or stitching files together.

To learn more, read the launch blog.

🔵

Improvements & Bug Fixes

General Improvements and Workflow Stability

We have implemented several small bug fixes and performance enhancements to improve the data creation workflow, including updated instructional copy, clearer conversion progress indicators, refined exit warnings, and a more streamlined experience from upload to completion.

🟢

New Feature

AI-Ready Data Engine

We have launched the Seekr AI-Ready Data Engine, a powerful new system within the SeekrFlow™ platform that transforms diverse enterprise data into a structured, AI-ready format—faster, more accurately, and at a lower cost. This intelligent, end-to-end pipeline eliminates manual bottlenecks, ensuring seamless integration and deployment of high-quality, trustworthy AI applications.

To learn more, read the launch blog.

API Key Management

Users can generate their own API keys directly from their profile. This enhancement streamlines access management and improves user autonomy and security, ensuring a seamless developer experience.

How to create your API Key

  • Users can generate an API key by clicking the “+ Generate Key” button in their profile.
  • The system will securely generate and return a unique API key.
  • Once generated, users will be prompted to save their key before exiting.

🔵

Improvements & Bug Fixes

Updated User Profile Page with Navigation

The SeekrFlow user profile page has been redesigned with a new sub-navigation component, improving accessibility and ease of use. The page is now labeled "My Account" and includes two sections:

  • Profile: the default landing page.
  • API Key: where users can generate and manage their API access.

This update provides a more intuitive way to navigate account settings.


Bug Fixes and Stability Improvements

We have implemented several small bug fixes and performance enhancements to further optimize SeekrFlow. These updates improve the platform’s overall stability and ensure a smoother, more reliable experience for users.


🟣

UI/UX Enhancements

Improved User Profile Experience

Enhanced the User Profile experience with a new User Details section, including profile editing, password updates, sign-out functionality, and cookie settings management.

🟢

New Feature

Training Loss Chart in Run Details

We have added a dynamic training loss chart to the Run Details page, providing users with critical insights into model performance during fine-tuning:

  • Y-Axis: Displays loss values to indicate model performance trends.
  • Upper X-Axis: Shows the number of epochs selected during run creation.
  • Lower X-Axis: Represents "steps," calculated as:
    Total Steps = (Total Number of Samples ÷ (Number of Instances × Batch Size)) × Number of Epochs
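A worked example of that step calculation, reading the formula as dividing the sample count by the effective batch (batch size × number of instances) and rounding partial batches up (the numbers are illustrative):

```python
import math

def total_steps(samples: int, instances: int, batch_size: int, epochs: int) -> int:
    """Steps on the lower X-axis: per-epoch optimizer steps times epochs."""
    steps_per_epoch = math.ceil(samples / (instances * batch_size))
    return steps_per_epoch * epochs

# 10,000 samples on 2 instances with batch size 8, trained for 3 epochs:
print(total_steps(10_000, 2, 8, 3))  # → 1875
```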

This new feature enables users to track loss convergence and make informed adjustments, reducing the number of repeated fine-tuning cycles needed to optimize results. By minimizing fine-tuning runs, it saves time, compute resources, and associated costs.


🔵

Improvements & Bug Fixes

Enhanced Data Ingestion

Users can now upload Markdown, PDF, and Word files as valid file types for data ingestion during data creation. This update, available on API and SDK, makes working with diverse data formats easier, streamlining the ingestion process and reducing the need for file conversions.

Expanded Base Model Library

The base model library now includes the Mistral-7B-Instruct-v0.2 model from Mistral AI. This model is available for fine-tuning, inference, and testing in the Sandbox, offering more options to tailor your AI solutions.

Bug Fixes and Stability Improvements

Small bug fixes and performance updates to make SeekrFlow™ smoother and more reliable for users.

🔵

Improvements & Bug Fixes

Tooltip for Icon Buttons in Model Library

We’ve improved the user experience in the Model Library. Icon buttons in base model cards now display a hover tooltip with descriptive text, ensuring users clearly understand the available actions.


Faster Response Rendering in Sandbox

The simulated typing animation for sandbox responses now matches the speed of token count updates. This enhancement delivers quicker rendering, offering a more seamless and accurate representation of response generation.

Run Description Added to Run Summary

The run summary now displays the description entered during the run creation wizard, enhancing clarity and alignment with the design specifications.


Minor Bug Fixes and Stability Improvements

This release includes several minor bug fixes and performance enhancements across SeekrFlow™ to ensure a smoother and more reliable user experience.