Release Notes | January 2026

Vision-language fine-tuning

FeatureSeekr-managed instance | Supported on: UI, API, SDK

Instruction fine-tuning for vision-language models (VLMs) that reason jointly over images and text. Users can now fine-tune multimodal models on datasets that combine visual inputs and natural language, using the same fine-tuning workflows as text-only models.

Capabilities:

  • Upload and validate vision-language datasets with messages-based schema and image support
  • Fine-tune supported VLMs: Qwen2.5-VL-7B-Instruct and Llama-3.2-11B-Vision-Instruct
  • Automatic schema validation and seamless job creation
  • Standard instruction tuning workflow with no separate multimodal configuration required
ℹ️ This release supports instruction tuning only. Reinforcement learning and preference tuning are not yet supported for vision-language models.
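
For illustration only, a single training record in a messages-based vision-language dataset could look like the sketch below. The field names (messages, role, content, image_url) follow a common multimodal chat schema and are assumptions, not a confirmed FeatureSeekr dataset specification.

```python
import json

# Hypothetical example of one vision-language training record in JSONL form.
# The schema shown here is illustrative; consult the dataset documentation
# for the exact field names and image handling.
record = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
                {"type": "text", "text": "What trend does this chart show?"},
            ],
        },
        {
            "role": "assistant",
            "content": "Revenue grows steadily quarter over quarter, with a spike in Q4.",
        },
    ]
}

# Each line of the uploaded dataset file holds one such record.
with open("vlm_train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```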

Tools library

FeatureSeekr-managed instance | Supported on: UI, API, SDK

Centralized workspace for managing agent tools across your organization. The tools library serves as a single source of truth where users can create, update, delete, and duplicate tools independently of agent configuration.

Capabilities:

  • Create and manage web search and file search tools with custom configurations.
  • View code interpreter and custom function tools (read-only).
  • Duplicate existing tools to create variations with modified configurations.
  • Automatic propagation of tool updates to all agents using that tool.
  • Select pre-created tools during agent creation or create new tools inline.
  • Query tools by type and status, and view which agents are using specific tools.

Tool changes automatically redeploy linked active agents to reflect updates across your organization.
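
As a rough sketch of how tool management might be scripted against the API, the snippet below queries web search tools by type and status and duplicates one with a modified configuration. The endpoint paths, payload shapes, and the FEATURESEEKR_API_URL / FEATURESEEKR_API_KEY environment variables are assumptions for illustration, not documented API surface.

```python
import os
import requests

# Assumed environment variables; replace with your instance's URL and credentials.
BASE_URL = os.environ["FEATURESEEKR_API_URL"]
HEADERS = {"Authorization": f"Bearer {os.environ['FEATURESEEKR_API_KEY']}"}

# Query tools by type and status (hypothetical endpoint and parameters).
tools = requests.get(
    f"{BASE_URL}/tools",
    params={"type": "web_search", "status": "active"},
    headers=HEADERS,
    timeout=30,
).json()

# Duplicate an existing tool with a modified configuration (hypothetical payload).
source_tool = tools["items"][0]
duplicate = requests.post(
    f"{BASE_URL}/tools/{source_tool['id']}/duplicate",
    json={"name": f"{source_tool['name']}-restricted", "config": {"max_results": 3}},
    headers=HEADERS,
    timeout=30,
).json()

print("Created duplicate tool:", duplicate["id"])
```

Because tool updates propagate automatically, a change made through a script like this would redeploy any active agents linked to the modified tool.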

Ingestion insights

FeatureSeekr-managed instance | Supported on: UI, API, SDK

Real-time visibility into file ingestion status across Alignment and VectorDB endpoints. Every uploaded file receives its own persistent ingestion record that tracks progress through conversion, alignment, and vectorization.

Capabilities:

  • Per-file status tracking (queued, running, completed, failed) with queue position and timestamps.
  • Plain-language error messages with suggested fixes.
  • Structured metadata through the file_records view showing which files are processing, completed, or blocked.
  • Consistent response schema across Alignment and VectorDB endpoints.

This visibility layer enables faster self-service debugging and reduces support escalations during onboarding, proofs of concept, and production workflows.
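
A minimal polling sketch, assuming the per-file ingestion record is exposed over HTTP: the /files/{id}/ingestion path and the response fields shown are hypothetical placeholders for whatever the Alignment and VectorDB endpoints actually return, but the status values mirror the ones listed above.

```python
import os
import time
import requests

# Assumed environment variables; replace with your instance's URL and credentials.
BASE_URL = os.environ["FEATURESEEKR_API_URL"]
HEADERS = {"Authorization": f"Bearer {os.environ['FEATURESEEKR_API_KEY']}"}

def wait_for_ingestion(file_id: str, poll_seconds: int = 10) -> dict:
    """Poll a file's ingestion record until it completes or fails (illustrative only)."""
    while True:
        record = requests.get(
            f"{BASE_URL}/files/{file_id}/ingestion",   # hypothetical endpoint
            headers=HEADERS,
            timeout=30,
        ).json()

        status = record["status"]   # queued | running | completed | failed
        if status == "queued":
            print(f"Queued at position {record.get('queue_position')}")
        elif status == "running":
            print(f"Running since {record.get('started_at')}")
        elif status == "completed":
            return record
        elif status == "failed":
            # Surface the plain-language error message and suggested fix.
            raise RuntimeError(f"{record.get('error_message')}: {record.get('suggested_fix')}")

        time.sleep(poll_seconds)

# Example: record = wait_for_ingestion("file_abc123")  # placeholder file ID
```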

Multi-node fine-tuning

FeatureSeekr-managed instance | Supported on: UI, API, SDK

Distributed training for improved fine-tuning performance and scalability. Multi-node fine-tuning distributes a training job across multiple physical nodes (each with 8 GPUs), enabling parallel execution rather than single-machine training.

Capabilities:

  • Support for 1–4 nodes (up to 32 GPUs total)
  • Configure node count via n_node parameter in InfrastructureConfig
  • Automatic distributed orchestration and synchronization
  • Immediate validation of supported n_node / n_accel combinations

This reduces training time for larger datasets, lets compute allocation match dataset size and training needs, and lays the foundation for future support of larger models.
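
The n_node parameter and the InfrastructureConfig name come from this release; everything else in the sketch below, including the import path and the job-creation call, is an assumption about how the SDK might be used and may differ in practice.

```python
# Hypothetical SDK usage: only n_node, n_accel, and InfrastructureConfig are
# names introduced in this release; the module path and client calls are assumed.
from featureseekr.finetuning import FineTuningClient, InfrastructureConfig

client = FineTuningClient()

# Request 2 nodes x 8 GPUs = 16 GPUs; supported n_node / n_accel combinations
# are validated immediately when the job is created.
infra = InfrastructureConfig(n_node=2, n_accel=8)

job = client.create_job(
    model="example-base-model",   # placeholder model name
    dataset_id="ds_train",        # placeholder dataset identifier
    infrastructure=infra,
)
print(job.id, job.status)
```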