Explainability

Understand model outputs by tracing responses to the sources and training data that influenced them.

Supported on: Platform, API, SDK
Explainability traces model responses back to their origins—whether retrieved context in an agentic workflow or training examples from fine-tuning. This allows you to see which inputs influenced specific outputs.

Context attribution is available in Agent Chat, the API, and the SDK. Training data attribution is available in the API and SDK only.
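To make the idea of context attribution concrete, here is a toy sketch (not the SeekrFlow SDK; function and variable names are illustrative assumptions). It ranks retrieved sources against a single response statement by simple word overlap, standing in for the influence score a real attribution system would compute:

```python
# Illustrative sketch only -- SeekrFlow's actual attribution is not
# lexical overlap; this just shows what "tracing a statement back to
# its retrieved sources" means in practice.

def attribute_statement(statement: str, sources: dict[str, str]) -> list[tuple[str, float]]:
    """Rank each source by word overlap with the statement, a crude
    stand-in for an influence score."""
    stmt_words = set(statement.lower().split())
    scores = []
    for source_id, text in sources.items():
        src_words = set(text.lower().split())
        overlap = len(stmt_words & src_words) / len(stmt_words) if stmt_words else 0.0
        scores.append((source_id, round(overlap, 2)))
    # Highest-overlap source first: the most likely origin of the statement.
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

sources = {
    "doc-1": "Quarterly revenue rose 12 percent on strong cloud demand.",
    "doc-2": "The company opened two new offices in Europe last year.",
}
ranked = attribute_statement("Revenue rose 12 percent last quarter.", sources)
```

Here `ranked[0]` points at `doc-1`, the source most responsible for the statement, which is the kind of traceable link context attribution surfaces for each statement in a response.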

Use cases

Debugging – Identify which inputs drove unexpected or incorrect outputs. This helps locate problematic data sources or training examples.

Auditing – Document how AI systems reach decisions. Explainability provides traceable links between outputs and source materials for compliance and verification.

Dataset refinement – Analyze influential inputs to identify gaps or issues in your retrieval sources or training data.

Attribution capabilities

SeekrFlow provides two attribution capabilities for different workflows:

Choose the right capability

Workflow                                 Attribution type             What it traces
Agents with RAG, web search, or tools    Context attribution          Retrieved sources that influenced each statement
Fine-tuned models                        Training data attribution    Training examples that shaped model behavior

If you use a fine-tuned model within an agent that also retrieves context, both capabilities can apply—context attribution for retrieval influence, training data attribution for fine-tuning influence.
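The selection logic above can be sketched as a small helper (the flag and capability names are assumptions for illustration, not SeekrFlow API parameters):

```python
# Hypothetical helper mirroring the decision table: which attribution
# capabilities apply to a given workflow.

def applicable_attributions(uses_retrieval: bool, is_fine_tuned: bool) -> list[str]:
    """Return the attribution capabilities that apply to a workflow."""
    capabilities = []
    if uses_retrieval:
        capabilities.append("context")        # retrieved sources per statement
    if is_fine_tuned:
        capabilities.append("training_data")  # influential training examples
    return capabilities

# A fine-tuned model inside an agent that also retrieves context
# qualifies for both capabilities.
both = applicable_attributions(uses_retrieval=True, is_fine_tuned=True)
```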