Explainability
Understand model outputs by tracing responses to the sources and training data that influenced them.
Explainability traces model responses back to their origins, whether retrieved context in an agentic workflow or training examples from fine-tuning, so you can see which inputs influenced specific outputs.
Context attribution is available in Agent Chat, the API, and the SDK. Training data attribution is available in the API and SDK only.
Use cases
Debugging – Identify which inputs drove unexpected or incorrect outputs. This helps locate problematic data sources or training examples.
Auditing – Document how AI systems reach decisions. Explainability provides traceable links between outputs and source materials for compliance and verification.
Dataset refinement – Analyze influential inputs to identify gaps or issues in your retrieval sources or training data.
Attribution capabilities
SeekrFlow provides two attribution capabilities for different workflows:
Context attribution
Identifies which retrieved sources influenced an agent's response. Traces statements back to vector database content, web search results, tool outputs, and conversation context.
When to use: Agents with RAG, web search, or tools.
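As a rough illustration of how context attribution results might be consumed, here is a minimal sketch that filters and ranks per-statement source attributions. The response shape (`statements`, `sources`, `id`, `score` fields) is an assumption for illustration, not SeekrFlow's documented schema.

```python
# Hypothetical sketch: the payload shape below is illustrative only,
# not SeekrFlow's documented context-attribution response format.
def top_sources(attribution, min_score=0.5):
    """Return (statement, source_id, score) triples whose attribution
    score meets the threshold, sorted by score descending."""
    triples = []
    for stmt in attribution["statements"]:
        for src in stmt["sources"]:
            if src["score"] >= min_score:
                triples.append((stmt["text"], src["id"], src["score"]))
    return sorted(triples, key=lambda t: -t[2])

# Example payload (hypothetical shape): one statement attributed to a
# vector-store document and a web search result.
response = {
    "statements": [
        {"text": "Revenue grew 12%.",
         "sources": [{"id": "doc-17", "score": 0.91},
                     {"id": "web-3", "score": 0.34}]},
    ]
}
print(top_sources(response))
```

Filtering on a score threshold like this is a common way to separate sources that genuinely drove a statement from those that were merely present in the retrieval context.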
Training data attribution
Identifies which training examples influenced a fine-tuned model's response. Traces outputs back to specific question-answer pairs from the training dataset.
When to use: Fine-tuned models.
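To show the kind of post-processing training data attribution enables, here is a minimal sketch that ranks training question-answer pairs by influence score. The flat list-of-scores representation is an assumption for illustration; it is not SeekrFlow's documented output format.

```python
# Hypothetical sketch: assumes attribution yields one influence score per
# training example, parallel to the training dataset. Illustrative only.
import heapq

def most_influential(examples, scores, k=3):
    """Return the k training examples with the highest influence scores,
    as (example, score) pairs in descending order."""
    ranked = heapq.nlargest(k, zip(scores, examples))
    return [(example, score) for score, example in ranked]

# Example: two question-answer pairs from a fine-tuning dataset.
train = [("What is X?", "X is ..."), ("Define Y.", "Y means ...")]
influence = [0.12, 0.87]
print(most_influential(train, influence, k=1))
```

Ranking examples this way supports the dataset-refinement use case above: repeatedly influential but low-quality pairs are candidates for correction or removal.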
Choose the right capability
| Workflow | Attribution type | What it traces |
|---|---|---|
| Agents with RAG, web search, or tools | Context attribution | Retrieved sources that influenced each statement |
| Fine-tuned models | Training data attribution | Training examples that shaped model behavior |
If you use a fine-tuned model within an agent that also retrieves context, both capabilities can apply—context attribution for retrieval influence, training data attribution for fine-tuning influence.
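The selection logic in the table and paragraph above can be sketched as a small helper. The function name and return values are illustrative, not part of any SeekrFlow API.

```python
# Hypothetical sketch of the capability-selection rule described above:
# retrieval-backed agents need context attribution, fine-tuned models need
# training data attribution, and a workflow combining both needs both.
def attribution_types(uses_retrieval: bool, is_fine_tuned: bool) -> list:
    types = []
    if uses_retrieval:
        types.append("context")
    if is_fine_tuned:
        types.append("training_data")
    return types

print(attribution_types(uses_retrieval=True, is_fine_tuned=True))
```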