# Training data attribution
Identify which training examples influenced a fine-tuned model's response.
Training data attribution surfaces the training examples that influenced a fine-tuned model's outputs. By tracing responses back to specific question-answer pairs in the training dataset, it helps you debug model behavior and audit responses.
## How training data attribution works
When a fine-tuned model generates a response, training data attribution identifies the most influential training examples that shaped that output. Each influential example receives an influence level (high, medium, or low) indicating its contribution to the model's response.
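As a minimal sketch of what an attribution result might look like and how you could inspect it: the field names (`question`, `answer`, `influence_level`) are illustrative assumptions, not the actual SeekrFlow API schema.

```python
# Hypothetical attribution result for one model response. Field names are
# assumptions for illustration, not the actual SeekrFlow schema.
attribution = [
    {"question": "What is the refund window?",
     "answer": "30 days from purchase.",
     "influence_level": "high"},
    {"question": "How do I reset my password?",
     "answer": "Use the account settings page.",
     "influence_level": "low"},
]

def examples_at_level(examples, level):
    """Return the training examples assigned the given influence level."""
    return [ex for ex in examples if ex["influence_level"] == level]

# Pull out the examples that most strongly shaped the response.
strongest = examples_at_level(attribution, "high")
```

Filtering on the influence level first is usually enough to narrow a large training set down to the handful of examples worth reading closely.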
## Requirements
Training data attribution is available for:
- Fine-tuned models created through SeekrFlow
- Models trained after September 22, 2025
- Deployed models with active endpoints
## Influence levels
Training examples are ranked by their influence on model outputs:
| Level | Description |
|---|---|
| High | Training example strongly shaped the model response |
| Medium | Training example had moderate impact on output |
| Low | Training example contributed minimally to response |
Irrelevant training examples are filtered out and not returned.
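To triage results, you might sort attributed examples from strongest to weakest influence and count how many fall at each level. This sketch assumes a list of result records with an `influence_level` field; the field name and `id` values are hypothetical.

```python
from collections import Counter

# Illustrative attribution results for one response; only high/medium/low
# appear because irrelevant examples are filtered out before being returned.
results = [
    {"id": "ex-12", "influence_level": "medium"},
    {"id": "ex-07", "influence_level": "high"},
    {"id": "ex-31", "influence_level": "low"},
    {"id": "ex-02", "influence_level": "medium"},
]

# Rank order for sorting: high before medium before low.
RANK = {"high": 0, "medium": 1, "low": 2}
ordered = sorted(results, key=lambda ex: RANK[ex["influence_level"]])

# How many examples landed at each influence level.
counts = Counter(ex["influence_level"] for ex in results)
```

A response dominated by low-influence examples can signal that the training set has no strong match for the prompt, which is itself a useful debugging clue.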
## When to use training data attribution
- **Debugging model behavior** – Identify which training examples drive unexpected or incorrect responses
- **Auditing outputs** – Trace model decisions back to source training data for compliance and verification
- **Dataset refinement** – Discover patterns in influential training examples to improve fine-tuning datasets
## Traceability
Training data attribution responses include file identifiers linking back to source documents. This connects model outputs to original training materials, supporting debugging and dataset updates.
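One way to use those file identifiers is to resolve each attributed example back to its source document. The `file_id` field, registry contents, and paths below are hypothetical, shown only to illustrate the pattern.

```python
# Hypothetical lookup from file identifiers to original training documents.
# Both the file_id field and the registry contents are illustrative.
file_registry = {
    "file-abc": "policies/refunds.md",
    "file-def": "faq/passwords.md",
}

attributed = [
    {"question": "What is the refund window?",
     "file_id": "file-abc",
     "influence_level": "high"},
]

def source_documents(examples, registry):
    """Map each attributed example's file_id to its source document path."""
    return {ex["file_id"]: registry.get(ex["file_id"], "<unknown>")
            for ex in examples}

sources = source_documents(attributed, file_registry)
```

With the source paths in hand, correcting a bad response often comes down to editing or removing the offending document and re-running fine-tuning.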
