Explainability
Trace model outputs back to the sources and training examples that influenced them.
Explainability traces model responses back to their origins, whether retrieved context in an agentic workflow or training examples from fine-tuning. This lets you see which inputs influenced specific outputs.
The Explainability SDK provides two capabilities for understanding model outputs. For conceptual background on use cases and when to use each capability, see Explainability.
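The SDK's actual API is not shown here, but the core idea of tracing an output back to retrieved context can be sketched with a self-contained toy: rank each retrieved passage by how much it overlaps with the model's response. The function and variable names below are illustrative assumptions, not the SDK's interface.

```python
import re

def attribute(response: str, passages: list[str]) -> list[tuple[float, str]]:
    """Toy attribution: score each passage by word overlap with the response.

    This stands in for the SDK's retrieval-tracing capability; a real
    implementation would use model-internal signals, not word overlap.
    """
    resp_tokens = set(re.findall(r"\w+", response.lower()))
    scored = []
    for passage in passages:
        p_tokens = set(re.findall(r"\w+", passage.lower()))
        overlap = len(resp_tokens & p_tokens) / max(len(p_tokens), 1)
        scored.append((overlap, passage))
    # Highest-overlap passage first: the most likely influence on the output
    return sorted(scored, reverse=True)

passages = [
    "The invoice total was 42 dollars, due on March 1.",
    "Our refund policy allows returns within 30 days.",
]
ranked = attribute("The invoice total is 42 dollars.", passages)
# ranked[0] is the invoice passage, the likeliest source of the response
```

The same ranking idea extends to training-example attribution, where scores come from influence on model weights rather than textual overlap.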
Explainability workflows
