Zest AI actually gets into this in their explainability whitepaper. They walk through how decision-level reason codes are generated using Shapley-based methods and validated to make sure those factors truly drove the outcome, so adverse action notices and regulator reviews hold up. https://www.zest.ai/learn/resources/model-explainability-reexplained/
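For a feel of the general approach (not Zest's actual implementation, which the whitepaper describes at a higher level), here's a minimal sketch of turning per-decision Shapley attributions into ranked reason codes, using the open-source `shap` package and a toy tree model. All feature names and data here are illustrative assumptions.

```python
# Minimal sketch: per-applicant SHAP values -> top risk-raising factors.
# Not Zest's method; just the generic Shapley-based reason-code idea.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Toy credit features (illustrative names only).
feature_names = ["utilization", "delinquencies", "inquiries", "age_of_file"]
rng = np.random.default_rng(0)
X = rng.random((500, 4))
# Synthetic label: higher utilization/delinquencies/inquiries raise risk.
y = (X[:, 0] + X[:, 1] + 0.5 * X[:, 2] - 0.3 * X[:, 3]
     + rng.normal(0, 0.2, 500) > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Per-decision Shapley attributions toward the model's risk score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_applicants, n_features)

def reason_codes(applicant_idx, k=3):
    """Top-k features pushing this applicant's score toward denial."""
    contrib = shap_values[applicant_idx]
    order = np.argsort(contrib)[::-1]  # largest risk-raising contribution first
    return [(feature_names[i], float(contrib[i]))
            for i in order[:k] if contrib[i] > 0]

print(reason_codes(0))
```

The validation step the whitepaper emphasizes (checking that the cited factors actually drove the outcome) would sit on top of something like this, e.g. confirming that perturbing the top-ranked features materially changes the decision.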