Year: 2025
Project Name: Ask·Me·Why
Category: Research
Screenshots:
One Liner:

Explain AI decisions by finding what makes each case unique within its neighborhood of similar instances, without ever peeking inside the model.

Abstract:

As machine learning models increasingly influence critical decisions in healthcare, finance, and legal domains, the need for transparent explanations has become paramount. Traditional explainability tools such as SHAP and LIME, as well as the built-in feature importances of models like XGBoost, require direct model access and can produce inconsistent explanations across different models. We present Ask·Me·Why, a model-agnostic framework that explains individual predictions by analyzing how feature distributions in a local neighborhood differ from global patterns. Our approach leverages embeddings to identify semantically similar instances and applies proximity-weighted analysis to surface the most distinctive features for each prediction. By comparing how features behave in an instance's neighborhood versus the entire dataset, Ask·Me·Why reveals why specific predictions occur without requiring access to model internals. Crucially, instances are weighted by proximity: the closer an instance is to the target, the more it influences the explanation, mirroring how domain experts prioritize the most similar cases.
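
The sketch below illustrates the general idea described above, not the project's released code: embed all instances, retrieve the target's nearest neighbors, then score each feature by how far its proximity-weighted local mean deviates from the global mean. The function name explain_instance and the use of a z-score style deviation measure are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def explain_instance(X, embeddings, idx, k=50):
    """Rank features by how distinctively they behave in the target's neighborhood.

    X          : (n_samples, n_features) raw feature matrix
    embeddings : (n_samples, d) embedding used to measure similarity
    idx        : index of the instance to explain
    k          : number of neighbors forming the local neighborhood
    """
    # 1. Find the k most similar instances in embedding space.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    dist, nbr = nn.kneighbors(embeddings[idx:idx + 1])
    dist, nbr = dist[0][1:], nbr[0][1:]          # drop the instance itself

    # 2. Weight neighbors by proximity: closer instances count more.
    w = 1.0 / (dist + 1e-8)
    w /= w.sum()

    # 3. Compare the weighted local feature means against the global distribution.
    local_mean = (w[:, None] * X[nbr]).sum(axis=0)
    global_mean, global_std = X.mean(axis=0), X.std(axis=0) + 1e-8
    distinctiveness = np.abs(local_mean - global_mean) / global_std

    # Features with the largest deviation are the most distinctive for this case.
    return np.argsort(distinctiveness)[::-1], distinctiveness
```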

Description:

Ask·Me·Why is an innovative framework that explains why AI systems make specific predictions by analyzing patterns in the data itself, rather than looking inside the model. When faced with critical decisions in healthcare, finance, or legal settings, users need to understand not just what the prediction is, but why it was made. Our approach finds cases similar to the instance being explained, then identifies which features behave differently in this local neighborhood compared to the overall dataset. By weighting these similar cases by proximity, giving more importance to the most similar instances, we mirror how human experts actually reason about cases. Unlike existing explainability tools such as SHAP and LIME, or the built-in importances of models like XGBoost, Ask·Me·Why does not require access to model internals, works with any model type, and provides instance-specific explanations that align with domain knowledge. For example, in healthcare applications, our framework might reveal that a patient's specific combination of age, white blood cell count, and family history creates a distinctive pattern among similar patients that drives a particular diagnosis prediction. By providing these transparent, contextual explanations, Ask·Me·Why builds trust in AI systems, enables expert validation of model reasoning, and facilitates adoption in high-stakes environments where understanding "why" is just as important as knowing "what." The framework bridges the gap between powerful but opaque AI and the human experts who need to make sense of these predictions in real-world contexts.
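
A hypothetical usage of the sketch shown after the abstract, explaining one record in a small tabular dataset. The feature names, data, and index are illustrative stand-ins, not drawn from the project's experiments.

```python
import numpy as np

# Stand-in data: 500 synthetic records with four illustrative features.
feature_names = ["age", "wbc_count", "family_history", "bmi"]
X = np.random.default_rng(0).normal(size=(500, 4))
embeddings = X  # in practice, a learned embedding of each instance

# explain_instance is the sketch defined after the abstract above.
order, score = explain_instance(X, embeddings, idx=42, k=50)
for j in order[:3]:
    print(f"{feature_names[j]}: distinctiveness {score[j]:.2f}")
```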

Video: https://1513041.mediaspace.kaltura.com/media/Ask_Me_Why2/1_m5hzrns1

Team Members

Jonathan Lai

jonathan.quoc.lai@drexel.edu

Advisors

Hegler Tissot

hegler.tissot@drexel.edu