Learning To Explain (LTX)

20 June 2023, 14:00 
Zoom & Room 206

Yuval Asher, M.Sc. student at the Department of Industrial Engineering
Advisor: Dr. Noam Koenigstein

Abstract:
In this work, we introduce Learning to Explain (LTX), a novel framework for explaining vision models. LTX incorporates an explainer model designed to generate explanation maps that highlight the regions most critical to justifying the predictions of the model being explained. The explainer is trained in two stages: an initial pre-training stage followed by a per-instance fine-tuning stage. Optimization in both stages employs a unique configuration in which the explained model's prediction for a masked input is compared to its original prediction for the unmasked input. This approach enables a novel counterfactual objective that aims to anticipate the model's prediction from masked versions of the input image. Notably, LTX is model-agnostic and can yield explanations for both Transformer-based and convolutional models. Our evaluations show that LTX substantially surpasses the current state of the art in explainability for vision Transformers while delivering competitive results in explaining convolutional models.
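For illustration, the masking-based training signal described above can be sketched in PyTorch. This is a minimal, hypothetical sketch, not the talk's actual implementation: the names `explained_model`, `explainer`, and `lam`, as well as the specific loss terms (KL divergence toward the original prediction for the masked input, toward a uniform distribution for its complement, plus a sparsity penalty), are assumptions made for the sake of the example.

```python
import torch
import torch.nn.functional as F

def ltx_style_loss(explained_model, explainer, image, lam=1.0):
    """Hypothetical sketch of the masking objective described in the abstract.

    The explainer produces a soft explanation map over the image; the
    explained model's prediction on the masked input is compared to its
    original prediction on the unmasked input. All names and loss terms
    here are illustrative assumptions, not the paper's actual API.
    """
    with torch.no_grad():
        target = explained_model(image).softmax(dim=-1)   # original (unmasked) prediction

    mask = explainer(image)                               # soft explanation map in [0, 1]
    masked_logits = explained_model(image * mask)         # keep only highlighted regions
    inverse_logits = explained_model(image * (1.0 - mask))  # counterfactual: remove them

    # The masked input should reproduce the original prediction ...
    keep_loss = F.kl_div(masked_logits.log_softmax(dim=-1), target,
                         reduction="batchmean")
    # ... while the complement should be uninformative (pushed toward uniform) ...
    uniform = torch.full_like(target, 1.0 / target.size(-1))
    drop_loss = F.kl_div(inverse_logits.log_softmax(dim=-1), uniform,
                         reduction="batchmean")
    # ... and the mask itself should stay compact.
    sparsity = mask.mean()

    return keep_loss + drop_loss + lam * sparsity
```

Under this reading, the per-instance fine-tuning stage mentioned in the abstract would presumably minimize the same loss for a single input image before reading off the mask as the explanation map.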

Bio:

Yuval Asher is an M.Sc. student at the Department of Industrial Engineering at Tel Aviv University. Yuval worked for several years in the Intelligence Corps as an NLP data scientist and currently works at Verbit as a Speech & NLP researcher. His research, supervised by Dr. Noam Koenigstein, focuses on the explainability of vision models (XAI), with the goal of creating a generic framework that can be applied to explain different models (Transformers, ConvNets) in a self-supervised manner.

E-Mail: asheryuvala@gmail.com

Linkedin: https://www.linkedin.com/in/yuval-asher/
