Search Results for author: Xiaotian Lu

Found 5 papers, 2 papers with code

Evaluating Saliency Explanations in NLP by Crowdsourcing

no code implementations17 May 2024 Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima

The development of methods to explain models has become a key issue for the reliability of deep learning models in many important applications.

Estimating Treatment Effects Under Heterogeneous Interference

1 code implementation25 Sep 2023 Xiaofeng Lin, Guoxi Zhang, Xiaotian Lu, Han Bao, Koh Takeuchi, Hisashi Kashima

One popular application of this estimation lies in the prediction of the impact of a treatment (e.g., a promotion) on an outcome (e.g., sales) of a particular unit (e.g., an item), known as the individual treatment effect (ITE).

Decision Making
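As a minimal illustration of the ITE concept described in the snippet above (not of the paper's interference-aware estimator), the individual treatment effect of a unit is the difference between its potential outcomes with and without the treatment; all numbers below are hypothetical:

```python
import numpy as np

# Hypothetical potential outcomes for 4 units (e.g., items):
# y_treated[i] = outcome (e.g., sales) if unit i receives the treatment (e.g., a promotion)
# y_control[i] = outcome if unit i does not receive it
y_treated = np.array([120.0, 80.0, 95.0, 150.0])
y_control = np.array([100.0, 85.0, 90.0, 110.0])

# The ITE is the per-unit difference between the two potential outcomes.
ite = y_treated - y_control
print(ite)  # [20. -5.  5. 40.]
```

In practice only one of the two outcomes is observed per unit, which is why ITE must be estimated rather than computed directly; interference between units (the subject of the paper) complicates this further.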

Taming Small-sample Bias in Low-budget Active Learning

no code implementations19 Jun 2023 Linxin Song, Jieyu Zhang, Xiaotian Lu, Tianyi Zhou

Instead of tuning the coefficient for each query round, which is sensitive and time-consuming, we propose the curriculum Firth bias reduction (CHAIN) that can automatically adjust the coefficient to be adaptive to the training process.

Active Learning
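A rough sketch of the general idea in the snippet above — a regularization coefficient that follows a schedule over training instead of being hand-tuned per query round. The cosine schedule below is purely illustrative and is not the paper's CHAIN rule:

```python
import math

def scheduled_coefficient(step, total_steps, lam_max=1.0):
    """Hypothetical curriculum schedule (illustrative only, not CHAIN):
    the bias-reduction coefficient starts at lam_max and decays smoothly
    to 0, so the penalty dominates early, in the small-sample regime,
    and fades as more labeled data is acquired."""
    return lam_max * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

# Coefficient over an 11-point "training" run: decays from 1.0 to 0.0
coeffs = [scheduled_coefficient(s, 10) for s in range(11)]
print([round(c, 3) for c in coeffs])
```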

Crowdsourcing Evaluation of Saliency-based XAI Methods

no code implementations27 Jun 2021 Xiaotian Lu, Arseny Tolmachev, Tatsuya Yamamoto, Koh Takeuchi, Seiji Okajima, Tomoyoshi Takebayashi, Koji Maruhashi, Hisashi Kashima

Several approaches to automated evaluation have been proposed in order to compare saliency-based XAI methods quantitatively. However, there is no guarantee that such automated metrics correctly measure explainability: a high rating from an automated scheme does not necessarily mean high explainability for humans.

Explainable Artificial Intelligence (XAI)
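To make the contrast concrete, here is one common automated evaluation scheme for saliency maps, a deletion-style metric (a standard technique in the XAI literature, not necessarily the one evaluated in the paper; the toy model and numbers are hypothetical):

```python
import numpy as np

def deletion_score(model_fn, x, saliency, n_steps=10):
    """Deletion-style automated evaluation of a saliency map:
    zero out the most-salient features first and average the model's
    score along the way. A lower average (faster drop) suggests the
    saliency map more faithfully ranks the important features."""
    order = np.argsort(-saliency)            # most salient features first
    scores = [model_fn(x)]
    x_del = x.copy()
    step = max(1, len(order) // n_steps)
    for i in range(0, len(order), step):
        x_del[order[i:i + step]] = 0.0       # "delete" by zeroing features
        scores.append(model_fn(x_del))
    return float(np.mean(scores))            # crude area under the curve

# Toy linear "model"; its true feature contributions serve as saliency
w = np.array([0.5, 0.1, 0.3, 0.05, 0.05])
x = np.ones(5)
model = lambda v: float(w @ v)
print(deletion_score(model, x, saliency=w * x))
```

Such scores are fully automatic, which is exactly why the paper asks whether they agree with human (crowdsourced) judgments of explainability.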
