no code implementations • Findings (ACL) 2022 • Junhao Zheng, Haibin Chen, Qianli Ma
Cross-domain NER is a practical yet challenging problem due to data scarcity in real-world scenarios.
1 code implementation • 26 Mar 2024 • Junhao Zheng, Chenhao Lin, Jiahao Sun, Zhengyu Zhao, Qian Li, Chao Shen
Deep learning-based monocular depth estimation (MDE), extensively applied in autonomous driving, is known to be vulnerable to adversarial attacks.
no code implementations • 23 Feb 2024 • Junlong Liu, Xichen Shang, Huawen Feng, Junhao Zheng, Qianli Ma
However, due to the token bias in pretrained language models, the models cannot capture the fine-grained semantics in sentences, which leads to poor predictions.
no code implementations • 20 Feb 2024 • Chongzhi Zhang, Zhiping Peng, Junhao Zheng, Qianli Ma
In this paper, we propose Conditional Logical Message Passing Transformer (CLMPT), which considers the difference between constants and variables in the case of using pre-trained neural link predictors and performs message passing conditionally on the node type.
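Type-conditional message passing can be sketched minimally as follows. This is not the CLMPT architecture itself (which uses a Transformer and pre-trained neural link predictors); it only illustrates the core idea the abstract describes: messages are aggregated for all nodes, but the update applied afterwards depends on whether a node is a constant or a variable. The names `W_var` and `conditional_message_passing` are illustrative, not from the paper.

```python
import numpy as np

def conditional_message_passing(h, adj, node_type, W_var):
    """One message-passing step conditioned on node type.

    h:         (N, d) node embeddings
    adj:       (N, N) adjacency matrix of the query graph
    node_type: length-N list of "constant" / "variable"
    W_var:     (d, d) update weights applied only to variable nodes
    """
    msgs = adj @ h                      # aggregate neighbor embeddings
    out = h.copy()
    for i, t in enumerate(node_type):
        if t == "variable":             # update is conditional on node type
            out[i] = np.tanh(msgs[i] @ W_var)
        # constant nodes keep their (fixed, pre-trained) embeddings
    return out
```

In this sketch only variable nodes change state, reflecting the observation that constants carry fixed entity embeddings from the pre-trained link predictor while variables must be inferred.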
no code implementations • 16 Feb 2024 • Shengjie Qiu, Junhao Zheng, Zhen Liu, Yicheng Luo, Qianli Ma
As for the E2O problem, we use knowledge distillation to maintain the model's discriminative ability for old entities.
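Knowledge distillation here means matching the current model's predictive distribution over old entity types to that of the previous model. A minimal sketch of the standard temperature-softened KL distillation loss (this is the generic technique, not the paper's exact formulation):

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(old_logits, new_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    old_logits: predictions of the frozen previous-step model (teacher)
    new_logits: predictions of the current model (student)
    The T**2 factor keeps gradient magnitudes comparable across temperatures.
    """
    p = softmax(old_logits, T)
    q = softmax(new_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

Minimizing this term keeps the student's scores for old entities close to the teacher's, preserving discriminative ability while new entity types are learned.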
no code implementations • 15 Feb 2024 • Junhao Zheng, Ruiyan Wang, Chongzhi Zhang, Huawen Feng, Qianli Ma
In this way, the model is encouraged to adapt to all classes with causal effects from both new and old data and thus alleviates the causal imbalance problem.
Tasks: Class Incremental Learning, Continual Named Entity Recognition, +6
1 code implementation • 13 Feb 2024 • Junhao Zheng, Shengjie Qiu, Qianli Ma
However, existing IL scenarios and datasets are ill-suited for assessing forgetting in PLMs, giving the illusion that PLMs do not suffer from catastrophic forgetting.
no code implementations • 17 Jan 2024 • Junhao Zheng, Qianli Ma, Zhen Liu, Binquan Wu, Huawen Feng
The discrepancy results in the model learning irrelevant information for old and pre-trained tasks, which leads to catastrophic forgetting and negative forward transfer.
1 code implementation • 13 Dec 2023 • Junhao Zheng, Shengjie Qiu, Qianli Ma
Most assume that catastrophic forgetting is the biggest obstacle to achieving superior IL performance and propose various techniques to overcome this issue.
1 code implementation • 19 Jun 2023 • Junhao Zheng, Qianli Ma, Shengjie Qiu, Yue Wu, Peitian Ma, Junlong Liu, Huawen Feng, Xichen Shang, Haibin Chen
Intriguingly, the unified objective can be seen as the sum of the vanilla fine-tuning objective, which learns new knowledge from target data, and the causal objective, which preserves old knowledge from PLMs.
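The decomposition of the unified objective can be sketched as a fine-tuning loss on target data plus a term that preserves the PLM's old knowledge. This is an illustrative stand-in, not the paper's causal objective: here the preservation term is simply a penalty on drifting from the pre-trained features, and `lam` is a hypothetical weighting.

```python
import numpy as np

def cross_entropy(logits, label):
    """Standard cross-entropy for a single example (numerically stable)."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def unified_objective(logits, label, feat, pretrained_feat, lam=0.5):
    """Vanilla fine-tuning loss + an old-knowledge-preserving term.

    The second term is an illustrative proxy for the causal objective:
    a mean-squared penalty on moving away from the PLM's features.
    """
    ft_loss = cross_entropy(logits, label)
    preserve = np.mean((np.asarray(feat, dtype=float)
                        - np.asarray(pretrained_feat, dtype=float)) ** 2)
    return float(ft_loss + lam * preserve)
```

When the current features equal the pre-trained ones, the objective reduces to plain fine-tuning; the second term only activates as the model drifts from the PLM.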
no code implementations • 7 Dec 2022 • Yinpeng Dong, Peng Chen, Senyou Deng, Lianji L, Yi Sun, Hanyu Zhao, Jiaxing Li, Yunteng Tan, Xinyu Liu, Yangyi Dong, Enhui Xu, Jincai Xu, Shu Xu, Xuelin Fu, Changfeng Sun, Haoliang Han, Xuchong Zhang, Shen Chen, Zhimin Sun, Junyi Cao, Taiping Yao, Shouhong Ding, Yu Wu, Jian Lin, Tianpeng Wu, Ye Wang, Yu Fu, Lin Feng, Kangkang Gao, Zeyu Liu, Yuanzhe Pang, Chengqi Duan, Huipeng Zhou, Yajie Wang, Yuhang Zhao, Shangbo Wu, Haoran Lyu, Zhiyu Lin, YiFei Gao, Shuang Li, Haonan Wang, Jitao Sang, Chen Ma, Junhao Zheng, Yijia Li, Chao Shen, Chenhao Lin, Zhichao Cui, Guoshuai Liu, Huafeng Shi, Kun Hu, Mengxin Zhang
The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems.
1 code implementation • 8 Oct 2022 • Junhao Zheng, Zhanxian Liang, Haibin Chen, Qianli Ma
Through causal inference, we identify that the forgetting is caused by the missing causal effect of the old data.
Ranked #1 on FG-1-PG-1 on conll2003