no code implementations • EMNLP 2021 • Jing Lu, Gustavo Hernandez Abrego, Ji Ma, Jianmo Ni, Yinfei Yang
In the context of neural passage retrieval, we study three promising techniques: synthetic data generation, negative sampling, and fusion.
no code implementations • EMNLP 2021 • Jing Lu, Vincent Ng
Despite recent promising results on the application of span-based models for event reference interpretation, there is a lack of understanding of what has been improved.
no code implementations • EMNLP 2020 • Jing Lu, Vincent Ng
Despite the significant progress on entity coreference resolution observed in recent years, there is a general lack of understanding of what has been improved.
no code implementations • 25 Dec 2023 • Yuteng Liu, Haowen Li, Haishan Zou, Jing Lu, Zhibin Lin
Active headrests can reduce low-frequency noise around the ears using an active noise control (ANC) system.
1 code implementation • 18 Dec 2023 • Yimeng Bai, Yang Zhang, Jing Lu, Jianxin Chang, Xiaoxue Zang, Yanan Niu, Yang Song, Fuli Feng
Through meta-learning techniques, LabelCraft effectively addresses the bi-level optimization hurdle posed by the recommender and labeling models, enabling the automatic acquisition of intricate label generation mechanisms. Extensive experiments on real-world datasets corroborate LabelCraft's excellence across varied operational metrics, encompassing usage time, user engagement, and retention.
no code implementations • 30 Jun 2023 • Yang Zhang, Yimeng Bai, Jianxin Chang, Xiaoxue Zang, Song Lu, Jing Lu, Fuli Feng, Yanan Niu, Yang Song
With the proliferation of short video applications, the significance of short video recommendations has vastly increased.
no code implementations • 1 Jun 2023 • Xiaohuai Le, Tong Lei, Li Chen, Yiqing Guo, Chao He, Cheng Chen, Xianjun Xia, Hua Gao, Yijian Xiao, Piao Ding, Shenyi Song, Jing Lu
Filter banks are often used in lightweight full-band speech enhancement models to reduce the number of feature dimensions.
no code implementations • 20 Feb 2023 • Xiaohuai Le, Li Chen, Chao He, Yiqing Guo, Cheng Chen, Xianjun Xia, Jing Lu
Target speaker information can be utilized in speech enhancement (SE) models to more effectively extract the desired speech.
no code implementations • 5 Feb 2023 • Jianxin Chang, Chenbin Zhang, Zhiyi Fu, Xiaoxue Zang, Lin Guan, Jing Lu, Yiqun Hui, Dewei Leng, Yanan Niu, Yang Song, Kun Gai
For the user-item cross features, we compress each into a one-dimensional bias term in the attention score calculation to save computational cost.
no code implementations • CVPR 2023 • Beitong Zhou, Jing Lu, Kerui Liu, Yunlu Xu, Zhanzhan Cheng, Yi Niu
Recent developments of the application of Contrastive Learning in Semi-Supervised Learning (SSL) have demonstrated significant advancements, as a result of its exceptional ability to learn class-aware cluster representations and the full exploitation of massive unlabeled data.
1 code implementation • CVPR 2023 • Linglan Zhao, Jing Lu, Yunlu Xu, Zhanzhan Cheng, Dashan Guo, Yi Niu, Xiangzhong Fang
While knowledge distillation, a prevailing technique in CIL, can alleviate the catastrophic forgetting of older classes by regularizing outputs between the current and previous models, it fails to consider the overfitting risk of novel classes in FSCIL.
no code implementations • 28 Dec 2022 • Tianyou Li, Hongji Duan, Sipei Zhao, Jing Lu, Ian S. Burnett
Recently, distributed active noise control systems based on diffusion adaptation have attracted significant research interest due to their balance between computational complexity and stability compared to conventional centralized and decentralized adaptation schemes.
no code implementations • 20 Dec 2022 • Jing Lu, Keith Hall, Ji Ma, Jianmo Ni
We present Hybrid Infused Reranking for Passage Retrieval (HYRR), a framework for training rerankers based on a hybrid of BM25 and neural retrieval models.
1 code implementation • 17 Oct 2022 • Sanli Tang, Zhongyu Zhang, Zhanzhan Cheng, Jing Lu, Yunlu Xu, Yi Niu, Fan He
Then, a robust distilling module (RDM) is applied to construct the global knowledge based on the prototypes and to filter out noisy global and local knowledge by measuring the discrepancy of the representations in two feature spaces.
no code implementations • 12 Oct 2022 • Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, Michael Bendersky
Recently, substantial progress has been made in text ranking based on pretrained language models such as BERT.
no code implementations • 23 Sep 2022 • Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, Ming-Wei Chang
To amplify the power of a few examples, we propose Prompt-based Query Generation for Retriever (Promptagator), which leverages large language models (LLMs) as a few-shot query generator and creates task-specific retrievers based on the generated data.
no code implementations • 30 Jul 2022 • Fanrong Shi, Xianguo Tuo, Simon X. Yang, Jing Lu, Huailiang Li
Accurate and fast-convergent time synchronization is very important for wireless sensor networks.
1 code implementation • 4 Jul 2022 • Guoliang Cheng, Lele Liao, Kai Chen, Yuxiang Hu, Changbao Zhu, Jing Lu
The recently proposed semi-blind source separation (SBSS) method for nonlinear acoustic echo cancellation (NAEC) outperforms adaptive NAEC in attenuating the nonlinear acoustic echo.
1 code implementation • 29 Jun 2022 • Qinwen Hu, Zhongshu Hou, Xiaohuai Le, Jing Lu
Deep neural network based full-band speech enhancement systems face the challenges of high computational resource demands and imbalanced frequency distributions.
no code implementations • Findings (ACL) 2022 • Kai Hui, Honglei Zhuang, Tao Chen, Zhen Qin, Jing Lu, Dara Bahri, Ji Ma, Jai Prakash Gupta, Cicero Nogueira dos Santos, Yi Tay, Don Metzler
This results in significant inference time speedups since the decoder-only architecture only needs to learn to interpret static encoder embeddings during inference.
no code implementations • 16 Mar 2022 • Jing Lu, Yunlu Xu, Hao Li, Zhanzhan Cheng, Yi Niu
Accordingly, the embedding space can be better optimized to discriminate among the predefined classes and between knowns and unknowns.
no code implementations • 25 Jan 2022 • Tao Chen, Mingyang Zhang, Jing Lu, Michael Bendersky, Marc Najork
In this work, we carefully select five datasets, including two in-domain datasets and three out-of-domain datasets with different levels of domain shift, and study the generalization of a deep model in a zero-shot setting.
2 code implementations • 15 Dec 2021 • Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, Yinfei Yang
With multi-stage training, surprisingly, scaling up the model size brings significant improvement on a variety of retrieval tasks, especially for out-of-domain generalization.
Ranked #9 on Zero-shot Text Search on BEIR
no code implementations • 26 Jul 2021 • Zhanzhan Cheng, Jing Lu, Baorui Zou, Shuigeng Zhou, Fei Wu
During the competition period (opened on 1 March 2021 and closed on 11 April 2021), a total of 24 teams participated in the three proposed tasks, making 46 valid submissions.
1 code implementation • NAACL 2021 • Jing Lu, Vincent Ng
We propose a neural event coreference model in which event coreference is jointly trained with five tasks: trigger detection, entity coreference, anaphoricity determination, realis detection, and argument extraction.
no code implementations • 11 Jan 2021 • Felix Denzinger, Michael Wels, Christian Hopfgartner, Jing Lu, Max Schöbinger, Andreas Maier, Michael Sühling
However, to enable clinical research with the help of these algorithms, a software solution, which enables manual correction, comprehensive visual feedback and tissue analysis capabilities, is needed.
no code implementations • AACL 2020 • Jing Lu, Vincent Ng
We present two extensions to a state-of-the-art joint model for event coreference resolution, which involve incorporating (1) a supervised topic model for improving trigger detection by providing global context, and (2) a preprocessing module that seeks to improve event coreference by discarding unlikely candidate antecedents of an event mention using discourse contexts computed based on salient entities.
no code implementations • International Conference on Blockchain and Trustworthy Systems 2020 • Hanlei Cheng, Jing Lu, Zhiyu Xiang, Bin Song
Distance education has become an important learning method for students.
1 code implementation • 25 Oct 2020 • Guoliang Cheng, Lele Liao, Hongsheng Chen, Jing Lu
Unlike the commonly utilized adaptive algorithm, the proposed SBSS is based on the independence between the near-end signal and the reference signals, and is less sensitive to the mismatch of nonlinearity between the numerical and actual models.
no code implementations • 23 Oct 2020 • Jing Lu, Gustavo Hernandez Abrego, Ji Ma, Jianmo Ni, Yinfei Yang
In this paper we explore the effects of negative sampling in dual encoder models used to retrieve passages for automatic question answering.
no code implementations • NeurIPS 2020 • Hu Liu, Jing Lu, Xiwei Zhao, Sulong Xu, Hao Peng, Yutong Liu, Zehua Zhang, Jian Li, Junsheng Jin, Yongjun Bao, Weipeng Yan
First, conventional attentions mostly limit the attention field only to a single user's behaviors, which is not suitable in e-commerce where users often hunt for new demands that are irrelevant to any historical behaviors.
no code implementations • 10 Aug 2020 • Wenzhi Fan, Jing Lu
Recently, a partitioned-block-based frequency-domain Kalman filter (PFKF) has been proposed for acoustic echo cancellation.
no code implementations • 18 Jun 2020 • Hu Liu, Jing Lu, Hao Yang, Xiwei Zhao, Sulong Xu, Hao Peng, Zehua Zhang, Wenjie Niu, Xiaokun Zhu, Yongjun Bao, Weipeng Yan
Existing algorithms usually extract visual features using off-the-shelf Convolutional Neural Networks (CNNs) and late fuse the visual and non-visual features for the finally predicted CTR.
no code implementations • 27 May 2020 • Jing Lu, Baorui Zou, Zhanzhan Cheng, ShiLiang Pu, Shuigeng Zhou, Yi Niu, Fei Wu
In this paper, we define the problem of object quality assessment for the first time and propose an effective approach named Object-QA to assess highly reliable quality scores for object images.
1 code implementation • 27 May 2020 • Peng Zhang, Yunlu Xu, Zhanzhan Cheng, ShiLiang Pu, Jing Lu, Liang Qiao, Yi Niu, Fei Wu
Since real-world ubiquitous documents (e.g., invoices, tickets, resumes and leaflets) contain rich information, automatic document image understanding has become a hot topic.
1 code implementation • 16 May 2020 • Zhaoyi Gu, Lele Liao, Kai Chen, Jing Lu
Extracting the desired speech from a mixture is a meaningful and challenging task.
1 code implementation • 15 May 2020 • Hongsheng Chen, Teng Xiang, Kai Chen, Jing Lu
Acoustic echo cannot be entirely removed by linear adaptive filters due to the nonlinear relationship between the echo and far-end signal.
no code implementations • ICCV 2019 • Jing Lu, Chaofan Xu, Wei Zhang, Ling-Yu Duan, Tao Mei
Consequently, gradient descent direction on the training loss is mostly inconsistent with the direction of optimizing the concerned evaluation metric.
1 code implementation • 23 Jul 2019 • Yaxiong Wang, Hao Yang, Xueming Qian, Lin Ma, Jing Lu, Biao Li, Xin Fan
Then, an attention mechanism is proposed to model the relations between the image region and blocks and to generate a valuable position feature, which is further utilized to enhance the region expression and model a more reliable relationship between the visual image and the textual sentence.
no code implementations • NAACL 2019 • Yin Jou Huang, Jing Lu, Sadao Kurohashi, Vincent Ng
Argument compatibility is a linguistic condition that is frequently incorporated into modern event coreference resolution systems.
1 code implementation • 8 Mar 2019 • Zhanzhan Cheng, Jing Lu, Yi Niu, ShiLiang Pu, Fei Wu, Shuigeng Zhou
Video text spotting is still an important research topic due to its various real-world applications.
no code implementations • 8 Feb 2018 • Steven C. H. Hoi, Doyen Sahoo, Jing Lu, Peilin Zhao
Online learning represents an important family of machine learning algorithms, in which a learner attempts to resolve an online prediction (or any type of decision-making) task by learning a model/hypothesis from a sequence of data instances one at a time.
4 code implementations • 10 Nov 2017 • Doyen Sahoo, Quang Pham, Jing Lu, Steven C. H. Hoi
Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch learning setting, which requires the entire training data to be made available prior to the learning task.
no code implementations • ACL 2017 • Jing Lu, Vincent Ng
While joint models have been developed for many NLP tasks, the vast majority of event coreference resolvers, including the top-performing resolvers competing in the recent TAC KBP 2016 Event Nugget Detection and Coreference task, are pipeline-based, where the propagation of errors from the trigger detection component to the event coreference component is a major performance limiting factor.
no code implementations • COLING 2016 • Jing Lu, Deepak Venugopal, Vibhav Gogate, Vincent Ng
Event coreference resolution is a challenging problem since it relies on several components of the information extraction pipeline that typically yield noisy outputs.
1 code implementation • 28 Oct 2016 • Yue Wu, Steven C. H. Hoi, Chenghao Liu, Jing Lu, Doyen Sahoo, Nenghai Yu
SOL is an open-source library for scalable online learning algorithms, and is particularly suitable for learning with high-dimensional data.
no code implementations • LREC 2016 • Jing Lu, Vincent Ng
Multi-pass sieve approaches have been successfully applied to entity coreference resolution and many other tasks in natural language processing (NLP), owing in part to the ease of designing high-precision rules for these tasks.
no code implementations • 9 Feb 2016 • Jing Lu, Jan Egger, Andreas Wimmer, Stefan Großkopf, Bernd Freisleben
The aneurysm segmentation includes two steps: first, the inner boundary is segmented based on a grey level model with two thresholds; then, an adapted active contour model approach is applied to the more complicated outer boundary segmentation, with its initialization based on the available inner boundary segmentation.
no code implementations • 16 Nov 2015 • Jing Lu, Steven C. H. Hoi, Doyen Sahoo, Peilin Zhao
To overcome this drawback, we present a novel framework of Budget Online Multiple Kernel Learning (BOMKL) and propose a new Sparse Passive Aggressive learning to perform effective budget online learning.