1 code implementation • COLING 2022 • Minyu Chen, Guoqiang Li, Chen Ma, Jingyang Li, Hongfei Fu
Open-source platforms such as GitHub and Stack Overflow both play significant roles in current software ecosystems.
no code implementations • 26 Apr 2024 • Congyuan Duan, Jingyang Li, Dong Xia
Making online decisions can be challenging when features are sparse and orthogonal to historical ones, especially when the optimal policy is learned through collaborative filtering.
1 code implementation • 21 Dec 2023 • Chengen Lai, Shengli Song, Shiqi Meng, Jingyang Li, Sitong Yan, GuangNeng Hu
To address the above issues, we propose a novel self-supervised Multi-level Contrastive Learning based natural language Explanation model (MCLE) for VQA with semantic-level, image-level, and instance-level factual and counterfactual samples.
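The excerpt names contrastive learning over factual and counterfactual samples. As a generic illustration (not the MCLE model itself, whose architecture is not described here), a minimal InfoNCE-style contrastive loss can be sketched: the anchor should score higher against its positive sample than against the negatives.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor embedding.

    anchor, positive: 1-D unit-norm embedding vectors.
    negatives: 2-D array, one negative embedding per row.
    """
    pos_sim = anchor @ positive / temperature
    neg_sims = negatives @ anchor / temperature
    logits = np.concatenate([[pos_sim], neg_sims])
    # Numerically stable log-sum-exp over positive + negative similarities.
    log_denom = np.log(np.sum(np.exp(logits - logits.max()))) + logits.max()
    # Cross-entropy with the positive as the target class.
    return -(pos_sim - log_denom)

# Toy unit embeddings: the positive is aligned with the anchor.
rng = np.random.default_rng(0)
anchor = np.array([1.0, 0.0])
positive = np.array([0.99, 0.14])
positive /= np.linalg.norm(positive)
negatives = rng.normal(size=(5, 2))
negatives /= np.linalg.norm(negatives, axis=1, keepdims=True)
loss = info_nce_loss(anchor, positive, negatives)
print(loss)  # an aligned positive should keep the loss low
```

Minimizing this loss pulls positive pairs together and pushes negatives apart; MCLE applies the idea at semantic, image, and instance levels.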
no code implementations • 26 Jun 2023 • Kuangyu Ding, Jingyang Li, Kim-Chuan Toh
Experimental results on representative benchmarks demonstrate the effectiveness and robustness of MSBPG in training neural networks.
no code implementations • 6 Jun 2023 • Jian-Feng Cai, Jingyang Li, Dong Xia
Under the fixed step size regime, a fascinating trilemma concerning the convergence rate, statistical error rate, and regret is observed.
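The trade-off behind the trilemma can be seen in a much simpler setting than the paper's: online SGD with a fixed step size on streaming linear regression (a scalar analogue, not the paper's algorithm) converges quickly at first, then plateaus at a statistical error floor set by the step size and the noise level.

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true = np.ones(5)
eta = 0.05                    # fixed step size
theta = np.zeros(5)
errs = []
for t in range(2000):
    x = rng.normal(size=5)
    y = x @ theta_true + 0.5 * rng.normal()   # noisy data stream
    theta -= eta * (x @ theta - y) * x        # one SGD step per sample
    errs.append(np.linalg.norm(theta - theta_true))
# Fast initial decay, then a plateau governed by eta and the noise variance.
print(errs[0], errs[-1])
```

Shrinking eta lowers the plateau (smaller statistical error) but slows the initial decay (worse convergence rate and regret), which is the tension the excerpt refers to.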
no code implementations • 10 May 2023 • Yinan Shen, Jingyang Li, Jian-Feng Cai, Dong Xia
The algorithm is not only computationally efficient with linear convergence but also statistically optimal, whether the noise is Gaussian or heavy-tailed with a finite (1+epsilon)-th moment.
no code implementations • 29 Nov 2022 • Bowen Yu, Zhenyu Zhang, Jingyang Li, Haiyang Yu, Tingwen Liu, Jian Sun, Yongbin Li, Bin Wang
Open Information Extraction (OpenIE) facilitates the open-domain discovery of textual facts.
no code implementations • 14 Jul 2022 • Zhenyu Zhang, Bowen Yu, Haiyang Yu, Tingwen Liu, Cheng Fu, Jingyang Li, Chengguang Tang, Jian Sun, Yongbin Li
In this paper, we propose a Layout-aware document-level Information Extraction dataset, LIE, to facilitate the study of extracting both structural and semantic knowledge from visually rich documents (VRDs), so as to generate accurate responses in dialogue systems.
no code implementations • 24 May 2022 • Shaowen Zhou, Bowen Yu, Aixin Sun, Cheng Long, Jingyang Li, Haiyang Yu, Jian Sun, Yongbin Li
Open Information Extraction (OpenIE) facilitates domain-independent discovery of relational facts from large corpora.
Ranked #1 on Open Information Extraction on CaRB
no code implementations • 2 Mar 2022 • Yinan Shen, Jingyang Li, Jian-Feng Cai, Dong Xia
Lastly, RsGrad is applicable to low-rank tensor estimation under heavy-tailed noise, where a statistically optimal rate is attainable with the same phenomenon of dual-phase convergence, and a novel shrinkage-based second-order moment method is guaranteed to deliver a warm initialization.
no code implementations • 27 Aug 2021 • Jian-Feng Cai, Jingyang Li, Dong Xia
In this paper, we provide, to the best of our knowledge, the first theoretical guarantees for the convergence of the RGrad algorithm for TT-format tensor completion, under a nearly optimal sample size condition.
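RGrad in the paper operates on tensor-train decompositions; the same project-and-retract idea is easiest to see in the matrix analogue, sketched below as gradient descent on the observed entries followed by a rank-r truncated-SVD retraction. This is an illustration of the general scheme, not the paper's TT-format algorithm.

```python
import numpy as np

def lowrank_complete(M_obs, mask, rank, step=1.0, iters=300):
    """Matrix completion: gradient step on observed entries + rank-r retraction."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        grad = mask * (X - M_obs)     # gradient of 0.5*||P_Omega(X) - P_Omega(M)||^2
        Y = X - step * grad           # gradient step
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # retract back to rank r
    return X

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))  # rank-2 ground truth
mask = rng.random((30, 30)) < 0.7                        # observe ~70% of entries
X_hat = lowrank_complete(A * mask, mask, rank=2)
err = np.linalg.norm(X_hat - A) / np.linalg.norm(A)
print(err)  # relative error should be small once enough entries are seen
```

The sample size condition in the abstract plays the same role here: with too few observed entries, no rank-constrained iteration can identify the missing ones.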
no code implementations • 6 Oct 2020 • Guanglin Niu, Bo Li, Yongfei Zhang, Yongpan Sheng, Chuan Shi, Jingyang Li, ShiLiang Pu
Inference on a large-scale knowledge graph (KG) is of great importance for KG applications like question answering.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Guanglin Niu, Bo Li, Yongfei Zhang, ShiLiang Pu, Jingyang Li
Recent advances in Knowledge Graph Embedding (KGE) allow for representing entities and relations in continuous vector spaces.
1 code implementation • 20 Nov 2019 • Guanglin Niu, Yongfei Zhang, Bo Li, Peng Cui, Si Liu, Jingyang Li, Xiaowei Zhang
Representation learning on a knowledge graph (KG) is to embed entities and relations of a KG into low-dimensional continuous vector spaces.
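As a concrete instance of embedding entities and relations into low-dimensional vector spaces, the classic TransE scoring function treats a relation as a translation between entity vectors. This sketch illustrates the general KGE idea only; it is not the model proposed in the paper.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score: negative distance ||h + r - t||.

    Scores closer to 0 mean the triple (h, r, t) is more plausible.
    """
    return -np.linalg.norm(h + r - t)

# Toy 3-d embeddings chosen so that (paris, capital_of, france) holds exactly.
paris = np.array([1.0, 0.0, 0.0])
capital_of = np.array([0.0, 1.0, 0.0])
france = np.array([1.0, 1.0, 0.0])
berlin = np.array([0.0, 0.0, 1.0])

true_score = transe_score(paris, capital_of, france)    # 0.0: plausible
false_score = transe_score(berlin, capital_of, france)  # negative: implausible
print(true_score, false_score)
```

Training such a model pushes scores of observed triples above those of corrupted ones, which is what makes the learned vectors useful for downstream KG inference.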