no code implementations • 26 May 2023 • Tao Yang, Cuize Han, Chen Luo, Parth Gupta, Jeff M. Phillips, Qingyao Ai
While previous studies have demonstrated the effectiveness of using user behavior signals (e.g., clicks) as both features and labels of LTR algorithms, we argue that existing LTR algorithms that indiscriminately treat behavior and non-behavior signals in input features can lead to suboptimal performance in practice.
no code implementations • 14 Jan 2021 • Tommaso Dreossi, Giorgio Ballardin, Parth Gupta, Jan Bakus, Yu-Hsiang Lin, Vamsi Salaka
The timed positions of documents retrieved by learning-to-rank models can be seen as signals.
no code implementations • LREC 2014 • Ajay Dubey, Parth Gupta, Vasudeva Varma, Paolo Rosso
Often the language pair does not have large bilingual comparable corpora, and in such cases the best automatic dictionary is upper-bounded by the quality and coverage of such corpora.
no code implementations • 13 Feb 2014 • Parth Gupta, Rafael E. Banchs, Paolo Rosso
We present a comprehensive study on the use of autoencoders for modelling text data, in which (unlike previous studies) we focus our attention on the following issues: i) we explore the suitability of two different models, bDA and rsDA, for constructing deep autoencoders for text data at the sentence level; ii) we propose and evaluate two novel metrics for better assessing the text-reconstruction capabilities of autoencoders; and iii) we propose an automatic method to find the critical bottleneck dimensionality for text language representations (below which structural information is lost).
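The core idea behind point iii) can be illustrated with a minimal sketch: train a tiny tied-weight autoencoder on bag-of-words vectors at several bottleneck sizes and watch where reconstruction error degrades. This is an assumption-laden toy (the function names `train_autoencoder` and `reconstruction_error`, the tied-weight linear decoder, and the plain gradient-descent training loop are all illustrative choices, not the bDA/rsDA models from the paper):

```python
import numpy as np

def train_autoencoder(X, k, lr=0.05, epochs=200, seed=0):
    """Toy tied-weight autoencoder: sigmoid(X @ W) is the bottleneck
    (dimension k), and H @ W.T reconstructs the input. Trains W by
    gradient descent on squared reconstruction error."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(d, k))
    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-X @ W))   # encoder activations (n x k)
        E = H @ W.T - X                     # reconstruction error (n x d)
        # Gradient of 0.5*||H W^T - X||^2 w.r.t. W, with tied weights:
        # one term through the decoder, one through the encoder.
        dH = E @ W
        dW = X.T @ (dH * H * (1.0 - H)) + E.T @ H
        W -= lr * dW / len(X)
    return W

def reconstruction_error(X, W):
    """Mean squared reconstruction error under the trained weights."""
    H = 1.0 / (1.0 + np.exp(-X @ W))
    return float(np.mean((H @ W.T - X) ** 2))

# Hypothetical usage: sweep k and look for the smallest bottleneck
# whose error stays low -- a crude stand-in for locating the
# "critical bottleneck dimensionality" the abstract refers to.
rng = np.random.default_rng(1)
X = (rng.random((20, 10)) > 0.5).astype(float)  # fake bag-of-words data
errors = {k: reconstruction_error(X, train_autoencoder(X, k)) for k in (1, 2, 4)}
```

In this sketch the error-versus-k curve plays the role of the diagnostic: below the critical dimensionality, reconstruction quality drops sharply because structural information no longer fits through the bottleneck.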