no code implementations • WAT 2022 • Sudhansu Bala Das, Atharv Biradar, Tapas Kumar Mishra, Bidyut Kumar Patra
Previous studies on multilingual translation reveal that multilingual training is effective for languages with limited corpora.
no code implementations • 12 Jan 2024 • Sudhansu Bala Das, Leo Raphael Rodrigues, Tapas Kumar Mishra, Bidyut Kr. Patra
A baseline NMT system is built for these two ILs, and the effect of different dataset sizes is also investigated.
no code implementations • 22 Jun 2023 • Sudhansu Bala Das, Divyajyoti Panda, Tapas Kumar Mishra, Bidyut Kr. Patra, Asif Ekbal
To achieve this, English-Indic (EN-IL) models are also developed, with and without the usage of related languages.
no code implementations • 2 Jan 2023 • Sudhansu Bala Das, Divyajoti Panda, Tapas Kumar Mishra, Bidyut Kr. Patra
Among various NLP methods, Statistical Machine Translation (SMT).
no code implementations • 27 Sep 2022 • Sudhansu Bala Das, Atharv Biradar, Tapas Kumar Mishra, Bidyut Kumar Patra
In this paper, we propose an MNMT system to address the issues related to low-resource language translation.
no code implementations • 7 Jul 2022 • Abir Sen, Tapas Kumar Mishra, Ratnakar Dash
In our work, five pre-trained CNN models (VGG16, VGG19, ResNet50, ResNet101, and Inception-V1) and a Vision Transformer (ViT) are employed to classify hand gesture images.
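The transfer-learning idea behind this entry — freeze a pre-trained backbone and train only a small classification head on its features — can be sketched as follows. This is a minimal illustration with a mocked feature extractor and toy data, not the paper's actual models or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone (e.g. a CNN or ViT): in practice
# this would return real image embeddings; here we mock 64-d random features.
def backbone_features(images):
    return rng.standard_normal((len(images), 64))

# Toy "dataset": 20 gesture images across 4 gesture classes.
images = list(range(20))
labels = rng.integers(0, 4, size=20)
feats = backbone_features(images)

# Train a linear softmax-regression head on the frozen features.
W = np.zeros((64, 4))
for _ in range(200):
    logits = feats @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    onehot = np.eye(4)[labels]
    grad = feats.T @ (probs - onehot) / len(images)  # cross-entropy gradient
    W -= 0.5 * grad

train_acc = (np.argmax(feats @ W, axis=1) == labels).mean()
```

Only the head's weights `W` are updated; the backbone stays fixed, which is what makes comparing several pre-trained models cheap.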
no code implementations • 7 Jul 2022 • Tusarkanta Dalai, Tapas Kumar Mishra, Pankaj K Sa
The deep learning-based model includes a Bi-LSTM network, a CNN, a CRF layer, character sequence information, and pre-trained word vectors.
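The CRF layer mentioned here decodes the best tag sequence from per-token scores via the Viterbi algorithm. A minimal sketch, with made-up emission and transition scores rather than the paper's trained parameters:

```python
import numpy as np

def viterbi(emissions, transitions):
    """emissions: (T, K) per-token tag scores; transitions: (K, K) tag-to-tag scores."""
    T, K = emissions.shape
    score = emissions[0].copy()          # best score ending in each tag so far
    back = np.zeros((T, K), dtype=int)   # backpointers
    for t in range(1, T):
        # candidate score of every (previous tag -> current tag) path
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = np.argmax(cand, axis=0)
        score = np.max(cand, axis=0)
    # backtrack from the best final tag
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Illustrative 3-token sentence with 2 tags; transitions favour staying on a tag.
emissions = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.5]])
transitions = np.array([[0.5, -0.5], [-0.5, 0.5]])
tags = viterbi(emissions, transitions)  # -> [0, 1, 1]
```

In the full model, the emissions would come from the Bi-LSTM/CNN encoder and the transition matrix would be learned jointly.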
no code implementations • 25 Feb 2022 • Abir Sen, Tapas Kumar Mishra, Ratnakar Dash
In the last part, the output scores of CNN models are averaged to construct an optimal ensemble model for the final prediction.
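Score-level ensembling as described here — average each model's softmax outputs and take the argmax — can be sketched as follows; the logits below are illustrative, not outputs of the paper's CNNs:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Logits from three hypothetical CNNs for one image over 3 gesture classes.
logits = np.array([
    [2.0, 1.0, 0.1],   # model A favours class 0
    [0.5, 2.5, 0.2],   # model B favours class 1
    [2.2, 0.3, 0.1],   # model C favours class 0
])

# Average the per-model class probabilities, then predict the top class.
avg_scores = softmax(logits).mean(axis=0)
prediction = int(np.argmax(avg_scores))  # -> 0 (two of three models agree)
```

Averaging probabilities rather than hard votes lets a confident minority model still shift the ensemble when the others are uncertain.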