Search Results for author: Farhan Dhanani

Found 2 papers, 2 papers with code

Attention Transformer Model for Translation of Similar Languages

1 code implementation • WMT (EMNLP) 2020 • Farhan Dhanani, Muhammad Rafi

The introduction of Recurrent Attention allows the decoder to focus effectively on the order of the source sequence at different decoding steps.

Decoder • Machine Translation • +1
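The excerpt above describes attention letting the decoder weigh source positions at each decoding step. A minimal sketch of that idea follows, using generic dot-product attention over source states; this is an illustrative assumption, not the paper's exact Recurrent Attention formulation, and all names here are hypothetical.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(decoder_state, source_states):
    """Score each source position against the current decoder state and
    return (attention weights, attention-weighted context vector)."""
    scores = [sum(d * s for d, s in zip(decoder_state, src))
              for src in source_states]
    weights = softmax(scores)
    context = [sum(w * src[i] for w, src in zip(weights, source_states))
               for i in range(len(decoder_state))]
    return weights, context

# Toy example: three encoded source positions, one decoder state.
src = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
dec = [1.0, 0.0]
w, ctx = attend(dec, src)
```

At each decoding step the decoder state changes, so the weights shift toward different source positions, which is how attention tracks the order of the source sequence.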

Artificial Interrogation for Attributing Language Models

1 code implementation • 20 Nov 2022 • Farhan Dhanani, Muhammad Rafi

The method then performs one-to-many pairing between them based on similarities in their generated responses, where more than one fine-tuned model can pair with a base model, but not vice versa.

Machine Translation • Multi Class Text Classification • +3
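The one-to-many pairing described in the excerpt can be sketched as an assignment by response similarity: each fine-tuned model is attributed to its most similar base model, so several fine-tuned models may share a base but each fine-tuned model gets exactly one. This is a hedged illustration assuming cosine similarity over response-feature vectors; the model names and vectors are made up, not the authors' data.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pair_models(base_vecs, tuned_vecs):
    """Map each fine-tuned model to the single most similar base model.

    base_vecs / tuned_vecs: dicts of model name -> response-feature vector.
    Several fine-tuned models may map to one base model (one-to-many),
    but each fine-tuned model is paired with exactly one base.
    """
    pairing = {}
    for t_name, t_vec in tuned_vecs.items():
        best = max(base_vecs, key=lambda b: cosine(base_vecs[b], t_vec))
        pairing[t_name] = best
    return pairing

# Hypothetical models and response features for illustration only.
base = {"gpt2": [1.0, 0.1, 0.0], "bloom": [0.0, 1.0, 0.2]}
tuned = {"ft-a": [0.9, 0.2, 0.1], "ft-b": [0.1, 0.9, 0.3], "ft-c": [1.0, 0.0, 0.0]}
print(pair_models(base, tuned))  # → {'ft-a': 'gpt2', 'ft-b': 'bloom', 'ft-c': 'gpt2'}
```

Note the asymmetry: the `max` over base models enforces one base per fine-tuned model, while nothing stops two fine-tuned models from choosing the same base.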
