Search Results for author: Barath Raj Kandur Raja

Found 7 papers, 0 papers with code

COPS: A Compact On-device Pipeline for real-time Smishing detection

no code implementations · 6 Feb 2024 · Harichandana B S S, Sumit Kumar, Manjunath Bhimappa Ujjinakoppa, Barath Raj Kandur Raja

Smartphones have become indispensable in our daily lives and can do almost everything, from communication to online shopping.

PrivPAS: A real time Privacy-Preserving AI System and applied ethics

no code implementations · 5 Feb 2022 · Harichandana B S S, Vibhav Agarwal, Sourav Ghosh, Gopi Ramena, Sumit Kumar, Barath Raj Kandur Raja

This motivates us to work towards a solution to generate privacy-conscious cues for raising awareness in smartphone users of any sensitivity in their viewfinder content.

Ethics · Privacy Preserving

edATLAS: An Efficient Disambiguation Algorithm for Texting in Languages with Abugida Scripts

no code implementations · 5 Jan 2021 · Sourav Ghosh, Sourabh Vasant Gothe, Chandramouli Sanchi, Barath Raj Kandur Raja

To this end, we propose a disambiguation algorithm and showcase its usefulness in two novel mutually non-exclusive input methods for languages natively using the abugida writing system: (a) disambiguation of ambiguous input for abugida scripts, and (b) disambiguation of word variants in romanized scripts.
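The core idea described above, resolving an ambiguous key sequence into ranked word candidates, can be sketched as follows. This is a minimal toy illustration, not the edATLAS algorithm itself: the key-to-character mapping and word frequencies are hypothetical placeholder values, whereas a real system would rank candidates with an on-device language model.

```python
from itertools import product

# Hypothetical toy word-frequency table (a real system would use a language model).
WORD_FREQ = {"नमक": 120, "नमन": 80}

# Hypothetical mapping from an ambiguous key press to its candidate characters.
KEY_CANDIDATES = {"1": ["न", "य"], "2": ["म"], "3": ["क", "न"]}

def disambiguate(key_sequence):
    """Expand every candidate spelling of the key sequence, rank known words by frequency."""
    expansions = product(*(KEY_CANDIDATES[key] for key in key_sequence))
    candidates = ("".join(chars) for chars in expansions)
    scored = sorted(((WORD_FREQ.get(w, 0), w) for w in candidates), reverse=True)
    return [word for freq, word in scored if freq > 0]

print(disambiguate("123"))  # → ['नमक', 'नमन']
```

The same ranking step also covers the paper's second use case, scoring romanized spelling variants against a shared vocabulary, since both reduce to picking the most probable word among the expansions.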

Language Modelling

EmpLite: A Lightweight Sequence Labeling Model for Emphasis Selection of Short Texts

no code implementations · ICON 2020 · Vibhav Agarwal, Sourav Ghosh, Kranti Chalamalasetti, Bharath Challa, Sonal Kumari, Harshavardhana, Barath Raj Kandur Raja

To the best of our knowledge, this work presents the first lightweight deep learning approach to emphasis selection suitable for smartphone deployment.

LiteMuL: A Lightweight On-Device Sequence Tagger using Multi-task Learning

no code implementations · 15 Dec 2020 · Sonal Kumari, Vibhav Agarwal, Bharath Challa, Kranti Chalamalasetti, Sourav Ghosh, Harshavardhana, Barath Raj Kandur Raja

The proposed LiteMuL not only outperforms the current state-of-the-art results but also surpasses our proposed on-device task-specific models, with accuracy gains of up to 11% and a 50%-56% reduction in model size.
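The model-size reduction reported above comes from the standard multi-task pattern of sharing one encoder across tasks and attaching small task-specific heads. Below is a minimal NumPy sketch of that structure only; all dimensions and parameter names are hypothetical, and the actual LiteMuL architecture is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
VOCAB, EMB, HIDDEN, NER_TAGS, POS_TAGS = 100, 16, 32, 5, 8

# Shared parameters reused by every task: this sharing is where the size saving
# over separate task-specific models comes from.
embed = rng.normal(size=(VOCAB, EMB))
W_shared = rng.normal(size=(EMB, HIDDEN))

# Small task-specific heads on top of the shared encoder.
W_ner = rng.normal(size=(HIDDEN, NER_TAGS))
W_pos = rng.normal(size=(HIDDEN, POS_TAGS))

def encode(token_ids):
    """Shared encoder: embedding lookup followed by one dense layer."""
    return np.tanh(embed[token_ids] @ W_shared)

def tag(token_ids, head):
    """Per-token tag predictions from one task-specific head."""
    return (encode(token_ids) @ head).argmax(axis=-1)

tokens = np.array([3, 17, 42])
print(tag(tokens, W_ner).shape, tag(tokens, W_pos).shape)  # → (3,) (3,)
```

Two separate single-task models would each carry their own copy of the embedding and encoder weights; sharing them, as sketched here, keeps only one copy plus the two small heads.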

Multi-Task Learning · NER · +1
