no code implementations • 19 Apr 2024 • Avinash Anand, Janak Kapuriya, Chhavi Kirtani, Apoorv Singh, Jay Saraf, Naman Lal, Jatin Kumar, Adarsh Raj Shivam, Astha Verma, Rajiv Ratn Shah, Roger Zimmermann
We employ the open-source LLaVA model to answer multimodal physics MCQs and compare its performance with and without RLHF.
1 code implementation • 19 Apr 2024 • Avinash Anand, Mohit Gupta, Kritarth Prasad, Navya Singla, Sanjana Sanjeev, Jatin Kumar, Adarsh Raj Shivam, Rajiv Ratn Shah
Our experiments reveal that, among the three models, MAmmoTH-13B emerges as the most proficient at solving the presented mathematical problems.
no code implementations • 30 Nov 2021 • Jatin Kumar, Indra Deep Mastan, Shanmuganathan Raman
With the help of a MobileNet-based architecture built on depthwise separable convolutions, we reduce model size and inference time without losing image quality.
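The size reduction above comes from depthwise separable convolutions, which factor a standard convolution into a per-channel (depthwise) filter followed by a 1×1 (pointwise) channel-mixing step. A minimal sketch of the parameter savings, using illustrative layer sizes rather than the paper's actual architecture:

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k filter per (input, output) channel pair
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel,
    # then pointwise: a 1x1 convolution mixing channels
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3x3 kernels, 128 input channels, 256 output channels
std = conv_params(3, 128, 256)                  # 294,912 parameters
sep = depthwise_separable_params(3, 128, 256)   # 33,920 parameters
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a k×k kernel the saving factor is roughly 1/c_out + 1/k², which is why MobileNet-style models shrink by nearly an order of magnitude at typical channel widths.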