no code implementations • 9 Apr 2024 • Omid Ghahroodi, Marzia Nouri, Mohammad Vali Sanian, Alireza Sahebi, Doratossadat Dastgheib, Ehsaneddin Asgari, Mahdieh Soleymani Baghshah, Mohammad Hossein Rohban
Evaluating Large Language Models (LLMs) is challenging due to their generative nature, necessitating precise evaluation methodologies.
no code implementations • 27 Mar 2024 • Reza Abbasi, Mohammad Samiei, Mohammad Hossein Rohban, Mahdieh Soleymani Baghshah
Vision-language models, such as CLIP, have shown promising Out-of-Distribution (OoD) generalization under various types of distribution shifts.
no code implementations • 5 Mar 2024 • Mohammad Rostami, Amin Ghariyazi, Hamed Dashti, Mohammad Hossein Rohban, Hamid R. Rabiee
This is because most existing methods are trained on separate datasets with different genes and cells, which limits their generalizability.
no code implementations • 8 Dec 2023 • Mahdi Ghaznavi, Hesam Asadollahzadeh, HamidReza Yaghoubi Araghi, Fahimeh Hosseini Noohdani, Mohammad Hossein Rohban, Mahdieh Soleymani Baghshah
In order to provide group robustness without such annotations, we propose a new method, called loss-based feature re-weighting (LFR), in which we infer a grouping of the data by evaluating an ERM-pre-trained model on a small left-out split of the training data.
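The grouping step described above can be sketched in a few lines. This is a minimal, illustrative sketch of the loss-based idea only, assuming a mean-loss cutoff and inverse-group-size weights; the actual LFR recipe in the paper may differ.

```python
import numpy as np

def loss_based_grouping(losses, threshold=None):
    """Split held-out samples into inferred groups by their loss under an
    ERM-pre-trained model. The mean-loss cutoff is an assumed heuristic."""
    losses = np.asarray(losses, dtype=float)
    if threshold is None:
        threshold = losses.mean()
    # group 1 = high-loss samples, presumed to lack the spurious shortcut
    return (losses > threshold).astype(int)

def group_weights(groups):
    """Re-weight samples inversely to inferred group size, so the minority
    (high-loss) group gets larger weight; normalized to mean weight 1."""
    counts = np.bincount(groups, minlength=2)
    w = 1.0 / counts[groups]
    return w / w.sum() * len(groups)

losses = [0.1, 0.2, 0.15, 2.0, 1.8]  # toy per-sample held-out losses
g = loss_based_grouping(losses)
w = group_weights(g)
```

The re-weighted loss can then be used to retrain or fine-tune the model so that the inferred minority group is not dominated by the majority group.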
no code implementations • 31 Oct 2023 • Mohammad Azizmalayeri, Reza Abbasi, Amir Hosein Haji Mohammad rezaie, Reihaneh Zohrabi, Mahdi Amiri, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
A promising solution to this problem is last-layer retraining, which involves retraining the linear classifier head on a small subset of data without spurious cues.
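Last-layer retraining can be sketched as follows: freeze the feature extractor and refit only a linear head on a small subset believed to be free of spurious cues. The tiny logistic-regression loop below is illustrative, not the exact procedure from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def retrain_last_layer(features, labels, lr=0.5, steps=500):
    """Retrain only a linear classifier head on frozen features via
    gradient descent on the logistic loss (an illustrative sketch)."""
    n, d = features.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))      # sigmoid
        grad = p - labels                 # dL/dz for logistic loss
        w -= lr * features.T @ grad / n
        b -= lr * grad.mean()
    return w, b

# toy "frozen features": class-separable along dim 0, dim 1 is noise
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
X[:, 1] = rng.normal(0, 1, 40)
y = np.array([0] * 20 + [1] * 20)
w, b = retrain_last_layer(X, y)
acc = ((X @ w + b > 0).astype(int) == y).mean()
```

Because only the head is retrained, the small clean subset is enough to correct reliance on spurious features without touching the expensive backbone.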
no code implementations • 29 Oct 2023 • Mahdi Salmani, Alireza Dehghanpour Farashah, Mohammad Azizmalayeri, Mahdi Amiri, Navid Eslami, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
Despite the remarkable success achieved by deep learning algorithms in various domains, such as computer vision, they remain vulnerable to adversarial perturbations.
no code implementations • 15 Oct 2023 • Arshia Soltani Moakhar, Mohammad Azizmalayeri, Hossein Mirzaei, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
Despite considerable theoretical progress in the training of neural networks viewed as a multi-agent system of neurons, particularly concerning biological plausibility and decentralized training, their applicability to real-world problems remains limited due to scalability issues.
no code implementations • 8 Jul 2023 • Amirhossein Askari-Farsangi, Ali Sharifi-Zarchi, Mohammad Hossein Rohban
We introduced a novel pipeline for diagnosing ALL that approximates the process used by hematologists, is sensitive to disease biomarkers, and achieves an accuracy of 96.15%, an F1-score of 94.24%, a sensitivity of 97.56%, and a specificity of 90.91% on ALL IDB 1.
no code implementations • 14 Apr 2023 • Sina Abdous, Reza Abdollahzadeh, Mohammad Hossein Rohban
To follow this vision, we developed KS-GNNExplainer, the first instance-level graph neural network explainer that leverages existing instance-level approaches effectively to provide more informative and reliable explanations, which are crucial for applied AI in the health domain.
no code implementations • 25 Jan 2023 • Mohammad Azizmalayeri, Arman Zarei, Alireza Isavand, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
For this purpose, we first demonstrate that the existing model-based methods can be equivalent to applying smaller perturbations or optimization weights to the hard training examples.
1 code implementation • 25 Oct 2022 • Ali Garjani, Atoosa Malemir Chegini, Mohammadreza Salehi, Alireza Tabibzadeh, Parastoo Yousefi, Mohammad Hossein Razizadeh, Moein Esghaei, Maryam Esghaei, Mohammad Hossein Rohban
This helps the model learn a shared, unique representation of normal training samples as much as possible, which improves the discernibility and detectability of mutated samples from unmutated ones at test time.
1 code implementation • 30 Sep 2022 • Mohammad Azizmalayeri, Arshia Soltani Moakhar, Arman Zarei, Reihaneh Zohrabi, Mohammad Taghi Manzuri, Mohammad Hossein Rohban
Therefore, unlike OOD detection in the standard setting, access to OOD as well as in-distribution samples seems necessary in the adversarial training setup.
Out-of-Distribution (OOD) Detection
1 code implementation • 14 Jul 2022 • Simin Shekarpaz, Mohammad Azizmalayeri, Mohammad Hossein Rohban
In this paper, we propose the physics-informed adversarial training (PIAT) of neural networks for solving nonlinear differential equations (NDEs).
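The core adversarial step can be illustrated on a toy problem: perturb each collocation point in the direction that increases the squared PDE residual, then train on the perturbed points. The FGSM-style step, the finite-difference gradient, and the toy equation u'(x) = u(x) below are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def residual(u, du_dx, x):
    """PDE residual for the toy equation u'(x) = u(x)."""
    return du_dx(x) - u(x)

def adversarial_collocation(u, du_dx, x, eps=0.1, delta=1e-4):
    """Shift each collocation point by eps in the sign of the gradient of
    the squared residual (an FGSM-style step, sketching the PIAT idea)."""
    r = residual(u, du_dx, x)
    r_plus = residual(u, du_dx, x + delta)
    grad = (r_plus**2 - r**2) / delta   # d(r^2)/dx via finite differences
    return x + eps * np.sign(grad)

# toy "network": u(x) = x^2, so u'(x) = 2x and the residual is 2x - x^2
u = lambda x: x**2
du = lambda x: 2 * x
x = np.linspace(0.5, 1.5, 5)
x_adv = adversarial_collocation(u, du, x)
```

Training on `x_adv` in place of (or alongside) `x` penalizes the network where its physics violation is worst, analogous to adversarial training on worst-case inputs.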
no code implementations • 9 Jun 2022 • Mohammad Azizmalayeri, Mohammad Hossein Rohban
Despite advances in image classification methods, detecting the samples not belonging to the training classes is still a challenging problem.
1 code implementation • 9 Jun 2022 • Sina Taslimi, Soroush Taslimi, Nima Fathi, Mohammadreza Salehi, Mohammad Hossein Rohban
Our model has been tested with varying numbers of MLP layers in the head, each achieving a competitive AUC score on all classes.
1 code implementation • 28 May 2022 • Hossein Mirzaei, Mohammadreza Salehi, Sajjad Shahabi, Efstratios Gavves, Cees G. M. Snoek, Mohammad Sabokrou, Mohammad Hossein Rohban
Effectiveness of our method for both the near-distribution and standard novelty detection is assessed through extensive experiments on datasets in diverse applications such as medical images, object classification, and quality control.
Ranked #2 on Anomaly Detection on One-class CIFAR-10 (using extra training data)
1 code implementation • 26 Oct 2021 • Mohammadreza Salehi, Hossein Mirzaei, Dan Hendrycks, Yixuan Li, Mohammad Hossein Rohban, Mohammad Sabokrou
To date, several research domains tackle the problem of detecting unfamiliar samples, including anomaly detection, novelty detection, one-class learning, open set recognition, and out-of-distribution detection.
no code implementations • ICML Workshop AML 2021 • Alireza Mousavi Hosseini, Amir Mohammad Abouei, Mohammad Hossein Rohban
Adversarial training tends to result in models that are less accurate on natural (unperturbed) examples compared to standard models.
1 code implementation • 29 Mar 2021 • Zeinab Golgooni, Mehrdad Saberi, Masih Eskandar, Mohammad Hossein Rohban
Making deep neural networks robust to small adversarial noises has recently been sought in many applications.
1 code implementation • 29 Mar 2021 • Mohammad Azizmalayeri, Mohammad Hossein Rohban
However, it usually fails against other attacks, i.e., the model overfits to the training attack scheme.
3 code implementations • CVPR 2021 • Mohammadreza Salehi, Niousha Sadjadi, Soroosh Baselizadeh, Mohammad Hossein Rohban, Hamid R. Rabiee
Unsupervised representation learning has proved to be a critical component of anomaly detection/localization in images.
1 code implementation • 29 Aug 2020 • Mohammadreza Salehi, Ainaz Eftekhar, Niousha Sadjadi, Mohammad Hossein Rohban, Hamid R. Rabiee
Puzzle-solving, as a pretext task of self-supervised learning (SSL) methods, has previously proved its ability to learn semantically meaningful features.
1 code implementation • 30 Mar 2020 • Amirreza Shaeiri, Rozhin Nobahari, Mohammad Hossein Rohban
Adversarial robustness has proven to be a required property of machine learning algorithms.
1 code implementation • 12 Mar 2020 • Mohammadreza Salehi, Atrin Arya, Barbod Pajoum, Mohammad Otoofi, Amirreza Shaeiri, Mohammad Hossein Rohban, Hamid R. Rabiee
To address this problem, we propose a novel AE that can learn more semantically meaningful features.
no code implementations • 29 Jan 2013 • Mohammad Hossein Rohban, Prakash Ishwar, Birant Orten, William C. Karl, Venkatesh Saligrama
We study high-dimensional asymptotic performance limits of binary supervised classification problems where the class conditional densities are Gaussian with unknown means and covariances and the number of signal dimensions scales faster than the number of labeled training samples.