Search Results for author: Umar Khalid

Found 13 papers, 8 papers with code

Free-Editor: Zero-shot Text-driven 3D Scene Editing

no code implementations · 21 Dec 2023 · Nazmul Karim, Umar Khalid, Hasan Iqbal, Jing Hua, Chen Chen

To date, editing 3D scenes has required either re-training the model to adapt to each edited scene or designing a specific method for each type of edit.

3D scene Editing · Style Transfer · +1

LatentEditor: Text Driven Local Editing of 3D Scenes

1 code implementation · 14 Dec 2023 · Umar Khalid, Hasan Iqbal, Nazmul Karim, Jing Hua, Chen Chen

Our approach achieves faster editing speeds and superior output quality compared to existing 3D editing models, bridging the gap between textual instructions and high-quality 3D scene editing in latent space.

3D scene Editing · Denoising

Efficient Backdoor Removal Through Natural Gradient Fine-tuning

1 code implementation · 30 Jun 2023 · Nazmul Karim, Abdullah Al Arafat, Umar Khalid, Zhishan Guo, Nazanin Rahnavard

Extensive experiments show that the proposed method achieves state-of-the-art performance on a wide range of backdoor defense benchmarks: four datasets (CIFAR10, GTSRB, Tiny-ImageNet, and ImageNet) and 13 recent backdoor attacks, e.g.

backdoor defense

Unsupervised Anomaly Detection in Medical Images Using Masked Diffusion Model

1 code implementation · 31 May 2023 · Hasan Iqbal, Umar Khalid, Jing Hua, Chen Chen

It can be challenging to identify brain MRI anomalies using supervised deep-learning techniques due to anatomical heterogeneity and the requirement for pixel-level labeling.

Anatomy · Unsupervised Anomaly Detection

SAVE: Spectral-Shift-Aware Adaptation of Image Diffusion Models for Text-driven Video Editing

1 code implementation · 30 May 2023 · Nazmul Karim, Umar Khalid, Mohsen Joneidi, Chen Chen, Nazanin Rahnavard

Text-to-Image (T2I) diffusion models have achieved remarkable success in synthesizing high-quality images conditioned on text prompts.

Style Transfer · Video Editing

Conquering the Communication Constraints to Enable Large Pre-Trained Models in Federated Learning

no code implementations · 4 Oct 2022 · Guangyu Sun, Umar Khalid, Matias Mendieta, Taojiannan Yang, Chen Chen

Recently, small pre-trained models have been shown to be effective in federated learning, improving optimization and convergence.

Federated Learning

CNLL: A Semi-supervised Approach For Continual Noisy Label Learning

1 code implementation · 21 Apr 2022 · Nazmul Karim, Umar Khalid, Ashkan Esmaeili, Nazanin Rahnavard

After purification, we perform fine-tuning in a semi-supervised fashion that ensures the participation of all available samples.

Continual Learning
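The snippet above mentions a purification step before semi-supervised fine-tuning but does not explain it. A common way to purify a noisy-label dataset (a hedged sketch of the general idea, not necessarily CNLL's exact procedure; all names here are hypothetical) is the small-loss criterion: networks tend to fit clean labels before memorizing noisy ones, so low-loss samples are more likely to be correctly labeled. The clean split keeps its labels, while the noisy split is treated as unlabeled so every sample still participates in training:

```python
import numpy as np

def purify_by_small_loss(losses, clean_fraction=0.5):
    """Split samples into a 'clean' set (smallest per-sample losses,
    labels kept) and a 'noisy' set (labels discarded, used as
    unlabeled data in the semi-supervised stage)."""
    order = np.argsort(losses)                  # ascending per-sample loss
    n_clean = int(len(losses) * clean_fraction)
    clean_idx = order[:n_clean]                 # likely correct labels
    noisy_idx = order[n_clean:]                 # likely corrupted labels
    return clean_idx, noisy_idx

# toy per-sample losses: low values suggest clean labels
losses = np.array([0.1, 2.3, 0.05, 1.8, 0.2, 0.9])
clean, noisy = purify_by_small_loss(losses, clean_fraction=0.5)
```

The `clean_fraction` threshold is a design choice; in practice it is often set from an estimate of the dataset's noise rate rather than fixed at 0.5.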

Detect-and-describe: Joint learning framework for detection and description of objects

no code implementations · 19 Apr 2022 · Addel Zafar, Umar Khalid

We also show qualitative results for object attribute prediction on unseen objects, which demonstrate the effectiveness of our approach for describing unknown objects.

Attribute · Object · +2

RF Signal Transformation and Classification using Deep Neural Networks

1 code implementation · 6 Apr 2022 · Umar Khalid, Nazmul Karim, Nazanin Rahnavard

Deep neural networks (DNNs) designed for computer vision and natural language processing tasks cannot be directly applied to radio frequency (RF) datasets.

Classification
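One concrete reason vision and NLP networks cannot consume RF data directly is representation: RF signals arrive as complex-valued I/Q samples, while standard DNN layers expect real tensors. A common preprocessing step (a minimal sketch of standard practice, not necessarily this paper's transformation) is to stack the in-phase and quadrature components as two real channels:

```python
import numpy as np

def iq_to_tensor(signal):
    """Convert a complex I/Q signal of shape (n_samples,) into a
    real-valued (2, n_samples) array: channel 0 = in-phase (real
    part), channel 1 = quadrature (imaginary part)."""
    return np.stack([signal.real, signal.imag], axis=0).astype(np.float32)

# toy complex baseband signal: a complex exponential
t = np.arange(8)
sig = np.exp(1j * 2 * np.pi * 0.125 * t)
x = iq_to_tensor(sig)   # shape (2, 8), ready for a 1D-conv network
```

The two-channel layout lets ordinary 1D convolutions operate over time while preserving the phase information carried jointly by the I and Q components.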

RODD: A Self-Supervised Approach for Robust Out-of-Distribution Detection

1 code implementation · 6 Apr 2022 · Umar Khalid, Ashkan Esmaeili, Nazmul Karim, Nazanin Rahnavard

The proposed method, referred to as RODD, achieves state-of-the-art detection performance on an extensive suite of OOD benchmark datasets.

 Ranked #1 on Out-of-Distribution Detection on cifar100 (using extra training data)

Contrastive Learning · Out-of-Distribution Detection · +1

Adversarial Training for Face Recognition Systems using Contrastive Adversarial Learning and Triplet Loss Fine-tuning

no code implementations · 9 Oct 2021 · Nazmul Karim, Umar Khalid, Nick Meeker, Sarinda Samarasinghe

Comparing the adversarial robustness achieved without adversarial training, with triplet-loss adversarial training, and with our contrastive pre-training combined with triplet-loss adversarial fine-tuning, we find that our method achieves comparable results with far fewer epochs required during fine-tuning.

Adversarial Robustness · Face Recognition
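The triplet loss used in the fine-tuning stage above is a standard metric-learning objective. As a reminder (a self-contained sketch, not this paper's code), it pushes an anchor embedding closer to a positive (same identity) than to a negative (different identity) by at least a margin:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on squared L2 distances:
    max(0, ||a - p||^2 - ||a - n||^2 + margin)."""
    d_pos = np.sum((anchor - positive) ** 2)   # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2)   # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close to anchor: same identity
n = np.array([1.0, 0.0])   # far from anchor: different identity
loss = triplet_loss(a, p, n)   # 0.01 - 1.0 + 0.2 < 0, so loss = 0.0
```

When the anchor is already closer to the positive than to the negative by more than the margin, the loss is zero and that triplet contributes no gradient, which is why triplet-based training usually pairs this loss with hard-example mining.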
