Search Results for author: Ziqing Fan

Found 8 papers, 7 papers with code

Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization

1 code implementation • 29 May 2024 • Ziqing Fan, Shengchao Hu, Jiangchao Yao, Gang Niu, Ya Zhang, Masashi Sugiyama, Yanfeng Wang

However, the local loss landscapes may not accurately reflect the flatness of the global loss landscape in heterogeneous environments; as a result, minimizing local sharpness and calculating perturbations on client data might not align the efficacy of SAM in FL with that of centralized training (a generic local SAM step is sketched below for context).

Federated Learning
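For context, the following is a minimal PyTorch-style sketch of the vanilla SAM step that a client would run on its own batch, i.e., the purely local perturbation this paper argues can misalign with the global loss landscape. The function name `sam_step` and the radius `rho` are illustrative assumptions; this is not the paper's locally estimated global perturbation.

```python
# Minimal sketch of a local SAM step on one client batch (illustrative only;
# NOT the paper's method). Perturb weights along the local gradient direction,
# then update using the gradient at the perturbed point.
import torch

def sam_step(model, loss_fn, batch, optimizer, rho=0.05):
    x, y = batch
    optimizer.zero_grad()
    # First pass: gradient of the *local* loss at the current weights.
    loss_fn(model(x), y).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12

    # Ascent step: move to the locally "sharpest" nearby point.
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(rho * g / grad_norm)

    # Second pass: gradient at the perturbed weights drives the actual update.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(rho * g / grad_norm)  # undo the perturbation
    optimizer.step()
```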

Domain-Inspired Sharpness-Aware Minimization Under Domain Shifts

1 code implementation • 29 May 2024 • Ruipeng Zhang, Ziqing Fan, Jiangchao Yao, Ya Zhang, Yanfeng Wang

This paper presents a Domain-Inspired Sharpness-Aware Minimization (DISAM) algorithm for optimization under domain shifts.

Domain Generalization

Federated Learning under Partially Class-Disjoint Data via Manifold Reshaping

1 code implementation • 29 May 2024 • Ziqing Fan, Jiangchao Yao, Ruipeng Zhang, Lingjuan Lyu, Ya Zhang, Yanfeng Wang

Statistical heterogeneity severely limits the performance of federated learning (FL), motivating several explorations, e.g., FedProx, MOON, and FedDyn, to alleviate this problem.

Federated Learning

Federated Learning with Bilateral Curation for Partially Class-Disjoint Data

1 code implementation • NeurIPS 2023 • Ziqing Fan, Ruipeng Zhang, Jiangchao Yao, Bo Han, Ya Zhang, Yanfeng Wang

Partially class-disjoint data (PCDD), a common yet under-explored data formation in which each client contributes samples from only a part of the classes (instead of all classes), severely challenges the performance of federated algorithms (see the partition sketch below).

Federated Learning
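To make the PCDD setting concrete, here is a small, hypothetical sketch of how partially class-disjoint client shards could be simulated from a labeled dataset. The function `pcdd_partition` and its parameters are illustrative assumptions, not the paper's experimental protocol.

```python
# Illustrative only: simulate partially class-disjoint data (PCDD), where each
# client holds samples from only a subset of the label space. The partition
# scheme and defaults are hypothetical, not the paper's setup.
import random
from collections import defaultdict

def pcdd_partition(labels, num_clients=10, classes_per_client=2, num_classes=10, seed=0):
    rng = random.Random(seed)
    # Each client is assigned a small subset of classes it will hold.
    client_classes = [set(rng.sample(range(num_classes), classes_per_client))
                      for _ in range(num_clients)]
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    client_indices = [[] for _ in range(num_clients)]
    # Split every class's samples among the clients assigned that class.
    for c in range(num_classes):
        owners = [i for i, cls in enumerate(client_classes) if c in cls]
        if not owners:
            continue  # no client holds this class in this draw
        idxs = list(by_class[c])
        rng.shuffle(idxs)
        for j, idx in enumerate(idxs):
            client_indices[owners[j % len(owners)]].append(idx)
    return client_indices

# Example: partition 1000 synthetic labels over 10 clients, 2 classes each.
labels = [random.randrange(10) for _ in range(1000)]
shards = pcdd_partition(labels)
```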

HarmoDT: Harmony Multi-Task Decision Transformer for Offline Reinforcement Learning

1 code implementation • 28 May 2024 • Shengchao Hu, Ziqing Fan, Li Shen, Ya Zhang, Yanfeng Wang, Dacheng Tao

However, variations in task content and complexity pose significant challenges in policy formulation, necessitating judicious parameter sharing and management of conflicting gradients for optimal policy performance.

Management, Meta-Learning, +1

Q-value Regularized Transformer for Offline Reinforcement Learning

1 code implementation • 27 May 2024 • Shengchao Hu, Ziqing Fan, Chaoqin Huang, Li Shen, Ya Zhang, Yanfeng Wang, Dacheng Tao

Recent advancements in offline reinforcement learning (RL) have underscored the capabilities of Conditional Sequence Modeling (CSM), a paradigm that learns the action distribution conditioned on the history trajectory and the target return at each state (see the sketch below).

D4RL, Offline RL, +3
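As a rough illustration of the CSM paradigm described above, the sketch below builds Decision-Transformer-style conditioning tokens (return-to-go, state, action) from a trajectory. The helper names and token layout are assumptions for illustration, not the paper's Q-value regularized architecture.

```python
# Sketch of the input a Conditional Sequence Model consumes: each timestep is
# represented by (return-to-go, state, action), and the model is trained to
# predict the action given the preceding tokens. Illustrative only.
import numpy as np

def returns_to_go(rewards, gamma=1.0):
    # Target return remaining from each timestep onward.
    rtg = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

def build_csm_sequence(states, actions, rewards):
    rtg = returns_to_go(rewards)
    # Interleave (return-to-go, state, action) per timestep as conditioning tokens.
    return [(rtg[t], states[t], actions[t]) for t in range(len(states))]
```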

FedSkip: Combatting Statistical Heterogeneity with Federated Skip Aggregation

1 code implementation • 14 Dec 2022 • Ziqing Fan, Yanfeng Wang, Jiangchao Yao, Lingjuan Lyu, Ya Zhang, Qi Tian

However, in addition to previous explorations for improving federated averaging, our analysis shows that another critical bottleneck is the poorer optima reached by client models under more heterogeneous conditions (a plain FedAvg baseline is sketched below for reference).

Federated Learning
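For reference, the sketch below shows plain federated averaging, the server-side aggregation that FedSkip builds on. It assumes client weights arrive as dicts of arrays; it is the generic size-weighted average, not the FedSkip skip-aggregation scheme itself.

```python
# Minimal sketch of plain federated averaging (FedAvg), the baseline aggregation
# that FedSkip modifies. NOT the FedSkip scheme; illustrative only.
def fed_avg(client_weights, client_sizes):
    """client_weights: list of dicts mapping parameter name -> array."""
    total = float(sum(client_sizes))
    aggregated = {}
    for name in client_weights[0]:
        # Weight each client's parameters by its share of the total data.
        aggregated[name] = sum((n / total) * w[name]
                               for w, n in zip(client_weights, client_sizes))
    return aggregated
```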

Disentangling Hate in Online Memes

no code implementations • 9 Aug 2021 • Rui Cao, Ziqing Fan, Roy Ka-Wei Lee, Wen-Haw Chong, Jing Jiang

Our experiment results show that DisMultiHate is able to outperform state-of-the-art unimodal and multimodal baselines in the hateful meme classification task.

Classification, Hateful Meme Classification
