no code implementations • 13 Apr 2024 • Munachiso Nwadike, Jialin Li, Hanan Salam
This paper presents our work on an enhanced personalisation strategy that leverages data augmentation to develop tailored models for continuous valence and arousal prediction.
1 code implementation • 7 Mar 2024 • Jialin Li, Qiang Nie, WeiFu Fu, Yuhuan Lin, Guangpin Tao, Yong Liu, Chengjie Wang
Deep learning models, particularly those based on transformers, often employ numerous stacked structures, which possess identical architectures and perform similar functions.
no code implementations • 17 Jan 2024 • Yao Lu, Song Bian, Lequn Chen, Yongjun He, Yulong Hui, Matthew Lentz, Beibin Li, Fei Liu, Jialin Li, Qi Liu, Rui Liu, Xiaoxuan Liu, Lin Ma, Kexin Rong, Jianguo Wang, Yingjun Wu, Yongji Wu, Huanchen Zhang, Minjia Zhang, Qizhen Zhang, Tianyi Zhou, Danyang Zhuo
In this paper, we investigate the intersection of large generative AI models and cloud-native computing architectures.
no code implementations • 30 Sep 2023 • Xiang Liu, Liangxi Liu, Feiyang Ye, Yunheng Shen, Xia Li, Linshan Jiang, Jialin Li
Efficiently aggregating trained neural networks from local clients into a global model on a server is a widely researched topic in federated learning.
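The aggregation step described above is commonly realized as a weighted average of client parameters (FedAvg-style). The sketch below is a minimal illustration of that generic scheme, not the method proposed in the paper; the function and variable names are hypothetical.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate per-client parameter lists into a global model
    by averaging, weighted by each client's local dataset size
    (generic FedAvg-style aggregation; illustrative only)."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]

# Two clients, each holding a single weight matrix; client 2 has
# three times as much local data, so its weights count three times as much.
client_1 = [np.array([[1.0, 2.0]])]
client_2 = [np.array([[3.0, 4.0]])]
global_model = federated_average([client_1, client_2], client_sizes=[1, 3])
# → [[2.5, 3.5]]
```

Weighting by dataset size keeps the aggregate unbiased when clients hold unequal amounts of data; plain unweighted averaging is the special case of equal sizes.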
no code implementations • 28 Sep 2023 • Jialin Li, WeiFu Fu, Yuhuan Lin, Qiang Nie, Yong Liu
Query-based object detectors have made significant advancements since the publication of DETR.
no code implementations • 30 Aug 2023 • WeiFu Fu, Qiang Nie, Jialin Li, Yuhuan Lin, Kai Wu, Jian Li, Yabiao Wang, Yong Liu, Chengjie Wang
In this paper, we highlight the significance of exploiting the intra-domain information between the labeled target data and unlabeled target data.
no code implementations • 23 Aug 2023 • Donghao Zhou, Jialin Li, Jinpeng Li, Jiancheng Huang, Qiang Nie, Yong Liu, Bin-Bin Gao, Qiong Wang, Pheng-Ann Heng, Guangyong Chen
Large-scale well-annotated datasets are of great importance for training an effective object detector.
no code implementations • 1 Apr 2023 • Jialin Li, Alia Waleed, Hanan Salam
In this paper, we discuss the need for personalization in affective and personality computing (hereinafter referred to as affective computing).
no code implementations • 1 Feb 2023 • Ziji Shi, Le Jiang, Ang Wang, Jie Zhang, Xianyan Jia, Yong Li, Chencan Wu, Jialin Li, Wei Lin
However, finding a suitable model parallel schedule for an arbitrary neural network is a non-trivial task due to the exploding search space.
1 code implementation • 15 Nov 2022 • Shuaichen Chang, David Palzer, Jialin Li, Eric Fosler-Lussier, Ningchuan Xiao
Our experimental results show that V-MODEQA has better overall performance and robustness on MapQA than the state-of-the-art ChartQA and VQA algorithms by capturing the unique properties in map question answering.
no code implementations • 18 Apr 2019 • Hang Zhu, Zhihao Bai, Jialin Li, Ellis Michael, Dan Ports, Ion Stoica, Xin Jin
Experimental results show that Harmonia improves the throughput of these protocols by up to 10X for a replication factor of 10, providing near-linear scalability up to the limit of our testbed.
no code implementations • 25 May 2018 • Furong Huang, Jialin Li, Xuchen You
We propose a Slicing Initialized Alternating Subspace Iteration (s-ASI) method that is guaranteed to recover the top $r$ components ($\epsilon$-close) simultaneously for (a)symmetric tensors almost surely in the noiseless case (with high probability under bounded noise) using $O(\log(\log \frac{1}{\epsilon}))$ steps of tensor subspace iterations.
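The paper's s-ASI method operates on tensors with a slicing-based initialization; as a simplified illustration of the underlying subspace-iteration idea, the sketch below recovers the top-$r$ eigenspace of a symmetric matrix via orthogonal iteration. All names are hypothetical and this is not the paper's algorithm.

```python
import numpy as np

def subspace_iteration(A, r, n_iter=50, seed=0):
    """Orthogonal (subspace) iteration on a symmetric matrix A:
    repeatedly multiply a random r-dimensional orthonormal basis by A
    and re-orthonormalize, converging to the top-r invariant subspace.
    Simplified matrix analogue of tensor subspace iteration."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], r)))
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(A @ Q)  # power step + QR re-orthonormalization
    return Q

# Diagonal test matrix with eigenvalues 5, 3, 1, 0.5:
# the top-2 subspace is spanned by the first two coordinate axes.
A = np.diag([5.0, 3.0, 1.0, 0.5])
Q = subspace_iteration(A, r=2)
# The Rayleigh quotient Q^T A Q then has eigenvalues ≈ {3, 5}.
```

Convergence of the subspace is geometric in the eigenvalue ratio between the $r$-th and $(r{+}1)$-th eigenvalues, which is why only a small number of iterations is needed when that gap is large.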