no code implementations • 23 May 2024 • Ting Liu, Xuyang Liu, Liangtao Shi, Zunnan Xu, Siteng Huang, Yi Xin, Quanjun Yin
Sparse-Tuning efficiently fine-tunes a pre-trained ViT by sparsely preserving informative tokens and merging redundant ones, enabling the model to focus on the foreground while reducing computational cost on background regions of the image.
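The preserve-and-merge idea can be sketched in a few lines: score each token's informativeness, keep the top-scoring ones, and collapse the remainder into a single merged token. This is a minimal NumPy illustration of the general technique, not the paper's exact algorithm; the scoring function, keep ratio, and weighted-average merge are all assumptions for the sketch.

```python
import numpy as np

def sparse_preserve_merge(tokens, scores, keep_ratio=0.7):
    """Keep the highest-scoring tokens and merge the rest into one token.

    tokens: (N, D) array of token embeddings
    scores: (N,) informativeness scores (e.g. mean attention received)
    """
    n_keep = max(1, int(len(tokens) * keep_ratio))
    order = np.argsort(scores)[::-1]           # most informative first
    kept = tokens[order[:n_keep]]              # preserved (foreground) tokens
    rest = tokens[order[n_keep:]]              # redundant (background) tokens
    if len(rest) > 0:
        # score-weighted average collapses redundant tokens into one,
        # so later layers attend over n_keep + 1 tokens instead of N
        w = scores[order[n_keep:]]
        w = w / (w.sum() + 1e-8)
        merged = (w[:, None] * rest).sum(axis=0, keepdims=True)
        return np.concatenate([kept, merged], axis=0)
    return kept
```

With a keep ratio of 0.7, a 10-token sequence shrinks to 8 tokens (7 preserved plus 1 merged), which is where the computational saving in later transformer layers comes from.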
no code implementations • 15 May 2024 • Xinying Lin, Xuyang Liu, Hong Yang, Xiaohai He, Honggang Chen
In this letter, we evaluate the perceptual quality and reconstruction fidelity of super-resolution (SR) images, taking both the low-resolution (LR) images and the scale factors into account.
1 code implementation • 10 May 2024 • Ting Liu, Xuyang Liu, Siteng Huang, Honggang Chen, Quanjun Yin, Long Qin, Donglin Wang, Yue Hu
Specifically, we propose DARA, a novel PETL method comprising Domain-aware Adapters (DA Adapters) and Relation-aware Adapters (RA Adapters) for VG.
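Adapter-based PETL methods like this share a common building block: a small bottleneck module inserted into a frozen backbone, so only the adapter's few parameters are trained. The sketch below shows that generic bottleneck pattern in NumPy; the dimensions, zero initialization, and ReLU are illustrative assumptions, not DARA's specific DA/RA design.

```python
import numpy as np

class Adapter:
    """Generic bottleneck adapter: down-project, non-linearity, up-project,
    plus a residual connection. Only these two small matrices would be
    trained; the backbone they wrap stays frozen."""

    def __init__(self, dim, bottleneck, rng=None):
        rng = rng or np.random.default_rng(0)
        self.down = rng.normal(0.0, 0.02, size=(dim, bottleneck))
        # zero-init up-projection: the adapter starts as an identity map,
        # so inserting it does not perturb the pre-trained backbone
        self.up = np.zeros((bottleneck, dim))

    def __call__(self, x):
        h = np.maximum(x @ self.down, 0.0)   # ReLU in the bottleneck
        return x + h @ self.up               # residual: x + adapter(x)
```

The zero-initialized up-projection is a common design choice: at the start of fine-tuning the adapter passes inputs through unchanged, and training only gradually moves it away from identity.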
1 code implementation • 3 Sep 2023 • Xuyang Liu, Siteng Huang, Yachen Kang, Honggang Chen, Donglin Wang
Large-scale text-to-image diffusion models have shown impressive capabilities for generative tasks by leveraging strong vision-language alignment from pre-training.
no code implementations • 31 Dec 2020 • Zheng Zhao, Kai Xu, Attaphon Kaewsnod, Xuyang Liu, Ayut Limphirat, Yupeng Yan
The masses of tetraquark states of all $qc\bar q \bar c$ and $cc\bar c \bar c$ quark configurations are evaluated in a constituent quark model, where a Cornell-like potential and one-gluon-exchange spin-spin coupling are employed.
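For context, a constituent quark model with a Cornell-like potential and one-gluon-exchange spin-spin coupling is typically built from a Hamiltonian of the generic textbook form (the paper's exact parameterization may differ):
$$H = \sum_i \left(m_i + \frac{p_i^2}{2m_i}\right) + \sum_{i<j}\left(-\frac{A}{r_{ij}} + B\,r_{ij}\right) + \sum_{i<j} C\,\frac{\vec S_i \cdot \vec S_j}{m_i m_j},$$
where the Coulomb-plus-linear term between quark pairs is the Cornell potential (short-range gluon exchange plus long-range confinement) and the last term is the spin-spin hyperfine interaction from one-gluon exchange, which splits states of different total spin.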
High Energy Physics - Phenomenology