no code implementations • 10 Apr 2024 • Murong Yue, Wijdane Mifdal, Yixuan Zhang, Jennifer Suh, Ziyu Yao
Mathematical modeling (MM) is considered a fundamental skill for students in STEM disciplines.
no code implementations • 9 Apr 2024 • Yixuan Zhang, Dongyan Huo, Yudong Chen, Qiaomin Xie
Motivated by Q-learning, we study nonsmooth contractive stochastic approximation (SA) with constant stepsize.
no code implementations • 31 Mar 2024 • Lizhi Lin, Honglin Mu, Zenan Zhai, Minghan Wang, Yuxia Wang, Renxi Wang, Junjie Gao, Yixuan Zhang, Wanxiang Che, Timothy Baldwin, Xudong Han, Haonan Li
Generative models are rapidly gaining popularity and being integrated into everyday applications, raising concerns over their safety issues as various vulnerabilities are exposed.
no code implementations • 1 Mar 2024 • Yixuan Zhang, Feng Zhou
Fine-tuning pre-trained models is a widely employed technique in numerous real-world applications.
1 code implementation • 19 Feb 2024 • Loka Li, Guangyi Chen, Yusheng Su, Zhenhao Chen, Yixuan Zhang, Eric Xing, Kun Zhang
We have experimentally observed that LLMs possess the capability to understand the "confidence" in their own responses.
1 code implementation • 18 Feb 2024 • Renxi Wang, Haonan Li, Xudong Han, Yixuan Zhang, Timothy Baldwin
However, LLMs are optimized for language generation instead of tool use during training or alignment, limiting their effectiveness as agents.
no code implementations • 5 Feb 2024 • Zenan Ling, Longbo Li, Zhanbo Feng, Yixuan Zhang, Feng Zhou, Robert C. Qiu, Zhenyu Liao
Deep equilibrium models (DEQs), as a typical implicit neural network, have demonstrated remarkable success on various tasks.
no code implementations • 25 Jan 2024 • Yixuan Zhang, Qiaomin Xie
By connecting the constant stepsize Q-learning to a time-homogeneous Markov chain, we show the distributional convergence of the iterates in Wasserstein distance and establish its exponential convergence rate.
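The setting this abstract describes can be illustrated with a minimal sketch of tabular Q-learning under a constant stepsize; the toy two-state MDP, stepsize, and iteration count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative, not from the paper).
# With a fixed behavior policy and constant stepsize, the Q iterates
# form a time-homogeneous Markov chain, the object the paper studies.
rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
P = np.array([[0, 1], [1, 0]])          # next state for (s, a)
R = np.array([[1.0, 0.0], [0.0, 1.0]])  # mean reward for (s, a)
gamma, alpha = 0.9, 0.1                  # discount, constant stepsize

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(20000):
    a = rng.integers(n_actions)          # uniform exploration
    s_next = P[s, a]
    r = R[s, a] + 0.1 * rng.standard_normal()  # noisy reward
    # Constant-stepsize Q-learning update (nonsmooth due to the max).
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(Q.round(2))  # iterates hover near the fixed point Q*
```

With a constant stepsize the iterates do not converge pointwise; they settle into a stationary distribution around the fixed point (here Q* has diagonal entries 10 and off-diagonal entries 9), which is the distributional convergence the abstract refers to.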
1 code implementation • 10 Jan 2024 • Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions.
no code implementations • 18 Dec 2023 • Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie
Through extensive experiments involving language and multi-modal models on semantic understanding, logical reasoning, and generation tasks, we demonstrate that both textual and visual EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
no code implementations • 14 Dec 2023 • Yixuan Zhang, Boyu Li, Zenan Ling, Feng Zhou
In this paper, we demonstrate that despite only having access to the biased labels, it is possible to eliminate bias by filtering the fairest instances within the framework of confident learning.
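The filtering idea can be sketched with the core rule of confident learning: flag an instance as likely mislabeled when its out-of-sample predicted probability for some other class clears that class's average self-confidence threshold. The data and probabilities below are synthetic stand-ins, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 2

# Synthetic out-of-sample predicted probabilities and noisy labels.
true_y = rng.integers(k, size=n)
probs = np.full((n, k), 0.1)
probs[np.arange(n), true_y] = 0.9
noisy_y = true_y.copy()
flip = rng.random(n) < 0.15              # 15% of labels flipped
noisy_y[flip] = 1 - noisy_y[flip]

# Per-class threshold: mean predicted probability of class j among
# examples currently labeled j (the confident-learning threshold).
thresh = np.array([probs[noisy_y == j, j].mean() for j in range(k)])

# Flag an instance when another class clears its threshold.
suspect = np.zeros(n, dtype=bool)
for j in range(k):
    suspect |= (noisy_y != j) & (probs[:, j] >= thresh[j])

kept = ~suspect                          # the "fairest" retained set
print("flagged:", int(suspect.sum()), "of", n)
```

On this clean synthetic setup the flagged set recovers exactly the flipped labels; real data is noisier, which is why the paper works within the confident-learning framework rather than a hard rule.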
1 code implementation • 5 Dec 2023 • Yixuan Zhang, Heming Wang, DeLiang Wang
Accurately detecting voiced intervals in speech signals is a critical step in pitch tracking and has numerous applications.
1 code implementation • 14 Nov 2023 • Zhilin Zhao, Longbing Cao, Yixuan Zhang
This paper introduces OOD knowledge distillation, a pioneering learning framework that applies whether or not in-distribution (ID) training data is available, given a standard network.
no code implementations • 26 Oct 2023 • Qinlin Zhao, Jindong Wang, Yixuan Zhang, Yiqiao Jin, Kaijie Zhu, Hao Chen, Xing Xie
Large language models (LLMs) have been widely used as agents to complete different tasks, such as personal assistance or event planning.
1 code implementation • 14 Oct 2023 • Yixuan Zhang, Haonan Li
To bridge this gap, we present ACLUE, an evaluation benchmark designed to assess the capability of language models in comprehending ancient Chinese.
1 code implementation • 10 Oct 2023 • Yuan Li, Yixuan Zhang, Lichao Sun
We propose a novel framework that equips collaborative generative agents with human-like reasoning abilities and specialized skills.
no code implementations • 27 Sep 2023 • Hao Zhang, Yixuan Zhang, Meng Yu, Dong Yu
In this paper, we introduce a novel training framework designed to comprehensively address the acoustic howling issue by examining its fundamental formation process.
no code implementations • 27 Sep 2023 • Yixuan Zhang, Hao Zhang, Meng Yu, Dong Yu
Acoustic howling suppression (AHS) is a critical challenge in audio communication systems.
no code implementations • 16 Sep 2023 • Heming Wang, Meng Yu, Hao Zhang, Chunlei Zhang, Zhongweiyang Xu, Muqiao Yang, Yixuan Zhang, Dong Yu
Enhancing speech signal quality in adverse acoustic environments is a persistent challenge in speech processing.
no code implementations • 14 Jul 2023 • Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie
In addition to those deterministic tasks that can be automatically evaluated using existing metrics, we conducted a human study with 106 participants to assess the quality of generative tasks using both vanilla and emotional prompts.
1 code implementation • 15 Jun 2023 • Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging.
no code implementations • 11 Feb 2023 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
In recent years, by utilizing optimization techniques to formulate the propagation of deep models, a variety of so-called Optimization-Derived Learning (ODL) approaches have been proposed to address diverse learning and vision tasks.
no code implementations • 29 Jan 2023 • Yixuan Zhang, Meng Yu, Hao Zhang, Dong Yu, DeLiang Wang
The robustness of the Kalman filter to double talk and its rapid convergence make it a popular approach for addressing acoustic echo cancellation (AEC) challenges.
no code implementations • 1 Aug 2022 • Yixuan Zhang, Feng Zhou, Zhidong Li, Yang Wang, Fang Chen
In other words, the fair pre-processing methods ignore the discrimination encoded in the labels either during the learning procedure or the evaluation stage.
no code implementations • 16 Jun 2022 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
Recently, Optimization-Derived Learning (ODL) has attracted attention from learning and vision areas, which designs learning models from the perspective of optimization.
no code implementations • 28 Oct 2021 • Yixuan Zhang, Zhuo Chen, Jian Wu, Takuya Yoshioka, Peidong Wang, Zhong Meng, Jinyu Li
In this paper, we propose to apply recurrent selective attention network (RSAN) to CSS, which generates a variable number of output channels based on active speaker counting.
1 code implementation • 11 Oct 2021 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
We also extend BVFSM to address BLO with additional functional constraints.
no code implementations • 7 Jul 2021 • Yixuan Zhang, Feng Zhou, Zhidong Li, Yang Wang, Fang Chen
Therefore, we propose a Bias-Tolerant FAir Regularized Loss (B-FARL), which tries to regain the benefits using data affected by label bias and selection bias.
no code implementations • 9 Jun 2021 • Feng Zhou, Quyu Kong, Yixuan Zhang, Cheng Feng, Jun Zhu
Hawkes processes are a class of point processes that can model self- and mutual-exciting phenomena.
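The self-exciting property mentioned here can be illustrated with a univariate Hawkes process with an exponential kernel, simulated by Ogata's thinning algorithm; the parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.2):
    """Conditional intensity mu + sum over past events t_i of
    alpha * exp(-beta * (t - t_i)): each event excites the process."""
    events = np.asarray(events, dtype=float)
    past = events[events < t]
    return mu + np.sum(alpha * np.exp(-beta * (t - past)))

def simulate_hawkes(T, alpha=0.8, seed=0):
    """Ogata's thinning: propose from an upper bound on the intensity
    and accept with probability lambda(t) / bound."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < T:
        # Intensity only decays between events, so current value plus
        # one kernel jump bounds it until the next accepted event.
        lam_bar = hawkes_intensity(t, events) + alpha
        t += rng.exponential(1.0 / lam_bar)
        if t < T and rng.uniform() * lam_bar <= hawkes_intensity(t, events):
            events.append(t)
    return np.array(events)

ev = simulate_hawkes(T=50.0)
print(len(ev), "events; intensity at T:", round(hawkes_intensity(50.0, ev), 3))
```

With alpha/beta < 1 the process is stationary with mean rate mu / (1 - alpha/beta), so events cluster in bursts rather than arriving at a flat Poisson rate.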
no code implementations • ICLR 2021 • Feng Zhou, Yixuan Zhang, Jun Zhu
The Hawkes process provides an effective statistical framework for analyzing the time-dependent interaction of neuronal spiking activities.
no code implementations • 3 Mar 2020 • Aditeya Pandey, Yixuan Zhang, John A. Guerra-Gomez, Andrea G. Parker, Michelle A. Borkin
In the task abstraction phase of the visualization design process, including in "design studies", a practitioner maps the observed domain goals to generalizable abstract tasks using visualization theory in order to better understand and address the users' needs.
no code implementations • CVPR 2020 • Wei Xiong, Yutong He, Yixuan Zhang, Wenhan Luo, Lin Ma, Jiebo Luo
In this paper, we aim to transform an image with a fine-grained category to synthesize new images that preserve the identity of the input image, which can thereby benefit the subsequent fine-grained image recognition and few-shot learning tasks.
no code implementations • CVPR 2020 • Haoye Dong, Xiaodan Liang, Yixuan Zhang, Xujie Zhang, Zhenyu Xie, Bowen Wu, Ziqi Zhang, Xiaohui Shen, Jian Yin
Interactive fashion image manipulation, which enables users to edit images with sketches and color strokes, is an interesting research problem with great application value.
1 code implementation • 27 Apr 2019 • Zhengyuan Yang, Yixuan Zhang, Jiebo Luo
The framework consists of a facial attention module and a hierarchical segment temporal module.
1 code implementation • 20 Jan 2018 • Zhongping Zhang, Yixuan Zhang, Zheng Zhou, Jiebo Luo
In this paper, we substantiate that Fast SCNN can detect drastic changes in chroma and saturation.
1 code implementation • 20 Jan 2018 • Zhengyuan Yang, Yixuan Zhang, Jerry Yu, Junjie Cai, Jiebo Luo
In this work, we propose a multi-task learning framework to predict the steering angle and speed control simultaneously in an end-to-end manner.
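A multi-task setup of this kind can be sketched as a shared encoder feeding two regression heads trained with a weighted joint loss; the architecture, loss weight, and synthetic data below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 features standing in for extracted image features.
X = rng.standard_normal((64, 8))
steer = X @ rng.standard_normal(8)       # synthetic steering targets
speed = X @ rng.standard_normal(8)       # synthetic speed targets

# Shared linear encoder plus one linear head per task.
W_shared = 0.1 * rng.standard_normal((8, 16))
w_steer = np.zeros(16)
w_speed = np.zeros(16)
lr, lam = 0.01, 0.5                      # lam weights the speed loss

for _ in range(500):
    H = X @ W_shared                     # shared representation
    e_st = H @ w_steer - steer
    e_sp = H @ w_speed - speed
    # Joint loss MSE(steer) + lam * MSE(speed): both task errors
    # backpropagate into the shared encoder weights.
    gW = np.outer(e_st, w_steer) + lam * np.outer(e_sp, w_speed)
    W_shared -= lr * (X.T @ gW) / len(X)
    w_steer -= lr * (H.T @ e_st) / len(X)
    w_speed -= lr * lam * (H.T @ e_sp) / len(X)

mse = lambda e: float(np.mean(e ** 2))
print("steer MSE:", round(mse(X @ W_shared @ w_steer - steer), 3))
print("speed MSE:", round(mse(X @ W_shared @ w_speed - speed), 3))
```

The design choice illustrated is that the two tasks share the encoder, so gradients from speed prediction regularize the representation used for steering, the usual motivation for end-to-end multi-task driving models.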