no code implementations • 14 May 2024 • Zichen Wang, Xi Deng, Ziyi Zhang, Wenzel Jakob, Steve Marschner
We present a simple algorithm for differentiable rendering of surfaces represented by Signed Distance Fields (SDF), which makes it easy to integrate rendering into gradient-based optimization pipelines.
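The paper's differentiable rendering algorithm is not reproduced here, but the underlying primitive can be sketched: an SDF returns the signed distance to the nearest surface (negative inside, positive outside), and sphere tracing marches a ray forward by that distance until it reaches the surface. A minimal illustration, with a hypothetical sphere SDF:

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, t_max=100.0):
    """March along the ray; the SDF value bounds how far we can safely step."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:
            return t  # hit: distance along the ray to the surface
        t += d        # safe step: no surface can lie closer than d
        if t > t_max:
            break
    return None       # miss
```

For a ray from the origin along +z and a unit sphere centered at depth 3, the trace returns t = 2.0, the depth of the near surface. Making this hit distance differentiable with respect to scene parameters is the problem the paper addresses.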
1 code implementation • 15 Jan 2024 • Zichen Wang, Bo Yang, Haonan Yue, Zhenghao Ma
However, class-level prototypes are difficult to generate precisely and lack detailed information, leading to unstable performance. New methods are required to capture distinctive local context for more robust novel object detection.
no code implementations • 12 Jan 2024 • Bowen Shi, Peisen Zhao, Zichen Wang, Yuhang Zhang, Yaoming Wang, Jin Li, Wenrui Dai, Junni Zou, Hongkai Xiong, Qi Tian, Xiaopeng Zhang
Vision-language foundation models, represented by Contrastive Language-Image Pre-training (CLIP), have gained increasing attention for jointly understanding vision and language tasks.
1 code implementation • 10 Dec 2023 • Aditya Chetan, Guandao Yang, Zichen Wang, Steve Marschner, Bharath Hariharan
Yet in many applications like rendering and simulation, hybrid neural fields can cause noticeable and unreasonable artifacts.
no code implementations • 17 Oct 2023 • Zichen Wang, Chuanhao Li, Chenyu Song, Lianghui Wang, Quanquan Gu, Huazheng Wang
We study the federated pure exploration problem for multi-armed bandits and linear bandits, where $M$ agents cooperatively identify the best arm by communicating with a central server.
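The federated protocol itself is beyond a snippet, but the pure exploration objective can be illustrated with a classical single-agent baseline, successive elimination: repeatedly pull all surviving arms, then drop any arm whose upper confidence bound falls below the empirical leader's lower bound. This is a standard algorithm used here for illustration, not the paper's federated method:

```python
import math
import random

def successive_elimination(arm_means, delta=0.05, max_pulls=20000, seed=0):
    """Single-agent best-arm identification by successive elimination.
    `arm_means` are the true Bernoulli means, used here only to simulate pulls."""
    rng = random.Random(seed)
    k = len(arm_means)
    active = list(range(k))
    sums = [0.0] * k
    pulls = [0] * k
    total = 0
    while len(active) > 1 and total < max_pulls:
        for a in active:  # pull every surviving arm once per round
            sums[a] += 1.0 if rng.random() < arm_means[a] else 0.0
            pulls[a] += 1
            total += 1
        # Anytime confidence radius; shrinks as an arm accumulates pulls.
        rad = {a: math.sqrt(math.log(4 * k * pulls[a] ** 2 / delta) / (2 * pulls[a]))
               for a in active}
        best = max(active, key=lambda a: sums[a] / pulls[a])
        lcb = sums[best] / pulls[best] - rad[best]
        # Eliminate arms whose upper bound falls below the leader's lower bound.
        active = [a for a in active if sums[a] / pulls[a] + rad[a] >= lcb]
    return max(active, key=lambda a: sums[a] / pulls[a])
```

In the federated setting, the pulls are distributed across $M$ agents and the elimination decisions are coordinated through the server, which trades off sample complexity against communication cost.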
1 code implementation • 5 Oct 2023 • Zifeng Wang, Zichen Wang, Balasubramaniam Srinivasan, Vassilis N. Ioannidis, Huzefa Rangwala, Rishita Anubhai
Foundation models (FMs) are able to leverage large volumes of unlabeled data to demonstrate superior performance across a wide range of tasks.
no code implementations • 2 Oct 2023 • Omid Bazgir, Zichen Wang, Ji Won Park, Marc Hafner, James Lu
Additionally, we show that the graph encoder is able to effectively utilize multimodal data to enhance tumor predictions.
1 code implementation • 27 Sep 2023 • Yijun Tian, Huan Song, Zichen Wang, Haozhu Wang, Ziqing Hu, Fang Wang, Nitesh V. Chawla, Panpan Xu
While existing work has explored utilizing knowledge graphs (KGs) to enhance language modeling via joint training and customized model architectures, applying this to LLMs is problematic owing to their large number of parameters and high computational cost.
no code implementations • 18 Sep 2023 • Jinsheng Pan, Zichen Wang, Weihong Qi, Hanjia Lyu, Jiebo Luo
Understanding the framing of political issues is of paramount importance as it significantly shapes how individuals perceive, interpret, and engage with these matters.
1 code implementation • 7 Jun 2023 • Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Yue Zhang, Neil Zhenqiang Gong, Xing Xie
The increasing reliance on Large Language Models (LLMs) across academia and industry necessitates a comprehensive understanding of their robustness to prompts.
no code implementations • 30 May 2023 • Zichen Wang, Rishab Balasubramanian, Hui Yuan, Chenyu Song, Mengdi Wang, Huazheng Wang
We present the first study of adversarial attacks on online learning to rank.
no code implementations • 28 Mar 2023 • Jinsheng Pan, Weihong Qi, Zichen Wang, Hanjia Lyu, Jiebo Luo
There is a broad consensus that news media outlets incorporate ideological biases in their news articles.
1 code implementation • 16 Jan 2023 • Hanjia Lyu, Jinsheng Pan, Zichen Wang, Jiebo Luo
We first adopt a human-guided machine learning framework to develop, in an active learning manner, a new dataset for hyperpartisan news title detection with 2,200 manually labeled and 1.8 million machine-labeled titles posted from 2014 to the present by nine representative media organizations across three media bias groups: Left, Central, and Right.
no code implementations • 9 Nov 2022 • Gil Sadeh, Zichen Wang, Jasleen Grewal, Huzefa Rangwala, Layne Price
In this paper, we propose a new peptide data augmentation scheme, where we train peptide language models on artificially constructed peptides that are small contiguous subsets of longer, wild-type proteins; we refer to the training peptides as "chopped proteins".
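The augmentation idea described above — sampling short contiguous subsequences of wild-type proteins as training peptides — is simple enough to sketch. The function name and parameters below are illustrative, not taken from the paper:

```python
import random

def chop_protein(sequence, min_len=8, max_len=50, n_samples=4, seed=0):
    """Sample 'chopped proteins': random contiguous subsequences of a
    wild-type protein sequence, for peptide language-model training.
    (Illustrative sketch; names and defaults are hypothetical.)"""
    rng = random.Random(seed)
    peptides = []
    for _ in range(n_samples):
        # Length bounded by both max_len and the full sequence length.
        length = rng.randint(min_len, min(max_len, len(sequence)))
        start = rng.randint(0, len(sequence) - length)
        peptides.append(sequence[start:start + length])
    return peptides
```

Each sampled peptide is a genuine substring of the parent protein, so the augmented corpus preserves local sequence statistics while greatly expanding the number of short training examples.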
1 code implementation • 30 Sep 2022 • Yulun Wu, Robert A. Barton, Zichen Wang, Vassilis N. Ioannidis, Carlo De Donno, Layne C. Price, Luis F. Voloch, George Karypis
Predicting the responses of a cell under perturbations may bring important benefits to drug discovery and personalized therapeutics.
2 code implementations • 13 Sep 2022 • Yulun Wu, Layne C. Price, Zichen Wang, Vassilis N. Ioannidis, Robert A. Barton, George Karypis
Estimating an individual's potential outcomes under counterfactual treatments is a challenging task for traditional causal inference and supervised learning approaches when the outcome is high-dimensional (e.g., gene expressions, impulse responses, human faces) and covariates are relatively limited.
no code implementations • 17 Feb 2022 • Kexin Ding, Mu Zhou, Zichen Wang, Qiao Liu, Corey W. Arnold, Shaoting Zhang, Dimitri N. Metaxas
Image-based characterization and disease understanding involve integrative analysis of morphological, spatial, and topological information across biological scales.
1 code implementation • Bioinformatics, Volume 36, Issue Supplement_1 2020 • Zichen Wang, Mu Zhou, Corey Arnold
Unlike conventional graph convolutional networks, which assume identical node attributes across a single global graph, our approach models inter-domain information fusion with a bipartite graph convolution operation.
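The idea of a bipartite graph convolution can be sketched as follows: two node sets with different feature dimensions exchange messages across a biadjacency matrix, each side using its own projection into a shared hidden space. This is a generic sketch under those assumptions, not the paper's exact layer:

```python
import numpy as np

def bipartite_conv(X_u, X_v, B, W_u, W_v):
    """One bipartite graph convolution step.
    X_u: (n_u, d_u) features for one node set,
    X_v: (n_v, d_v) features for the other set (different attribute space),
    B:   (n_u, n_v) biadjacency matrix linking the two sets,
    W_u, W_v: per-side projections into a shared hidden dimension."""
    deg_u = np.maximum(B.sum(axis=1, keepdims=True), 1.0)    # (n_u, 1)
    deg_v = np.maximum(B.sum(axis=0, keepdims=True).T, 1.0)  # (n_v, 1)
    # Each side aggregates degree-normalized messages from the other side,
    # followed by a ReLU nonlinearity.
    H_u = np.maximum((B @ (X_v @ W_v)) / deg_u, 0.0)
    H_v = np.maximum((B.T @ (X_u @ W_u)) / deg_v, 0.0)
    return H_u, H_v
```

Because each node set keeps its own weight matrix, the two domains can have entirely different attribute dimensions, which is exactly what a single global-graph convolution cannot accommodate.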
no code implementations • 18 Oct 2019 • Wenyuan Li, Zichen Wang, Yuguang Yue, Jiayun Li, William Speier, Mingyuan Zhou, Corey W. Arnold
In this work, we investigate semi-supervised learning (SSL) for image classification using adversarial training.
no code implementations • 16 May 2019 • Wenyuan Li, Zichen Wang, Jiayun Li, Jennifer Polson, William Speier, Corey Arnold
Recently, semi-supervised learning methods based on generative adversarial networks (GANs) have received much attention.