no code implementations • CONSTRAINT (ACL) 2022 • Jason Lucas, Limeng Cui, Thai Le, Dongwon Lee
The COVID-19 pandemic has created serious threats to global health.
no code implementations • EMNLP 2020 • Adaku Uchendu, Thai Le, Kai Shu, Dongwon Lee
In recent years, the task of generating realistic short and long texts has seen tremendous advancements.
1 code implementation • 15 Apr 2024 • Aashish Anantha Ramakrishnan, Sharon X. Huang, Dongwon Lee
With Large Language Models (LLMs) achieving success in language and commonsense reasoning tasks, we explore the ability of different LLMs to identify and understand key subjects in abstractive captions.
no code implementations • 4 Apr 2024 • Mahjabin Nahar, Haeseung Seo, Eun-Ju Lee, Aiping Xiong, Dongwon Lee
This research aims to understand the human perception of LLM hallucinations by systematically varying the degree of hallucination (genuine, minor hallucination, major hallucination) and examining its interaction with warning (i.e., a warning of potential inaccuracies: absent vs. present).
1 code implementation • 1 Feb 2024 • Eric Xing, Saranya Venkatraman, Thai Le, Dongwon Lee
Authorship obfuscation (AO) is the corresponding adversarial task: modifying a text so that its semantics are preserved, yet an authorship attribution (AA) model can no longer correctly infer its authorship.
no code implementations • 15 Jan 2024 • Dominik Macko, Robert Moro, Adaku Uchendu, Ivan Srba, Jason Samuel Lucas, Michiharu Yamashita, Nafis Irtiza Tripto, Dongwon Lee, Jakub Simko, Maria Bielikova
However, machine-generated text (MGT) detection is susceptible to authorship obfuscation (AO) methods, such as paraphrasing, which can cause MGTs to evade detection.
no code implementations • 14 Nov 2023 • Nafis Irtiza Tripto, Saranya Venkatraman, Dominik Macko, Robert Moro, Ivan Srba, Adaku Uchendu, Thai Le, Dongwon Lee
In the realm of text manipulation and linguistic transformation, the question of authorship has always been a subject of fascination and philosophical inquiry.
no code implementations • 25 Oct 2023 • Nafis Irtiza Tripto, Adaku Uchendu, Thai Le, Mattia Setzu, Fosca Giannotti, Dongwon Lee
Thus, we introduce the largest benchmark for spoken texts, HANSEN (Human ANd ai Spoken tExt beNchmark).
1 code implementation • 24 Oct 2023 • Jason Lucas, Adaku Uchendu, Michiharu Yamashita, Jooyoung Lee, Shaurya Rohatgi, Dongwon Lee
Recent ubiquity and disruptive impacts of large language models (LLMs) have raised concerns about their potential to be misused (i.e., generating large-scale harmful and misleading content).
1 code implementation • 20 Oct 2023 • Dominik Macko, Robert Moro, Adaku Uchendu, Jason Samuel Lucas, Michiharu Yamashita, Matúš Pikuliak, Ivan Srba, Thai Le, Dongwon Lee, Jakub Simko, Maria Bielikova
There is a lack of research into the capabilities of recent LLMs to generate convincing text in languages other than English, and into the performance of machine-generated text detectors in multilingual settings.
1 code implementation • 9 Oct 2023 • Saranya Venkatraman, Adaku Uchendu, Dongwon Lee
We examine whether the Uniform Information Density (UID) principle can help capture differences between texts generated by Large Language Models (LLMs) and texts written by humans.
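As a rough illustration only (not the authors' method), the UID idea can be operationalized by scoring how evenly per-token surprisal is spread across a text; the choice of GPT-2 as the scoring model and variance as the uniformity measure are assumptions for this sketch.

```python
# A minimal sketch (not the authors' method): measure how evenly per-token surprisal
# is distributed across a text, using an off-the-shelf language model (GPT-2 here,
# an assumption) to estimate surprisal. Lower variance = more uniform density.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def uid_variance(text: str) -> float:
    """Variance of per-token surprisal (in nats) under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Surprisal of token t given its left context: -log p(x_t | x_<t)
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    surprisal = -log_probs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return surprisal.var().item()

print(uid_variance("The quick brown fox jumps over the lazy dog."))
```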
no code implementations • 22 Sep 2023 • Adaku Uchendu, Thai Le, Dongwon Lee
We propose TopFormer, which improves existing AA solutions by capturing more linguistic patterns in deepfake texts through a Topological Data Analysis (TDA) layer added to a Transformer-based model.
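The sketch below shows only the general recipe such a design implies (a Transformer text representation paired with TDA-derived features); it is not the TopFormer architecture, and the encoder checkpoint, the ripser library, and the persistence statistics are illustrative assumptions.

```python
# Hedged sketch only (not TopFormer itself): combine a Transformer representation with
# simple Topological Data Analysis features computed over the token-embedding cloud.
import numpy as np
import torch
from ripser import ripser                      # pip install ripser
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base").eval()

def text_features(text: str) -> np.ndarray:
    """First-token embedding concatenated with H0 persistence-lifetime statistics."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state[0]    # (seq_len, 768)
    sent_vec = hidden[0].numpy()
    dgm0 = ripser(hidden.numpy(), maxdim=0)["dgms"][0]     # 0-dim persistence diagram
    finite = np.isfinite(dgm0[:, 1])
    lifetimes = dgm0[finite, 1] - dgm0[finite, 0]
    stats = np.array([lifetimes.mean(), lifetimes.std(), lifetimes.max()])
    return np.concatenate([sent_vec, stats])    # feed to any downstream AA classifier
```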
2 code implementations • 3 Apr 2023 • Adaku Uchendu, Jooyoung Lee, Hua Shen, Thai Le, Ting-Hao 'Kenneth' Huang, Dongwon Lee
Advances in Large Language Models (e.g., GPT-4, LLaMA) have improved the generation of coherent sentences resembling human writing on a large scale, resulting in the creation of so-called deepfake texts.
no code implementations • 18 Mar 2023 • Yiran Ye, Thai Le, Dongwon Lee
In this paper, we introduce a benchmark test set containing human-written perturbations online for toxic speech detection models.
no code implementations • 24 Feb 2023 • Jia Tracy Shen, Dongwon Lee
The paper finally compares model performance between training on the original data and training on data imputed with samples generated by the non-subject-based model (VAE-NS) and by the subject-based models (i.e., VAE and LVAE).
no code implementations • 16 Jan 2023 • Thai Le, Ye Yiran, Yifan Hu, Dongwon Lee
CRYPTEXT is a data-intensive application that provides users with a database and several tools to extract and interact with human-written perturbations.
1 code implementation • 5 Jan 2023 • Aashish Anantha Ramakrishnan, Sharon X. Huang, Dongwon Lee
Advancements in Text-to-Image synthesis over recent years have focused more on improving the quality of generated samples on datasets with descriptive captions.
no code implementations • 19 Oct 2022 • Adaku Uchendu, Thai Le, Dongwon Lee
Two interlocking research questions of growing interest and importance in privacy research are Authorship Attribution (AA) and Authorship Obfuscation (AO).
1 code implementation • Findings (ACL) 2022 • Thai Le, Jooyoung Lee, Kevin Yen, Yifan Hu, Dongwon Lee
We find that adversarial texts generated by ANTHRO achieve the best trade-off between (1) attack success rate, (2) semantic preservation of the original text, and (3) stealthiness, i.e., being indistinguishable from human writing and hence harder to flag as suspicious.
1 code implementation • 15 Mar 2022 • Jooyoung Lee, Thai Le, Jinghui Chen, Dongwon Lee
Our results suggest that (1) three types of plagiarism widely exist in LMs beyond memorization, (2) both size and decoding methods of LMs are strongly associated with the degrees of plagiarism they exhibit, and (3) fine-tuned LMs' plagiarism patterns vary based on their corpus similarity and homogeneity.
no code implementations • 22 Feb 2022 • Michiharu Yamashita, Jia Tracy Shen, Thanh Tran, Hamoon Ekhtiari, Dongwon Lee
In online job marketplaces, it is important to establish a well-defined job title taxonomy for various downstream tasks (e.g., job recommendation, users' career analysis, and turnover prediction).
no code implementations • 20 Oct 2021 • Thai Le, Long Tran-Thanh, Dongwon Lee
To this question, we demonstrate that it is indeed possible for adversaries to exploit computational learning mechanisms such as reinforcement learning (RL) to maximize the influence of socialbots while avoiding detection.
3 code implementations • Findings (EMNLP) 2021 • Adaku Uchendu, Zeyu Ma, Thai Le, Rui Zhang, Dongwon Lee
Recent progress in generative language models has enabled machines to generate astonishingly realistic texts.
1 code implementation • 2 Jun 2021 • Jia Tracy Shen, Michiharu Yamashita, Ethan Prihar, Neil Heffernan, Xintao Wu, Ben Graff, Dongwon Lee
Because mathematical texts often use domain-specific vocabulary along with equations and math symbols, we posit that developing a new BERT model for mathematics would be useful for many mathematical downstream tasks.
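As a minimal sketch of what such domain adaptation commonly looks like (not the authors' training pipeline), one can continue masked-language-model pretraining of a BERT checkpoint on a mathematical corpus; the corpus file name and all hyperparameters below are assumptions.

```python
# Hedged sketch (not the paper's pipeline): continued masked-language-model pretraining
# of BERT on a plain-text math corpus using Hugging Face transformers/datasets.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# math_corpus.txt is an assumed one-line-per-example corpus of mathematical text.
corpus = load_dataset("text", data_files={"train": "math_corpus.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-math", num_train_epochs=1,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()
```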
no code implementations • 31 May 2021 • Duanshun Li, Jing Liu, Jinsung Jeon, Seoyoung Hong, Thai Le, Dongwon Lee, Noseong Park
On top of the prediction models, we define a budget-constrained flight frequency optimization problem to maximize the market influence over 2,262 routes.
no code implementations • 24 May 2021 • Jia Tracy Shen, Michiharu Yamashita, Ethan Prihar, Neil Heffernan, Xintao Wu, Sean McGrew, Dongwon Lee
Educational content labeled with proper knowledge components (KCs) is particularly useful to teachers and content organizers.
1 code implementation • NAACL 2021 • Minjin Choi, Sunkyung Lee, Eunseong Choi, Heesoo Park, Junhyuk Lee, Dongwon Lee, Jongwuk Lee
Automated metaphor detection is a challenging task that aims to identify metaphorical uses of words in a sentence.
no code implementations • 23 Mar 2021 • Dongwon Lee, Nikolaos Karadimitriou, Matthias Ruf, Holger Steeb
The segmentation results from all five methods are compared to each other in terms of segmentation quality and time efficiency.
no code implementations • ACL 2021 • Thai Le, Noseong Park, Dongwon Lee
The Universal Trigger (UniTrigger) is a recently-proposed powerful adversarial textual attack method.
1 code implementation • ACL 2022 • Thai Le, Noseong Park, Dongwon Lee
Even though several methods have been proposed to defend textual neural network (NN) models against black-box adversarial attacks, they often defend against a specific text perturbation strategy and/or require re-training the models from scratch.
no code implementations • 22 Oct 2020 • Wen Huang, Kevin Labille, Xintao Wu, Dongwon Lee, Neil Heffernan
Personalized recommendation based on multi-arm bandit (MAB) algorithms has been shown to lead to high utility and efficiency, as it can dynamically adapt the recommendation strategy based on feedback.
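For readers unfamiliar with the bandit setting, here is a minimal, self-contained UCB1 sketch of feedback-driven recommendation; it illustrates the adaptivity mentioned above and is not the fairness-aware algorithm studied in the paper (arm indices, rewards, and click-through rates are toy assumptions).

```python
# Minimal UCB1 sketch of bandit-based recommendation (illustrative only).
# Each arm is an item; the reward is whether the recommended item was clicked.
import math
import random

class UCB1Recommender:
    def __init__(self, n_arms: int):
        self.counts = [0] * n_arms      # times each arm was recommended
        self.values = [0.0] * n_arms    # running mean reward per arm

    def select(self) -> int:
        for arm, c in enumerate(self.counts):
            if c == 0:                  # try every arm once first
                return arm
        total = sum(self.counts)
        ucb = [v + math.sqrt(2 * math.log(total) / c)
               for v, c in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Simulated feedback loop: arm 2 has the highest true click-through rate.
true_ctr = [0.05, 0.10, 0.30, 0.15]
bandit = UCB1Recommender(len(true_ctr))
for _ in range(5000):
    arm = bandit.select()
    bandit.update(arm, 1.0 if random.random() < true_ctr[arm] else 0.0)
print(bandit.counts)   # most pulls should concentrate on arm 2
```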
1 code implementation • 1 Sep 2020 • Thai Le, Suhang Wang, Dongwon Lee
In recent years, the proliferation of so-called "fake news" has caused much disruption in society and weakened the news ecosystem.
2 code implementations • 22 May 2020 • Limeng Cui, Dongwon Lee
As the COVID-19 virus quickly spreads around the world, unfortunately, misinformation related to COVID-19 is also being created and spreading like wildfire.
1 code implementation • 2 Jan 2020 • Kai Shu, Suhang Wang, Dongwon Lee, Huan Liu
In recent years, disinformation, including fake news, has become a global phenomenon due to its explosive growth, particularly on social media.
1 code implementation • 5 Nov 2019 • Thai Le, Suhang Wang, Dongwon Lee
Despite recent developments in explainable AI/ML for image and text data, the majority of current solutions are not suitable for explaining the predictions of neural network models when the datasets are tabular and their features are in high-dimensional vectorized formats.
no code implementations • 26 Jul 2019 • Jason Zhang, Junming Yin, Dongwon Lee, Linhong Zhu
In recent years, the search story, a display format combined with other organic channels, has become a major source of user traffic on platforms such as e-commerce search, news feed, and web and image search platforms.
7 code implementations • 5 Sep 2018 • Kai Shu, Deepak Mahudeswaran, Suhang Wang, Dongwon Lee, Huan Liu
However, fake news detection is a non-trivial task, which requires multi-source information such as news content, social context, and dynamic information.
2 code implementations • 31 Aug 2018 • Thanh Tran, Kyumin Lee, Yiming Liao, Dongwon Lee
Following recent successes in exploiting both latent factor and word embedding models in recommendation, we propose a novel Regularized Multi-Embedding (RME) based recommendation model that simultaneously encapsulates the following ideas via decomposition: (1) which items a user likes, (2) which two users co-like the same items, (3) which two items users often co-liked, and (4) which two items users often co-disliked.
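As a toy illustration of ideas (3) and (4) only, the sketch below builds item-item co-liked and co-disliked co-occurrence counts from a small user-item matrix; the data layout (treating zeros as dislikes) is an assumption, and this is not the authors' RME implementation, which factorizes such co-occurrence information jointly with user-item interactions.

```python
# Hedged sketch of one ingredient of the co-like/co-dislike idea (not the RME model):
# item-item co-occurrence counts from a toy user-item matrix.
import numpy as np

# Toy user-item matrix (4 users x 5 items): 1 = liked, 0 = treated as disliked here.
R = np.array([[1, 1, 0, 0, 1],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 1, 0],
              [1, 0, 0, 0, 1]])

liked = (R == 1).astype(int)
disliked = (R == 0).astype(int)

co_liked = liked.T @ liked            # [i, j] = #users who liked both item i and item j
co_disliked = disliked.T @ disliked   # [i, j] = #users who disliked both items
np.fill_diagonal(co_liked, 0)         # ignore self co-occurrence
np.fill_diagonal(co_disliked, 0)

print(co_liked)
print(co_disliked)
```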