Exploring Euphemism Detection in Few-Shot and Zero-Shot Settings
This work builds upon the Euphemism Detection Shared Task proposed at the EMNLP 2022 FigLang Workshop and extends it to few-shot and zero-shot settings. We formulate few-shot and zero-shot variants of the task using the shared-task dataset and conduct experiments in these settings with RoBERTa and GPT-3. Our results show that language models classify euphemistic terms relatively well even for terms unseen during training, indicating that they are able to capture higher-level concepts related to euphemisms.
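The paper's exact prompts are not reproduced here; as an illustration of the GPT-3 in-context-learning setup the abstract describes, a few-shot classification prompt might be assembled as below. The example sentences, the label names, and the `build_prompt` helper are all hypothetical, not taken from the shared-task data or the authors' code.

```python
# Sketch of a few-shot prompt for euphemism classification, in the style
# of GPT-3 in-context learning. All examples and labels are illustrative.

def build_prompt(examples, query):
    """Assemble a few-shot classification prompt.

    examples: list of (sentence, term, label) tuples,
              with label in {"euphemistic", "literal"}
    query:    (sentence, term) pair to classify
    """
    lines = ["Decide whether the marked term is used euphemistically or literally.", ""]
    for sentence, term, label in examples:
        lines.append(f"Sentence: {sentence}")
        lines.append(f"Term: {term}")
        lines.append(f"Label: {label}")
        lines.append("")
    sentence, term = query
    lines.append(f"Sentence: {sentence}")
    lines.append(f"Term: {term}")
    lines.append("Label:")  # the model completes this line with its prediction
    return "\n".join(lines)

shots = [
    ("He passed away last spring.", "passed away", "euphemistic"),
    ("She passed the exam easily.", "passed", "literal"),
]
prompt = build_prompt(shots, ("The company let him go in March.", "let him go"))
print(prompt)
```

A zero-shot variant would simply omit the labeled examples, leaving only the instruction and the query; this is one way to probe whether the model generalizes to unseen terms.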
Methods
Adam, Attention Dropout, BERT, BPE, Cosine Annealing, Dense Connections, Dropout, Fixed Factorized Attention, GELU, GPT-3, Layer Normalization, Linear Layer, Linear Warmup With Cosine Annealing, Linear Warmup With Linear Decay, Multi-Head Attention, Residual Connection, RoBERTa, Scaled Dot-Product Attention, Softmax, Strided Attention, Weight Decay, WordPiece