CoLA
29 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Neural Network Acceptability Judgments
This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence.
Contrastive Learning of General-Purpose Audio Representations
We introduce COLA, a self-supervised pre-training approach for learning a general-purpose representation of audio.
COLA-Net: Collaborative Attention Network for Image Restoration
Local and non-local attention-based methods have been well studied in various image restoration tasks and have achieved promising performance.
Can BERT eat RuCoLA? Topological Data Analysis to Explain
Our results contribute to understanding the behavior of monolingual LMs in the acceptability classification task, provide insights into the functional roles of attention heads, and highlight the advantages of TDA-based approaches for analyzing LMs.
COLA: Decentralized Linear Learning
Decentralized machine learning is a promising emerging paradigm in view of global challenges of data ownership and privacy.
An LSTM Adaptation Study of (Un)grammaticality
We propose a novel approach to the study of how artificial neural networks perceive the distinction between grammatical and ungrammatical sentences, a crucial task in the growing field of synthetic linguistics.
Do Attention Heads in BERT Track Syntactic Dependencies?
We investigate the extent to which individual attention heads in pretrained transformer language models, such as BERT and RoBERTa, implicitly capture syntactic dependency relations.
CoLA: Weakly-Supervised Temporal Action Localization with Snippet Contrastive Learning
In this paper, we argue that learning by comparing helps identify these hard snippets and we propose to utilize snippet Contrastive learning to Localize Actions, CoLA for short.
Efficient Sequence Packing without Cross-contamination: Accelerating Large Language Models without Impacting Performance
We show in this paper that the variation in sequence lengths in common NLP datasets is such that up to 50% of all tokens can be padding.
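The padding claim is easy to verify on toy data: when every sequence in a batch is padded to the longest one, the padding fraction depends only on the length distribution. A minimal sketch (the token counts below are illustrative, not from the paper's datasets):

```python
# Hypothetical per-example token counts; real NLP datasets show
# similarly skewed length distributions.
lengths = [12, 35, 7, 48, 90, 23, 15, 60]

# Naive batching: every sequence is padded to the longest in the batch.
max_len = max(lengths)
total_slots = max_len * len(lengths)   # tokens allocated, incl. padding
real_tokens = sum(lengths)             # tokens that carry content
padding_fraction = 1 - real_tokens / total_slots
print(f"padding fraction: {padding_fraction:.0%}")
```

With a skewed distribution like this, well over half the allocated tokens are padding, which is what makes packing multiple short sequences into one row attractive.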
SupCL-Seq: Supervised Contrastive Learning for Downstream Optimized Sequence Representations
This paper introduces SupCL-Seq, which extends the supervised contrastive learning from computer vision to the optimization of sequence representations in NLP.
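The supervised contrastive objective referenced here (in the style of Khosla et al.) pulls together representations that share a label and pushes apart the rest. A minimal dependency-free sketch, not the paper's implementation:

```python
import math

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embedding vectors.

    For each anchor, positives are all other examples with the same label;
    the loss is the mean negative log-probability of the positives under a
    softmax over temperature-scaled cosine similarities.
    """
    def normalize(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]

    z = [normalize(v) for v in embeddings]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n = len(z)

    losses = []
    for i in range(n):
        # softmax denominator over all other examples in the batch
        denom = sum(math.exp(dot(z[i], z[k]) / temperature)
                    for k in range(n) if k != i)
        positives = [j for j in range(n)
                     if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors with no positive pair contribute nothing
        loss_i = -sum(math.log(math.exp(dot(z[i], z[j]) / temperature) / denom)
                      for j in positives) / len(positives)
        losses.append(loss_i)
    return sum(losses) / len(losses)
```

As a sanity check, assigning the same label to nearby embeddings yields a lower loss than assigning it to distant ones, which is the behavior the objective is designed to reward.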