Realizing Disentanglement in LM Latent Space via Vocabulary-Defined Semantics

29 Jan 2024 · Jian Gu, Aldeida Aleti, Chunyang Chen, Hongyu Zhang

Understanding the latent space of language models (LMs) is important for improving their performance and interpretability. Existing analyses rarely exploit the semantic properties of LMs and overlook crucial aspects of LM adaptation. In response, we introduce a pioneering approach called vocabulary-defined semantics, which establishes a reference frame within the LM latent space grounded in the LM vocabulary. We propose a novel technique for computing disentangled logits and gradients in the latent space, in place of the entangled logits and gradients defined on the vocabulary. Further, we perform semantic clustering on data representations as a novel way of adapting LMs. In extensive experiments across diverse text-understanding datasets, our approach outperforms state-of-the-art methods for retrieval-augmented generation and parameter-efficient finetuning, demonstrating its effectiveness and efficiency.
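
As a rough illustration of the first idea, the vocabulary-grounded reference frame, the sketch below treats the rows of an LM-head (output-embedding) matrix as vocabulary anchors in latent space and describes a hidden state by its similarity to those anchors. This is a minimal sketch under our own assumptions: the model name (`gpt2`), the use of Hugging Face `transformers`, and the choice of cosine similarity are illustrative, not details confirmed by the paper.

```python
# A minimal sketch (not the authors' released code): read a hidden state
# through its similarity to vocabulary anchors in latent space, where each
# anchor is one row of the LM-head weight matrix.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM with an LM head would do
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Vocabulary anchors in latent space: LM-head weights, shape (vocab_size, hidden_dim).
anchors = model.get_output_embeddings().weight.detach()

inputs = tok("Language models map text into a latent space.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)
h = out.hidden_states[-1][0, -1]  # final-layer hidden state of the last token

# Standard logits are dot products with the anchors (defined on the vocabulary);
# a per-anchor similarity view in latent space is one way to read the abstract's
# "reference frame" (cosine similarity here is our assumption).
dot_logits = anchors @ h
cos_sims = F.cosine_similarity(anchors, h.unsqueeze(0), dim=-1)

top = torch.topk(cos_sims, k=5)
print([tok.decode([int(i)]) for i in top.indices])
```

Describing a hidden state by per-token similarities, rather than only by the softmax over raw logits, is one concrete reading of "a reference frame grounded in LM vocabulary"; the paper's actual construction of disentangled logits and gradients may differ.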
