Near-Term Advances in Quantum Natural Language Processing

5 Jun 2022  ·  Dominic Widdows, Aaranya Alexander, Daiwei Zhu, Chase Zimmerman, Arunava Majumder

This paper describes experiments showing that some tasks in natural language processing (NLP) can already be performed using quantum computers, though so far only with small datasets. We demonstrate various approaches to topic classification. The first uses an explicit word-based approach, in which word-topic scoring weights are implemented as fractional rotations of individual qubits, and a new phrase is classified based on the accumulation of these weights in a scoring qubit using entangling controlled-NOT gates. This is compared with more scalable quantum encodings of word embedding vectors, which are used to compute kernel values in a quantum support vector machine: this approach achieved an average of 62% accuracy on classification tasks involving over 10,000 words, which is the largest such quantum computing experiment to date. We describe a quantum probability approach to bigram modeling that can be applied to sequences of words and formal concepts, and investigate a generative approximation to these distributions using a quantum circuit Born machine. We also describe an approach to ambiguity resolution in verb-noun composition that uses single-qubit rotations for simple nouns and 2-qubit controlled-NOT gates for simple verbs. The smaller systems described have been run successfully on physical quantum computers, and the larger ones have been simulated. We show that statistically meaningful results can be obtained using real datasets, but that whether they will be obtained is much harder to predict than with the easier artificial language examples used previously in developing quantum NLP systems. Other approaches to quantum NLP are compared, partly with respect to contemporary issues including informal language, fluency, and truthfulness.
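
To make the word-based approach concrete, the following sketch is a small classical statevector simulation of the kind of circuit the abstract describes, not the authors' implementation: each (made-up) word-topic weight becomes a fractional RY rotation on its own qubit, and a controlled-NOT from each word qubit accumulates its contribution into a shared scoring qubit. The qubit layout, the RY(pi * w) weighting convention, and the example weights are all illustrative assumptions.

```python
import numpy as np

def ry(theta):
    # Single-qubit Y-rotation; RY(pi * w) turns a weight w in [0, 1]
    # into a fractional rotation of that qubit.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, q, n):
    # Apply a single-qubit gate to qubit q of an n-qubit statevector.
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [q]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cx(state, control, target, n):
    # Controlled-NOT: flip the target bit on the slice where control is 1.
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[control] = 1
    axis = target if target < control else target - 1
    psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=axis)
    return psi.reshape(-1)

def phrase_topic_score(weights):
    # One qubit per word plus a final scoring qubit. Each word-topic weight
    # w becomes RY(pi * w) on its word qubit, followed by CNOT(word -> scoring).
    # Returns P(scoring qubit = 1), read here as the phrase's topic score.
    n = len(weights) + 1
    scoring = n - 1
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for q, w in enumerate(weights):
        state = apply_1q(state, ry(np.pi * w), q, n)
        state = apply_cx(state, q, scoring, n)
    probs = (np.abs(state) ** 2).reshape([2] * n)
    idx = [slice(None)] * n
    idx[scoring] = 1
    return float(probs[tuple(idx)].sum())

# Example: a three-word phrase with hypothetical weights for one topic.
print(phrase_topic_score([0.8, 0.3, 0.6]))
```

Because the accumulation is done with CNOTs, the scoring qubit in this toy tracks the parity of the word qubits rather than a linear sum of weights, which is one reason such circuits behave differently from classical additive scoring.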

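The kernel-based approach can be illustrated in a similar, assumption-laden way. The sketch below amplitude-encodes word embedding vectors as normalised quantum states, computes the fidelity kernel |<psi_x|psi_y>|^2 classically, and passes the precomputed Gram matrix to scikit-learn's SVC as a stand-in for the quantum support vector machine; the embeddings, padding scheme, and classifier settings are not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def amplitude_encode(vec):
    # Pad to a power-of-two dimension and L2-normalise, so the embedding
    # can be read as the amplitude vector of a quantum state.
    dim = 1 << max(1, int(np.ceil(np.log2(len(vec)))))
    padded = np.zeros(dim)
    padded[: len(vec)] = vec
    norm = np.linalg.norm(padded)
    return padded / norm if norm > 0 else padded

def fidelity_kernel(X, Y):
    # k(x, y) = |<psi_x|psi_y>|^2: the state overlap a swap or inversion
    # test would estimate on hardware, computed here with linear algebra.
    A = np.stack([amplitude_encode(x) for x in X])
    B = np.stack([amplitude_encode(y) for y in Y])
    return (A @ B.T) ** 2

# Hypothetical usage, with X_train / X_test as document embedding vectors
# and y_train as topic labels:
# clf = SVC(kernel="precomputed")
# clf.fit(fidelity_kernel(X_train, X_train), y_train)
# predictions = clf.predict(fidelity_kernel(X_test, X_train))
```

On a quantum device the same kernel entries would be estimated statistically from repeated measurements rather than computed exactly, which is where the hardware enters this kind of pipeline.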
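
For the bigram modeling, a quantum circuit Born machine can be caricatured as a parameterised circuit whose Born-rule measurement distribution is fitted to a target distribution. The toy below uses a two-qubit RY-CNOT-RY ansatz, an invented bigram table over a two-word vocabulary, and a Nelder-Mead fit of the KL divergence; none of these choices are drawn from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, in the |q0 q1> basis ordering 00, 01, 10, 11.
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=float)

def born_probs(params):
    # Two-qubit ansatz: an RY layer, an entangling CNOT, then another RY
    # layer; the Born rule turns the final amplitudes into probabilities.
    t0, t1, t2, t3 = params
    state = np.zeros(4)
    state[0] = 1.0
    state = np.kron(ry(t0), ry(t1)) @ state
    state = CX @ state
    state = np.kron(ry(t2), ry(t3)) @ state
    return state ** 2  # amplitudes are real for this ansatz

# Invented target bigram distribution P(w1 w2) over a two-word vocabulary,
# with basis states 00, 01, 10, 11 standing for the four possible bigrams.
target = np.array([0.40, 0.10, 0.15, 0.35])

def kl_loss(params):
    p = born_probs(params) + 1e-12
    return float(np.sum(target * np.log(target / p)))

result = minimize(kl_loss, x0=np.full(4, 0.5), method="Nelder-Mead")
print("fitted bigram probabilities:", born_probs(result.x).round(3))
```

On hardware the output distribution would be estimated from measurement counts and the parameters tuned with a gradient-free or parameter-shift optimiser; the classical simulation and optimiser here are only stand-ins.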