no code implementations • 29 Nov 2023 • Jonathon Liu, Razin A. Shaikh, Benjamin Rodatz, Richie Yeung, Bob Coecke
DisCoCirc represents natural language text as a "circuit" that captures the core semantic information of the text.
no code implementations • 6 Nov 2023 • Sean Tull, Razin A. Shaikh, Sara Sabrina Zemljic, Stephen Clark
We show how concepts from the domains of shape, colour, size and position can be learned from images of simple shapes, where concepts are represented as Gaussians in the classical implementation, and quantum effects in the quantum one.
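As a rough illustration of the classical representation mentioned above, a concept on a single domain (say, size) can be modelled as a Gaussian, with membership scored by its density; the names and numbers below are purely illustrative, not taken from the paper.

```python
import math

def gaussian_score(x, mean, std):
    """Unnormalised Gaussian membership score for a value x on a 1-D domain."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2)

# Hypothetical "small" concept on a [0, 1] size domain, peaked near 0.2.
def small(x):
    return gaussian_score(x, mean=0.2, std=0.1)

# A shape of size 0.25 fits "small" far better than one of size 0.8.
assert small(0.25) > small(0.8)
```

Factored domains (shape, colour, size, position) would then combine scores per domain, in line with the conceptual-spaces framework.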
no code implementations • 7 Feb 2023 • Sean Tull, Razin A. Shaikh, Sara Sabrina Zemljic, Stephen Clark
Our approach builds upon Gärdenfors' classical framework of conceptual spaces, in which cognition is modelled geometrically through the use of convex spaces, which in turn factorise in terms of simpler spaces called domains.
no code implementations • 21 Mar 2022 • Razin A. Shaikh, Sara Sabrina Zemljic, Sean Tull, Stephen Clark
In this report we present a new model of concepts, based on the framework of variational autoencoders, which is designed to have attractive properties such as factored conceptual domains, and at the same time be learnable from data.
1 code implementation • 1 Mar 2022 • Can Zhou, Razin A. Shaikh, Yiran Li, Amin Farjudian
A domain-theoretic framework is presented for validated robustness analysis of neural networks.
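One standard instance of validated robustness analysis (not necessarily the paper's exact construction) propagates an input box through a network with interval arithmetic, yielding guaranteed output bounds; the weights below are made up for illustration.

```python
def interval_affine(lo, hi, weights, biases):
    """Sound bounds for W x + b when each x[i] lies in [lo[i], hi[i]]."""
    out_lo, out_hi = [], []
    for w_row, b in zip(weights, biases):
        # A positive weight takes its extreme at the same end of the input
        # interval; a negative weight at the opposite end.
        out_lo.append(b + sum(w * (lo[i] if w >= 0 else hi[i])
                              for i, w in enumerate(w_row)))
        out_hi.append(b + sum(w * (hi[i] if w >= 0 else lo[i])
                              for i, w in enumerate(w_row)))
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Input x in [0.9, 1.1] x [-0.1, 0.1]; one affine layer followed by ReLU.
W, b = [[1.0, -2.0], [0.5, 0.5]], [0.0, -1.0]
lo, hi = interval_relu(*interval_affine([0.9, -0.1], [1.1, 0.1], W, b))
# Every output reachable from the input box is certified to lie in [lo, hi].
```

Such bounds are sound but can be loose; domain-theoretic treatments study when and how they converge to the true reachable set.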
no code implementations • 14 Jul 2021 • Razin A. Shaikh, Lia Yeh, Benjamin Rodatz, Bob Coecke
Negation in natural language does not follow Boolean logic and is therefore inherently difficult to model.
no code implementations • ACL (SemSpace, IWCS) 2021 • Benjamin Rodatz, Razin A. Shaikh, Lia Yeh
We propose a framework to model an operational conversational negation by applying worldly context (prior knowledge) to logical negation in compositional distributional semantics.