Search Results for author: Frederik Schmitt

Found 8 papers, 5 papers with code

Learning Better Representations From Less Data For Propositional Satisfiability

no code implementations · 13 Feb 2024 · Mohamed Ghanem, Frederik Schmitt, Julian Siber, Bernd Finkbeiner

By combining certificate-driven training and expert iteration, our model learns better representations than models trained for classification only, with much higher data efficiency, requiring orders of magnitude less training data.
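To make the training signal concrete, here is a minimal, self-contained sketch, written under my own assumptions rather than taken from the paper, of what certificate-driven training with an expert-iteration loop can look like for SAT: the model proposes a partial assignment, an "expert" completes it into a satisfying assignment (the certificate), and the completed certificate becomes the training target. The names model_propose_assignment and training_set are hypothetical.

```python
# Sketch of certificate-driven data generation with an expert-iteration flavour.
# Names such as model_propose_assignment and training_set are illustrative
# placeholders, not the paper's actual code.
from itertools import product

def brute_force_certificate(clauses, num_vars, partial=None):
    """Expert: find a satisfying assignment (the certificate) consistent with a
    partial assignment proposed by the model, or return None if none exists."""
    partial = partial or {}
    free = [v for v in range(1, num_vars + 1) if v not in partial]
    for values in product([True, False], repeat=len(free)):
        assignment = {**partial, **dict(zip(free, values))}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

def model_propose_assignment(clauses, num_vars):
    # Placeholder for the neural model's proposal (here: just set variable 1 to True).
    return {1: True}

# Toy CNF in DIMACS-style integer literals: (x1 or not x2) and (x2 or x3).
clauses, num_vars = [[1, -2], [2, 3]], 3
training_set = []

proposal = model_propose_assignment(clauses, num_vars)
certificate = brute_force_certificate(clauses, num_vars, partial=proposal)
if certificate is not None:
    # Supervise with the full certificate rather than only the label "SAT".
    training_set.append((clauses, certificate))
print(training_set)
```

A real pipeline would use a proper SAT solver as the expert; the brute-force search here only keeps the example dependency-free.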

NeuroSynt: A Neuro-symbolic Portfolio Solver for Reactive Synthesis

1 code implementation · 22 Jan 2024 · Matthias Cosler, Christopher Hahn, Ayham Omar, Frederik Schmitt

At the core of the solver lies a seamless integration of neural and symbolic approaches to solving the reactive synthesis problem.
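The portfolio idea can be sketched roughly as follows, assuming (purely for illustration) placeholder functions neural_synthesize, symbolic_synthesize, and model_check rather than NeuroSynt's actual interfaces: a neural route and a symbolic route run in parallel, and a neural candidate only counts once an independent model check confirms it against the specification.

```python
# Portfolio dispatch sketch: run a neural and a symbolic route in parallel and
# return the first *verified* result. All three workers below are stand-ins,
# not NeuroSynt's actual interfaces.
from concurrent.futures import ThreadPoolExecutor, as_completed

def neural_synthesize(spec):          # placeholder neural model
    return "candidate_circuit"

def model_check(spec, circuit):       # placeholder verifier; must answer soundly
    return True

def symbolic_synthesize(spec):        # placeholder classical synthesis tool
    return "symbolic_circuit"

def neural_route(spec):
    circuit = neural_synthesize(spec)
    return circuit if model_check(spec, circuit) else None

def solve(spec):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(neural_route, spec), pool.submit(symbolic_synthesize, spec)]
        for done in as_completed(futures):
            result = done.result()
            if result is not None:    # accept the first sound answer
                return result
    return None

print(solve("G (request -> F grant)"))
```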

Iterative Circuit Repair Against Formal Specifications

1 code implementation · 2 Mar 2023 · Matthias Cosler, Frederik Schmitt, Christopher Hahn, Bernd Finkbeiner

We propose a separated hierarchical Transformer for multimodal representation learning of the formal specification and the circuit.

Task: Representation Learning
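A rough PyTorch sketch of the "separated" part of such an architecture is below, using my own simplified assumptions: the specification and the circuit get separate embeddings and separate encoders, and a decoder attends to the concatenation of both memories. The hierarchical, property-level encoding of the paper is omitted, positional encodings and masking are left out for brevity, and all dimensions and names are illustrative.

```python
# Two separate encoders (specification, circuit) feeding one decoder.
# A simplified sketch, not the paper's architecture; hyperparameters are arbitrary.
import torch
import torch.nn as nn

class SeparatedSpecCircuitModel(nn.Module):
    def __init__(self, spec_vocab, circuit_vocab, d_model=64, nhead=4, layers=2):
        super().__init__()
        self.spec_emb = nn.Embedding(spec_vocab, d_model)
        self.circ_emb = nn.Embedding(circuit_vocab, d_model)

        def make_encoder():
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            return nn.TransformerEncoder(layer, layers)

        self.spec_encoder = make_encoder()   # encodes the formal specification
        self.circ_encoder = make_encoder()   # encodes the (possibly faulty) circuit
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), layers)
        self.out = nn.Linear(d_model, circuit_vocab)

    def forward(self, spec_tokens, circuit_tokens, target_tokens):
        spec_mem = self.spec_encoder(self.spec_emb(spec_tokens))
        circ_mem = self.circ_encoder(self.circ_emb(circuit_tokens))
        memory = torch.cat([spec_mem, circ_mem], dim=1)   # joint multimodal memory
        hidden = self.decoder(self.circ_emb(target_tokens), memory)
        return self.out(hidden)

# Toy forward pass with random token ids.
model = SeparatedSpecCircuitModel(spec_vocab=100, circuit_vocab=80)
logits = model(torch.randint(0, 100, (1, 12)),
               torch.randint(0, 80, (1, 20)),
               torch.randint(0, 80, (1, 20)))
print(logits.shape)  # (1, 20, 80)
```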

Formal Specifications from Natural Language

no code implementations · 4 Jun 2022 · Christopher Hahn, Frederik Schmitt, Julia J. Tillman, Niklas Metzger, Julian Siber, Bernd Finkbeiner

We study the generalization abilities of language models when translating natural language into formal specifications with complex semantics.

Task: Automated Theorem Proving
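For a concrete sense of the task, here are a few hand-written natural-language/LTL pairs together with a trivial exact-match check; the translate stub merely looks the pairs up and stands in for the language model.

```python
# Hand-written NL -> LTL examples (standard LTL operators: G globally, F eventually,
# X next, U until). The `translate` stub is a placeholder for the language model.
examples = [
    ("every request is eventually followed by a grant", "G (request -> F grant)"),
    ("the grants are mutually exclusive",               "G !(grant0 & grant1)"),
    ("signal a holds until signal b holds",             "a U b"),
]

def translate(sentence):
    # Placeholder: a real system would query a fine-tuned language model here.
    lookup = {nl: ltl for nl, ltl in examples}
    return lookup.get(sentence, "")

accuracy = sum(translate(nl) == ltl for nl, ltl in examples) / len(examples)
print(f"exact-match accuracy: {accuracy:.2f}")
```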

Attention Flows for General Transformers

1 code implementation · 30 May 2022 · Niklas Metzger, Christopher Hahn, Julian Siber, Frederik Schmitt, Bernd Finkbeiner

In this paper, we study the computation of how much an input token in a Transformer model influences its prediction.

Task: Decoder
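As background, a simplified version of the flow computation this line of work builds on (attention flow in the sense of Abnar and Zuidema) can be sketched as follows: per-layer attention weights become edge capacities in a layered graph, and a token's influence is the maximum flow from that token to an output position. This is my own encoder-style illustration, not the paper's decoder-aware algorithm, and it assumes networkx is available.

```python
# Attention-flow sketch: build a layered graph from per-layer attention weights
# (averaged with the identity to model residual connections) and compute the
# max-flow from an input token to an output position.
import numpy as np
import networkx as nx

def attention_flow(attentions, source_token, target_position):
    """attentions: list of (seq_len, seq_len) arrays, one per layer, where
    attentions[l][i, j] is attention from position i to position j."""
    num_layers = len(attentions)
    seq_len = attentions[0].shape[0]
    G = nx.DiGraph()
    for l, A in enumerate(attentions):
        A = 0.5 * A + 0.5 * np.eye(seq_len)   # account for residual connections
        for i in range(seq_len):               # node at layer l + 1
            for j in range(seq_len):           # node at layer l
                G.add_edge((l, j), (l + 1, i), capacity=float(A[i, j]))
    flow_value, _ = nx.maximum_flow(G, (0, source_token), (num_layers, target_position))
    return flow_value

# Toy usage with random row-stochastic attention matrices (2 layers, 4 tokens).
rng = np.random.default_rng(0)
attns = [rng.dirichlet(np.ones(4), size=4) for _ in range(2)]
print(attention_flow(attns, source_token=0, target_position=3))
```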

Neural Circuit Synthesis from Specification Patterns

1 code implementation · NeurIPS 2021 · Frederik Schmitt, Christopher Hahn, Markus N. Rabe, Bernd Finkbeiner

We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of high-level logical specifications in linear-time temporal logic (LTL).
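As an illustration of the inputs involved, a classic two-client arbiter specification can be written in plain LTL as below; the Python dictionary format is mine, not the paper's dataset encoding.

```python
# A classic two-client arbiter specification, written as plain LTL strings.
# Format is illustrative; the paper's dataset encoding may differ.
arbiter_spec = {
    "inputs":  ["r0", "r1"],          # requests
    "outputs": ["g0", "g1"],          # grants
    "guarantees": [
        "G (r0 -> F g0)",             # every request 0 is eventually granted
        "G (r1 -> F g1)",             # every request 1 is eventually granted
        "G !(g0 & g1)",               # grants are mutually exclusive
    ],
}
print(arbiter_spec["guarantees"])
```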

Teaching Temporal Logics to Neural Networks

2 code implementations · ICLR 2021 · Christopher Hahn, Frederik Schmitt, Jens U. Kreber, Markus N. Rabe, Bernd Finkbeiner

We study two fundamental questions in neuro-symbolic computing: can deep learning tackle challenging problems in logics end-to-end, and can neural networks learn the semantics of logics?
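To make "the semantics of logics" concrete, here is a tiny evaluator for LTL formulas over finite traces, using the standard finite-trace semantics of X, F, G, and U; it is my own illustration of the kind of semantics such models are asked to capture, not code from the paper.

```python
# Minimal LTL-over-finite-traces evaluator (standard finite-trace semantics).
# Formulas are nested tuples; a trace is a list of sets of true propositions.
def holds(formula, trace, i=0):
    if isinstance(formula, str):                      # atomic proposition
        return formula in trace[i]
    op = formula[0]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "X":                                     # next (false at the last step)
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "F":                                     # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                                     # globally
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "U":                                     # until
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# G (request -> F grant), written with not/and since -> is not a primitive here.
spec = ("G", ("not", ("and", "request", ("not", ("F", "grant")))))
trace = [{"request"}, set(), {"grant"}, {"request", "grant"}]
print(holds(spec, trace))  # True: every request is eventually followed by a grant
```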
