Search Results for author: Samuel Belkadi

Found 4 papers, 3 papers with code

Exploration of Masked and Causal Language Modelling for Text Generation

no code implementations • 21 May 2024 • Nicolo Micheletti, Samuel Belkadi, Lifeng Han, Goran Nenadic

In addition, we evaluate the usefulness of the generated texts by using them in three different downstream tasks: 1) Entity Recognition, 2) Text Classification, and 3) Authorship Verification.

Tags: Authorship Verification, Language Modelling, +3 more
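For a concrete sense of the two paradigms named in the title, the sketch below contrasts mask-filling and left-to-right generation using Hugging Face pipelines; the models (bert-base-uncased, gpt2) and the prompt are illustrative assumptions, not the setup from the paper.

```python
# Hedged sketch contrasting masked vs. causal language modelling.
# Model names and prompts are placeholders, not the paper's models.
from transformers import pipeline

# Masked LM: predicts a [MASK] token using both left and right context.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("The patient was prescribed [MASK] for the infection.")[0])

# Causal LM: extends a prompt strictly left-to-right, one token at a time.
generate = pipeline("text-generation", model="gpt2")
print(generate("The patient was prescribed", max_new_tokens=20)[0]["generated_text"])
```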

Generating Medical Prescriptions with Conditional Transformer

1 code implementation • 30 Oct 2023 • Samuel Belkadi, Nicolo Micheletti, Lifeng Han, Warren Del-Pinto, Goran Nenadic

LT3 is trained on a set of around 2K lines of medication prescriptions extracted from the MIMIC-III database, allowing the model to produce valuable synthetic medication prescriptions.

Tags: 2k, Language Modelling, +3 more
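As a rough illustration of conditioning generation on a keyword (LT3 conditions on a drug name), the sketch below uses a plain GPT-2 prompt as a stand-in; the model and the prompt format are assumptions, not LT3's actual architecture or training data.

```python
# Illustrative sketch of keyword-conditioned text generation with a causal
# LM. GPT-2 and the bare drug-name prompt are stand-ins, not LT3 itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A drug name as the conditioning prefix; LT3's real conditioning
# mechanism differs.
ids = tok("amoxicillin", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=30, do_sample=True, top_p=0.9,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```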

Investigating Large Language Models and Control Mechanisms to Improve Text Readability of Biomedical Abstracts

1 code implementation • 22 Sep 2023 • Zihao Li, Samuel Belkadi, Nicolo Micheletti, Lifeng Han, Matthew Shardlow, Goran Nenadic

In this work, we investigate the ability of state-of-the-art large language models (LLMs) to perform biomedical abstract simplification, using the publicly available dataset for plain language adaptation of biomedical abstracts (PLABA).

Tags: Decoder, Text Simplification
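To make the task concrete, the sketch below prompts a general-purpose seq2seq model to rewrite an abstract in plain language; the model (google/flan-t5-base) and the prompt wording are assumptions and do not reflect the control mechanisms studied in the paper.

```python
# Minimal sketch of LLM-based abstract simplification.
# google/flan-t5-base and the prompt are illustrative assumptions.
from transformers import pipeline

simplifier = pipeline("text2text-generation", model="google/flan-t5-base")
abstract = ("Myocardial infarction results from acute occlusion of a coronary "
            "artery, causing ischaemic necrosis of the myocardium.")
prompt = f"Rewrite in plain language for a general audience: {abstract}"
print(simplifier(prompt, max_new_tokens=96)[0]["generated_text"])
```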

Exploring the Value of Pre-trained Language Models for Clinical Named Entity Recognition

2 code implementations • 23 Oct 2022 • Samuel Belkadi, Lifeng Han, Yuping Wu, Goran Nenadic

The experimental outcomes show that 1) CRF layers improved all language models; 2) under BIO-strict span-level evaluation with the macro-average F1 score, the fine-tuned LLMs achieved 0.83+ while the TransformerCRF model trained from scratch achieved 0.78+, a comparable performance at much lower cost, e.g. with 39.80% fewer training parameters; and 3) under BIO-strict span-level evaluation with the weighted-average F1 score, ClinicalBERT-CRF, BERT-CRF, and TransformerCRF exhibited smaller score differences, at 97.59%/97.44%/96.84% respectively.

Tags: Language Modelling, Named Entity Recognition, +1 more
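The BIO-strict span-level scoring mentioned above corresponds to strict-mode entity-level F1, which the seqeval library implements directly; the sketch below computes the macro- and weighted-average variants on toy tag sequences (the DRUG/DOSE labels are hypothetical, not the paper's label set, and the paper's own evaluation scripts may differ).

```python
# Hedged sketch of BIO-strict span-level F1 with seqeval.
# DRUG/DOSE labels and the sequences are toy examples only.
from seqeval.metrics import f1_score
from seqeval.scheme import IOB2

y_true = [["B-DRUG", "I-DRUG", "O", "B-DOSE"], ["O", "B-DRUG", "O"]]
y_pred = [["B-DRUG", "I-DRUG", "O", "O"],      ["O", "B-DRUG", "O"]]

# mode="strict" with an explicit tagging scheme counts only exact span
# matches, i.e. the "BIO-strict span level" evaluation described above.
macro = f1_score(y_true, y_pred, average="macro", mode="strict", scheme=IOB2)
weighted = f1_score(y_true, y_pred, average="weighted", mode="strict", scheme=IOB2)
print(f"macro F1: {macro:.4f}, weighted F1: {weighted:.4f}")
```

On this toy data the DRUG spans are all matched exactly while the single DOSE span is missed, so the macro average (0.50) penalises the rare class far more than the weighted average (≈0.67), which is why the two averages can diverge as in the results above.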
