no code implementations • 4 Sep 2023 • Luca Sestini, Benoit Rosa, Elena De Momi, Giancarlo Ferrigno, Nicolas Padoy
In this work, we develop a framework for instance segmentation not relying on spatial annotations for training.
no code implementations • 28 Apr 2023 • Ziheng Wang, Andrea Mariani, Arianna Menciassi, Elena De Momi, Ann Majewicz Fey
In this paper, we propose a novel approach for skill assessment by transferring domain knowledge from labeled kinematic data to unlabeled data.
no code implementations • 31 Mar 2023 • Luca Fortini, Mattia Leonori, Juan M. Gandarias, Elena De Momi, Arash Ajoudani
Tracking 3D human motion in real-time is crucial for numerous applications across many fields.
no code implementations • 19 Jan 2023 • Edoardo Lamon, Fabio Fusaro, Elena De Momi, Arash Ajoudani
The growing deployment of human-robot collaborative processes in industrial applications such as handling, welding, and assembly calls for systems that can manage large heterogeneous teams while monitoring the execution of complex tasks.
no code implementations • 21 Dec 2022 • Jorge F. Lazo, Benoit Rosa, Michele Catellani, Matteo Fontana, Francesco A. Mistretta, Gennaro Musi, Ottavio De Cobelli, Michel de Mathelin, Elena De Momi
We address the challenge of tissue classification when annotations are available only in one domain, in our case WLI, and the endoscopic images form an unpaired dataset, i.e., there is no exact equivalent for every image in both the NBI and WLI domains.
no code implementations • 26 Jul 2022 • Alessandro Casella, Sophia Bano, Francisco Vasconcelos, Anna L. David, Dario Paladini, Jan Deprest, Elena De Momi, Leonardo S. Mattos, Sara Moccia, Danail Stoyanov
This surgery is minimally invasive and relies on fetoscopy.
no code implementations • 1 Jul 2022 • Jorge F. Lazo, Chun-Feng Lai, Sara Moccia, Benoit Rosa, Michele Catellani, Michel de Mathelin, Giancarlo Ferrigno, Paul Breedveld, Jenny Dankelman, Elena De Momi
Navigation inside luminal organs is an arduous task that requires non-intuitive coordination between the movement of the operator's hand and the information obtained from the endoscopic video.
1 code implementation • 24 Jun 2022 • Sophia Bano, Alessandro Casella, Francisco Vasconcelos, Abdul Qayyum, Abdesslam Benzinou, Moona Mazher, Fabrice Meriaudeau, Chiara Lena, Ilaria Anita Cintorrino, Gaia Romana De Paolis, Jessica Biagioli, Daria Grechishnikova, Jing Jiao, Bizhe Bai, Yanyan Qiao, Binod Bhattarai, Rebati Raman Gaire, Ronast Subedi, Eduard Vazquez, Szymon Płotka, Aneta Lisowska, Arkadiusz Sitek, George Attilakos, Ruwan Wimalasundera, Anna L David, Dario Paladini, Jan Deprest, Elena De Momi, Leonardo S Mattos, Sara Moccia, Danail Stoyanov
For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus, and background classes, from 18 in-vivo TTTS fetoscopy procedures and 18 short video clips.
no code implementations • 28 Mar 2022 • Luca Fortini, Mattia Leonori, Juan M. Gandarias, Elena De Momi, Arash Ajoudani
Simulation tools are essential for robotics research, especially for those domains in which safety is crucial, such as Human-Robot Collaboration (HRC).
no code implementations • 16 Feb 2022 • Luca Sestini, Benoit Rosa, Elena De Momi, Giancarlo Ferrigno, Nicolas Padoy
We then use the obtained instrument masks as pseudo-labels to train a per-frame segmentation model; to this end, we develop a learning-from-noisy-labels architecture designed to extract a clean supervision signal from these pseudo-labels by leveraging their peculiar noise properties.
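The paper's learning-from-noisy-labels architecture is not reproduced here; as a minimal sketch of the general idea — supervising a per-frame model only on pseudo-label pixels that pass a confidence threshold, so label noise is not propagated — consider the following (all names and the masking rule are illustrative assumptions, not the authors' design):

```python
import math

def masked_pseudo_label_loss(pred, pseudo, conf, thresh=0.8):
    """Binary cross-entropy restricted to pixels whose pseudo-label
    confidence passes a threshold; uncertain pixels are ignored so
    noise in the pseudo-labels does not reach the per-frame model."""
    kept = [(p, y) for p, y, c in zip(pred, pseudo, conf) if c >= thresh]
    if not kept:
        return 0.0  # no confident supervision in this frame
    total = 0.0
    for p, y in kept:
        p = min(max(p, 1e-7), 1.0 - 1e-7)  # numerical safety
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(kept)
```

In practice the same masking idea applies per pixel over whole segmentation maps; the list-based version above only illustrates the selection-then-average structure.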
1 code implementation • 10 Jun 2021 • Sophia Bano, Alessandro Casella, Francisco Vasconcelos, Sara Moccia, George Attilakos, Ruwan Wimalasundera, Anna L. David, Dario Paladini, Jan Deprest, Elena De Momi, Leonardo S. Mattos, Danail Stoyanov
Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg) challenge, we present a large-scale multi-centre dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms for the fetal environment, with a focus on creating drift-free mosaics from long-duration fetoscopy videos.
no code implementations • 25 May 2021 • Fabio Fusaro, Edoardo Lamon, Elena De Momi, Arash Ajoudani
This paper proposes a novel integrated dynamic method based on Behavior Trees for planning and allocating tasks in mixed human-robot teams, suitable for manufacturing environments.
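The paper's integrated planner is not public here; as a minimal sketch of the Behavior Tree control flow such methods build on — a Sequence node succeeds only if all children succeed, a Fallback node succeeds as soon as any child does — with all class names hypothetical:

```python
SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node wrapping an executable task (e.g. a robot or human action)."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Ticks children in order; fails on the first child that fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if (result := child.tick()) != SUCCESS:
                return result
        return SUCCESS

class Fallback:
    """Ticks children in order; succeeds on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if (result := child.tick()) != FAILURE:
                return result
        return FAILURE
```

Task allocation would then hang condition and action leaves (e.g. "agent available", "execute weld") under such composites; the dynamic re-ticking of the tree is what makes the behavior reactive.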
no code implementations • 8 Apr 2021 • Jorge F. Lazo, Sara Moccia, Aldo Marzullo, Michele Catellani, Ottavio De Cobelli, Benoit Rosa, Michel de Mathelin, Elena De Momi
In this work, we study the implementation of three different Convolutional Neural Networks (CNNs), using a two-step training strategy, to classify images from the urinary tract with and without lesions.
no code implementations • 5 Apr 2021 • Jorge F. Lazo, Aldo Marzullo, Sara Moccia, Michele Catellani, Benoit Rosa, Michel de Mathelin, Elena De Momi
Of these, two architectures are taken as core models: a U-Net based on residual blocks ($m_1$) and Mask-RCNN ($m_2$), both fed with single still frames $I(t)$.
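When two segmentation models are run on the same frame, their outputs must be combined into one mask. The fusion rule below (pixelwise averaging of the two probability maps, then thresholding) is only an illustrative assumption, not necessarily the combination used in the paper:

```python
def fuse_masks(prob_a, prob_b, thresh=0.5):
    """Fuse two models' per-pixel probability maps into one binary
    mask by averaging and thresholding (illustrative rule only)."""
    return [
        [int((a + b) / 2 >= thresh) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(prob_a, prob_b)
    ]
```

Alternatives such as pixelwise OR (more sensitive) or AND (more precise) trade recall against precision; averaging sits between the two.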
no code implementations • 2 Apr 2021 • Pedro Henrique Suruagy Perrusi, Anna Cazzaniga, Paul Baksic, Eleonora Tagliabue, Elena De Momi, Hadrien Courtecuisse
Control strategies for robotic needle steering in soft tissues must account for complex interactions between the needle and the tissue to achieve accurate needle tip positioning.
no code implementations • 28 Feb 2021 • Luca Sestini, Benoit Rosa, Elena De Momi, Giancarlo Ferrigno, Nicolas Padoy
3-D pose estimation of instruments is a crucial step towards automatic scene understanding in robotic minimally invasive surgery.
no code implementations • 13 Jan 2021 • Jorge F. Lazo, Aldo Marzullo, Sara Moccia, Michele Catellani, Benoit Rosa, Michel de Mathelin, Elena De Momi
For the training of these networks, we analyze the use of two different color spaces: gray-scale and RGB images.
no code implementations • 28 Dec 2020 • Jorge F. Lazo, Sara Moccia, Emanuele Frontoni, Elena De Momi
The best performance was obtained by fine-tuning VGG-16, with an accuracy of 0.919 and an AUC of 0.934.
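Figures like these come from standard definitions: accuracy is the fraction of correctly thresholded predictions, and ROC AUC equals the fraction of (positive, negative) pairs the classifier ranks correctly (the Wilcoxon-Mann-Whitney form). A minimal pure-Python sketch, not tied to the paper's evaluation code:

```python
def accuracy(scores, labels, thresh=0.5):
    """Fraction of samples whose thresholded score matches the label."""
    correct = sum((s >= thresh) == (y == 1) for s, y in zip(scores, labels))
    return correct / len(labels)

def auc(scores, labels):
    """ROC AUC via pairwise ranking: ties between a positive and a
    negative score count as half a correct ranking."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The pairwise form is O(P*N) but makes the ranking interpretation of AUC explicit; production code would use a sorted-rank implementation instead.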
no code implementations • 25 Jul 2019 • Beatrice van Amsterdam, Hirenkumar Nakawala, Elena De Momi, Danail Stoyanov
Hence, the potential of weak supervision could be to improve unsupervised learning while avoiding manual annotation of large datasets.