1 code implementation • 5 Jun 2023 • Muhammad Usman Akbar, Måns Larsson, Anders Eklund
Our results show that segmentation networks trained on synthetic images reach Dice scores that are 80%–90% of those obtained when training with real images, but that memorization of the training images can be a problem for diffusion models if the original dataset is too small.
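The Dice score used to compare the segmentation networks is the standard Dice similarity coefficient; a minimal sketch of how it is computed for binary masks is below (the function name and example masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    pred, target: arrays of the same shape with values in {0, 1}.
    eps avoids division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Two small example masks overlapping in 2 of 3 foreground pixels each:
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(a, b), 3))  # 2*2 / (3+3) ≈ 0.667
```

A score of 1.0 means the masks are identical; "80%–90% of the real-data Dice score" therefore means the synthetic-data networks come close to, but do not match, segmentation quality from training on real images.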
no code implementations • 8 Nov 2022 • Måns Larsson, Muhammad Usman Akbar, Anders Eklund
Large annotated datasets are required to train segmentation networks.
2 code implementations • CVPR 2021 • Paul-Edouard Sarlin, Ajaykumar Unagar, Måns Larsson, Hugo Germain, Carl Toft, Viktor Larsson, Marc Pollefeys, Vincent Lepetit, Lars Hammarstrand, Fredrik Kahl, Torsten Sattler
In this paper, we go Back to the Feature: we argue that deep networks should focus on learning robust and invariant visual features, while the geometric estimation should be left to principled algorithms.
1 code implementation • 18 Aug 2019 • Måns Larsson, Erik Stenborg, Carl Toft, Lars Hammarstrand, Torsten Sattler, Fredrik Kahl
In this paper, we propose a new neural network, the Fine-Grained Segmentation Network (FGSN), that can be used to provide image segmentations with a larger number of labels and can be trained in a self-supervised fashion.
1 code implementation • 16 Mar 2019 • Måns Larsson, Erik Stenborg, Lars Hammarstrand, Torsten Sattler, Marc Pollefeys, Fredrik Kahl
We show that adding the correspondences as extra supervision during training improves the segmentation performance of the convolutional neural network, making it more robust to seasonal changes and weather conditions.
no code implementations • 24 Jan 2017 • Måns Larsson, Anurag Arnab, Fredrik Kahl, Shuai Zheng, Philip Torr
It is empirically demonstrated that such learned potentials can improve segmentation accuracy and that certain label-class interactions are indeed better modelled by a non-Gaussian potential.