Schedule
Titles and abstracts are available in the conference booklet (read-only color version) (printable version).
You can also click on the titles below to see the abstracts.
Monday, January 13th
10:00 a.m. | Welcome |
---|---|
11:00 a.m. | Silvia Bonettini: Linesearch-enhanced forward–backward methods for inexact nonconvex scenarios (talk) In recent decades, optimization techniques have been successfully applied to several imaging problems, and increasingly sophisticated variational models have been proposed in recent research. In particular, significant improvements over the previous state of the art have been obtained by adopting nonconvex settings and combining variational techniques with machine learning approaches. Solving the related optimization problems requires new numerical tools able to handle their intrinsic difficulties. Indeed, nonconvexity requires specific theoretical and numerical techniques. Moreover, in machine learning settings some elements of the corresponding optimization problem, e.g. the gradient, are often available only as an approximation. In this talk we describe an optimization framework able to handle both nonconvexity and partial knowledge of the variational model. Our approach consists of a forward–backward method with line search based on approximated values of the objective function and its gradient. As special cases of our general scheme, we derive two algorithms: a line-search-based FISTA-like algorithm and a specific inexact method for bilevel optimization problems. Numerical experiments on deblurring and blind deconvolution problems show that the proposed methods are competitive with existing approaches. |
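The classical mechanism behind this abstract, a forward–backward (proximal gradient) step whose step size is found by backtracking until a sufficient-decrease test holds, can be sketched on a toy convex LASSO problem. This is only a minimal illustration of the standard line-search idea, not the inexact nonconvex scheme of the talk; the problem, sizes and tolerances below are invented for illustration.

```python
import numpy as np

# Toy problem: min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# solved by forward-backward steps with Armijo-type backtracking.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam = 0.1

def f(x):                     # smooth data-fidelity term
    r = A @ x - b
    return 0.5 * r @ r

def grad_f(x):
    return A.T @ (A @ x - b)

def prox_l1(x, t):            # backward (proximal) step for lam*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

x = np.zeros(20)
step = 1.0                    # warm-started step size, only ever shrunk
for _ in range(200):
    g = grad_f(x)
    # backtracking: halve the step until sufficient decrease holds
    while True:
        z = prox_l1(x - step * g, step)
        d = z - x
        if f(z) <= f(x) + g @ d + (0.5 / step) * (d @ d) + 1e-12:
            break
        step *= 0.5
    x = z

obj = f(x) + lam * np.abs(x).sum()   # final objective value
```

The sufficient-decrease test guarantees that the full objective is monotonically non-increasing, so the loop is stable even though the Lipschitz constant of the gradient is never computed.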
11:30 a.m. | Johannes Hertrich: Importance corrected neural JKO sampling (talk) In order to sample from an unnormalized probability density function, we propose to combine continuous normalizing flows (CNFs) with rejection-resampling steps based on importance weights. We relate the iterative training of CNFs with regularized velocity fields to proximal mappings in the Wasserstein space. The alternation of local flow steps and non-local rejection-resampling steps makes it possible to overcome local minima and mode collapse for multimodal distributions. The resulting model can be trained iteratively, reduces the reverse Kullback-Leibler (KL) loss function in each step, allows the generation of i.i.d. samples, and moreover allows the underlying density of the generated samples to be evaluated. Numerical examples demonstrate the efficiency of our approach. |
12:00 – 2:00 p.m. | Lunch break |
2:00 p.m. | 2025 MIA Ph.D. prize. Florian Beier: A Geometric Optimal Transport Framework for 3D Shape Interpolation The Gromov-Wasserstein (GW) transport problem is a generalization of the classic optimal transport problem, which seeks a relaxed correspondence between two measures while preserving their internal geometry. This theoretical underpinning makes it a valuable tool for the analysis of objects that do not possess a natural embedding or should be studied independently of one. Prime applications can thus be found in, e.g., shape matching, classification and interpolation tasks. To tackle the latter, one theoretically justified approach is the use of GW barycenters, which are generalized Fréchet means with respect to the GW distance. After giving a gentle and illustrative introduction to the GW transport problem, we turn our attention to GW barycenters. Motivated by obtaining a numerically tractable method for their computation, we study the geometry of the induced GW space. Our theoretical results in this context allow us to lift a known fixed-point iteration for the computation of Fréchet means in Riemannian manifolds to the GW setting. The lifted iteration is simple to implement in practice and monotonically improves the quality of the barycenter. We provide numerical evidence of the potential of this method, including multiple 3D shape interpolations. |
2:30 p.m. | Mame Diarra Fall: Regularization by denoising: Bayesian model and Langevin-within-split Gibbs sampling (talk) We propose a Bayesian framework for image inversion by deriving a probabilistic counterpart to the regularization-by-denoising (RED) paradigm. We additionally introduce a Monte Carlo algorithm specifically tailored for sampling from the resulting posterior distribution, based on an asymptotically exact data augmentation (AXDA). The proposed algorithm is an approximate instance of split Gibbs sampling (SGS) which embeds one Langevin Monte Carlo step. The proposed method is applied to common imaging tasks such as deblurring, inpainting and super-resolution, demonstrating its efficacy through extensive numerical experiments. |
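The Langevin Monte Carlo ingredient mentioned in this abstract can be sketched in isolation: an unadjusted Langevin (ULA) chain moves along the gradient of the log-density plus Gaussian noise. This is not the RED posterior or the split-Gibbs sampler of the talk; the one-dimensional Gaussian target, step size and chain length below are illustrative assumptions.

```python
import numpy as np

# Unadjusted Langevin algorithm (ULA) on a toy Gaussian target N(mu, sigma^2):
#   x_{k+1} = x_k + h * grad log p(x_k) + sqrt(2h) * xi_k,  xi_k ~ N(0, 1).
rng = np.random.default_rng(1)
mu, sigma = 2.0, 0.5
h = 1e-3                                # step size (discretization bias ~ h)

def grad_log_p(x):
    return -(x - mu) / sigma**2

x = 0.0
samples = []
for k in range(60_000):
    x = x + h * grad_log_p(x) + np.sqrt(2 * h) * rng.standard_normal()
    if k > 10_000:                      # discard burn-in
        samples.append(x)
samples = np.array(samples)
```

A single such update, targeting a conditional of an augmented posterior, is what "Langevin-within-Gibbs" refers to; the discretization introduces a small bias controlled by the step size h.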
3:00 p.m. | Marcelo Pereyra: Uncertainty quantification in statistical imaging sciences: 40 years of muddling through (talk) |
3:30 p.m. | Break |
4:00 p.m. | Benedikt Wirth: The shape space of Sobolev diffeomorphisms and its discretization (talk) A by now classical framework for image registration, computational anatomy and similar applications of mathematical imaging is large deformation diffeomorphic metric mapping (LDDMM). It is based on equipping a space of diffeomorphisms of chosen regularity with a Riemannian structure that is invariant under composition with diffeomorphisms. The particular setting of Sobolev diffeomorphisms has become quite thoroughly understood during the past decade. This now makes it possible to analyse and devise corresponding numerical discretization schemes and prove their convergence. I will give an introduction to the theory of Sobolev LDDMM and present convergence results for its numerics. |
4:30 p.m. | Elena di Bernardino: Curvature measures for random excursion sets: theoretical and computational developments (talk) The excursion set of a smooth random field carries relevant information in its various geometric measures. Geometric properties of these exceedance regions above a given level provide meaningful theoretical and statistical characterizations for random fields defined on Euclidean domains. Many theoretical results have been obtained for excursions of Gaussian processes, including expected values of the so-called Lipschitz-Killing curvatures (LKCs), such as the area, perimeter and Euler characteristic in two-dimensional Euclidean space. In this talk we will describe a recent series of theoretical and computational contributions in this field. Our aim is to provide answers to questions such as: i) how the geometric measures of an excursion set can be inferred from a discrete sample of the excursion set; ii) how these measures can be related back to the distributional properties of the random field from which the excursion set was obtained; iii) how the excursion set geometry can be used to infer the extremal behavior of random fields. |
7:30 p.m. | Dinner for speakers |
Tuesday, January 14th
9:00 a.m. | Audrey Repetti: Analysis and synthesis approximated denoisers for forward-backward plug-and-play algorithms (talk) In this presentation we will study the behaviour of the forward-backward (FB) algorithm when the proximity operator is replaced by a sub-iterative procedure approximating a Gaussian denoiser, in a Plug-and-Play (PnP) fashion. Specifically, we consider both analysis and synthesis Gaussian denoisers within a dictionary framework, obtained by unrolling dual-FB iterations or FB iterations, respectively. We analyse the associated global minimization problems as well as the asymptotic behaviour of the resulting FB-PnP iterations. For each case, analysis and synthesis, we show that the FB-PnP algorithm solves the same problem whether we use only one or infinitely many sub-iterations to solve the denoising problem at each iteration. We will illustrate our theoretical results with numerical simulations, considering an image restoration problem in a deep dictionary framework. Joint work with Matthieu Kowalski, Benoit Malezieux and Thomas Moreau. |
9:30 a.m. | Barbara Pascal: Bilevel optimization for automated data-driven inverse problem resolution (talk) Most inverse problems in signal and image processing are ill-posed. To remove ambiguity about the solution and design noise-robust estimators, a priori properties, e.g., smoothness or sparsity, can be imposed on the solution through regularization. The main bottleneck to using the resulting variational regularized estimators in practice, i.e., without access to ground truth, is that the quality of the estimates strongly depends on fine-tuning the level of regularization. A classical approach to automated, data-driven selection of the regularization parameter consists of designing a data-dependent unbiased estimator of the error, whose minimization provides an approximation of the optimal parameters. The resulting overall procedure can be formulated as a bilevel optimization problem, with the inner loop computing the variational regularized estimator and the outer loop selecting the hyperparameters. The design of a fully automated data-driven procedure adapted to inverse problems corrupted with highly correlated noise will be described in detail and exemplified on a texture segmentation problem. Its applicability to other inverse problems will be demonstrated through numerical simulations on both synthetic and real-world data. |
10:00 a.m. | Elisa Riccietti: Exploiting multiple resolutions to accelerate inverse problems in imaging (talk) Solving large-scale optimization problems is a challenging task, and alleviating their computational cost is an open research problem. In this talk we propose a method to address this challenge, based on an idea at the core of multilevel optimization methods: exploiting the structure of the problem to define coarse approximations of the objective function, representing the problem at different resolutions. We present IML FISTA, a multilevel inertial proximal algorithm for non-smooth problems that draws ideas from the multilevel setting for smooth optimization. IML FISTA can handle state-of-the-art regularization techniques such as total variation and non-local total variation, while providing a relatively simple construction of the coarse approximations. We demonstrate the effectiveness of the approach on color and hyperspectral image reconstruction problems. |
10:30 a.m. | Break |
11:00 a.m. | Kimia Nadjahi: Scalable unbalanced optimal transport by slicing (talk) Substantial advances have been made in designing optimal transport (OT) variants which are either computationally and statistically more efficient, or more robust. Among them, sliced OT distances have been extensively used to mitigate the cubic algorithmic complexity and curse of dimensionality of OT. In parallel, unbalanced OT was designed to allow comparisons of more general positive measures, while being robust to outliers. In this talk, we bridge the gap between these two concepts and develop a general framework for efficiently comparing positive measures. We formulate two different versions of “sliced unbalanced OT” and study the associated topology and statistical properties. We then develop a GPU-friendly Frank-Wolfe algorithm to compute the two corresponding loss functions. The resulting methodology is modular, as it encompasses and extends prior related work, and brings the cubic computational complexity down to almost-linear O(n log n) when comparing two discrete measures supported on n points. We finally conduct an empirical analysis of our methodology on both synthetic and real datasets, to illustrate its computational efficiency, relevance and applicability to real-world scenarios, including color transfer and barycenters of geophysical data. |
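The slicing idea that underlies this abstract is easy to illustrate for the classical balanced case: project both point clouds onto random directions, where 1D optimal transport reduces to sorting, hence the near-linear O(n log n) cost per slice. This is only a sketch of standard sliced Wasserstein-2, under invented data; the unbalanced extension and the Frank-Wolfe solver of the talk are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sliced_w2(X, Y, n_proj=200):
    """Monte Carlo sliced Wasserstein-2 between two equal-size point clouds."""
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)          # random direction on the sphere
        # 1D OT between sorted projections: just pair order statistics
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_proj)

# Two 3D clouds, the second shifted by 3 along the first axis
X = rng.standard_normal((500, 3))
Y = rng.standard_normal((500, 3)) + np.array([3.0, 0.0, 0.0])
```

Each slice costs one matrix-vector product and two sorts, so the whole estimate stays near-linear in the number of support points, in contrast to the cubic cost of exact OT solvers.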
11:30 a.m. | Matthew Thorpe: How many labels do you need in semi-supervised learning? (talk) Semi-supervised learning (SSL) is the problem of finding missing labels in a partially labelled data set. The heuristic one uses is that “similar feature vectors should have similar labels”. The notion of similarity between feature vectors explored in this talk comes from a graph-based geometry where an edge is placed between feature vectors that are closer than some connectivity radius. A natural variational solution to the SSL problem is to minimise a Dirichlet energy built from the graph topology, and a natural question is what happens as the number of feature vectors goes to infinity. In this talk I will give results on the asymptotics of graph-based SSL using an optimal transport topology. The results include a lower bound on the number of labels needed for consistency. |
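The graph-based construction described here has a compact classical instance: build a connectivity-radius graph, then minimise the graph Dirichlet energy u^T L u subject to the known labels, which amounts to a harmonic extension on the unlabelled nodes. The toy 1D data, radius and label placement below are invented for illustration; the asymptotic analysis of the talk is of course not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two 1D clusters of feature vectors; label one point in each cluster
X = np.concatenate([rng.normal(0, 0.3, 30), rng.normal(4, 0.3, 30)])
labels = {0: 0.0, 30: 1.0}                  # node index -> known label
eps = 1.0                                    # connectivity radius

# Edge between points closer than eps (the graph of the abstract)
W = (np.abs(X[:, None] - X[None, :]) < eps).astype(float)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W                    # graph Laplacian

lab = np.array(sorted(labels))
unl = np.array([i for i in range(len(X)) if i not in labels])
y = np.array([labels[i] for i in lab])

# Minimising u^T L u with u fixed on labelled nodes gives L_uu u = -L_ul y
u = np.linalg.solve(L[np.ix_(unl, unl)], -L[np.ix_(unl, lab)] @ y)

pred = np.empty(len(X))
pred[lab], pred[unl] = y, u                  # labels propagated to all nodes
```

With well-separated clusters, the harmonic solution is essentially constant on each connected component, so two labels suffice here; the talk's lower bounds quantify when so few labels stop being enough.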
12:00 – 2:00 p.m. | Lunch break |
2:00 p.m. | Antonin Chambolle: Some properties of the solutions of Total-Variation regularized inverse problems (talk) The total variation has been successful as a regularizer for inverse problems in imaging, thanks to its ability to preserve discontinuities (edges) and its relative simplicity (convexity). Even if largely superseded by deep-learning-based methods, it can still be useful in some regimes (low noise, large-scale images). This talk is about the preservation of edges in total-variation-based denoising. We revisit old proofs which show, in some settings, that no spurious edges are created by this approach. Our new approach, based on works of T. Valkonen, is natural and simple and applies to more settings (color/multispectral data, some higher-order models, non-local variants). This is joint work with Michał Łasica (Warsaw) and Konstantinos Bessas (Pavia). |
2:30 p.m. | Clarice Poon: Hadamard Langevin dynamics for sampling sparse priors (talk) Priors with non-smooth log densities have been widely used in Bayesian inverse problems, particularly in imaging, due to their sparsity-inducing properties. To date, the majority of algorithms for handling such densities are based on proximal Langevin dynamics, where one replaces the non-smooth part by a smooth approximation known as the Moreau envelope. In this work, we introduce a novel approach for sampling densities with l1 priors based on a Hadamard product parameterization. This builds upon the idea that the Laplace prior has a Gaussian mixture representation, and our method can be seen as a form of overparametrization: by increasing the number of variables, we construct a density from which one can directly recover the original density. This is fundamentally different from proximal-type approaches, since our resolution is exact, while proximal-based methods introduce additional bias due to the Moreau-envelope smoothing. For our new density, we present its Langevin dynamics in continuous time and establish well-posedness and geometric ergodicity. We also present a discretization scheme for the continuous dynamics and prove convergence as the time step diminishes. |
3:00 p.m. | Matthias J. Ehrhardt: Inexact algorithms for bilevel learning Variational regularization techniques are dominant in the field of inverse problems. A drawback of these techniques is that they depend on a number of parameters which have to be set by the user. This issue can be approached by machine learning, where we estimate these parameters from data. This is known as “bilevel learning” and has been successfully applied to many tasks, some as small-dimensional as learning a regularization parameter, others as high-dimensional as learning the parameters of neural networks. While mathematically appealing, this strategy leads to a nested optimization problem which is practically challenging, since function values and gradients cannot be computed to high precision. In this talk we discuss new computational approaches to this problem which do not assume exact knowledge of these quantities. It turns out that a clever choice of the accuracy leads to much faster yet stable and robust solutions. |
3:30 p.m. | Break |
4:00 – 6:00 p.m. | Poster session |
Wednesday, January 15th
9:00 a.m. | Jonathan Dong: Random phase retrieval: theory and implementation Phase retrieval consists of recovering a complex-valued signal from intensity-only measurements. Strong results have recently been obtained for random models with i.i.d. sensing matrix components. In this presentation, we will review phase retrieval under a unifying framework covering its different applications, and focus on the random case. We will describe the latest theoretical results and their practical limitations, and conclude with structured random models combining the efficiency of fast Fourier transforms with the robustness of random phase retrieval reconstructions. |
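The random model referred to in this abstract can be sketched in its simplest real-valued form: recover x from intensities y_i = (a_i^T x)^2 with i.i.d. Gaussian a_i, by gradient descent on the intensity loss (a Wirtinger-flow-style iteration). A warm start near the ground truth is assumed purely for illustration; practical methods use a spectral initializer, and all sizes and step sizes below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 160                        # signal size and number of measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))       # i.i.d. Gaussian sensing matrix
y = (A @ x_true) ** 2                 # intensity-only (phaseless) data

def grad(x):
    """Gradient of the intensity loss (1/4m) * sum_i ((a_i^T x)^2 - y_i)^2."""
    z = A @ x
    return A.T @ ((z**2 - y) * z) / m

x = x_true + 0.1 * rng.standard_normal(n)   # assumed warm start near the truth
for _ in range(1000):
    x -= 0.005 * grad(x)

# The global sign is unrecoverable from intensities, so measure error up to sign
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
```

The sign (and, in the complex case, global phase) ambiguity in the last line is intrinsic to phase retrieval: the data y are identical for x and -x.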
9:30 a.m. | Anne Wald: Nano-CT imaging as an inverse problem with inexact forward operator (talk) Tomographic X-ray imaging on the nano-scale is an important tool to visualize the structure of materials such as alloys or biological tissue. Due to the small scale on which the data acquisition takes place, small perturbations caused by the environment become significant and cause a motion of the object relative to the scanner during the scan. Since this motion is hard to estimate, and incorporating it into the reconstruction process strongly increases the numerical effort, we pursue a different approach to a stable reconstruction: we interpret the object motion as a modelling inexactness in comparison to the static model and aim at compensating for this modelling error. Apart from an unknown object motion, there are further properties of the physical setup that cause a deviation from the standard Radon-type forward operator. We discuss recent advances in compensating for different kinds of modelling errors in nano-CT and present reconstructions from measurement data. |
10:00 a.m. | Luca Calatroni: Deep equilibrium mirror descent for learning image regularisation in Poisson inverse problems We consider the framework of deep equilibrium models to learn image regularisers in the context of imaging data corrupted by Poisson noise. In this framework, the Kullback-Leibler divergence is usually considered as the data term, which poses some difficulties due to its lack of Lipschitz-smoothness around zero. By using a mirror descent algorithm and enforcing a Lipschitz-like condition to guarantee convergence of the scheme even in non-L-smooth settings, we propose a training strategy that efficiently learns the regulariser using limited data and reduced computational time. The key ingredients of the proposed approach are a forward step made efficient by means of a backtracking strategy, and a cheap backward step relying on Jacobian-free approximations. Numerical results on exemplar image denoising/deblurring problems and some open questions are presented. This is joint work with C. Daniele, S. Villa (MaLGa, University of Genoa, IT) and S. Vaiter (LJAD, CNRS, FR). |
10:30 a.m. | Break |
11:00 a.m. | Quentin Bertrand: Some challenges around retraining generative models on their own data (talk) Deep generative models have made tremendous progress in modeling complex data, often exhibiting generation quality that surpasses a typical human’s ability to discern the authenticity of samples. Undeniably, a key driver of this success is the massive amount of web-scale data consumed by these models. Due to their striking performance and wide availability, the web will inevitably be increasingly populated with synthetic content. This directly implies that future iterations of generative models will be trained on both clean and artificially generated data from past models. In addition, in practice, synthetic data is often subject to human feedback and curated by users before being used and uploaded online. For instance, many interfaces of popular text-to-image generative models, such as Stable Diffusion or Midjourney, produce several variations of an image for a given query, which can then be curated by the users. In this talk we will discuss the impact of training generative models on mixed datasets, ranging from classical training on real data to self-consuming generative models trained on purely synthetic curated data. |
11:30 a.m. | Julie Digne: Pointwise tools for the analysis of geometric shapes (talk) Digitized geometric shapes are omnipresent in heritage preservation and in the digital creation of industrial parts. They are the result of acquisition processes producing a set of 3D points that may be noisy or only partially cover the shape. In this talk, I will present new tools for shape analysis that highlight interesting local differential and frequency properties. These were previously used as a way to exaggerate details of a shape; here, I will show how this framework also allows for an elegant formulation of shape characteristic lines. |
12:00 – 2:00 p.m. | Lunch break |
2:00 p.m. | Xiaoqun Zhang: Flow-based generative models for medical image synthesis (talk) The synthesis of high-quality medical images is critical for enhancing clinical decision-making, diagnostic accuracy, and treatment planning, as well as for applications such as data augmentation and image quality improvement. Flow-based generative models have demonstrated significant potential in modeling complex data distributions and generating realistic synthetic images. This talk presents two novel approaches that contribute to advancements in flow-based generative modeling for medical image synthesis. The first approach introduces SyMOT-Flow, an invertible transformation model that minimizes the symmetric maximum mean discrepancy between samples from two unknown distributions, incorporating an optimal transport cost as regularization. This ensures short-distance and interpretable mappings, leading to more stable and accurate sample generation. The model is validated through low-dimensional illustrative examples and high-dimensional bi-modality medical image generation tasks. The second approach proposes Bi-DPM (Bi-directional Discrete Process Matching), a novel model for bi-modality image synthesis. Unlike traditional flow-based methods that rely on computationally intensive ordinary differential equation (ODE) solvers, Bi-DPM utilizes forward and backward flows with enhanced consistency over discrete time steps. This results in efficient and high-quality image synthesis guided by paired data. Experimental results on MRI T1/T2 and CT/MRI datasets show that Bi-DPM achieves superior image quality and accurately synthesizes anatomical regions compared to existing methods. These contributions offer practical advancements in flow-based medical image synthesis, addressing computational efficiency and image fidelity while providing tools that can support improved clinical workflows and outcomes. |
2:30 p.m. | Tuomo Valkonen: Differential estimates for fast first-order multilevel nonconvex optimisation (talk) PDE constraints appear in inverse imaging problems as physical models for measurements, while bilevel optimisation can be used for optimal experimental design and parameter learning. Such problems have been traditionally very expensive to solve, but recently, effective single-loop approaches have been introduced, both in our work, as well as in the machine learning community. In this talk, we discuss a simple gradient estimation formalisation for very general single-loop methods that include primal-dual methods for the inner problem, and conventional iterative solvers (Jacobi, Gauss–Seidel, conjugate gradients) for the adjoint problem and PDE constraints. This talk is based on joint work with Neil Dizon, Bjørn Jensen, and Ensio Suonperä. |