Learning disentangled representations from unlabelled data is a fundamental problem in machine learning. Solving it may unlock other problems, such as generalization, interpretability, or fairness. Although remarkably challenging to solve in theory, disentanglement is often achieved in practice through prior matching. Furthermore, recent works have shown that prior matching approaches can be enhanced by leveraging geometrical considerations, e.g., by learning representations that preserve geometric features of the data, such as distances or angles between points. However, matching the prior while preserving geometric features is challenging, as a mapping that fully preserves these features while aligning the data distribution with the prior does not exist in general. To address these challenges, we introduce a novel approach to disentangled representation learning based on quadratic optimal transport. We formulate the problem using Gromov-Monge maps that transport one distribution onto another with minimal distortion of predefined geometric features, preserving them as much as can be achieved. To compute such maps, we propose the Gromov-Monge Gap (GMG), a regularizer quantifying whether a map moves a reference distribution with minimal geometry distortion. We demonstrate the effectiveness of our approach for disentanglement across four standard benchmarks, outperforming other methods leveraging geometric considerations.
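As a sketch of the quadratic optimal transport formulation the abstract refers to (notation here is the standard Gromov-Monge setup, not taken from this abstract): a map $T$ pushing a distribution $\mu$ onto $\nu$ is sought that minimally distorts predefined geometric features, encoded by pairwise cost functions $c_{\mathcal{X}}$ and $c_{\mathcal{Y}}$ on the source and target spaces.

```latex
% Gromov-Monge problem: transport mu onto nu with minimal
% distortion of the pairwise geometric costs c_X, c_Y.
\Delta_\mu(T) \coloneqq
\iint \bigl( c_{\mathcal{X}}(x, x') - c_{\mathcal{Y}}(T(x), T(x')) \bigr)^2
\,\mathrm{d}\mu(x)\,\mathrm{d}\mu(x'),
\qquad
\min_{T \,:\, T_{\#}\mu = \nu} \Delta_\mu(T).

% A gap-style regularizer (a plausible reading of the GMG described
% above): the excess distortion of T over the best map with the
% same pushforward, so it vanishes iff T is geometry-optimal.
\mathrm{GMG}_\mu(T) \coloneqq
\Delta_\mu(T) \;-\; \min_{T' \,:\, T'_{\#}\mu = T_{\#}\mu} \Delta_\mu(T').
```

By construction the gap is nonnegative and equals zero exactly when $T$ moves $\mu$ onto its own pushforward with the least possible geometric distortion, which is what makes it usable as a regularizer rather than a hard constraint.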
*Equal contribution
**Equal advising
†CREST-ENSAE
‡Helmholtz Munich
§TU Munich
¶MCML
††Tübingen AI Center