Learning Unsupervised Shape Recovery from Images
This thesis presents an approach to 3D shape reconstruction from images based on a Generative Adversarial Network (GAN) called ShapeTexGAN. The work focuses on generating high-quality 3D models from unordered point clouds or 2D images without requiring detailed camera information, with the aim of improving the efficiency and quality of 3D reconstruction when only limited input data is available.
Details
Type of Work: Master's Thesis
Main Author: Jan Petrik
Affiliation: ETH Zurich
Supervisors: Radek Danecek, Markus Gross
Date: 20th October 2020
Journal: None
Online: On Demand
Gallery
- Interpolation in the ShapeTexGAN latent space between two 3D models. The dots mark specific polygons on the polygon mesh; these polygons remain in place throughout the interpolation, so the generated 3D models stay in registration, indicating a stable topological structure across the transition.
- Reconstruction results of ShapeTexGAN in the most challenging scenario: the input is a single silhouette image of an object (a car, a chair, and a human torso) and the output is a 3D model. The reconstruction is performed from this silhouette alone, without any additional parameters.
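The registration property described in the gallery can be illustrated with a minimal sketch. The code below is not the thesis implementation: the generator, latent dimension, and mesh sizes are all hypothetical stand-ins (a simple linear decoder over a fixed triangle topology), chosen only to show why a generator with fixed output topology keeps its meshes in vertex-wise correspondence during latent interpolation.

```python
import numpy as np

# Hypothetical stand-in for the ShapeTexGAN generator: it maps a latent
# code z to vertex positions over a FIXED mesh topology. The real
# generator is a neural network; a linear map suffices to show the idea.
rng = np.random.default_rng(0)
latent_dim, n_vertices = 8, 100
W = rng.standard_normal((n_vertices * 3, latent_dim))

# Triangle indices are shared by every generated mesh, so any two outputs
# are in vertex-wise correspondence (registration): vertex i always names
# the same surface point on every decoded shape.
faces = np.array([[i, i + 1, i + 2] for i in range(n_vertices - 2)])

def generate(z):
    """Decode a latent code into an (n_vertices, 3) vertex array."""
    return (W @ z).reshape(n_vertices, 3)

# Linear interpolation between two latent codes: the faces never change,
# only the vertex positions move, so tracking verts[i] across t traces a
# single polygon through the whole interpolation.
z0 = rng.standard_normal(latent_dim)
z1 = rng.standard_normal(latent_dim)
trajectory = [generate((1 - t) * z0 + t * z1)
              for t in np.linspace(0.0, 1.0, 5)]
```

Because the toy decoder is linear, interpolating latents interpolates vertices exactly; for a real neural generator the path through shape space is smooth but nonlinear, while the fixed topology still guarantees registration.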