Improving self-supervised 3D face reconstruction with few-shot transfer learning
Abstract
Self-supervised models for 3D face reconstruction from a single monocular image have improved over the years, but because their training relies mainly on a photometric loss, they struggle to predict a 3D face with the correct head pose, which can be critical for some applications. Supervised methods, on the other hand, predict more accurate head poses but require large amounts of annotated data. In this paper, we use transfer learning to adapt a pre-trained face autoencoder so that it predicts, from a face image, its Projected Normalized Coordinate Code (PNCC), a 2D image that encodes head pose and geometry information. Our PNCC predictor can be trained with only a few annotated samples. We then improve a self-supervised 3D face reconstruction method by incorporating the predicted PNCC into its architecture. Compared to the original self-supervised architecture, our method predicts head pose more accurately.
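To make the PNCC representation concrete: each vertex of the mean face shape is assigned an RGB color equal to its normalized 3D coordinates (the Normalized Coordinate Code), and the posed face is then rendered with these colors, so every pixel encodes both visible geometry and head pose. The following is a minimal sketch of this idea, not the paper's implementation: it assumes a weak-perspective camera, replaces proper triangle rasterization with per-vertex splatting, and all function and parameter names (`normalized_coordinate_code`, `render_pncc`, `scale`, etc.) are illustrative assumptions.

```python
import numpy as np

def normalized_coordinate_code(mean_shape):
    """Map mean-face vertex coordinates to RGB colors in [0, 1], per axis."""
    mn, mx = mean_shape.min(axis=0), mean_shape.max(axis=0)
    return (mean_shape - mn) / (mx - mn)          # (N, 3) vertex colors

def render_pncc(vertices, ncc, R, t, scale, size=256):
    """Splat posed vertices into an image, coloring each pixel by its NCC.

    vertices : (N, 3) reconstructed face geometry
    ncc      : (N, 3) colors from normalized_coordinate_code(mean_shape)
    R, t     : head pose (3x3 rotation, 3-vector translation in pixels)
    scale    : weak-perspective scale factor
    """
    posed = scale * vertices @ R.T + t            # apply rigid pose + scale
    img = np.zeros((size, size, 3), dtype=np.float32)
    zbuf = np.full((size, size), -np.inf)         # simple depth buffer
    xs = np.clip(posed[:, 0].astype(int), 0, size - 1)
    ys = np.clip(posed[:, 1].astype(int), 0, size - 1)
    for x, y, z, c in zip(xs, ys, posed[:, 2], ncc):
        if z > zbuf[y, x]:                        # keep the closest vertex
            zbuf[y, x] = z
            img[y, x] = c
    return img
```

A real renderer would rasterize the mesh triangles with a z-buffer and interpolate the NCC colors across each face; the splatting above only conveys why the resulting 2D image is tied to head pose, which is what the supervision in this paper exploits.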
Domains
Computer Science [cs]

Origin: Files produced by the author(s)