Detailed, parameterised models of human bodies, hands and faces have been developed extensively. The foot, however, has not been explored in the same detail: available data are extremely limited, and existing models are low-fidelity and do not capture articulation.
We address this by developing FIND, an implicit foot model that represents shape, pose and texture as deformation fields over the surface of a template foot. We also improve the available data by collecting Foot3D, a dataset of 3D foot scans made available to the research community.
We construct a coordinate-based model that defines, for every point on the surface of a template mesh, a deformation and a colour value.
The model takes as input a 3D point on that surface, together with latent codes describing the shape, articulation (pose) and texture of the foot.
At inference time, we evaluate the model at every surface point of a chosen template mesh, deforming it into the target foot.
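To make this concrete, below is a minimal PyTorch sketch of such a coordinate-based deformation and colour field. All names, layer sizes and latent dimensions here (ImplicitFootField, latent_dim=32, hidden=256) are illustrative assumptions, not the released FIND architecture.

import torch
import torch.nn as nn

class ImplicitFootField(nn.Module):
    # Maps a template-surface point plus shape/pose/texture codes to a
    # per-point displacement and an RGB colour.
    def __init__(self, latent_dim=32, hidden=256):
        super().__init__()
        in_dim = 3 + 3 * latent_dim  # xyz + shape, pose and texture codes
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.deform_head = nn.Linear(hidden, 3)  # displacement field
        self.colour_head = nn.Linear(hidden, 3)  # RGB field

    def forward(self, xyz, z_shape, z_pose, z_tex):
        # xyz: (N, 3) template-surface points; each code: (latent_dim,)
        codes = torch.cat([z_shape, z_pose, z_tex]).expand(xyz.shape[0], -1)
        h = self.backbone(torch.cat([xyz, codes], dim=-1))
        return self.deform_head(h), torch.sigmoid(self.colour_head(h))

# Inference: displace every template vertex to produce the target foot.
model = ImplicitFootField()
template_verts = torch.rand(1000, 3)  # stand-in for the template mesh vertices
z_shape, z_pose, z_tex = (torch.zeros(32) for _ in range(3))
offsets, colours = model(template_verts, z_shape, z_pose, z_tex)
posed_verts = template_verts + offsets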
[Figure: qualitative comparison of foot reconstructions. Panels, left to right: GT | FIND | PCA | SUPR]
We fit our model to a selection of 3D validation scans and compare the reconstruction quality against a baseline PCA model (produced from FoldingNet) and against SUPR, a rigged generative foot model released at ECCV 2022, after publication of this paper. FIND shows significant qualitative and quantitative improvements over both.
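As a rough illustration of fitting to a scan, the hedged sketch below optimises the latent codes of the model from the previous sketch against a scan point cloud using a simple symmetric chamfer loss. The paper's actual fitting losses, regularisers and schedule may differ; scan_points here is a stand-in.

# Reuses model and template_verts from the sketch above.
def chamfer(a, b):
    # a: (N, 3), b: (M, 3); mean nearest-neighbour distance in both directions
    d = torch.cdist(a, b)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

scan_points = torch.rand(5000, 3)  # stand-in for a validation scan
z_shape = torch.zeros(32, requires_grad=True)
z_pose = torch.zeros(32, requires_grad=True)
z_tex = torch.zeros(32, requires_grad=True)
opt = torch.optim.Adam([z_shape, z_pose, z_tex], lr=1e-2)

for step in range(200):
    offsets, _ = model(template_verts, z_shape, z_pose, z_tex)
    loss = chamfer(template_verts + offsets, scan_points)
    opt.zero_grad()
    loss.backward()
    opt.step()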
We acknowledge the collaboration and financial support of Trya Srl.
@inproceedings{boyne2022find,
  title     = {FIND: An Unsupervised Implicit 3D Model of Articulated Human Feet},
  author    = {Boyne, Oliver and Charles, James and Cipolla, Roberto},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2022}
}