Deep Geometric Texture Synthesis
Research Summary
The paper introduces a generative framework for learning and transferring geometric textures on triangle meshes directly from a single reference model. Unlike prior approaches that rely on surface parameterization, normal-only displacement maps, or genus-restricted templates, the proposed method operates on the intrinsic structure of the mesh and is oblivious to topology. The core idea is to treat each triangular face together with its three one-ring neighboring faces as a fixed-size receptive field. For every edge of a face, a rotation- and translation-invariant 4-dimensional feature is extracted: the edge length and the opposite vertex projected onto a local coordinate system defined by the face normal, the edge direction, and their cross product. These features are first processed by a 1×1 convolution to obtain an order-invariant face embedding, after which a symmetric face convolution aggregates information from the three neighboring faces using permutation-invariant operations (e.g., averaging).
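The per-edge feature described above can be sketched as follows. This is an illustration under stated assumptions, not the paper's exact convention: in particular, projecting the opposite vertex relative to the edge midpoint, and the ordering of the frame axes, are choices made here for clarity.

```python
import numpy as np

def face_edge_features(verts):
    """Compute a 4-D feature for each of a triangle's three edges:
    the edge length plus the opposite vertex expressed in a local
    frame (face normal, edge direction, their cross product).
    verts: (3, 3) array of the triangle's vertex positions.
    Returns an array of shape (3, 4), one row per edge.
    NOTE: the projection origin (edge midpoint) is an assumption."""
    n = np.cross(verts[1] - verts[0], verts[2] - verts[0])
    n = n / np.linalg.norm(n)                  # unit face normal
    feats = []
    for i in range(3):
        a, b = verts[i], verts[(i + 1) % 3]    # edge endpoints
        opp = verts[(i + 2) % 3]               # opposite vertex
        e = b - a
        length = np.linalg.norm(e)
        e_hat = e / length                     # unit edge direction
        t = np.cross(n, e_hat)                 # completes the local frame
        rel = opp - 0.5 * (a + b)              # relative to edge midpoint
        feats.append([length, rel @ n, rel @ e_hat, rel @ t])
    return np.asarray(feats)
```

Because the feature is built only from lengths and dot products against a frame that rotates with the face, it is unchanged under any rigid motion of the triangle, which is exactly the invariance the paper requires.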
Training proceeds hierarchically across multiple resolutions. Starting from a low-resolution template (e.g., an icosahedron), the reference mesh is repeatedly subdivided and optimized so that each level provides a multi-scale training set. At each scale, Gaussian noise is added to vertex positions and fed into the generator. The generator's final layer predicts a displacement vector for each vertex of a face; the per-vertex displacement is obtained by averaging the contributions of all incident faces. The displaced mesh is then subdivided and passed to the next, finer scale. A discriminator at each scale, built from the same face-convolution layers, learns to distinguish real patches (taken from the multi-scale reference) from synthesized ones, thereby enforcing that local statistics match the reference distribution.
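The step that turns per-face predictions into a single displacement per vertex can be sketched as a scatter-and-average over incident faces; the function and array names here are illustrative, not the paper's API.

```python
import numpy as np

def average_vertex_displacements(faces, face_disp, n_verts):
    """Average per-face, per-corner displacement predictions into one
    displacement per vertex (sketch of the aggregation step).
    faces:     (F, 3) int array of vertex indices per triangle.
    face_disp: (F, 3, 3) array, a 3-D displacement predicted for each
               of the 3 corners of each face.
    Returns a (n_verts, 3) array of averaged vertex displacements."""
    disp = np.zeros((n_verts, 3))
    count = np.zeros(n_verts)
    for f, tri in enumerate(faces):
        for c, v in enumerate(tri):
            disp[v] += face_disp[f, c]   # accumulate this face's vote
            count[v] += 1                # one incident face seen
    return disp / np.maximum(count, 1)[:, None]
```

A vertex shared by several faces thus receives the mean of all their predictions, which keeps the displaced mesh consistent across face boundaries.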
Because the network predicts full 3-D vertex displacements, it can move vertices not only along the normal but also tangentially, enabling synthesis of complex geometric textures that cannot be expressed by 2-D displacement maps. The hierarchical GAN architecture allows the model to first capture coarse shape variations and then progressively add fine-grained details, similar to multi-scale texture synthesis in images.
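To see what a normal-only displacement map forfeits, any displacement vector can be split into a component along the surface normal and a tangential remainder; a normal-only map can represent only the first term, while the paper's generator predicts the full vector. A minimal illustration (not the paper's code):

```python
import numpy as np

def split_displacement(d, n):
    """Split displacement d into its component along the (possibly
    non-unit) normal n and the tangential remainder.
    A normal-only displacement map captures only the first part."""
    n = n / np.linalg.norm(n)        # normalize the normal
    d_normal = (d @ n) * n           # projection onto the normal
    return d_normal, d - d_normal    # normal part, tangential part
```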
Experiments demonstrate successful transfer of a variety of geometric textures (spikes, cubic stylizations, wave-like deformations) from a single source mesh to target meshes with different triangulations and genera (including genus 1 to genus 4). The method produces results whose local statistics are visually indistinguishable from those of the reference, while requiring no UV mapping or explicit surface parameterization. Moreover, by sampling different latent codes, the model can generate diverse variations of the learned texture, confirming its probabilistic nature.
The paper's contributions are threefold: (1) a rotation- and translation-invariant face feature representation coupled with symmetric face convolutions, (2) a hierarchical GAN that learns multi-scale geometric texture distributions from a single mesh, and (3) a genus-oblivious texture transfer pipeline that synthesizes new geometry rather than copying patches. Limitations include the reliance on local patches, which may lead to global consistency issues for large-scale deformations, and the fact that a single reference limits style diversity. Future work is suggested in integrating global context, training on multiple exemplars, and extending the framework to modify mesh topology (e.g., creating holes) in addition to vertex displacement. This research opens new avenues for automatic, high-fidelity mesh detailing in graphics, game asset creation, and scientific visualization.