Recent technological progress in the acquisition, modeling and processing of 3D data has led to the proliferation of large databases of 3D objects. Consequently, techniques for content-based 3D retrieval have become necessary. In this paper, we introduce a new method for 3D object recognition and retrieval based on a set of binary images called CLI (Characteristic Level Images). We propose a 3D indexing and search approach based on the similarity between characteristic level images, each of which is indexed by its Hu moments. To measure the similarity between two 3D objects, we compute the Hausdorff distance between their sets of descriptor vectors. The performance of this new approach is evaluated on a set of 3D objects from a well-known database, the NTU (National Taiwan University) database.
A number of 3D shape retrieval methods have been proposed; the reader can refer to [1][2] for a survey. Several methods characterize intrinsic attributes of 3D shapes, such as the distances to the center [3][4][5] or the curvature [6], and project them onto a sphere to form spherical functions. Spherical harmonics were first introduced for 3D model retrieval by Vranic et al. [3][5]. Tony Tung et al. [7] introduced a technique that relies on matching graph representations of 3D objects: the multiresolution Reeb graph (MRG) is used to represent 3D mesh models and serves as a descriptor of the 3D object. Vandeborre et al. [8] propose to use full three-dimensional information: the 3D objects are represented as mesh surfaces and described by 3D shape descriptors; their results show the limitation of the approach when the mesh is not regular.
Recent investigations [9][10][11] show that view-based methods with pose-normalization preprocessing achieve better performance in retrieving rigid models than other approaches. 2D view-based methods [12][13] consider the 3D shape as a collection of 2D projections taken from canonical viewpoints. Each projection is then described by standard 2D image descriptors such as Fourier descriptors [13] or Zernike moments [12]. Chen et al. [14][15] defend the intuitive idea that two 3D models are similar if they also look similar from different angles. Therefore, they use 100 orthogonal projections of an object and encode them with Zernike moments and Fourier descriptors. They also point out that they obtain better results than other well-known descriptors such as the MPEG-7 3D Shape Descriptor. Filali et al. [16] proposed a framework for 3D model indexing based on 2D views. The goal of their framework is to provide a method for the optimal selection of 2D views from a 3D model, and a probabilistic Bayesian method for indexing 3D models from these views.
In the present paper, inspired by the work presented in [9][16][17][18], we introduce a new descriptor for 3D shape. We generate a set of images given by the intersection of the 3D shape with a set of parallel, specific planes. We first normalize the shape to guarantee invariance to affine transformations, then we extract a number of images called LI (Level Images); using the K-means method, this set of images is reduced to the CLI set (Characteristic Level Images). Each image is indexed with the Hu moment descriptor. Measuring the similarity between two 3D objects then reduces to measuring the similarity between their sets of CLI (a sketch of this matching step is given below). We compare the proposed descriptor to two well-known descriptors, the 3D Zernike moments and the 3D surface moment invariants, and analyze its performance. This paper is organized as follows: Section 2 presents the two descriptors against which we compare ours. The proposed descriptor is presented in Section 3. Section 4 reports the experimental results. Finally, the conclusion summarizes the ideas of the present work.
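As an illustration of the matching step, the following minimal Python sketch computes a symmetric Hausdorff distance between two sets of descriptor vectors (one vector per CLI). The array shapes, the random example data and the use of the Euclidean distance between descriptors are assumptions made for the example, not a specification of the implementation used in the experiments.

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two sets of descriptor vectors.

    A, B: arrays of shape (n_a, d) and (n_b, d), one row per CLI descriptor.
    """
    # Pairwise Euclidean distances between every descriptor in A and B.
    diff = A[:, None, :] - B[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=2))
    # Directed distances: worst-case best match in each direction.
    h_ab = d.min(axis=1).max()
    h_ba = d.min(axis=0).max()
    return max(h_ab, h_ba)

# Hypothetical example: two objects described by 7-dimensional Hu moment vectors.
obj1 = np.random.rand(10, 7)   # 10 CLI descriptors for object 1
obj2 = np.random.rand(12, 7)   # 12 CLI descriptors for object 2
print(hausdorff_distance(obj1, obj2))
```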
The 3D Zernike functions are written in Cartesian coordinates [19] using the harmonic polynomials $e_l^m(\mathbf{x})$:

$$Z_{nl}^{m}(\mathbf{x}) = \sum_{\nu=0}^{k} q_{nl}^{\nu}\,|\mathbf{x}|^{2\nu}\, e_{l}^{m}(\mathbf{x}),$$

while restricting $-l \le m \le l$ and $0 \le l \le n$ so that $(n-l)$ is an even number, with $2k = n - l$. The coefficients $q_{nl}^{\nu}$ are determined to guarantee orthonormality. We are now able to define the 3D Zernike moments of a 3D object defined by $f$ as

$$\Omega_{nl}^{m} = \frac{3}{4\pi}\int_{|\mathbf{x}|\le 1} f(\mathbf{x})\,\overline{Z_{nl}^{m}(\mathbf{x})}\,d\mathbf{x}.$$

Note that the functions $Z_{nl}^{m}$ can be written in a more compact form as a linear combination of monomials of order up to $n$. Finally, the 3D Zernike moments of an object can be written as a linear combination of geometrical moments of order up to $n$:

$$\Omega_{nl}^{m} = \frac{3}{4\pi}\sum_{r+s+t\le n}\chi_{nlm}^{rst}\, M_{rst},$$

where $M_{rst}$ denotes the geometrical moment of the object scaled to fit in the unit ball:

$$M_{rst} = \int_{|\mathbf{x}|\le 1} f(\mathbf{x})\, x^{r} y^{s} z^{t}\, d\mathbf{x},$$

where $\mathbf{x}$ denotes the vector $(x, y, z)$. Collecting the moments $\Omega_{nl}^{m}$, $m = -l,\dots,l$, into $(2l+1)$-dimensional vectors $\boldsymbol{\Omega}_{nl} = (\Omega_{nl}^{-l},\dots,\Omega_{nl}^{l})$ defines the 3D Zernike descriptors as the norms of these vectors, $F_{nl} = \|\boldsymbol{\Omega}_{nl}\|$, which are invariant under rotation.
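Since the 3D Zernike moments reduce to a linear combination of the geometrical moments $M_{rst}$, the sketch below illustrates how these moments can be computed for a binary voxel grid rescaled to the unit ball. The voxel representation and the rescaling strategy (centroid translation and scaling by the farthest occupied voxel) are assumptions made for this example; the combination coefficients $\chi_{nlm}^{rst}$ of [19] are not reproduced here.

```python
import numpy as np

def geometric_moments(voxels, order):
    """Geometric moments M_rst (r + s + t <= order) of a binary voxel grid,
    rescaled so that the occupied voxels fit inside the unit ball."""
    coords = np.argwhere(voxels > 0).astype(float)
    # Center on the centroid and scale so the farthest voxel lies on the unit sphere.
    coords -= coords.mean(axis=0)
    coords /= np.linalg.norm(coords, axis=1).max()
    x, y, z = coords[:, 0], coords[:, 1], coords[:, 2]
    M = {}
    for r in range(order + 1):
        for s in range(order + 1 - r):
            for t in range(order + 1 - r - s):
                M[(r, s, t)] = np.sum(x**r * y**s * z**t)
    return M
```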
Dong Xu and Hua Li [21] used 3D surface moment invariants as shape descriptors for the representation of free-form surfaces. We consider a 3D surface triangulation $S$ consisting of triangles $T_i$, $i = 1,\dots,N$. The surface moments of order $p+q+r$ are defined as

$$M_{pqr} = \iint_{S} x^{p} y^{q} z^{r}\, dS.$$

The centroid of the 3D surface can be determined from the zeroth- and first-order moments by $\bar{x} = M_{100}/M_{000}$, $\bar{y} = M_{010}/M_{000}$, $\bar{z} = M_{001}/M_{000}$; the central moments are then defined as

$$\mu_{pqr} = \iint_{S} (x-\bar{x})^{p} (y-\bar{y})^{q} (z-\bar{z})^{r}\, dS,$$

so the central surface moments are invariant under translation. Next, we normalize the surface moments by $\eta_{pqr} = \mu_{pqr} / \mu_{000}^{(p+q+r+2)/2}$, which also makes them invariant under scaling. To construct surface moment invariants under rotation, D. Xu [21] uses four geometric primitives to build six invariants consisting of three fourth-order, two third-order and one mixed-order surface moment invariants.
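To make the construction concrete, the sketch below approximates the surface moments of a triangle mesh by evaluating each monomial at the triangle centroids and weighting by triangle area, then forms translation- and scale-normalized central moments. This centroid approximation (rather than exact integration over each triangle) and the exact normalization exponent are assumptions for illustration and may differ from the formulation of [21].

```python
import numpy as np

def surface_moments(vertices, faces, order=2):
    """Approximate surface moments M_pqr of a triangle mesh: each monomial is
    evaluated at the triangle centroid and weighted by the triangle area."""
    tri = vertices[faces]                          # (n_tri, 3, 3)
    centroids = tri.mean(axis=1)                   # triangle centroids
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    M = {}
    for p in range(order + 1):
        for q in range(order + 1 - p):
            for r in range(order + 1 - p - q):
                M[(p, q, r)] = np.sum(areas * centroids[:, 0]**p
                                            * centroids[:, 1]**q
                                            * centroids[:, 2]**r)
    return M

def normalized_central_moments(vertices, faces, order=2):
    """Translation- and scale-normalized central surface moments (assumed
    normalization: divide by the surface area raised to (p+q+r+2)/2)."""
    M = surface_moments(vertices, faces, order=1)
    centroid = np.array([M[(1, 0, 0)], M[(0, 1, 0)], M[(0, 0, 1)]]) / M[(0, 0, 0)]
    mu = surface_moments(vertices - centroid, faces, order)
    return {k: v / mu[(0, 0, 0)] ** ((sum(k) + 2) / 2.0) for k, v in mu.items()}
```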
We normalize the 3D shape into a canonical coordinate frame and characterize the shape by a set of characteristic level images, noted the CLI set. Each image of the CLI set is described by Hu's moment invariants [20]. The resulting set of moment invariants is used as the feature vector.
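A minimal sketch of this per-image indexing step, using OpenCV's moment routines, is given below. The binary image input and the logarithmic scaling of the seven Hu invariants are assumptions commonly made in practice, not necessarily the exact choices of our implementation.

```python
import cv2
import numpy as np

def cli_descriptor(binary_image):
    """Seven Hu moment invariants of one characteristic level image (CLI).

    binary_image: 2D uint8 array where non-zero pixels belong to the level image.
    Log-scaling (assumed here) is commonly used to compress the dynamic range.
    """
    m = cv2.moments(binary_image, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# One feature vector per CLI; the object descriptor is the stack of these vectors:
# descriptors = np.vstack([cli_descriptor(img) for img in cli_images])
```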
A 3D object is generally given in arbitrary orientation, scale and position in 3D space. As most 2D/3D shape descriptors are not invariant to these transformations, the shape must first be normalized into a canonical coordinate frame before the level images are extracted.
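A common way to obtain such a canonical frame is a PCA-based normalization, sketched below under the assumption that the shape is given as a vertex cloud; this simple variant is an illustration only, and an area-weighted version computed on the mesh triangles is often preferred in practice.

```python
import numpy as np

def normalize_pose(vertices):
    """PCA-based pose normalization: translate the centroid to the origin,
    align the principal axes with the coordinate axes, and scale to the unit ball."""
    v = vertices - vertices.mean(axis=0)      # translation invariance
    cov = np.cov(v, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)      # eigenvalues in ascending order
    v = v @ eigvec[:, ::-1]                   # largest variance along the x axis
    v /= np.linalg.norm(v, axis=1).max()      # scale invariance
    return v
```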