Lotfi Hermi
University of Arizona
Publications
Featured research published by Lotfi Hermi.
Pattern Recognition | 2007
Mohamed A. Khabou; Lotfi Hermi; Mohamed Ben Hadj Rhouma
The eigenvalues of the Dirichlet Laplacian are used to generate three different sets of features for shape recognition and classification in binary images. The generated features are rotation-, translation-, and size-invariant. The features are also shown to be tolerant of noise and boundary deformation. These features are used to classify hand-drawn, synthetic, and natural shapes with correct classification rates ranging from 88.9% to 99.2%. The classification was done using only a few features (as few as two in some cases) and either simple feedforward neural networks or a minimum-Euclidean-distance classifier.
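A minimal sketch of the kind of feature extraction described above, assuming a five-point finite-difference discretization of the Dirichlet Laplacian on the pixels of a binary mask and SciPy for the sparse eigenvalue solve; the function name dirichlet_features and the specific ratio features lambda_1/lambda_k are illustrative choices, not the authors' exact pipeline.

    # Sketch: Dirichlet-Laplacian eigenvalue features of a binary shape (illustrative, not the paper's code).
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    def dirichlet_features(mask, k=10, h=1.0):
        """Return ratio features lambda_1/lambda_i (i = 2..k) for a boolean 2-D mask."""
        pts = np.argwhere(mask)
        idx = -np.ones(mask.shape, dtype=int)
        idx[tuple(pts.T)] = np.arange(len(pts))              # number the in-shape pixels
        rows, cols, vals = [], [], []
        for p, (i, j) in enumerate(pts):
            rows.append(p); cols.append(p); vals.append(4.0 / h**2)   # 5-point stencil diagonal
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < mask.shape[0] and 0 <= jj < mask.shape[1] and idx[ii, jj] >= 0:
                    rows.append(p); cols.append(idx[ii, jj]); vals.append(-1.0 / h**2)
                # neighbors outside the shape are dropped, which enforces u = 0 there (Dirichlet condition)
        L = sp.csr_matrix((vals, (rows, cols)), shape=(len(pts), len(pts)))
        lam = np.sort(eigsh(L, k=k, sigma=0, return_eigenvectors=False))  # k smallest eigenvalues
        return lam[0] / lam[1:]                              # translation-, rotation-, and scale-invariant ratios

    # Example: features of a filled disk
    yy, xx = np.mgrid[:64, :64]
    disk = (xx - 32) ** 2 + (yy - 32) ** 2 < 25 ** 2
    print(dirichlet_features(disk))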
Communications in Partial Differential Equations | 2011
Evans M. Harrell; Lotfi Hermi
In this article we prove the equivalence of certain inequalities for Riesz means of eigenvalues of the Dirichlet Laplacian with a classical inequality of Kac. Connections are made via integral transforms including those of Laplace, Legendre, Weyl, and Mellin, and the Riemann–Liouville fractional transform. We also prove new universal eigenvalue inequalities and monotonicity principles for Dirichlet Laplacians as well as certain Schrödinger operators. At the heart of these inequalities are calculations of commutators of operators, sum rules, and monotonic properties of Riesz means. In the course of developing these inequalities we prove new bounds for the partition function and the spectral zeta function (cf. Corollaries 3.5–3.7) and conjecture about additional bounds.
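For orientation, the objects named in this abstract can be written in standard notation (these definitions are stated here for reference, not quoted from the paper): for the Dirichlet eigenvalues $0 < \lambda_1 \le \lambda_2 \le \cdots$ of a domain $\Omega \subset \mathbb{R}^d$, the Riesz mean of order $\sigma$ and the partition function are
\[
R_\sigma(z) = \sum_k (z - \lambda_k)_+^{\sigma}, \qquad Z(t) = \sum_k e^{-\lambda_k t},
\]
and Kac's classical inequality is the bound $Z(t) \le (4\pi t)^{-d/2}\,|\Omega|$ for all $t > 0$.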
Transactions of the American Mathematical Society | 2008
Lotfi Hermi
In this paper, we prove two new Weyl-type upper estimates for the eigenvalues of the Dirichlet Laplacian. As a consequence, we obtain the following lower bounds for its counting function. For $\lambda > \lambda_1$, one has
\[
N(\lambda) > \frac{2}{n+2}\,\frac{1}{H_n}\,(\lambda - \lambda_1)^{n/2}\,\lambda_1^{-n/2}
\quad\text{and}\quad
N(\lambda) > \left(\frac{n+2}{n+4}\right)^{n/2} \frac{1}{H_n}\,\Bigl(\lambda - \bigl(1 + \tfrac{4}{n}\bigr)\lambda_1\Bigr)^{n/2}\,\lambda_1^{-n/2},
\]
where $H_n = \dfrac{2n}{j_{n/2-1,1}^{2}\, J_{n/2}^{2}(j_{n/2-1,1})}$ is a constant which depends only on $n$, the dimension of the underlying space, through the Bessel function $J_{n/2}$ and the first positive zero $j_{n/2-1,1}$ of $J_{n/2-1}$.
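As an illustration only (not part of the paper), the dimensional constant $H_n$ can be evaluated numerically with SciPy's Bessel routines; the bracketing interval used to locate the first Bessel zero is an assumption that is adequate for small $n$.

    # Illustrative evaluation of H_n = 2n / (j_{n/2-1,1}^2 * J_{n/2}(j_{n/2-1,1})^2)
    from scipy.special import jv
    from scipy.optimize import brentq

    def H(n):
        nu = n / 2.0 - 1.0                                    # order whose first positive zero is needed
        j1 = brentq(lambda x: jv(nu, x), nu + 0.5, nu + 4.0)  # j_{nu,1}; bracket valid for small n
        return 2.0 * n / (j1**2 * jv(n / 2.0, j1)**2)

    for n in (2, 3, 4, 5):
        print(n, H(n))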
Rocky Mountain Journal of Mathematics | 2008
Mark S. Ashbaugh; Lotfi Hermi
Using spherical harmonics, rearrangement techniques, the Sobolev inequality, and Chiti's reverse Hölder inequality, we obtain extensions of a classical result of Payne, Pólya, and Weinberger bounding the gap between consecutive eigenvalues of the Dirichlet Laplacian in terms of moments of the preceding ones. The extensions yield domain-dependent inequalities.
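For context, the classical Payne–Pólya–Weinberger bound referred to here can be stated as follows (standard formulation, not quoted from the paper): for the Dirichlet eigenvalues of a domain in $\mathbb{R}^n$,
\[
\lambda_{k+1} - \lambda_k \le \frac{4}{nk} \sum_{i=1}^{k} \lambda_i .
\]
The extensions described in the abstract yield inequalities of this kind whose constants depend on the domain.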
SoutheastCon | 2007
Mohamed A. Khabou; Mohamed Ben Haj Rhouma; Lotfi Hermi
The eigenvalues of the Neumann Laplacian are used to generate three different sets of features for shape recognition and classification in binary images. The generated features are rotation-, translation-, and size-invariant and are shown to be tolerant of boundary deformation. The effectiveness of these features is demonstrated by using them to classify 5 types of computer-generated and hand-drawn shapes. The classification was done using 4 to 20 features fed to a simple feedforward neural network. Correct classification rates ranging from 94.4% to 100% were obtained on computer-generated shapes and 67.5% to 95.5% on hand-drawn shapes.
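A minimal sketch of the Neumann counterpart, assuming a pixel-graph discretization in which each pixel couples only to its in-shape neighbors, so the natural (reflecting) boundary condition is built in; the dense eigenvalue solve and the function name neumann_features are illustrative choices for small images, not the authors' implementation.

    # Sketch: Neumann-Laplacian eigenvalue features of a binary shape (illustrative).
    import numpy as np

    def neumann_features(mask, k=10, h=1.0):
        """Return ratio features mu_2/mu_i built from the first nonzero Neumann eigenvalues of a boolean mask."""
        pts = np.argwhere(mask)
        idx = -np.ones(mask.shape, dtype=int)
        idx[tuple(pts.T)] = np.arange(len(pts))
        L = np.zeros((len(pts), len(pts)))
        for p, (i, j) in enumerate(pts):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < mask.shape[0] and 0 <= jj < mask.shape[1] and idx[ii, jj] >= 0:
                    L[p, p] += 1.0 / h**2            # diagonal counts only in-shape neighbors (Neumann condition)
                    L[p, idx[ii, jj]] -= 1.0 / h**2  # off-diagonal coupling
        mu = np.sort(np.linalg.eigvalsh(L))          # mu[0] is (numerically) zero: the constant mode
        return mu[1] / mu[2:k + 1]                   # invariant ratios from the nonzero eigenvalues

    # Example: features of a filled square
    yy, xx = np.mgrid[:48, :48]
    square = (np.abs(xx - 24) < 15) & (np.abs(yy - 24) < 15)
    print(neumann_features(square))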
Advances in Imaging and Electron Physics | 2011
Mohamed Ben Haj Rhouma; Mohamed A. Khabou; Lotfi Hermi
Recently there has been a surge in the use of the eigenvalues of linear operators in problems of pattern recognition. In this chapter, we discuss the theoretical, numerical, and experimental aspects of using four well-known linear operators and their eigenvalues for shape recognition. In particular, the eigenvalues of the Laplacian operator under Dirichlet and Neumann boundary conditions, as well as those of the clamped plate and the buckling of a clamped plate, are examined. Since the ratios of eigenvalues for each of these operators are translation, rotation, and scale invariant, four feature vectors are extracted for the purpose of shape recognition. These feature vectors are then fed into a basic neural network for training and for measuring the performance of each feature vector; all four were shown to be reliable features for shape recognition. We also review the literature on finite difference schemes for these operators and summarize key facts about their eigenvalues that are relevant to image recognition.
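For reference, the four eigenvalue problems discussed in the chapter have the standard formulations below (notation is ours, not the chapter's):
\[
\begin{aligned}
&\text{Dirichlet:} && -\Delta u = \lambda u \ \text{in } \Omega, \quad u = 0 \ \text{on } \partial\Omega,\\
&\text{Neumann:} && -\Delta u = \mu u \ \text{in } \Omega, \quad \partial u/\partial\nu = 0 \ \text{on } \partial\Omega,\\
&\text{Clamped plate:} && \Delta^2 u = \Gamma u \ \text{in } \Omega, \quad u = \partial u/\partial\nu = 0 \ \text{on } \partial\Omega,\\
&\text{Buckling of a clamped plate:} && \Delta^2 u = -\Lambda\,\Delta u \ \text{in } \Omega, \quad u = \partial u/\partial\nu = 0 \ \text{on } \partial\Omega.
\end{aligned}
\]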
Pacific Journal of Mathematics | 2004
Mark S. Ashbaugh; Lotfi Hermi
Journal of Functional Analysis | 2008
Evans M. Harrell; Lotfi Hermi
arXiv: Spectral Theory | 2007
Mark S. Ashbaugh; Lotfi Hermi
Archive | 2003
Mark S. Ashbaugh; Lotfi Hermi