ALGORITHM FOR DYNAMIC UNLIMITED COMPARISON OF FEATURES FOR FACE RECOGNITION

Arzieva J.T.

 

ABSTRACT

Over the last three decades, face detection and recognition have become a very active and substantial part of image processing research. Frontal-view face recognition has shown promising results, but under many constraints. Partial face recognition without such constraints can also be termed Unconstrained Dynamic Feature Matching (U-DFM); it does not require prior knowledge of the angle, direction, or view of the face.


 

Keywords: unconstrained dynamic feature matching, fully convolutional network, ambiguity-sensitive matching classifier, partial face recognition.


 

I. Introduction  

Face detection and recognition is the mechanism of correctly identifying a face by using features extracted from the images in a database. Face recognition has become a popular area of research due to its wide range of applications in everyday life. The most popular applications are video surveillance, identification systems, access control, and pervasive computing [1].

The purpose of image processing can be separated into five steps, as follows:

  1. Visualization – reveal objects in an input image that are not directly observable.
  2. Image sharpening and restoration – produce a better-quality image from the input image.
  3. Image retrieval – locate the region of interest in an input image.
  4. Measurement/enumeration of patterns – measure the various objects in an input image.
  5. Image recognition – distinguish the various objects in an input image.

 

Figure 1. Partial face images are produced in unconstrained environments

 

Face recognition methods can be grouped into four categories [4]:

  1. Holistic Matching
  2. Feature-based 
  3. Model-Based 
  4. Hybrid

1. Holistic matching: In this methodology, the face-capturing system [5] takes a whole face image as input. Prominent examples of holistic methods are Principal Component Analysis (PCA), Eigenfaces, Linear Discriminant Analysis (LDA) [6], and Independent Component Analysis; a small eigenfaces sketch follows this list.

2. Feature-based methods: This methodology extracts features such as the eyes, nose, eyebrows, and marks on the face [7]. Such features are called local features. These local features and local statistics are used together for face recognition; a structural classifier takes the local features as input and yields better face recognition results. A difficult part of feature extraction is feature restoration: features extracted at different angles make face recognition challenging. Feature extraction methods are as follows: 1) template-based features; 2) the generic method, based on edges, marks, and curves; 3) geometrical constraints such as angles and direction.

3. Model-based methods: Three-dimensional and two-dimensional face recognition are considered model-based methods. The purpose of these algorithms is to build a model of the face. The three-dimensional methodologies are more complex, as they capture the three-dimensional nature of the human face.

4. Hybrid methods: As the name suggests, this method combines holistic and feature-based methods. For complex methodologies such as a three-dimensional face recognition system, hybrid methods are used and produce noticeably better results. A 3D capture of the face allows the system to record the curves of the eyes, nose, and mouth, and to note moles and marks on the face.
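As a concrete illustration of the holistic category above, the following Python sketch computes eigenfaces by PCA over a small in-memory gallery of flattened, equally sized grayscale face images and matches a probe by nearest neighbour in eigenface space. The array shapes and function names are assumptions made for this example, not the implementation used in this work.

```python
import numpy as np

def eigenfaces(gallery, n_components=20):
    """gallery: (n_images, h*w) array of flattened, equally sized face images."""
    mean_face = gallery.mean(axis=0)
    centered = gallery - mean_face
    # Rows of vt are the principal directions ("eigenfaces") of the face set.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]

def project(face, mean_face, components):
    """Coordinates of a face in eigenface space."""
    return components @ (face - mean_face)

def match(probe, gallery, mean_face, components):
    """Nearest-neighbour identification; returns the index of the closest gallery face."""
    probe_code = project(probe, mean_face, components)
    gallery_codes = (gallery - mean_face) @ components.T
    return int(np.argmin(np.linalg.norm(gallery_codes - probe_code, axis=1)))
```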

The process consists of four operations: detection, position, measurement, and representation. Detection consists of scanning a photograph or capturing the image of a person in real time. Position consists of identifying the location, size, and angle of the head, including the relative position of one curve/mark with respect to others. Measurement consists of calculating the relative distances between facial organs such as the eyes, ears, mouth, mustache, and chin. Representation consists of a systematic representation in which the template is converted into a code, and the input features are compared with the database features.
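As a rough illustration of how these four phases fit together, the sketch below wires them into a tiny Python pipeline. The landmark positions, the chosen measurements, and the encoding are assumptions made purely for the example, not the system described here.

```python
import numpy as np

def detect(frame):
    """Detection: a real detector returns face bounding boxes; here the whole frame is assumed to be one face."""
    h, w = frame.shape[:2]
    return [(0, 0, w, h)]

def position(frame, box):
    """Position: locate a few reference landmarks (assumed relative coordinates)."""
    x, y, w, h = box
    return {"left_eye": (x + 0.3 * w, y + 0.35 * h),
            "right_eye": (x + 0.7 * w, y + 0.35 * h),
            "mouth": (x + 0.5 * w, y + 0.75 * h)}

def measurement(landmarks):
    """Measurement: relative distances between facial landmarks."""
    pts = {k: np.asarray(v) for k, v in landmarks.items()}
    eye_dist = np.linalg.norm(pts["left_eye"] - pts["right_eye"])
    mouth_dist = np.linalg.norm(pts["mouth"] - (pts["left_eye"] + pts["right_eye"]) / 2)
    return np.array([eye_dist, mouth_dist])

def representation(measurements):
    """Representation: convert the measurements into a comparable template code."""
    return measurements / np.linalg.norm(measurements)

frame = np.zeros((128, 128), dtype=np.uint8)  # placeholder image
template = representation(measurement(position(frame, detect(frame)[0])))
```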

A. Issues in Face Recognition Techniques

The constraints or issues of face recognition systems are facial expression, age variation, dust, blurred images, distance, obstacles, etc.

Aging: The human face does not remain the same over time.

Occlusion in faces: It is not always possible to input a well-captured whole-face image to the face recognition system. Occlusion can arise from distance, dust, a beard, a mustache, or illumination, all of which encumber face recognition systems. In the real world, it is common to encounter faces with scarves, hats, or glasses. Occlusion can affect the accuracy of the face recognition system.

Similar faces: It is challenging to distinguish people with very similar faces. Twins and similar facial views can produce false-positive results. For more security, fingerprint or iris-based authentication can be used.

The image quality: The vital requirement of an excellent face recognition system is a good-quality input image and a good-quality image database. Issues that degrade image quality include camera quality, the lens, environmental changes, dust, sunlight, fog, etc. The images may also be captured with different moods, angles, and distances. A good-quality image yields better features for face recognition [9].

Low resolution: If a captured image has a resolution lower than 16 × 16, it is considered a low-resolution image. The resolution of an image describes the information it holds: a higher-resolution image contains more information than a lower-resolution one. This information translates into pixels, the differently colored dots that make up an image; in a low-resolution image the pixels look like squares joined together. The problem may occur if the quality of the camera or lens is low, or because of environmental changes such as rain, fog, or dust. It is frequent with CCTV and video surveillance systems: since a person's face is not always equidistant from the camera, low-resolution images are produced. Parameters such as the distance of the person from the camera, crowds, and different angles can also create low-resolution images. If such low-resolution images are fed into face recognition systems, their accuracy can decrease.
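The 16 × 16 criterion above can be expressed as a trivial check. The helper below is only an illustration of the threshold mentioned in the text; the function name and interface are assumptions.

```python
def is_low_resolution(image_shape, min_size=16):
    """Return True if either spatial dimension is below min_size pixels."""
    height, width = image_shape[:2]
    return height < min_size or width < min_size

print(is_low_resolution((12, 20)))   # True: height is below the 16-pixel threshold
print(is_low_resolution((64, 64)))   # False
```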

B. Limitations in Face Recognition Methods

Since 1988, face recognition methods have gained much in accuracy. The performance of face recognition methods depends on efficiency, recognition speed, and the false-positive and true-negative ratios. The primary goal of a face recognition system is to obtain a high percentage of accuracy with a short response time. To date, no face recognition system works perfectly. Before recognition, feature-based methods analyze the features of the face images. The extracted features are less sensitive to alignment and to variations such as imaging conditions or face orientation (scale, resolution, illumination, etc.). The main challenge of feature-based methods is a meaningful description of features: if a feature is not discriminative, even robust machine learning approaches cannot achieve excellent face recognition performance.

Another challenge is to make feature selection automatic. In an unconstrained face recognition system, among the several features extracted using different methods, absolute threshold values are mapped and a probability is evaluated; this probability value is then used to recognize the face. This article is organized as follows: Section II presents previous related work on partial face recognition, Section III describes the proposed approach for face recognition using two combined algorithms, Section IV discusses applications, Section V covers challenges and opportunities, and Section VI concludes.

II. Related work

Nowadays, image recognition is one of the most intensively studied technologies in computer vision, and its techniques improve many applications. The proposed algorithm is a combination of a Fully Convolutional Network (FCN) and an Ambiguity Sensitive Matching Classifier (AMC). We discuss some related work in this section.

A. Fully Convolutional Network

B. Sparse Representation Classification

C. Ambiguity Sensitive Matching Classifier

III. Proposed approach

To deal with the drawbacks of traditional face detection and recognition, a novel approach is proposed: a mechanism named Dynamic Feature Matching + Ambiguity Sensitive Matching Classifier (DFM+AMC), which combines Fully Convolutional Networks (FCNs) and Sparse Representation Classification (SRC). This mechanism performs partial face recognition regardless of the dimensions of the face image and proves to be an efficient unconstrained face detection and recognition system. The flow chart of our approach is shown in Fig. 2.

 

Figure 2. Flowchart of partial face detection and recognition

 

A. Fully Convolutional Network

A Fully Convolutional Network (FCN) is a prevalent and robust recognition method for producing hierarchies of features. Each layer of the convnet holds a three-dimensional array of size h × w × d, where d is the feature or channel dimension and h and w are the spatial dimensions. The first layer holds the image itself, with pixel size h × w and d color channels. A location in a higher layer corresponds to the region of the image it is path-connected to.
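As a rough sketch of this idea, the following example (assuming PyTorch; the layer sizes are illustrative, not the network used in this work) shows how a stack of convolution and pooling layers produces h × w × d feature maps whose spatial size follows the input size, which is what allows arbitrarily sized partial faces to be processed.

```python
import torch
import torch.nn as nn

fcn = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                  # halves h and w
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
)

# With no fully connected layers, inputs of different spatial sizes
# produce feature maps of proportional spatial sizes (h/4 x w/4 x 64 here).
whole_face = torch.randn(1, 3, 128, 128)
partial_face = torch.randn(1, 3, 64, 96)
print(fcn(whole_face).shape)    # torch.Size([1, 64, 32, 32])
print(fcn(partial_face).shape)  # torch.Size([1, 64, 16, 24])
```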

B. Dynamic Feature Matching

The angle and distance of the face in the input image are unknown in advance, so matching input features with database image features is a difficult task [1]. The characteristics/features of the input image and the database image may differ, and it is not possible to match the feature weights of the input image exactly with the database feature weights. The issue can be resolved by keeping a certain threshold for each feature [1].
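One way to read this idea in code, as an illustrative assumption rather than the exact DFM formulation of [1], is to compare each probe feature with the corresponding gallery feature under a per-feature similarity threshold:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_with_thresholds(probe_features, gallery_features, thresholds):
    """Fraction of probe features whose similarity to the corresponding
    gallery feature exceeds its own threshold."""
    accepted = 0
    for probe, gallery, tau in zip(probe_features, gallery_features, thresholds):
        if cosine_similarity(probe, gallery) >= tau:
            accepted += 1
    return accepted / len(thresholds)

rng = np.random.default_rng(0)
probe = [rng.normal(size=64) for _ in range(5)]
gallery = [p + 0.1 * rng.normal(size=64) for p in probe]   # noisy copies
print(match_with_thresholds(probe, gallery, thresholds=[0.8] * 5))
```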

C. Ambiguity Sensitive Matching Classifier

The AMC has two matching models: local-to-local and global-to-local. The local-to-local model is used for patch-based matching. In this model, the multi-patch features of each gallery image are extracted first, and then the ambiguity-sensitive coding of each probe is computed with respect to the gallery dictionary. The problem is that the probe region is assumed to be unknown. To handle this, combined patches are selected from the gallery dictionary rather than a single patch, so that they still produce minimal reconstruction error. To limit reconstruction error, the ambiguity score between the probe patch and each gallery patch is calculated.
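A minimal sketch of the reconstruction-error idea follows, assuming scikit-learn's Lasso as the sparse coder; the actual coding scheme and ambiguity definition in AMC differ in detail, so the margin-based score below is only an illustrative stand-in. It scores a probe patch against each subject's gallery patches and uses the gap between the two best errors as an ambiguity-style score.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reconstruction_errors(probe, gallery_by_subject, alpha=0.01):
    """gallery_by_subject: dict subject -> (dim, n_patches) matrix of patch features."""
    errors = {}
    for subject, dictionary in gallery_by_subject.items():
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(dictionary, probe)                  # sparse code of the probe
        reconstruction = dictionary @ coder.coef_
        errors[subject] = float(np.linalg.norm(probe - reconstruction))
    return errors

def identify(probe, gallery_by_subject):
    errors = reconstruction_errors(probe, gallery_by_subject)
    best = min(errors, key=errors.get)
    # Margin between the best and second-best error as an ambiguity-style score.
    second = sorted(errors.values())[1] if len(errors) > 1 else float("inf")
    return best, second - errors[best]
```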

Algorithm: Framework of DFM+AMC
Input: a probe face image and gallery face images;
Output: the identity of the probe face image;
1: Extract the feature of each gallery image and calculate its ambiguity score;
2: Extract the feature of the probe patch p and calculate its ambiguity score;
3: Construct the dynamic gallery dictionary of C subjects: G = {G1, G2, ..., GC};
4: Compare the probe patch with the gallery images (feature and ambiguity score);
5: Output the result.
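To make the control flow of these five steps concrete, here is a schematic Python skeleton in which the FCN feature extractor is replaced by a stand-in (flattening patches resized to a common size) and the combined feature/ambiguity comparison is reduced to a nearest-neighbour distance. Every name and simplification here is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

def extract_feature(patch):
    """Stand-in for the FCN feature extractor: flatten and L2-normalise.
    Assumes all patches have been resized to a common size."""
    v = patch.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def build_gallery_dictionary(gallery_images_by_subject):
    """Steps 1 and 3: per-subject dictionaries G = {G1, ..., GC} of gallery features."""
    return {subject: np.stack([extract_feature(img) for img in images])
            for subject, images in gallery_images_by_subject.items()}

def identify_probe(probe_patch, gallery_dictionary):
    """Steps 2, 4, and 5: extract the probe feature and compare it with every subject."""
    probe = extract_feature(probe_patch)
    scores = {subject: float(np.min(np.linalg.norm(feats - probe, axis=1)))
              for subject, feats in gallery_dictionary.items()}
    return min(scores, key=scores.get)
```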

The algorithm will be tested in MATLAB. The dynamic partial face recognition algorithm will be evaluated on three face databases: NIR-Mobile, LFW, and NIR-Distance. Single-shot and multi-shot: the single-shot test [1] means that a single image (N = 1) is used as the gallery image for each person; the multi-shot test [1] means that multiple images (N > 1) are used as the gallery images for each person. Table I shows the results of single-shot DFM along with existing face recognition algorithms [1]. We propose a new approach, the combination DFM+AMC, which is expected to produce better results than the algorithms in Table I. The proposed research work combines the two techniques to obtain better results with higher efficiency.
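The single-shot and multi-shot protocols amount to how the gallery is built. A tiny sketch, with an assumed dictionary of preloaded images per person:

```python
def build_gallery(images_by_person, n_shots=1):
    """Keep the first N images of each person as that person's gallery images."""
    return {person: images[:n_shots] for person, images in images_by_person.items()}

images_by_person = {"id_001": ["img_a", "img_b", "img_c"],
                    "id_002": ["img_d", "img_e"]}
single_shot_gallery = build_gallery(images_by_person, n_shots=1)  # N = 1
multi_shot_gallery = build_gallery(images_by_person, n_shots=3)   # N > 1
```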

IV. Applications of face recognition

Many published works note various applications in which face recognition technology is already utilized, including egress from and entry to secured spaces such as military bases, nuclear power plants, and border crossings, as well as access to restricted resources such as trading terminals, computers, networks, banking transactions, personal devices, and medical records [9][10]. Face recognition can be a vital part of information security, although it has not yet been used to its full strength. Some well-known applications of face recognition are as follows.

Automated surveillance: This is a fundamental mechanism used to keep watch on specific people for security reasons.

Monitoring Closed-Circuit Television (CCTV): Facial recognition capability can be embedded into existing CCTV networks to look for known criminals or drug offenders.

V. Challenges and opportunities in face recognition

The challenges in face recognition are blurred images, poor-quality input images, environment-affected images, occlusion, facial expression, distortion, small training datasets, etc.

VI. Conclusion

The DFM+AMC approach combines Fully Convolutional Networks (FCNs) and Sparse Representation Classification (SRC). DFM+AMC addresses the varying-face-size problem of partial face recognition and constitutes an unconstrained face detection and recognition system. The algorithm is expected to give better results and higher efficiency than traditional algorithms.

 

References:

  1. L. He, H. Li, Q. Zhang, and Z. Sun, “Dynamic feature matching for partial face recognition,” IEEE Transactions on Image Processing, vol. 28, no. 2, pp. 791-802, Feb. 2019.
  2. J. Galbally, C. McCool, J. Fierrez, S. Marcel, and J. Ortega-Garcia, “On the vulnerability of face verification systems to hill-climbing attacks,” Pattern Recognition, vol. 43, no. 3, pp. 1027-1038, 2010.
  3. R. Weng, J. Lu, J. Hu, G. Yang, and Y. P. Tan, “Robust feature set matching for partial face recognition,” in Proc. the IEEE International Conference on Computer Vision, 2013, pp. 601-608.
  4. D. N. Parmar and B. B. Mehta, “Face recognition methods & applications,” arXiv preprint arXiv:1403.0485, 2014.
  5. M. A. Turk and A. P. Pentland, “Face recognition using eigenfaces,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1991, pp. 586-591.
  6. S. Satonkar, B. K. Ajay, and B. P. Khanale, “Face recognition using principal component analysis and linear discriminant analysis on holistic approach in facial images database,” Int. Organ. Sci. Res., vol. 2, no. 12, pp. 15-23, 2012.
  7. W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, “Face recognition: A literature survey,” ACM Computing Surveys, vol. 35, no. 4, pp. 399-458, 2003.
  8. K. M. Malikovich, I. S. Z. Ugli, and D. L. O'ktamovna, “Problems in face recognition systems and their solving ways,” in Proc. International Conference on Information Science and Communications Technologies, 2017, pp. 1-4.
  9. L. He, H. Li, Q. Zhang, Z. Sun, and Z. He, “Multiscale representation for partial face recognition under near infrared illumination,” in Proc. IEEE Int. Conf. Biometrics Theory, Appl. Syst., Sep. 2016, pp. 1-7.
  10. P. J. Phillips, et al., “Overview of the face recognition grand challenge,” IEEE Conf. Computer Vision and Pattern Recognition, San Diego, CA, USA, 2005, pp. 947-954.
Information about the authors

PhD, assistant professor, Karakalpak state University named after Berdakh, Republic of Uzbekistan, Nukus

