
I.R.I.S- Iris Recognition & Information System

TUHINA KHARE, MEDI-CAPS INSTITUTE OF TECHNOLOGY & MANAGEMENT, INDORE, INDIA; Email: kharetuhina@gmail.com

Abstract. Computer systems urgently need accurate authentication techniques to prevent unauthorized access. Only biometrics, the authentication of individuals using biological identifiers, can offer true proof of identity. This paper presents software for the recognition and identification of people using iris patterns. The system has been implemented in MATLAB for its ease of image manipulation and wavelet application. The system also provides features for calculating technical details of the iris image (Centre & Radius, Color Recognition). The system is based on an empirical analysis of the iris image and is split into several steps that use local image properties. A graphical user interface (GUI) makes the system easier to apply. When tested, the system achieved 100% correct segmentation. The experimental results showed that the proposed system can be used for personal identification in an efficient and effective manner.

Keywords: iris recognition, authentication, biometrics, Haar wavelet, GUI, MATLAB, image processing.

1. INTRODUCTION
To control access to secure areas or materials, a reliable personal identification infrastructure is required. Conventional methods of recognizing a person's identity, such as passwords or cards, are not altogether reliable, because they can be forgotten or stolen. Biometric technology, which is based on physical and behavioral features of the human body such as the face, fingerprints, hand shape, eyes, signature and voice, is now considered an alternative to existing systems in a great many application domains, including entrance management for restricted areas and airport security checking systems. Among the various physical characteristics, iris patterns have attracted particular attention in biometric technology over the last few decades because they provide stable and distinctive features for personal identification: every iris has fine, unique patterns that do not change over time from two or three years after birth onward. Figure 1 shows an image of a human iris pattern.

Figure1. Human Eye

Iris recognition consists of capturing the iris, pre-processing, and recognition of the iris region in a digital eye image. Iris image pre-processing includes iris localization, normalization, and enhancement, each of which uses different algorithms. In the iris localization step, the inner and outer circles of the iris and the upper and lower bounds of the eyelids are determined. The inner circle lies on the boundary between the iris and the pupil; the outer circle lies on the boundary between the sclera and the iris. After localization, segmentation and normalization are applied. At this stage, the texture of the iris can be extracted using Haar wavelets. Finally, the coded image is compared with previously coded irises in order to find a match or detect an imposter.

2. IRIS RECOGNITION
Figure2 illustrates the typical stages of iris recognition systems.

Figure2. Typical stages of the iris recognition.

2.1 Image Acquisition:
The emphasis in this work is on the software for performing recognition, not on the hardware for capturing an eye image.

2.2 Preprocessing Stage:
In the preprocessing stage, I transformed the images from RGB to gray level and from eight-bit integers to double precision, which facilitates the manipulation of the images in the subsequent steps. In this stage, we determine the iris part of the image by localizing the portion that lies inside the limbus (outer boundary) and outside the pupil (inner boundary), and finally convert the iris part into a suitable representation.

To localize the iris, we first find the center of the pupil and then determine the inner and outer boundaries. Because there is an obvious difference in intensity around each boundary, an edge detection method is easily applied to acquire the edge information. Next, we apply a circular summation, which sums the intensities over all candidate circles, using three nested loops to pass over all possible radii and center coordinates. The circle with the biggest radius and the highest summation corresponds to the outer boundary. The center and radius of the iris in the original image are determined by rescaling the obtained results.

The localized iris part of the image is then transformed into the polar coordinate system in an efficient way, so as to facilitate the next process, feature extraction. The portion of the pupil is excluded from the conversion because it carries no biological characteristics at all. The distance between the inner boundary and the outer boundary is normalized to [0, 60] according to the radius r. By increasing the angle θ in steps of 0.8° for an arbitrary radius r, we obtain 450 values, and can therefore build a 450×60 iris image in the (θ, r) plane. Figure3 shows the process of converting the Cartesian coordinate system into the polar coordinate system for the iris part.
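The circular summation step can be sketched as follows. This is a minimal Python/NumPy illustration rather than the original MATLAB code; the edge image and the search ranges for the radius and centre coordinates are assumed to be given:

```python
import numpy as np

def circular_sum(edge_img, cx, cy, r, n_points=360):
    """Sum edge intensities sampled along the circle (cx, cy, r)."""
    h, w = edge_img.shape
    thetas = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    xs = np.round(cx + r * np.cos(thetas)).astype(int)
    ys = np.round(cy + r * np.sin(thetas)).astype(int)
    # Keep only sample points that fall inside the image.
    ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    return edge_img[ys[ok], xs[ok]].sum()

def locate_boundary(edge_img, radii, centers_x, centers_y):
    """Three nested loops over candidate radii and centre coordinates;
    the circle with the highest summation is taken as the boundary."""
    best, best_score = None, -1.0
    for r in radii:
        for cx in centers_x:
            for cy in centers_y:
                s = circular_sum(edge_img, cx, cy, r)
                if s > best_score:
                    best_score, best = s, (cx, cy, r)
    return best
```

In practice the search is run on a downscaled image and the result rescaled, as the text describes, since the three nested loops are expensive at full resolution.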

Figure3. Example of results in the preprocessing stage.
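The Cartesian-to-polar unwrapping described above can be sketched as follows, again as a Python/NumPy illustration rather than the original MATLAB code; for simplicity the sketch assumes the pupil and iris boundaries are concentric circles:

```python
import numpy as np

def unwrap_iris(gray, pupil_center, pupil_r, iris_r, n_theta=450, n_r=60):
    """Convert the annular iris region to a 450x60 polar image:
    theta advances in 0.8-degree steps (450 samples over 360 degrees),
    and the pupil-to-limbus distance is normalized to 60 radial samples.
    The pupil interior itself is excluded, as in the text."""
    cx, cy = pupil_center
    polar = np.zeros((n_theta, n_r))
    thetas = np.deg2rad(np.arange(n_theta) * 0.8)
    for i, th in enumerate(thetas):
        # Sample from the pupil boundary out to the iris boundary.
        radii = np.linspace(pupil_r, iris_r, n_r)
        xs = np.round(cx + radii * np.cos(th)).astype(int)
        ys = np.round(cy + radii * np.sin(th)).astype(int)
        polar[i, :] = gray[ys, xs]
    return polar
```

Real eyes often have non-concentric pupil and iris boundaries, in which case the two boundary circles must be interpolated between per angle, as in Daugman's rubber sheet model.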

2.3 Feature Extraction Stage:
The Gabor transform and the wavelet transform are typically used for analyzing human iris patterns and extracting feature points from them. In this paper, a wavelet transform is used to extract features from iris images. Among the mother wavelets, I used the Haar wavelet, illustrated in Figure4, as the basis function.

Figure4. Haar Mother Wavelet

Figure 5 shows the conceptual process of obtaining feature vectors of optimized dimension. Here, H and L denote the high-pass and low-pass filters, respectively, and HH indicates that the high-pass filter is applied along both axes. For the 450×60 iris image obtained from the preprocessing stage, we apply the wavelet transform four times in order to get a 28×3 sub-image. Finally, we organize a feature vector by combining the 84 coefficients of the HH sub-image of the fourth transform (HH4 in Fig. 5) with the average value of each of the three remaining high-pass areas (HH1, HH2, and HH3 in Fig. 5). The dimension of the resulting feature vector is therefore 87.

Figure5. Conceptual diagram for organizing a feature vector

Each of the 87 dimensions holds a real value between –1.0 and 1.0. To reduce the space and computational time needed to manipulate the feature vector, we quantize each real value into a binary value by simply converting positive values to 1 and negative values to 0. We can therefore represent an iris image with only 87 bits.
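The four-level Haar decomposition and the sign quantization described above can be sketched as follows. This is a Python/NumPy illustration, not the original MATLAB code; the filter normalization and the trimming to even dimensions between levels are assumptions, since the paper does not specify them:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: returns LL, LH, HL, HH."""
    a = img[0::2, :] + img[1::2, :]   # low-pass along rows
    d = img[0::2, :] - img[1::2, :]   # high-pass along rows
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return ll, lh, hl, hh

def iris_code(polar):
    """Apply the Haar transform four times to the 450x60 polar image and
    build the 87-dimensional vector: the 28x3 = 84 HH4 coefficients plus
    the means of HH1, HH2 and HH3, then quantize signs to bits."""
    img = polar
    hh_means = []
    for _ in range(4):
        # Trim to even dimensions before each decomposition level.
        img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]
        ll, lh, hl, hh = haar2d(img)
        hh_means.append(hh.mean())
        img = ll                       # recurse on the LL sub-band
    features = np.concatenate([hh.ravel(), hh_means[:3]])
    return (features > 0).astype(np.uint8)   # 1 for positive, 0 otherwise
```

Starting from 450×60, the LL sub-band shrinks to 225×30, 112×15, 56×7 and finally 28×3 after trimming, so the last HH sub-image contributes exactly 84 coefficients.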

2.4 Test of statistical independence:
This test enables the comparison of two iris patterns. It is based on the idea that the greater the Hamming distance between two feature vectors, the greater the difference between them. Two images of the same iris will fail this test, since the distance between them will be small, whereas any two different irises are statistically "guaranteed" to pass it, as Daugman has shown. The Hamming distance (HD) between two Boolean vectors is defined as follows:

HD = (1/N) Σ_{j=1..N} (CA(j) ⊕ CB(j))

where CA and CB are the coefficients of the two iris images and N is the size of the feature vector (in our case N = 87). The ⊕ symbol is the Boolean XOR operator, which gives a binary 1 if the bits at position j in CA and CB differ and 0 if they are the same. I was not able to access any large eye database and was only able to collect 60 images. The threshold value is 0.3. Thus, when two iris images are compared, their corresponding binary feature vectors are passed to a function responsible for calculating the Hamming distance between them. Whether the two images belong to the same person is decided as follows: if HD < 0.3, decide that the images belong to the same iris; if HD ≥ 0.3, decide that it is a different person (or the left and right eyes of the same person).
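The Hamming distance test can be sketched as follows (a Python/NumPy illustration rather than the original MATLAB code; the 0.3 threshold is taken from the text):

```python
import numpy as np

THRESHOLD = 0.3  # decision threshold stated in the text

def hamming_distance(code_a, code_b):
    """Fraction of the N = 87 bit positions at which the two codes differ."""
    code_a, code_b = np.asarray(code_a), np.asarray(code_b)
    return float(np.mean(code_a != code_b))

def same_person(code_a, code_b):
    """HD below the threshold means the same iris; otherwise a different
    person (or the left and right eyes of the same person)."""
    return hamming_distance(code_a, code_b) < THRESHOLD
```

Because the codes are binary, the XOR in the definition reduces to an inequality test, and the normalization by N makes the distance a fraction in [0, 1].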

3. GRAPHICAL USER INTERFACE
To easily manipulate the images in our database, I built an interface that allows the user to choose between different options. The program's main window is presented below:

Figure6. The application’s main window

3.1 Load Image:
“Load Image” loads a pre-existing iris image from the database using the command “uigetfile”. Pressing the first pushbutton, “Load Image”, opens a window like the one shown in Figures 7 and 8.

Figure7. Open dialogue window

Figure8. Loaded Image

3.2 Centre and Radius:
“Centre and Radius” calculates the centre and radius of the loaded image: it converts the image into a histogram and then finds the centre of the histogram image. Pressing the second pushbutton opens a window as shown in Figure9.

Figure9. Centre and Radius of the loaded image

3.3 Color of the Eye:
“Color of the Eye” finds the color of the eye using the commands “impixel” and “colormap”, as shown in Figure10.

Figure10. Color of the Eye

3.4 Segment Iris: This pushbutton segments the iris part from the eye and filters the image.

Figure11. Segmented iris

3.5 Normalize Iris: This pushbutton converts the segmented iris into the polar coordinate system.

Figure12. Normalized Image

3.6 Wavelet Transform:
Figure13 shows the result of the Haar transform. For the 450×60 iris image in polar coordinates, I apply the wavelet transform four times in order to get the 28×3 sub-images.

Figure13. Haar Transformed Image

3.7 Code Generation:
This pushbutton generates the 87-bit code representing the iris image.

3.8 Verification:
The verification system performs a one-to-one comparison. Two cases are shown below: verification true (images match), Figure14; verification false (images do not match), Figure15.

Figure14. Verification true

Figure15. Verification False

3.9 Identification:
In identification, the system performs a one-to-many comparison against the images in the database. Two cases are shown below: identification true (image found in the database), Figure16; identification false (image not found in the database), Figure17.

Figure16. Identification True

Figure17. Identification False

3.10 Quit:
This pushbutton closes all figure windows and exits the GUI. Figure18 shows the process.

Figure18. Quit Window

4. RESULTS AND PERFORMANCE
The system was implemented in MATLAB 6.5 and run on a 1.70 GHz Intel(R) Pentium(R) M processor, with a graphical user interface developed for it. The system was tested for accuracy against a database of 60 images taken from different databases available on the internet. The database includes partially occluded images, noisy images, blurred images, images with lenses, images with light reflections, and black-and-white images.

Figure19. Example Images from the database.

4.1 Segmentation Results:
Iris segmentation, or localization, is of significant relevance to the success of any iris recognition system; correct localization of the iris strongly affects the encoding of the true patterns that distinguish a person. Our system was tested for segmentation accuracy against the selected database; the simulation result was 100% correct segmentation.

4.2 Identification Results:
I obtained an average correct recognition rate of 97.25%, with an average computing time of 12.65 s. The main reason for the failures encountered is the quality of the pictures: bad lighting, noise, or inappropriate eye positioning.

4.3 Verification Results:
I obtained an average correct verification rate of 98.63%, with an average computing time of 9.075 s. When the system is run, a username and password are needed for authorized access to the system. Figure 20 illustrates the concept.

Figure20. Logon Window

Figure21. Error

If the username and password do not match, the user is prompted “Invalid User Name or Password”; Figure21 shows this process. If the user is authenticated, the main page illustrated above under “Graphical User Interface” becomes available; Figure6 shows the main page.

5. CONCLUSION
This final year project is a first step of our own into the real field of research. Over the last few months I was able to master the topic of iris recognition and become thoroughly acquainted with the different approaches involved, from eye anatomy and attempts to combine multiple biometrics, to image acquisition, localizing both the iris and the pupil, normalizing and unwrapping the iris region, and finally encoding it into an iris code ready for the matching process. Beyond learning about image processing and wavelets and getting familiar with MATLAB's image processing toolbox, the final outcome of this work is the design and implementation of a novel iris recognition system. The system combines gradient-based edge detection of our own design with simple circular summation for iris localization, Daugman's rubber sheet model for iris normalization and unwrapping, Haar wavelets for feature encoding (the proposed 87-bit iris code), and the Hamming distance as the matching algorithm. The system was implemented in MATLAB and tested on a database selected from the internet; segmentation accuracy came out to be 100%, identification accuracy 97.25% with an average time of 12.65 s, and verification accuracy 98.63% with an average time of 9.075 s. From the experimental results, I am convinced that the proposed system is optimized enough to be applied to various real applications.

6. REFERENCES
1. John G. Daugman, "How Iris Recognition Works," Proceedings of the 2002 International Conference on Image Processing, Vol. 1, 2002.
2. Richard P. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proceedings of the IEEE, Vol. 85, No. 9, pp. 1348–1363, 1997.
3. Libor Masek, "Recognition of Human Iris Patterns for Biometric Identification," 2003. Available at: http://www.csse.uwa.edu.au/˜pk/studentprojects/labor
4. W. Boles, B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform," IEEE Transactions on Signal Processing, Vol. 46, No. 4, 1998.
5. Y. Zhu, T. Tan, Y. Wang, "Biometric Personal Identification Based on Iris Patterns," Proceedings of the 15th International Conference on Pattern Recognition, Spain, Vol. 2, 2000.
6. R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey, S. McBride, "A Machine-Vision System for Iris Recognition," Machine Vision and Applications, Vol. 9, pp. 1–8, 1996.
7. Li Ma, Tieniu Tan, Yunhong Wang, Dexin Zhang, "Personal Identification Based on Iris Texture Analysis."
8. CASIA iris database, Institute of Automation, Chinese Academy of Sciences. http://sinobiometrics.com/casiairis.h
9. Julian, "Biometrics: Advanced Identity Verification. The Complete Guide," Springer-Verlag, 2000.
10. G. O. Williams, "Iris Recognition Technology," IEEE Aerospace and Electronic Systems Magazine, Vol. 12, No. 4, pp. 23–29, 1997.
11. J. G. Daugman, "Recognizing Persons by Their Iris Patterns," in Biometrics: Personal Identification in Networked Society, Kluwer, pp. 103–121, 1998.
12. P. Jablonski, R. Szewczyk, Z. Kulesza, A. Napieralski, J. Cabestany, M. Moreno, "People Identification on the Basis of Iris Pattern – Image Processing and Preliminary Analysis," International Conference MIEL, 2002.
13. S. Lim, K. Lee, O. Byeon, T. Kim, "Efficient Iris Recognition through Improvement of Feature Vector and Classifier," ETRI Journal, Vol. 23, No. 2, June 2001.
14. O'Gorman, "Comparing Passwords, Tokens, and Biometrics for User Authentication," 2003.
15.
16.
