Iris recognition is an automated method of biometric identification that uses mathematical pattern-recognition techniques on video images of the irides of an individual's eyes, whose complex random patterns are unique and can be seen from some distance.
Not to be confused with retina scanning, a different and less prevalent ocular-based technology, iris recognition uses camera technology with subtle infrared illumination to acquire images of the detail-rich, intricate structures of the iris, which are externally visible at the front of the eye. Digital templates encoded from these patterns by mathematical and statistical algorithms allow the identification of an individual or of someone pretending to be that individual. Databases of enrolled templates are searched by matcher engines at speeds measured in millions of templates per second per (single-core) CPU, with extremely low False Match rates.
Many millions of persons in several countries around the world have been enrolled in iris recognition systems, for convenience purposes such as passport-free automated border-crossings, and some national ID systems based on this technology are being deployed. A key advantage of iris recognition, besides its speed of matching and its extreme resistance to False Matches, is the stability of the iris as an internal, protected, yet externally visible organ of the eye.
Although John Daugman developed and patented the first actual algorithms to perform iris recognition, published the first papers about it and gave the first live demonstrations, the concept behind this invention has a much longer history and today it benefits from many other active scientific contributors.
In a 1953 clinical textbook, F.H. Adler wrote: "In fact, the markings of the iris are so distinctive that it has been proposed to use photographs as a means of identification, instead of fingerprints." Adler referred to comments by the British ophthalmologist J.H. Doggart, who in 1949 had written that: "Just as every human being has different fingerprints, so does the minute architecture of the iris exhibit variations in every subject examined. [Its features] represent a series of variable factors whose conceivable permutations and combinations are almost infinite." Later in the 1980s, two American ophthalmologists, L. Flom and A. Safir managed to patent Adler's and Doggart's conjecture that the iris could serve as a human identifier, but they had no actual algorithm or implementation to perform it and so their patent remained conjecture. The roots of this conjecture stretch back even further: in 1892 the Frenchman A. Bertillon had documented nuances in "Tableau de l'iris humain". Divination of all sorts of things based on iris patterns goes back to ancient Egypt, to Chaldea in Babylonia, and to ancient Greece, as documented in stone inscriptions, painted ceramic artefacts, and the writings of Hippocrates. (Iris divination persists today, as "iridology.")
The core theoretical idea in Daugman's algorithms is that the failure of a test of statistical independence can be a very strong basis for pattern recognition, if there is sufficiently high entropy (enough degrees-of-freedom of random variation) among samples from different classes. In 1994 he patented this basis for iris recognition and its underlying Computer Vision algorithms for image processing, feature extraction, and matching, and published them in a paper. These algorithms became widely licensed through a series of companies: IriScan (a start-up founded by Flom, Safir, and Daugman), Iridian, Sarnoff, Sensar, LG-Iris, Panasonic, Oki, BI2, IrisGuard, Unisys, Sagem, Enschede, Securimetrics and L-1, now owned by French company Safran Morpho.
With various improvements over the years, these algorithms remain today the basis of all significant public deployments of iris recognition, and they are consistently top performers in NIST tests (implementations submitted by L-1, MorphoTrust and Morpho, for whom Daugman serves as Chief Scientist for Iris Recognition). But research on many aspects of this technology and on alternative methods has exploded, and today there is a rapidly growing academic literature on the optical, photonic, sensor, biological, genetic, ergonomic, interface, decision-theoretic, coding, compression, protocol, security, mathematical and hardware aspects of this technology. Most flagship deployments of these algorithms have been at airports, in lieu of passport presentation, and for security screening using watch-lists. In the early years of this century, major deployments began at Amsterdam's Schiphol Airport and at 10 UK airport terminals, allowing frequent travellers to present their iris instead of their passport in a programme called IRIS: Iris Recognition Immigration System. Similar systems exist along the US–Canada border, and in many other places. In the United Arab Emirates, all 32 air, land, and sea ports deploy these algorithms to screen all persons entering the UAE who require a visa. Because a large watch-list compiled among GCC States is exhaustively searched each time, the number of iris cross-comparisons has climbed to 62 trillion in 10 years. But by far the largest deployment began operation in 2011 in India, whose Government is enrolling the iris patterns (and other biometrics) of all 1.2 billion citizens for the Aadhaar scheme for entitlements distribution, run by the Unique Identification Authority of India (UIDAI). This vastly ambitious programme enrolls about 1 million persons every day, across 36,000 stations operated by 83 agencies. By late 2013 the number of persons enrolled exceeded 530 million.
Its purpose is to issue each citizen a biometrically provable unique entitlement number (Aadhaar) by which benefits may be claimed, and social inclusion enhanced; thus the slogan of UIDAI is: "To give the poor an identity."
All publicly deployed iris recognition systems acquire images of the iris in the near-infrared (NIR) wavelength band (700–900 nm) of the electromagnetic spectrum. The majority of persons worldwide have "dark brown eyes", the dominant phenotype of the human population; such irides reveal little texture in the visible wavelength (VW) band but appear richly structured, like the cratered surface of the moon, in the NIR band. Using the NIR spectrum also enables the blocking of corneal specular reflections from a bright ambient environment, by allowing only those NIR wavelengths from the narrowband illuminator back into the iris camera.
Iris melanin, the chromophore pigment of the iris, consists mainly of two distinct heterogeneous macromolecules, eumelanin (brown–black) and pheomelanin (yellow–reddish), whose absorbance at the longer wavelengths of the NIR spectrum is negligible. At the shorter wavelengths of the VW spectrum, however, these chromophores are excited and can yield rich patterns. Hosseini et al. provide a comparison between these two imaging modalities. They also introduced an alternative feature extraction method for encoding VW iris images, which may offer an approach for multi-modal biometric systems.
(Table of example images: a visible wavelength iris image; its near-infrared (NIR) version; corneal reflections under visible wavelength imaging; and the structure extracted by NIR imaging.)
An iris-recognition algorithm first has to localize the inner and outer boundaries of the iris (pupil and limbus) in an image of an eye. Further subroutines detect and exclude eyelids, eyelashes, and specular reflections that often occlude parts of the iris. The set of pixels containing only the iris, normalized by a rubber-sheet model to compensate for pupil dilation or constriction, is then analyzed to extract a bit pattern encoding the information needed to compare two iris images.
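The normalization step above can be sketched in code. This is a minimal illustration, not Daugman's implementation: it assumes boundary detection has already produced pupil and limbus circles (in practice found by an integro-differential operator or a Hough transform), and the function name and grid resolutions are arbitrary choices.

```python
import numpy as np

def rubber_sheet_normalize(eye, pupil, limbus, radial_res=64, angular_res=256):
    """Map the annular iris region onto a fixed-size rectangular grid.

    eye    -- 2-D grayscale image as a NumPy array
    pupil  -- (x, y, r) of the detected inner (pupil) boundary circle
    limbus -- (x, y, r) of the detected outer (limbus) boundary circle
    """
    xp, yp, rp = pupil
    xl, yl, rl = limbus
    out = np.zeros((radial_res, angular_res), dtype=eye.dtype)
    for j, theta in enumerate(np.linspace(0.0, 2 * np.pi, angular_res, endpoint=False)):
        # Boundary points at this angle on the pupil and limbus circles.
        x_in, y_in = xp + rp * np.cos(theta), yp + rp * np.sin(theta)
        x_out, y_out = xl + rl * np.cos(theta), yl + rl * np.sin(theta)
        for i, r in enumerate(np.linspace(0.0, 1.0, radial_res)):
            # Linear interpolation between the two boundaries: the
            # "rubber sheet" stretches with pupil dilation/constriction.
            x = (1 - r) * x_in + r * x_out
            y = (1 - r) * y_in + r * y_out
            out[i, j] = eye[int(round(y)), int(round(x))]
    return out
```

Because every iris is mapped to the same dimensionless polar grid, templates remain comparable regardless of pupil size or camera distance.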
In the case of Daugman's algorithms, a Gabor wavelet transform is used. The result is a set of complex numbers that carry local amplitude and phase information about the iris pattern. In Daugman's algorithms, most amplitude information is discarded, and the 2048 bits representing an iris pattern consist of phase information (complex sign bits of the Gabor wavelet projections). Discarding the amplitude information ensures that the template remains largely unaffected by changes in illumination or camera gain (contrast), and contributes to the long-term usability of the biometric template.
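A toy version of this phase encoding might look like the following. It is a one-dimensional sketch rather than Daugman's actual multi-scale 2-D Gabor filter bank, and `encode_phase` with its `wavelength` parameter is an illustrative assumption; but it shows the key step of keeping only the two sign bits of each complex projection while discarding amplitude.

```python
import numpy as np

def encode_phase(norm_iris, wavelength=16):
    """Encode a normalized iris strip into phase sign bits (toy IrisCode).

    Each row is circularly convolved (via FFT) with a 1-D complex Gabor
    carrier; only the signs of the real and imaginary parts are kept,
    so amplitude (illumination/contrast) information is discarded.
    """
    rows, cols = norm_iris.shape
    x = np.arange(cols)
    envelope = np.exp(-((x - cols / 2) ** 2) / (2 * (2 * wavelength) ** 2))
    carrier = envelope * np.exp(2j * np.pi * x / wavelength)
    code = []
    for row in norm_iris:
        proj = np.fft.ifft(np.fft.fft(row) * np.fft.fft(carrier))
        code.append(np.real(proj) > 0)  # first phase-quadrant sign bit
        code.append(np.imag(proj) > 0)  # second phase-quadrant sign bit
    return np.concatenate(code)
```

Applied to an 8 × 128 strip this happens to yield 2 × 8 × 128 = 2048 bits, the template size quoted above, though the real encoder derives its bits from a 2-D filter bank at multiple scales.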
For identification (one-to-many template matching) or verification (one-to-one template matching), a template created by imaging an iris is compared to the stored template(s) in a database. If the Hamming distance is below the decision threshold, a positive identification has effectively been made, because it is statistically extremely improbable that two different persons could agree by chance ("collide") in so many bits, given the high entropy of iris templates.
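The matching step can be sketched as below. The helper names and the 0.32 default threshold are illustrative assumptions (deployed systems set the threshold from measured score distributions); the masks mark bits corrupted by eyelids, lashes, or reflections, which are excluded from the fractional Hamming distance.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over positions both masks mark valid."""
    valid = mask_a & mask_b
    n = int(valid.sum())
    if n == 0:
        raise ValueError("no overlapping valid bits")
    return np.count_nonzero((code_a ^ code_b) & valid) / n

def identify(probe, probe_mask, gallery, threshold=0.32):
    """One-to-many search: return (name, distance) of the best match
    below the decision threshold, or None if nothing matches."""
    best = None
    for name, (code, mask) in gallery.items():
        d = hamming_distance(probe, code, probe_mask, mask)
        if best is None or d < best[1]:
            best = (name, d)
    return best if best is not None and best[1] < threshold else None
```

Because the comparison is a masked XOR and a population count, it maps directly onto bitwise machine instructions, which is what makes the million-comparisons-per-second matching speeds quoted above feasible.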
The iris of the eye has been described as the ideal part of the human body for biometric identification for several reasons:
It is an internal organ that is well protected against damage and wear by a highly transparent and sensitive membrane (the cornea). This distinguishes it from fingerprints, which can be difficult to recognize after years of certain types of manual labor. The iris is mostly flat, and its geometric configuration is only controlled by two complementary muscles (the sphincter pupillae and dilator pupillae) that control the diameter of the pupil. This makes the iris shape far more predictable than, for instance, that of the face.
The iris has a fine texture that—like fingerprints—is determined randomly during embryonic gestation. As with fingerprints, it is very hard (if not impossible) to prove that every iris is unique. However, so many factors go into the formation of these textures (of the iris and of the fingerprint) that the chance of a false match for either is extremely low. Even genetically identical individuals have completely independent iris textures. An iris scan is similar to taking a photograph and can be performed from about 10 cm to a few meters away. There is no need for the person being identified to touch any equipment that has recently been touched by a stranger, thereby eliminating an objection that has been raised in some cultures against fingerprint scanners, where a finger has to touch a surface, or retinal scanning, where the eye must be brought very close to an eyepiece (like looking into a microscope).
The commercially deployed iris-recognition algorithm, John Daugman's IrisCode, has an unprecedented false match rate (better than 10⁻¹¹ if a Hamming distance threshold of 0.26 is used, meaning that up to 26% of the bits in two IrisCodes are allowed to disagree due to imaging noise, reflections, etc., while still declaring them to be a match). While some medical and surgical procedures can affect the colour and overall shape of the iris, the fine texture remains remarkably stable over many decades. Some iris identifications have succeeded over a period of about 30 years.
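The order of magnitude of that false match rate can be illustrated with a simple binomial model: if comparisons between unrelated templates behave like roughly 249 independent fair coin tosses (Daugman's published degrees-of-freedom estimate for IrisCodes), the chance that fewer than 26% of them disagree is the lower binomial tail. The function below is a sketch under that idealization, not the exact published analysis.

```python
from math import comb

def false_match_probability(dof=249, threshold=0.26):
    """Lower binomial tail: P(fraction of disagreeing bits <= threshold)
    when each of `dof` effective bits disagrees independently with p = 0.5.

    dof=249 is an assumption taken from Daugman's degrees-of-freedom
    estimate; real impostor score distributions are measured empirically.
    """
    k_max = int(threshold * dof)  # most disagreeing bits still called a match
    return sum(comb(dof, k) for k in range(k_max + 1)) / 2 ** dof
```

Under this idealization the probability comes out far below 10⁻¹⁰, consistent with the extreme false-match resistance described above.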
Many commercial iris scanners can be easily fooled by a high quality image of an iris or face in place of the real thing. The scanners are often tough to adjust and can become bothersome for multiple people of different heights to use in succession. The accuracy of scanners can be affected by changes in lighting. Iris scanners are significantly more expensive than some other forms of biometrics, as well as password and proximity card security systems.
Iris scanning is a relatively new technology and is incompatible with the very substantial investment that the law enforcement and immigration authorities of some countries have already made in fingerprint recognition. Iris recognition is very difficult to perform at distances larger than a few meters, or when the person to be identified is not cooperating by holding the head still and looking into the camera. However, several academic institutions and biometric vendors are developing products that claim to identify subjects at distances of up to 10 meters ("Standoff Iris" or "Iris at a Distance", as well as SRI International's "Iris on the Move" for persons walking at speeds up to 1 meter/sec).
As with other photographic biometric technologies, iris recognition is susceptible to poor image quality, with associated failure-to-enroll rates. As with other identification infrastructure (national resident databases, ID cards, etc.), civil rights activists have voiced concerns that iris-recognition technology might help governments to track individuals against their will. Researchers have tricked iris scanners using images generated from the digital codes of stored irises. Criminals could exploit this flaw to steal the identities of others.
The first study of surgical patients showed that modern cataract surgery can change iris texture in such a way that iris pattern recognition is no longer feasible, or the probability of falsely rejected subjects is increased.
As with most other biometric identification technology, a still not satisfactorily solved problem with iris recognition is the problem of live-tissue verification. The reliability of any biometric identification depends on ensuring that the signal acquired and compared has actually been recorded from a live body part of the person to be identified and is not a manufactured template. Many commercially available iris-recognition systems are easily fooled by presenting a high-quality photograph of a face instead of a real face, which makes such devices unsuitable for unsupervised applications, such as door access-control systems. The problem of live-tissue verification is less of a concern in supervised applications (e.g., immigration control), where a human operator supervises the process of taking the picture.
Methods that have been suggested to provide some defence against the use of fake eyes and irises include changing ambient lighting during the identification (switching on a bright lamp), such that the pupillary reflex can be verified and the iris image be recorded at several different pupil diameters; analysing the 2D spatial frequency spectrum of the iris image for the peaks caused by the printer dither patterns found on commercially available fake-iris contact lenses; analysing the temporal frequency spectrum of the image for the peaks caused by computer displays.
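The dither-pattern check mentioned above can be sketched as a simple spectral test. This is a crude illustration under stated assumptions (the function name and the `ratio` threshold are arbitrary): the regular halftone pattern of a printed fake iris produces strong isolated peaks away from the centre of the 2-D amplitude spectrum, whereas natural iris texture does not.

```python
import numpy as np

def has_periodic_peak(img, ratio=10.0):
    """Flag strong off-centre peaks in the 2-D amplitude spectrum, a crude
    indicator of the regular dither pattern of a printed fake iris."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    spec[cy - 2:cy + 3, cx - 2:cx + 3] = 0  # suppress the low-frequency core
    return bool(spec.max() > ratio * np.median(spec))
```

Broadband natural texture spreads its energy across the spectrum, so no single coefficient dominates; a periodic print pattern concentrates energy at its dither frequency and trips the ratio test.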
Other methods include using spectral analysis instead of merely monochromatic cameras to distinguish iris tissue from other material; observing the characteristic natural movement of an eyeball (measuring nystagmus, tracking eye while text is read, etc.); testing for retinal retroreflection (red-eye effect) or for reflections from the eye's four optical surfaces (front and back of both cornea and lens) to verify their presence, position and shape. Another proposed method is to use 3D imaging (e.g., stereo cameras) to verify the position and shape of the iris relative to other eye features.
A 2004 report by the German Federal Office for Information Security noted that none of the iris-recognition systems commercially available at the time implemented any live-tissue verification technology. Like any pattern-recognition technology, live-tissue verifiers will have their own false-reject probability and will therefore further reduce the overall probability that a legitimate user is accepted by the sensor.