With iPhone and laptop fingerprint access, facial scanning and fingerprint checks at border controls, and DNA analysis at crime scenes, biometrics are all-pervasive. In the recent popular BBC series The Night Manager, Tom Hiddleston's character uses facial recognition on his phone to access a bank account. As the trend is towards ever greater use of biometrics for identification and authentication, we need to examine the security implications.
Biometrics, the factor of "something you are", has the advantage of always being present – you can never forget to take your fingerprints or face with you. Some unique physical attributes, such as the iris, are less invasive to capture than others, such as retinal capillaries. Society needs to debate whether, and under what conditions, it is acceptable to collect biometric data without permission.
The sensitivity of biometric measurement is a variable that should be tuned for each application; only applications where high security is imperative require high sensitivity. High sensitivity leads to false rejection errors, where individuals who should be accepted are rejected. Low sensitivity leads to false acceptance errors, where those who should not be accepted are. We measure these errors as the False Rejection Rate (FRR) and the False Acceptance Rate (FAR); the Crossover Error Rate (CER) is the sensitivity level at which the two are equal. The iPhone's Touch ID, for example, does not need the highest sensitivity: frequent false rejections would annoy users more than the remote possibility of a false acceptance.
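To make the trade-off concrete, here is a minimal Python sketch (using invented match scores, not data from any real system) that computes FAR and FRR across a range of thresholds and locates the crossover point:

```python
def far(imposter_scores, threshold):
    """False Acceptance Rate: fraction of imposter attempts scoring at or above the threshold."""
    return sum(s >= threshold for s in imposter_scores) / len(imposter_scores)

def frr(genuine_scores, threshold):
    """False Rejection Rate: fraction of genuine attempts scoring below the threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def crossover(genuine_scores, imposter_scores, thresholds):
    """Return the threshold where FAR and FRR are closest: the Crossover Error Rate (CER) point."""
    return min(thresholds,
               key=lambda t: abs(far(imposter_scores, t) - frr(genuine_scores, t)))

# Hypothetical similarity scores on a 0-1 scale.
genuine = [0.91, 0.85, 0.60, 0.95, 0.88, 0.70, 0.82]
imposter = [0.35, 0.52, 0.72, 0.44, 0.29, 0.66, 0.48]
thresholds = [i / 100 for i in range(101)]

t = crossover(genuine, imposter, thresholds)
print(f"CER at threshold {t:.2f}: FAR={far(imposter, t):.2f}, FRR={frr(genuine, t):.2f}")
```

Raising the threshold pushes FAR down and FRR up; tuning for convenience, as Touch ID does, means settling below the threshold a high-security application would choose.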
Biometrics can be anatomical or behavioural. Anatomical biometrics include fingerprints, palm prints, palm vein patterns, facial geometry, iris and retina patterns, and ear cavity measurements. Behavioural biometrics include keyboard typing rhythm, voice, gait, brain waves, and heartbeats. When phoning the New Zealand tax office recently, I was asked to say my name three times for voice recognition purposes. Next time I call, they can authenticate me from my voice.
Some authentication methods utilise enhancements to the body; however, these are not strictly biometric, as they are not "something you are" but something added later, which may be impermanent. Examples include a digital tattoo and an authentication pill that reacts with stomach acid when swallowed.
The security of a biometric measure is weakened where the physical attribute is not private but on public view, as with measurements of the face. While monitored biometric access systems, such as building access supervised by security personnel, are difficult to circumvent, online or unmonitored systems can be far less secure. Being on public view, the physical attribute can be captured and used to replicate the original. Many facial recognition systems can be circumvented by presenting a photograph, and the same applies to iris scanners. Fingerprints can be lifted off surfaces such as glass and replicated with materials like gummi bears or play-doh. A demonstration of the iPhone Touch ID hack is here. In more extreme instances, thieves have been known to chop off a finger in order to commit a crime, as these Malaysian car thieves did.
Once entered into a system, biometric data is represented as a digital file, and this file can be stolen and replayed. A demonstration of intercepting and replaying a digital representation of biometric data is here. In the 2015 Office of Personnel Management (OPM) data theft, digital representations of 5.6 million fingerprints were stolen. Earlier this month we learned of the data breach affecting 55 million voters in the Philippines; biometric information is believed to be amongst the stolen data. The implications of biometric data theft are far more severe than those of username/password theft, because with biometrics there is no reset function: you can change your password, but you cannot change your fingerprint or iris.
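To see why the missing reset function matters, here is a minimal sketch (with a toy template format and similarity measure, invented for illustration) contrasting the two cases: a leaked password hash is neutralised by choosing a new password, while a leaked biometric template corresponds to a measurement its owner can never change:

```python
import hashlib, os

# Passwords: store only a salted hash; after a breach, the user picks a new
# secret and the stolen hash becomes worthless.
def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored_hash = hash_password("old secret", salt)
salt = os.urandom(16)
stored_hash = hash_password("new secret after breach", salt)  # rotation: done

# Biometrics: every capture varies slightly, so exact hashing is impossible.
# The system must keep a template and accept anything sufficiently similar,
# and the owner cannot rotate their fingerprint.
def matches(template, measurement, threshold=0.95):
    # Toy similarity: 1 minus the mean absolute difference (illustrative only).
    diff = sum(abs(a - b) for a, b in zip(template, measurement)) / len(template)
    return (1 - diff) >= threshold

enrolled_template = [0.12, 0.80, 0.45, 0.66]   # stolen in a breach...
tomorrows_scan    = [0.13, 0.79, 0.47, 0.65]   # ...still matches every future scan
print(matches(enrolled_template, tomorrows_scan))  # True, forever
```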
Potential negative consequences are compounded when biometric data is stored centrally on a server rather than only on the endpoint device. The Unique Identification Authority of India, for example, collects biometric data on India's 1.2 billion citizens and stores it in a central database. Personal biometric data from different body parts should never be stored in the same database or on the same server.
Society needs to debate where the limits should be regarding the use of biometrics for commercial purposes. Should the car dealership know details of your financial position, the minute facial recognition technology identifies you walking onto the premises? Will we end up wearing sunglasses even on a cloudy day?
Because of the security risks and severe consequences of breach, biometric authentication should only be used where absolutely necessary.
Unfortunately, this very well-informed article repeats a common misconception about the vulnerabilities of biometric authentication.
Biometric data is, as you say, already public information: stored enrollment measurements from a person, to be compared with measurements of a claimant in the future. This is precisely what makes biometric data different from passwords, and why the "I can't reset my fingerprint" issue is a red herring. Seeing you, or knowing your measurements, does not let an imposter measure up, so to speak. The real issue is therefore to ensure the *integrity* of the storing, sensing, transport, and analysis of the biometric data, so that imposters can't make use of the public data you are worried about.
Rather than worrying about a database of measurements falling into the wrong hands, properly implemented biometric solutions are built assuming a "bad actor" with a fingerprint in hand, trying to insert it at every point in the pipeline from capture to determination, and they defend against those attempts succeeding. It is true that some biometric systems are implemented with a priority on convenience over security, such as the unlock feature of the iPhone: asking users to retry 5% of the time was judged an unacceptable price for being able to distinguish a live finger from play-doh, so the fake-finger attack works there. However, not all fingerprint systems are the same, just as not all cars are the same.
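As a sketch of what pipeline integrity can look like (an illustrative protocol, not any specific vendor's implementation), a trusted sensor can sign each capture together with a fresh verifier-issued nonce, so that a stolen template or an intercepted capture cannot simply be replayed:

```python
import hashlib, hmac, os

SENSOR_KEY = os.urandom(32)  # in practice, provisioned into the sensor's secure hardware

def issue_nonce() -> bytes:
    """Verifier generates a fresh challenge for each authentication attempt."""
    return os.urandom(16)

def sensor_capture(biometric_bytes: bytes, nonce: bytes):
    """Sensor returns the capture plus a MAC binding it to this challenge."""
    tag = hmac.new(SENSOR_KEY, nonce + biometric_bytes, hashlib.sha256).digest()
    return biometric_bytes, tag

def verify(biometric_bytes: bytes, tag: bytes, nonce: bytes, seen_nonces: set) -> bool:
    """Accept only captures signed over a nonce we issued and have not seen before."""
    if nonce in seen_nonces:
        return False  # replay of an old challenge
    seen_nonces.add(nonce)
    expected = hmac.new(SENSOR_KEY, nonce + biometric_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

seen = set()
nonce = issue_nonce()
capture, tag = sensor_capture(b"<fingerprint data>", nonce)
print(verify(capture, tag, nonce, seen))  # True: fresh, authentic capture
print(verify(capture, tag, nonce, seen))  # False: the same message replayed
```

With freshness and authenticity bound to every capture, mere possession of someone's biometric data gets an attacker nothing.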
If the integrity of the enrollment and authentication pipelines can be assured, then it no longer matters who has a person's biometric data; they can't do anything with it. I am still the only person whose measurements match the enrollment. This can be hard to wrap your head around, so let's look at real history for an example.
For many years, society has used biometrics, albeit informally, through driver's license ID: a visual match of the license photo is made against the person claiming the identity. This model worked fine for decades, despite the same red-herring "problem" of a person's face being public. Despite our reliance on face matching for identity verification, we didn't have to hide our faces or worry about someone stealing a database of them. The vulnerability in driver's license ID turned out to be in the trusted document. Imposters became skilled at creating fake enrollment documents: driver's licenses with *their* face and a victim's name. Note that the biometric wasn't compromised; they didn't present themselves trying to look like the victim. Instead, they defeated the integrity of the credential, substituting their own biometric sample. If identity were verified against a central server that brought up the person's photo from the DMV, instead of requiring that we trust the driver's license itself, this attack vector would be rendered ineffective.
Everyone can see me, but only I can be me.