How does Apple or FIDO know you are the person authorized to access that phone? Or, how do the third-party applications (apps) accessed through the phone know you are the person authorized to have access?
The fingerprint sensor? Well, what if that mobile fingerprint authentication capability isn’t very accurate? What if it’s spoofable because the technology doesn’t use enough data to authenticate the user with a high degree of assurance? Given that both Apple’s and Samsung’s fingerprint sensors were spoofed within a day or two of release, isn’t it clear that the “phone” can’t actually know it’s you putting that finger on the sensor? By extension, the apps and related service providers can’t know it’s you either. So assume, for a moment, that the app is a PayPal app on your iPhone or Galaxy S5. Under this circumstance, PayPal cannot know with certainty that it’s actually the authorized finger on the sensor. In other words, I could potentially use your phone to access your PayPal account and PayPal wouldn’t know. But would they care?
There are two types of errors in biometrics. A “false accept” error allows an unauthorized user to gain access, while a “false reject” error mistakenly denies the authorized user access. There are trade-offs between the two, and it should be understood that the fingerprint algorithm driving the match on the device can be tuned toward convenience (lower accuracy) to avoid a poor user experience. If the fingerprint sensor doesn’t recognize the true user 25% of the time, that user can’t get on the phone via the sensor 25% of the time. This is a major problem for vendors whose product reputation is that “it just works”. With a high false reject rate, user productivity would fall, user satisfaction would fall, and that vendor’s market share would certainly fall. If the algorithm is then tuned to reduce the false reject rate, it raises the likelihood of an unauthorized access, or false accept, on both the phone and the apps associated with the poorly performing fingerprint system.
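To make the trade-off concrete, here is a minimal sketch of how tuning a single match-score threshold moves error in opposite directions. The score distributions below are entirely hypothetical stand-ins for a real matcher’s output; the point is only that lowering the threshold (convenience) buys a lower false reject rate at the cost of a higher false accept rate, and vice versa.

```python
import random

random.seed(0)

# Hypothetical match scores (0.0-1.0): genuine attempts score higher on
# average than impostor attempts, but the two distributions overlap.
genuine_scores = [random.gauss(0.75, 0.10) for _ in range(10_000)]
impostor_scores = [random.gauss(0.40, 0.10) for _ in range(10_000)]

def error_rates(threshold):
    """Return (false_reject_rate, false_accept_rate) at a given threshold."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# A low threshold favors convenience: few false rejects, more false accepts.
# A high threshold favors security: few false accepts, more false rejects.
for threshold in (0.45, 0.55, 0.65):
    frr, far = error_rates(threshold)
    print(f"threshold={threshold:.2f}  FRR={frr:.1%}  FAR={far:.1%}")
```

A vendor worried about the “it just works” reputation is pushed toward the low-threshold end of this table; the apps riding on that sensor inherit the resulting false accept risk.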
The “device centric” model, where the identity associated with the phone is presumed to be trusted, is flawed if you can’t trust the technology to guarantee it’s you. And, frankly, it’s not enough for an application vendor like PayPal or iTunes to claim that it uses other authentication technologies to compensate for potentially inaccurate biometrics, when the entire rationale for using biometrics is that those other authentication measures aren’t really working well in the first place. If you disagree, just Google “identity theft statistics”.
In fact, there are two solutions to this problem. First, it seems obvious that a more accurate algorithm, which can help minimize these errors, would be more appealing to the sensor manufacturer, the handset manufacturer, and the application developer alike. I suspect the reports of fingerprint spoofing on the 5S and S5 will drive demand for more accurate algorithms. Market forces might solve the accuracy problem. There is a second potential solution, however: a concept called a “Trusted Source”.
There is a wonderful line in the movie “JFK”, where a man rhetorically asks, “How does your Daddy know he’s your Daddy?…Because your Mamma says so.” Well, in that case, Mamma is the trusted source. Unfortunately, today there is NO true trusted source in the device-centric authentication model. There could be, however. By associating a “trusted source”…a “root source”…with the user’s identity data at enrollment on the phone and in the app, both Apple and FIDO would mitigate significant risks of a design model that relies on less-than-stellar, proprietary, sole-source fingerprint technology. That trusted source must be outside the device, naturally.
Who or what could be an ideal “trusted source”? There are good examples out there today. Credit reporting entities and various government services, like the Department of Motor Vehicles data systems, are widely accepted as trusted sources. When you apply for a credit card account at a bank, the bank asks the credit reporting entities for your credit report and also requires government credentials. The bank does not question this data; it accepts it at face value. Of course, these entities are effectively large databases accessed through the Internet, so those systems are “cloud services”. Are they not?

By implementing a biometric matching platform in that cloud, the vendor or government service provider can scrub the database for fraudulent duplicate entries, dramatically reducing the risk of fraud. Then, by associating a biometric with each entry, potentially a credit report entry, the trusted source can provide clean data about the applicant and high assurance that the applicant is who they claim to be. At that point, the bank need only ask for your fingerprint when you apply for a loan to know you are, in fact, the person associated with that driver’s license and credit report.

Just as importantly, that same anchoring “trusted source” can be used at enrollment when you buy and enroll on your device-centric iPhone or FIDO-ready phone. Thus, the Trusted Source concept secures the insecure Apple and FIDO model, especially if that trusted source can optionally be used to authenticate you when you try to use that phone and its associated PayPal account to buy something. It’s an added level of security that helps offset the flaws of the device-centric model. Many would call this “completing the chain of trust”.
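The flow above can be sketched in a few lines. This is a toy model, not a real system: the class name, the identity records, and especially the use of exact hash equality in place of probabilistic biometric matching are all simplifying assumptions made for illustration. It shows the two jobs the article assigns to the trusted source: scrubbing duplicate entries at credentialing time, and answering out-of-band verification requests from a device or app.

```python
import hashlib

class TrustedSource:
    """Hypothetical cloud-side trusted source: identity records anchored
    to a biometric template captured at credentialing time."""

    def __init__(self):
        self._records = {}  # identity id -> biometric template digest

    def enroll(self, identity_id, biometric_template):
        digest = hashlib.sha256(biometric_template).hexdigest()
        # Scrub for fraudulent duplicate entries: the same biometric must
        # not already be anchored to a different identity.
        for existing_id, existing_digest in self._records.items():
            if existing_digest == digest and existing_id != identity_id:
                raise ValueError("biometric already bound to another identity")
        self._records[identity_id] = digest

    def verify(self, identity_id, biometric_template):
        """Out-of-band check a phone or app can make at enrollment, or at
        transaction time, to complete the chain of trust."""
        digest = hashlib.sha256(biometric_template).hexdigest()
        return self._records.get(identity_id) == digest

# Device-centric enrollment, anchored to the trusted source:
source = TrustedSource()
source.enroll("alice", b"alice-fingerprint-template")

print(source.verify("alice", b"alice-fingerprint-template"))  # True
print(source.verify("alice", b"someone-else-template"))       # False
```

In a real deployment the match would run on raw templates with a tuned threshold, not on digests, and the transport and storage of biometric data would carry their own security requirements; the point here is only the anchoring role the trusted source plays outside the device.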
Of course, the argument favoring the device-centric model points to the risks of aggregating identity data, including biometric data, in a cloud. However, it’s unclear…to me anyway…whether the device-centric-only folks understand or care about the inherent risks of their own model. Maybe the answer is to adopt both. The Cloud isn’t going away, guys…
As they stand now, Apple and FIDO really just pass risk onto the third-party service provider, who must specifically choose to TRUST that the sensor or phone has not been spoofed. This is effectively the same risk transfer that credit-card-sponsoring banks have enjoyed for years, when they simply raise interest rates for all customers to pay for the cost of the frauds enabled by weak credentialing and user authentication. It’s time to stop passing the fraud buck onto consumers. Considering that estimates of identity theft fraud approach $20 billion annually in the payments industry alone, we might consider the economic benefit our society would enjoy by reducing or eliminating identity-theft-related frauds in banking, e-commerce, healthcare and elsewhere.
Clearly, what we have been doing does not protect against identity theft, data breaches and other frauds. It seems equally clear that not all market participants are thinking through all the risks associated with the various architectures and system designs. Passing the risk buck onto consumers and third-party service providers is not acceptable. We should consider a broader solution that includes both more accurate technologies and a Trusted Source.